Forget T-Shaped, We Need V-Shaped Architects

A recent blog from the Open Group discusses the benefits of so-called "T-shaped people". According to this blog, T-shaped people are what HR are looking for these days. To quote from the blog, a T-shaped person is someone who "combines the broad level of skills and knowledge (the top horizontal part of the T) with specialist skills in a specific functional area (the bottom, vertical part of the T). They are not generalists because they have a specific core area of expertise but are often also referred to as generalizing specialists as well as T-shaped people". The picture below shows this.

Traditionally for software architects the specialism that T-shaped people have has come from their entry-level skills, the ones that got them into the profession in the first place. This is usually a skill in a particular programming language, a development approach (agile, Scrum or whatever) or another area related to software development such as test or configuration management. As you progress through your career and begin to build on your skills (learning more programming languages, understanding more about design and so on) you may add these other specialisms to the verticals of your T. This, at least, has traditionally been the approach. The problem is that in some organisations, in order to "progress" (i.e. earn more money), you almost need to know less about more; you need to generalise more, and more quickly. No one is going to employ you as a Java programmer if your salary is ten times what a Java programmer in India or China earns. This is not meant as a criticism of software professionals in India or China, by the way; it's just the way of things. Soon people in India and China will be outsourcing to lower-cost regions, and so the cycle will go on. It does, however, raise an interesting question: how will those core specialisms be developed in people just entering the profession? I spent a good 15 years as a programmer before I moved into architecture and would like to think that what I learnt there gave me a good set of core, fundamental skills that I can still apply as an architect. I firmly believe that the fundamentals I learnt from programming (encapsulation, design by contract, the importance of loose coupling and so on) never go out of fashion.

As I have blogged before, I believe that whilst good "generalizing specialists" can also make good architects, there is another dimension to what makes a true architect: one with the skills necessary to solve the really hard business as well as socio-political problems (e.g. global warming, global terrorism, resource shortages) that the world faces today. Gartner coined the term "versatilist" back in 2005 and, whilst this does not seem to have really taken off (there is a versatilist web site but it seems to be little used), I like the fact that the 'V' of versatilist makes a nice metaphor for what 21st-century IT architects need to be. V-shaped people are not just ones who have deep skills in specific functional areas; they also have skills in other disciplines. Further, a good V-shaped person has skills not just in technical disciplines but also in business and artistic ones. So why does this matter?

The concept of bringing interdisciplinary teams together to break down boundaries in solving difficult or wicked problems is not a new one. It is recognised that pooling different academic schools of thought can often throw up solutions to problems that any of the individual disciplines could not. It follows, therefore, that if an individual can be well rounded and have at least some level of knowledge in an area completely outside his or her core discipline, then they too may be able to shed new light on difficult problems. This is what being a versatilist is about. As shown below, it's not just about specialising in different functional areas within a discipline but also across disciplines. If these disciplines can be a mix of the arts as well as the sciences, exercising both the right and left brain, then so much the better.

So how should versatilists develop their skills? Here are some suggestions I give to IT students when discussing how they might survive as professionals in the 21st century world of work:

  • Objectively view experiences and roles – When you have finished an assignment, note down what you learnt from it and what you could have done better, and maybe ask others what they thought of your performance.
  • Look further than current roles – Today you are working on a particular project; however, always have in mind what you want to do next and an idea of what you want to do after that. Don't become stereotyped; prepare to move on even if you are in an area you know well.
  • Plan opportunities and assignments – This follows on from the last one. Make sure each assignment really builds on and develops your skills. Step out of your comfort zone in each new assignment.
  • Explore other possibilities – Never assume there is only one option. Think differently and look at alternatives. As Paul Arden said, "Whatever You Think, Think The Opposite".
  • Pursue lifelong learning – What it says: never stop exploring!
  • Identify companies that will increase your professional value – Companies are out to get what they can from you. Make sure you do the same with them.

So, as we enter the second decade of the 21st century, can we stop looking for more T-shaped people and start the search for V-shaped people instead? These are the ones who will really make a difference and be able to address the really wicked problems that are out there.

Watson, Turing and Clarke

So what do these three have in common?

  • Thomas J. Watson Sr, CEO and founder of IBM (which is 100 years old this year). Currently has a computer named after him.
  • Alan Turing, mathematician and computer scientist (100 years old next year). Has a famous test named after him.
  • Arthur C. Clarke, scientist and writer (100 years old in 2017). Has a set of laws named after him (and is also the creator of the fictional HAL computer in 2001: A Space Odyssey).

Unless you have moved into a hut deep in the Amazon rainforest, you cannot have missed the publicity over IBM's 'Watson' computer having competed in, and won, the American TV quiz show Jeopardy. I have to confess that until last week I'd not heard of Jeopardy, possibly because a) I'm not a fan of quizzes, b) I'm not American and c) I don't watch that much television. To those as ignorant of these matters as me, the unique thing about Jeopardy is that contestants are presented with clues in the form of answers and must phrase their responses in the form of a question.

This, it turns out, is what makes this particular quiz such a hard nut for a computer to crack. The clues in the 'question' rely on subtle meanings, puns and riddles; something humans excel at and computers do not. Unlike IBM's previous game challenger Deep Blue, which defeated chess world champion Garry Kasparov, it is not sufficient to rely on raw computing 'brute force'; this time the computer has to interpret meaning and the nuances of human language. So has Watson achieved, met or passed the Turing test (which is basically a measure of whether a computer can demonstrate intelligence)?

The answer is almost certainly 'no'. Turing's test is a measure of a machine's ability to exhibit human intelligence. The test, as originally proposed by Turing, was that a questioner should ask a series of questions of both a human being and a machine and see whether they can tell which is which from the answers given. The idea is that if the two were indistinguishable then the machine and the human must both appear to be as intelligent as each other.

As far as I know Turing never stipulated any constraint on the range or type of questions that could be asked, which leads us to the nub of the problem. Watson is supremely good at answering Jeopardy-type questions just as Deep Blue was good at playing chess. However, neither could do what the other does (at least not as well); each has been programmed for its given task. Given that Watson is actually a cluster of POWER7 servers, any suitably general-purpose computer that could win at Jeopardy, play chess and exhibit the full range of human emotions and frailties needed to fool a questioner would presumably occupy the area of several football pitches and consume the power of a small city.

That, however, misses the point completely. The ability of a computer to almost flawlessly answer a range of questions, phrased in a particular way, on a range of different subject areas, blindingly fast has enormous potential in the fields of medicine, law and other disciplines where questions based on a huge foundation of knowledge built up over decades need to be answered quickly (for example in accident and emergency, where a quick diagnosis may literally be a matter of life and death). This indeed is one of IBM's Smarter Planet goals.

Which brings us to Clarke's third law, which states that "any sufficiently advanced technology is indistinguishable from magic". That surely applies to Watson. The other creation of Clarke's, of course, is HAL, the computer aboard the spaceship Discovery One which, on a trip to Saturn, becomes overwhelmed by guilt at having to keep secret the true nature of the spaceship's mission and starts killing members of the crew. One of the points of Clarke's story is that the downside of a computer that is indistinguishable from a human being is that it may also end up mimicking human frailties and weaknesses. Maybe it's a good job Watson hasn't passed Turing's test then?

Think Like An Architect

A previous entry suggested hiring an architect was a good idea because architects take existing components and assemble them in interesting and important ways. So how should you “think architecturally” in order to create things that are not only interesting but also solve practical, real-world problems? Architectural thinking is about balancing three opposing “forces”: what people want (desirability), what technology can provide (feasibility) and what can actually be built given the constraints of cost, resource and time (viability).

It is basically the role of the architect to help resolve these forces by assembling components "in interesting ways". There is, however, a fourth aspect which is often overlooked but which is what separates great architecture from merely good architecture: the aesthetics of the architecture.

Aesthetics is what separates a MacBook from a Dell, the Millau Viaduct in France from the Yamba Dam Bridge in Japan and 30 St Mary Axe from the Ryugyong Hotel in North Korea. Aesthetics is about good design, which is what you get when you add 'significance' (aesthetic appeal) to 'utility' (something that does the job). IBM, the company I work for, is 100 years old this year (check out the centennial video here) and Thomas Watson Jr, son of IBM's founder, famously said that "good design is good business". Watson knew what Steve Jobs, Tim Brown and many other creative designers know: aesthetics is not only good for the people who use or acquire these computers/buildings/systems, it's also good for the businesses that create them. In a world of over-abundance, good design/architecture both differentiates companies and gives them a competitive advantage.

How Much Does Your Software Weigh, Mr Architect?

Three apparently unrelated events actually have a serendipitous connection which has led to the title of this week's blog. First off, Norman Foster (he of "Gherkin" and "Wobbly Bridge" fame) has had a film released about his life and work called How Much Does Your Building Weigh, Mr Foster? As a result there has been a slew of articles about both Foster and the film, including this one in the Financial Times. One of the things that comes across from both the interviews and the articles about Foster is the passion he has for his work. After all, if you are still working at 75 then you must like your job a little bit! One of the quotes that stands out for me is this one from the FT article:

"The architect has no power, he is simply an advocate for the client. To be really effective as an architect or as a designer, you have to be a good listener."

How true. Too often we sit down with clients and jump in with solutions before we have really got to the bottom of what the problem is. It's not just about listening to what the client says but also to what she doesn't say. Sometimes people only say what they think you want to hear, not what they really feel. So it's not just about listening but about developing empathy with the person you are architecting for. Related to this is not closing down discussions too early, before everything has been said, which brings me to the second event.

I'm currently reading Resonate by Nancy Duarte, which is about how to put together presentations that really connect with your audience using techniques adopted by professional storytellers (film makers, for example). In Duarte's book I came across the diagram below, which Tim Brown also uses in his book Change by Design.

For me the architect sits above the dotted line in this picture, ensuring as many choices as possible get made and then making decisions (or compromises) that strike the right balance between the sometimes opposing "forces" of the requirements that come from those choices.

One of the big compromises that often needs to be made is how much can I deliver in the time I have available and, if it's not everything, what gets dropped? Unless the time can change, it's usually the odd bit of functionality (fine if those functions can be deferred to the next release) or quality (not good under any circumstances). This leads me to the third serendipitous event of the week: discovering "technical debt".

Slightly embarrassingly, I had not heard of the concept of technical debt before, even though it has been around for a long time. It was originally proposed by Ward Cunningham in 1992, who said the following:

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation.

Technical debt is a topic that has been taken up by the Software Engineering Institute (SEI), which is organising a workshop on the topic this year. One way of understanding technical debt is to see it as the gap between the current state of the system and what was originally envisaged by the architecture. Here, debt can be "measured" by the number of known defects and features that have not yet been implemented. Another aspect of debt, however, is the amount of entropy that has set in as the system has decayed over time (changes made that were not in line with the specified architecture). This is a more difficult thing to measure but has a definite cost in terms of ease of maintenance and general understandability of the system.
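
Purely as an illustration of those two aspects of debt, the gap against the envisaged architecture and the entropy of off-architecture change, here is a minimal sketch in Java. The class name, inputs and weightings are all hypothetical assumptions of mine, not a standard metric; a real measure would need to be calibrated to your own project.

```java
// A toy indicator of technical debt, combining the two aspects described above:
// (1) the gap between the current system and the envisaged architecture, and
// (2) entropy from changes made outside the specified architecture.
// All names and weightings are illustrative assumptions, not a standard metric.
public class TechnicalDebtSketch {

    // Gap against the envisaged architecture, measured crudely by counting
    // known defects and features that have not yet been implemented.
    static double architectureGap(int knownDefects, int unimplementedFeatures) {
        return knownDefects * 0.5 + unimplementedFeatures * 2.0;
    }

    // Entropy: changes that were not in line with the specified architecture,
    // weighted by how many releases they have been left unconsolidated --
    // the "interest" in Cunningham's terms.
    static double entropy(int offArchitectureChanges, int releasesUnpaid) {
        return offArchitectureChanges * (1.0 + 0.25 * releasesUnpaid);
    }

    public static void main(String[] args) {
        double debt = architectureGap(40, 7) + entropy(12, 3);
        System.out.printf("Indicative technical debt score: %.1f%n", debt);
    }
}
```

The point is not the numbers themselves but that both kinds of debt can be tracked release by release and made visible to the people who will end up paying the 'interest'.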

Which leads to the title of this week's blog. Clearly software (being 'soft') carries no weight (the machines it runs on do not count) but it can nonetheless have a huge, and potentially damaging, weight in terms of the debt it may be carrying in unstructured, incorrect or hard-to-maintain code. Understanding the weight of this debt, and how to deal with it, should be part of the role of the architect. The weight of your software may not be measurable in kilograms but it surely has a weight in terms of the "debt owed".

Learning Architecture (Or Anything Really)

I spent most of 2010 travelling the world teaching Architectural Thinking for a client. (Here is a reasonable description of some of what this covers. It's the best publicly available description I can find, but please contact me if you would like more information on this class.) I always reckon that you learn just as much as a teacher as you do as a student (or should do), so here's some of what I learnt myself. This is not rocket science, and many people may consider it obvious, but for those for whom it is not I hope you find it useful.

  1. People learn best when they have some fun. This doesn't mean you have to be a great comedian to deliver an effective training class; however, it does help if you can arrange some fun activities as part of the learning. Quizzes (which also inject an element of competition) work well as a way of reinforcing people's learning.
  2. Ensure that at least half the time (and preferably two thirds of it) is spent on getting the attendees to do something. This does not have to be a full-blown case study (though you certainly need one of those) but should at least include plentiful opportunities for discussions and Q&A sessions (where the questions are not just asked by the students).
  3. Less really is more. When delivering a lecture, or a complete class, especially one you are very familiar with, it is tempting to cram in more and more information each time you deliver it. People ask a question, you answer it and think "hey, why don't I create a slide for that for next time". Don't. Slide-creep is one of the great evils of our time. Rather than thinking "what can I add", think "what can I remove". Hand out detail as additional reading. Keep the main deal brief.
  4. Try, whenever you can, to tell stories rather than deliver dry facts. For me a teacher is, above all else, an experienced practitioner. Introducing your own "war stories" at appropriate points is what makes a great teacher.
  5. Great public speakers (Richard Feynman, Steve Jobs, Benjamin Zander) inject passion into what they have to say. If you are not passionate about what you are saying then maybe you should not be standing up in front of others saying it! Think about what first made you interested in the topic you are delivering and weave that into the storyline. Injecting some of your personal self into a subject helps engage the audience and makes them believe in what you have to say.

Finally, take a look at this great advice from Seth Godin on organising a retreat. It may not be a full-blown retreat you are organising, but it contains great advice for just about any learning event where you want to get the best out of people.

2011 Architecture Survival Guide

An article in last Sunday's Observer newspaper about Facebook has set me thinking about how we architects can not only survive in today's rapidly changing technological environment but also actually make a positive difference to the world (even if it's not on the scale of Facebook, assuming you think that has made a positive impact on the world). The article, by John Naughton, examines the claim by the Winklevoss twins that they were ripped off when they reached a settlement with Mark Zuckerberg in 2008 after claiming it was they who had invented Facebook. Their claim is that the number of Facebook shares they acquired was based on a false valuation. For an entertaining view of this see, or rent, The Social Network, which goes into the history of how Facebook came into being. The article goes on to pose the question: would we now be looking at a social networking service with 600 million users if the Winklevoss twins had been the ones to develop Facebook?

Naughton thinks not and goes on to explain that, although the Winklevoss twins were not stupid, they probably "laboured under two crippling disadvantages":

  1. They were, and probably still are, conventional people who may have been good at “creating businesses in established sectors but who find it hard to operate in arenas where there are no rules”.
  2. The twins weren’t techies and so had no real insight into the technology they were creating and its possibilities. They were therefore less likely to “spot the importance of allowing Facebook to become a software platform on which other people could run applications”.

Here's my takeaway from this, if you want to come up with new ideas, at whatever scale, that no one else has thought of.

  1. Don't think conventionally. Conventional thinking will end up creating conventional business models. Conventional means doing what you've been told or what your peers do. Someone once said "fear of our peers makes us conservative in our thinking". Zuckerberg was not only fearless of his peers (the Winklevoss twins) but also had no qualms about taking (some would say stealing) their ideas and using them for his own ends. I guess it poses an interesting moral dilemma about when it is right to steal someone else's idea because you think you can do more with it. Facebook paid for this by handing over cash and shares to the Winklevoss twins but has benefited from this 'investment' many times over.
  2. Don't think like everyone else. Walter Lippmann (a writer and political commentator) once said "where we all think alike, no one thinks very much." Some people claim that Zuckerberg (if you believe the movie, at any rate) exhibits characteristics that place him on the autistic spectrum (specifically, as having Asperger syndrome). One of the characteristics of someone with Asperger's is that they display behaviour, interests and activities that are restricted and repetitive and sometimes abnormally intense or focused. Zuckerberg not only thought differently from everyone else but also took an idea and focused on it intensely (many, many hours of programming) until Facebook was created.
  3. Think visually. Interestingly, this is related to number 2: people on the autistic spectrum are often more visual thinkers than those who are not. We often joke about "back of an envelope" or "back of a fag packet" designs but, setting aside the medium, the ability to visualise your thoughts quickly and succinctly is a key characteristic worth fostering. One of my more memorable ad-hoc design sessions took place over a meal in a restaurant where we used the tablecloth as our drawing canvas. Luckily it was a paper one!
  4. Don't get out of touch with technology. One of the dangers of becoming an architect in order to make yourself "more valuable" (see Dilbert below) is that you not only lose touch with technology but also lose the ability to exploit it in ways others may not see. Making Facebook an open platform has been one of the key factors in its runaway success. I've discussed before the importance of being a versatilist (broad in several disciplines and deep in a few specialisms) and this one's all about picking your technology (we can't all be good at everything) and specialising in it!

Dilbert.com

Software Development's Best Kept Secret

A few people have asked what I meant in my previous entry when I said we should be "killing off the endless debates of agile versus waterfall." Don't get me wrong, I'm a big fan of doing development in as efficient a way as possible; after all, why would you want to be doing things in a 'non-agile' way? However, I think the agile versus waterfall debate really does miss the point. If you have ever worked on anything but the most trivial of software development projects you will quickly realise that there is no such thing as a 'one size fits all' software delivery lifecycle (SDLC) process. Each project is different and each brings its own challenges in terms of the best way to specify, develop, deliver and run it. Which brings me to the topic of this entry, the snappily titled Software and Systems Process Engineering Metamodel or 'SPEM' (but not SSPEM).

SPEM is a standard owned by the Object Management Group (OMG), the body that also owns the Unified Modeling Language (UML), the Systems Modeling Language (SysML) and a number of other open standards. Essentially SPEM gives you the language (the metamodel) for defining software and system processes in a consistent and repeatable way. SPEM also allows vendors to build tools that automate the way processes are defined and delivered. Just as vendors have built system and software modeling tools around UML, so too can they build delivery process modeling tools around SPEM.

So what exactly does SPEM define and why should you be interested in it? For me there are two reasons why you should look at adopting SPEM on your next project.

  1. SPEM separates out what you create (i.e. the content) from how you create it (i.e. the process) whilst at the same time providing instructions for how to do these two things (i.e. guidance).
  2. SPEM (or at least tools that implement SPEM) allows you to create a customised process by varying what you create and when you create it.

Here’s a diagram to explain the first of these.

SPEM Method Framework
The SPEM Method Framework represents a consistent and repeatable approach to accomplishing a set of objectives based on a collection of well-defined techniques and best practices. The framework consists of three parts:
  • Content: represents the primary reusable building blocks of the method that exist outside of any predefined lifecycle: the roles, the tasks they perform and the work products created as a result.
  • Process: assembles method content into a sequence or workflow (represented by a work breakdown structure) used to organise the project and develop a solution. Process includes the phases that make up an end-to-end SDLC, the activities that phases are broken down into, as well as reusable chunks of process referred to as 'capability patterns'.
  • Guidance: is the 'glue' which supports content development and process execution. It describes techniques and best practice for developing content or 'executing' a process.

As well as giving us the 'language' for building our own processes, SPEM also defines the rules for building them. For example, phases consist of other phases or activities, activities group tasks, tasks take work products as input and produce other work products as output, and so on.
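
As a rough illustration of those composition rules, here is a sketch in Java of how phases, activities, tasks and work products might hang together. The class and element names are simplified assumptions of mine, chosen for illustration only; the real metamodel, with its full set of elements and rules, is defined in the OMG SPEM 2 specification.

```java
import java.util.ArrayList;
import java.util.List;

// A much-simplified sketch of SPEM-like composition rules: phases contain
// activities, activities group tasks, and tasks take work products as input
// and produce other work products as output. Names are illustrative only.
class WorkProduct {
    final String name;
    WorkProduct(String name) { this.name = name; }
}

class Task {
    final String name;
    final List<WorkProduct> inputs = new ArrayList<>();
    final List<WorkProduct> outputs = new ArrayList<>();
    Task(String name) { this.name = name; }
}

class Activity {
    final String name;
    final List<Task> tasks = new ArrayList<>();
    Activity(String name) { this.name = name; }
}

class Phase {
    final String name;
    final List<Activity> activities = new ArrayList<>();
    Phase(String name) { this.name = name; }
}

public class ProcessSketch {
    public static void main(String[] args) {
        // Build a tiny fragment of a process: one phase, one activity, one task.
        WorkProduct vision = new WorkProduct("Vision");
        WorkProduct useCaseModel = new WorkProduct("Use Case Model");

        Task outlineRequirements = new Task("Outline requirements");
        outlineRequirements.inputs.add(vision);
        outlineRequirements.outputs.add(useCaseModel);

        Activity defineScope = new Activity("Define scope");
        defineScope.tasks.add(outlineRequirements);

        Phase inception = new Phase("Inception");
        inception.activities.add(defineScope);

        System.out.println(inception.name + " phase contains "
                + inception.activities.size() + " activity");
    }
}
```

A tool built on SPEM does essentially this, but with a much richer set of elements, and attaches guidance to both the content and the process so that the published result can actually be followed.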

This is all well and good, you might say, but I don't want to have to laboriously build a whole process every time I want to run a project. This is where the second advantage of using SPEM comes in. A number of vendors (IBM and Sparx, to name two) have built tools that not only automate the process of building a process but also contain one or more 'ready-rolled' processes to get you started. You can either use those 'out of the box', extend them by adding your own content or start from scratch (not recommended for novices). What's more, the Eclipse Foundation has developed an open-source tool, the Eclipse Process Framework (EPF), that not only gives you a tool for building processes but also comes with a number of existing processes, including OpenUP (an open version of the Rational Unified Process) as well as Scrum and DSDM.

If you download and install EPF together with the appropriate method libraries, you can use these as the basis for creating your own processes. Here's what EPF looks like when you open the OpenUP SDLC.

EPF and OpenUP

The above view shows the browsing perspective of EPF; however, there is also an authoring perspective which allows you not only to reconfigure a process to suit your own project but also to add and remove content (i.e. roles, tasks and work products). Once you have made your changes you can republish the new process (as HTML) and anyone with a browser can then view it together with all of its work products and, most crucially, the associated guidance (i.e. examples, templates, guidelines etc.) that allows the process to be used effectively.

This is, I believe, the true power of using a tool like EPF (or IBM's Rational Method Composer, which comes preloaded with the Rational Unified Process). You can take an existing SDLC (one you have created or one you have obtained from elsewhere) and customise it to meet the needs of your project. The amount of agility, the number of iterations and so on will depend on the intricacies of your project, not on what some method guru tells you you should be using!

By the way, for an excellent introduction and overview of EPF see here and here. The Eclipse web site also contains a wealth of information on EPF. You can also download the complete SPEM 2 specification from the OMG web site here.

Architecture is Architecture?

At OT 99 (that's Object Technology 1999, now known as SPA, for Software Practice Advancement) I attended a session by Kent Beck called Software is Software – Beyond the Horseless Carriage. The basic premise of Kent's talk was that it was about time the software business "grew up" and its practitioners recognised it for what it is: a discipline in its own right which no longer needs to continually borrow terms and techniques from other industries and disciplines. The title of the session refers to the time when the automobile was first invented and people called it the horseless carriage because horse-drawn carriages were the only frame of reference they had. Unfortunately, 12 years later, I don't think we have quite got around to jettisoning our horseless carriages, especially in the upfront work that is done in trying to map out the major system components and their relationships, sometimes referred to as architecture (a word which is itself borrowed from another profession, of course). On the face of it this may not seem to be a problem; after all, those other industries (civil engineering, auto-engineering, even film making) have been around a lot longer and so must be able to offer good advice and guidance to the business of software, mustn't they? Actually, I think there is a problem and Kent Beck was, and still is, right.

  • The business of 'making' software is fundamentally different from any other human endeavour. Software is infinitely malleable and potentially changeable right up to (and sometimes after) the point it goes into production. No other engineering discipline has that flexibility. At some point drawings and blueprints have to be signed off, factories and production lines have to be built, building sites prepared and production begun. After this any change becomes prohibitively expensive. With software the perception (and sometimes the reality) is that code changes can be made right up to the moment the software ships.
  • Most other engineering disciplines have fairly well defined job roles, often with their own professional organisations, training programmes and qualifications and well understood and mature tools. These roles are usually carried out by separate individuals (in the construction industry it’s unlikely the architect will roll her sleeves up and start laying bricks).
  • The engineering and manufacturing approach, or process, is by and large pretty well understood and has been refined over a long period of time (sometimes hundreds of years). The approaches can be taught and are an integral part of the role of being an architect or aero-engineer. Further, these approaches are built around a common language which is also taught and well understood by its practitioners.

A rigorous approach to the field of software architecture needs to recognise these differences whilst at the same time understanding its constraints, and build a solid, engineering-based approach to its development. This should include killing off the endless debates of agile versus waterfall or structured versus object-oriented, and any of the other interminable 'religious wars' we seem to love embarking on, and instead focusing on what matters: applying IT in a reasoned and structured way to solve real-world (and sometimes complex) business problems.

As we enter the new year, let's celebrate the field of software development for what it is and help forge the right amount of rigour and discipline in creating a 'proper' profession that finally loses the shackles of all those other industries. After all, as this guy says (far more eloquently than me), "the processor is an expression of human potential" and is "akin to a painter's blank canvas" (see this great drawing). I'd like to think of us architects as the painters ready to fill that canvas with great art. Oh heck, but that means we are now comparing architecture with art, and that would never do.

On Being a Software Architect

Thanks to a wonderful bit of "webendipity" (something unexpected and useful found while surfing the web; a conjunction of 'web' and 'serendipity'), in this case a blog causing me to follow its author on Twitter, whose tweet pointed me to another blog, I today came across An Overly Long Guide to Being a Software Architect by David Ing, which nicely parallels my previous effort on how not to be a (Software) Architect.

I particularly like Ing’s number 11:

Finally my last tip is to never take advice from Top 11 tip lists. In nearly all cases there was only about 3 good points and with about 8 padding.

Definitely a lesson for me there, I think. Maybe my New Year's resolution should be to publish fewer lists!

You Can No Longer Call Yourself an Architect When…

Ten behaviours that might mean you can no longer call yourself an Architect…

  1. The majority of your output is created using Microsoft’s Office suite.
  2. Your days are spent fighting the system rather than creating a system.
  3. You’re fitting business to technology rather than the other way around.
  4. You think reuse is what you do with your shirt when you unexpectedly have to spend one extra day with the client (possibly wasting your time doing 2 above).
  5. The only stakeholders you deal with are your non-vegetarian colleagues during your evening meals at the Angus Steakhouse.
  6. You can no longer remember the difference between a For loop and a While loop.
  7. You think a view is something you don’t normally get from your room in the tourist class hotels your company puts you up in.
  8. Your definition of a non-functional requirement is a functional requirement that isn’t.
  9. You spend more time in the box than out of it.
  10. Add your favourite reason below.

This is probably my last post of the year and most certainly the last one before Christmas. Merry Christmas to everyone out there and, as it used to say on the front of The Beano at this time of year:

Happy New Year to All My Readers!