Architectural Granularity

There’s an old joke about defining granularity: granularity is a bit like pornography; it’s hard to define, but you know it when you see it.

The whole topic of granularity of software components is important to the Software Architect for a number of reasons:

  • If you find yourself creating lots of fine-grained components, you are probably doing design rather than architecture.
  • As I discussed previously, getting components at the right level of granularity is important when it comes to placing those components (on nodes) and satisfying non-functional requirements such as availability and performance.
  • The granularity of components impacts performance. Too many fine-grained components will probably result in increased traffic on the network because the client using the component must make several queries to complete a task or get the information it needs.
  • Generally speaking, the coarser grained a component the more likely it is to have a direct correlation to something the business finds useful (e.g. a ‘Reservation’ component that handles all aspects of allowing a customer to reserve a room is more useful than one that ‘finds a vacant room’).

The terms “fine-grained”, “medium-grained” and “coarse-grained” are frequently used to describe architectural components (or indeed services), but there seems to be no common definition of them, which leads to confusion and ambiguity in their use. Whilst there is no agreed definition of what these terms mean, there does seem to be a consensus that granularity is about more than just the number of operations on a component’s interface. Everware-CBDI (see their June 2005 edition, which is only available via subscription) suggests that other factors might be (and I’m simplifying here):

  • The number of components (C) in the directed-acyclic-graph (DAG) that are invoked through a given operation on that component’s interface.
  • The function points (F) for each component.
  • The number of database tables (D) created, read, updated or deleted.

So one measure of granularity (G) might be:

G = C + F + D

Put simply, if a component is self-contained (C = 1), has a single function point (F = 1) and updates a single entry in a database (D = 1) it will have a granularity of ‘3’. Such a ‘component’ is going to be something like a class ‘Integer’ with a single operation such as ‘increment’ which adds one to the previous value. Whilst such a component may be infinitely reusable it is not particularly useful from an architectural (or business) standpoint. We could honestly call such a component ‘fine-grained’. At the other extreme a CRM system will probably have C, F and D each equal to several thousand giving it a granularity in the tens of thousands. Again, from an architectural standpoint, this is not very useful. There is not much we can do with such a ‘component’ other than observe it is (very) coarse-grained. I suggest that we architects are more likely to be interested in creating, and (re)using components that sit somewhere in between these extremes (maybe in the low 100’s region using this measure of granularity).
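To make the measure concrete, here is a minimal sketch of the G = C + F + D calculation. The classification thresholds are my own illustrative assumptions (the text only suggests that usefully-sized components sit somewhere in the low hundreds on this measure); they are not part of the Everware-CBDI suggestion.

```python
def granularity(components: int, function_points: int, db_tables: int) -> int:
    """G = C + F + D: components invoked through an operation (C),
    function points (F) and database tables touched (D)."""
    return components + function_points + db_tables

def classify(g: int) -> str:
    # Illustrative bands only: these thresholds are assumptions for this sketch.
    if g < 10:
        return "fine-grained"
    if g <= 1000:
        return "medium-grained"
    return "coarse-grained"

# The trivial 'Integer' component from the text: C = F = D = 1, so G = 3.
print(granularity(1, 1, 1), classify(granularity(1, 1, 1)))  # 3 fine-grained
# A hypothetical 'Reservation' component in the useful middle ground:
print(classify(granularity(12, 85, 20)))                     # medium-grained
# A whole CRM system with C, F and D each in the thousands:
print(classify(granularity(3000, 4000, 3000)))               # coarse-grained
```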

What we are actually talking about here is what ZapThink refer to as ‘functional granularity’ (as opposed to ‘interface granularity’): that is, the granularity of the component itself rather than the granularity of any interfaces the component may expose. For the architect it is getting this functional granularity right that is most important. The coarser the (functional) granularity of the component, the more likely it is to have a useful business context. So a ‘Reservation’ component that deals with all aspects of reserving a room for a customer (i.e. takes the dates, room preferences, hotel location, customer loyalty status etc) and finds a suitable room is a useful component at both an architectural level (i.e. it addresses the points I started with above) as well as a business level (i.e. it is understandable by the business and can therefore be mapped to business users’ requirements).

When decomposing systems into components, working out what level of granularity you need is a key aspect of creating the system’s architecture: it gives you the right number of components, at a size that is useful in both an architectural and a business context.

EA Wars

Lots of great discussion in the blogosphere right now on the relevance of Enterprise Architecture in the brave new world of the ‘extended’ enterprise and whether architecture is something that is planned or ’emerges’. This is largely prompted, I suspect, by the good folk at ZapThink asking Why Nobody is Doing Enterprise Architecture (possibly to plant seeds of doubt in the minds of CIOs and send them rushing to one of their “licensed ZapThink architecture courses”). For a nice, succinct and totally dismissive riposte to the ZapThink article check out David Sprott’s blog entry here. For a more reasoned and skeptical discussion on whether emergent architecture actually exists (I think it does, see below) read Richard Veryard’s blog entry here. However, despite some real FUD’iness on the part of ZapThink, there are some elements in their argument that definitely ring true and which I have observed in a number of clients I have worked with over the last few years. In particular:

  • Emergence (i.e. the way complex systems and patterns arise out of a multiplicity of relatively simple interactions) is, like it or not, a definite factor in the architecture of modern-day enterprises. This is especially true when the human dimension is factored into the mix. The current Gen Y and upcoming Gen V are not going to hang around while the EA department figure out how to factor their 10th generation iPhone, which offers 3-D holographic body-time, into an EA blueprint. They are just going to bypass the current systems and use it regardless. The enterprise had better quickly figure out the implications of such devices (whatever they turn out to be) or risk becoming a technological backwater.
  • EA departments seem to very quickly become disconnected from both the business they should be serving and the technicians they should be governing. One is definitely tempted to ask “who is governing the governors” when it comes to the EA department. Accountability in many organisations definitely seems to be lacking. This feels to me like another example of the gapology that seems to be increasingly apparent in such organisations.
  • Even though we are better placed than ever to capture good methodological approaches to systems development, I still see precious little adoption of true systems development lifecycles (SDLCs) in organisations. Admittedly, methods have had a very bad press over the years: they are often seen as an unnecessary overhead and, with the rise of agile, have been pushed into the background as something an organisation must have in order to be ISO 9000 compliant or whatever, while everyone really just gets on with it and ignores all that stuff.
  • Finally, as with many things in IT, the situation has been confused by having multiple and overlapping standards and frameworks in the EA space (TOGAF, Zachman and MODAF to name but three). Whilst none of these may be perfect I think the important thing is to go with one and adapt it accordingly to what works for your organisation. What we should not be doing is inventing more frameworks (and standards bodies to promote them). As with developing an EA itself the approach an EA department should take to selecting an EA framework is to start small and grow on an as needs basis.

Applying Architectural Tactics

The use of architectural tactics, as proposed by the Software Engineering Institute, provides a systematic way of dealing with a system’s non-functional requirements (sometimes referred to as the system’s quality attributes, or just qualities). These can be runtime qualities such as performance, availability and security, as well as non-runtime qualities such as maintainability, portability and so on. In my experience, dealing with both functional and non-functional requirements, as well as capturing them using a suitable modeling tool, is something that is not always handled very methodically. Here’s an approach that tries to enforce some architectural rigour using the Unified Modeling Language (UML) and any UML-compliant modeling tool.

Architecturally, systems can be decomposed from an enterprise or system-wide view (i.e. covering people, processes, data and IT systems), to an IT system view, to a component view and finally to a sub-component view, as shown going clockwise in Figure 1. These diagrams show how an example hotel management system (something I’ve used before to illustrate some architectural principles) might eventually be decomposed into components and sub-components.

Figure 1: System Decomposition

This decomposition typically happens by considering what functionality needs to be associated with each of the system elements at different levels of decomposition. So, as shown in Figure 1 above, first we associate ‘large-grained’ functionality (e.g. we need a hotel system) at the system level and gradually break this down to finer and finer grained levels until we have attributed all functionality across all components (e.g. we need a user interface component that handles the customer management aspects of the system).

Crucially from the point of view of deployment of components we need to have decomposed the system to at least that of the sub-component level in Figure 1 so that we have a clear idea of each of the types of component (i.e. do they handle user input or manage data etc) and know how they collaborate with each other in satisfying use cases. There are a number of patterns which can be adopted for doing this. For example the model-view-controller pattern as shown in Figure 2 is a way of ascribing functionality to components in a standard way using rules for how these components collaborate. This pattern has been used for the sub-component view of Figure 1.
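To make the collaboration rules concrete, here is a minimal, hypothetical sketch of the model-view-controller pattern applied to the customer-management part of the hotel example. The class and method names are my own illustrations, not taken from the system described above.

```python
class CustomerModel:
    """The 'model': owns and manages the customer data."""
    def __init__(self):
        self._customers = {}

    def create(self, customer_id, name):
        self._customers[customer_id] = name

    def read(self, customer_id):
        return self._customers[customer_id]

class CustomerView:
    """The 'view': handles presentation only; it never touches the data store."""
    def render(self, name):
        return f"Customer: {name}"

class CustomerController:
    """The 'controller': coordinates model and view in response to user input."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def show(self, customer_id):
        return self.view.render(self.model.read(customer_id))

controller = CustomerController(CustomerModel(), CustomerView())
controller.model.create(1, "A. Guest")
print(controller.show(1))  # Customer: A. Guest
```

The point of the pattern is that each sub-component has exactly one kind of responsibility, so when it comes to deployment we already know, for example, that views belong near the user and models near the data.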

Figure 2: Model-View-Controller Pattern

So far we have shown how to decompose a system based on functional requirements and thinking about which components will realise those requirements. What about non-functional requirements though? Table 1 shows how non-functional requirements can be decomposed and assigned to architectural elements as they are identified. Initially non-functional requirements are stated at the whole-system level, but as we decompose into finer-grained architectural elements (AKA components) we can begin to think about how those elements, too, support particular non-functional requirements. In this way non-functional requirements get decomposed and associated with each level of system functionality. Ideally, non-functional requirements would be assigned as attributes to each relevant component (preferably inside our chosen UML modelling tool) so they do not get lost or forgotten.

Table 1: System Elements and their Non-Functional Requirements

  • Hotel System (i.e. including all actors and IT systems): The hotel system must allow customers to check in 24 hours a day, 365 days a year. (Note: this is the level of precision at which non-functional requirements are typically stated initially; further analysis is usually needed to arrive at measurable values.)
  • Hotel Management System (i.e. the hotel IT system): The hotel management system must allow the front-desk clerk to check in a customer 24 hours a day, 365 days a year, with 99.99% availability.
  • Customer Manager (i.e. a system element within the hotel’s IT system): The customer manager component must allow customer details to be created, read or updated (but not deleted) 24 hours a day, 365 days a year, with 99.99% availability.
  • Customer Manager Interface (i.e. the user interface that belongs to the Customer Manager system element): The customer manager interface must allow customer details to be created, read or updated (but not deleted) 24 hours a day, 365 days a year, with 99.99% availability.
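In lieu of a modelling tool, the idea of attaching non-functional requirements as attributes at each level of the decomposition can be sketched as a simple data structure. The element names follow Table 1; the structure itself is purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """An architectural element with its assigned NFRs and sub-elements."""
    name: str
    nfrs: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

# Build the decomposition from Table 1, attaching NFRs as we go.
customer_ui = Element("Customer Manager Interface", {"availability": 0.9999})
customer_mgr = Element("Customer Manager",
                       {"availability": 0.9999, "operations": "CRU, no delete"},
                       [customer_ui])
hotel_ms = Element("Hotel Management System",
                   {"availability": 0.9999}, [customer_mgr])

def walk(element, path=""):
    """Yield each element's path and NFRs so nothing gets lost or forgotten."""
    path = f"{path}/{element.name}"
    yield path, element.nfrs
    for child in element.children:
        yield from walk(child, path)

for path, nfrs in walk(hotel_ms):
    print(path, nfrs)
```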

Once it is understood what non-functional requirement each component needs to support we can apply the approach of architectural tactics proposed by the Software Engineering Institute (SEI) to determine how to handle those non-functional requirements.

An architectural tactic represents “codified knowledge” on how to satisfy non-functional requirements by applying one or more patterns or reasoning frameworks (for example queuing or scheduling theory) to the architecture. Tactics show how (the parameters of) a non-functional requirement (e.g. the required response time or availability) can be addressed through architectural decisions to achieve the desired capability.

In the example we are focusing on in Table 1 we need some tactics that allow the desired quality attribute of 99.99% availability (which corresponds to a downtime of 52 min, 34 sec per year) to be achieved by the customer manager interface. A detailed set of availability tactics can be found here but for the purposes of this example availability tactics can be categorized according to whether they address fault detection, recovery, or prevention. Here are some potential tactics for these:

  • Employing good software engineering practices for fault prevention such as code inspections, usability testing and so on to the design and implementation of the interface.
  • Deploying components on highly-available platforms which employ fault detection and recovery approaches such as system monitoring, active failover etc.
  • Developing a backup and recovery approach that allows the platform running the user interface to be replaced within the target availability times.
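As a sanity check on the figures above, the downtime implied by a given availability target is easy to compute (using a 365-day year, which is how the ‘52 min, 34 sec’ figure quoted earlier works out):

```python
def annual_downtime_minutes(availability: float) -> float:
    """Downtime per year implied by an availability figure (365-day year)."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1 - availability) * minutes_per_year

for a in (0.99, 0.999, 0.9999, 0.99999):
    m = annual_downtime_minutes(a)
    print(f"{a:.3%} available -> {int(m)} min {round((m % 1) * 60)} sec down/year")
```

For 99.99% this gives 52 minutes 34 seconds a year, which is why fault detection and recovery tactics matter so much: a single unplanned reboot can consume a large slice of the annual budget.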

As this example shows, not all non-functional requirements can be suitably realised by a component alone; sometimes full realisation is only possible when that component is placed (deployed) onto a suitable computer platform. Once we know which non-functional requirements need to be realised by which components, we can then think about how to package these components together for deployment onto an appropriate computer platform that supports those non-functional requirements (for example, a platform that will support 99.99% availability). Figure 3 shows how this deployment can be modelled in UML adopting the Hot Standby Load Balancer pattern.

Figure 3: Deployment View

Here we have taken one component, the ‘Customer Manager’, and shown how it would be deployed with other components (a ‘Room Manager’ and a ‘Reservation Manager’) that support the same non-functional requirements onto two application server nodes. A third UML element, an artefact, packages together like components via a UML «manifest» relationship. It is the artefact that actually gets placed onto the nodes. An artefact is a standard UML element that “embodies or manifests a number of model elements. The artefact owns the manifestations, each representing the utilization of a packageable element”.
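A rough calculation shows why a redundant pair like this is such an effective availability tactic. Assuming the two nodes fail independently (and ignoring failover time and common-mode failures, which matter in practice), the pair is only down when both nodes are down. The single-node figure of 99.9% below is an illustrative assumption, not taken from the example.

```python
def parallel_availability(node_availability: float, nodes: int = 2) -> float:
    """Availability of a redundant group: down only if every node is down."""
    return 1 - (1 - node_availability) ** nodes

single = 0.999                      # assumed availability of one node
pair = parallel_availability(single)
print(f"single node: {single:.4%}, hot-standby pair: {pair:.6%}")
```

Two 99.9% nodes give a theoretical 99.9999% for the pair, comfortably clearing the 99.99% target set in Table 1.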

So far all of this has been done at a logical level; that is, there is no mention of technology. However, moving from a logical level to a physical (technology-dependent) level is a relatively simple step. The packaging notion of an artefact can equally be used for packaging physical components; for example, in this case the three components shown in Figure 3 above could be Enterprise Java components or .NET components.

This is a simple example to illustrate three main points:

  1. Architecting a system based on functional and non-functional requirements.
  2. Use of a standard notation (i.e. UML) and modelling tool.
  3. Adoption of tactics and patterns to show how a system’s qualities can be achieved.

None of this is rocket science, but it is something you don’t see done much.

Oops There Goes Our Reputation

I’m guessing that up to two weeks ago most people, like me, had never heard of a company called Epsilon. Now, unfortunately for them, too many people know of them for all the wrong reasons. If you sign up to any services from household names such as Marks and Spencer, Hilton, Marriott or McKinsey you will probably have had several emails in the last two weeks advising you of a security breach which led to “unauthorized entry into Epsilon’s email system”. Because Epsilon is a marketing vendor that manages customer email lists for these and other well-known household brands, chances are your email address has been obtained through this unauthorised entry as well. Now, it just might be pure coincidence, but in the last two weeks I have also received emails from the Chinese government inviting me to a conference on some topic I’ve never heard of, from Kofi Annan, ex-Secretary General of the United Nations, and from a lady in Nigeria asking for my bank account details so she can deposit $18.4M into the account and leave the country! According to the information on Epsilon’s web site, the information that was obtained was limited to email addresses and/or customer names only. So, should we be worried by this, and what are the implications for the architecture of such systems?

I think we should be worried for at least three reasons:

  1. Whilst the increased spam that is seemingly inevitable following an incident such as this is mildly annoying a deeper concern is how could the criminal elements who now have information on the places I do business on the web put this information together to learn more about me and possibly construct a more sophisticated phishing attack? Unfortunately it’s not only the good guys that have access to data analytics tools.
  2. Many people probably have a single password to access multiple web sites. The criminals who now have your email as well as knowledge of which sites you do business at only have to crack one of these and potentially have access to multiple sites, some of which may have more sensitive information.
  3. Finally, how come information I entrusted to well-known (and by implication ‘secure’) brands and their web sites has been handed over to a third party without me even knowing about it? Can I trust those companies not to be doing this with more sensitive information, and should I be withdrawing my business from them? This is a serious breach of trust and I suspect that many of these brands’ reputations will have been damaged.

So what are the impacts to us as IT architects in a case like this? Here are a few:

  1. As IT architects we make architectural decisions all the time. Some of these are relatively trivial (I’ll assign that function to that component etc) whereas others are not. Clearly, decisions about which parts of the system to entrust personal information to are not trivial. I always advocate documenting significant architectural decisions in a formal way where all the options you considered are captured, as well as the rationale and implications behind the decision you made. As our systems get ever more complex and distributed, the implications of particular decisions become harder to quantify. I wonder how many architects consider the implications for a company’s reputation of entrusting even seemingly low-grade personal information to third parties?
  2. It is very likely that incidents such as this are going to result in increased legislation that covers personal information just like there is legislation on Payment Card Industry (PCI) standards. This will demand more architectural rigour as new standards essentially impose new constraints on how we design our systems.
  3. As we trundle slowly to a world where more and more of our data is to be held in the cloud using a so called multi-tenant deployment model it’s surely only a matter of time before unauthorised access to one of our cloud data stores will result in access to many other data sources and a wealth of our personal information. What is needed here is new thinking around patterns of layered security that are tried and tested and, crucially, which can be ‘sold’ to consumers of these new services so they can be reassured that their data is secure. As Software-as-a-Service (SaaS) takes off and new providers join the market we will increasingly need to be reassured they are to be trusted with our personal data. After all if we cannot trust existing, large corporations how can we be expected to trust new, small startups?
  4. Finally, I suspect that it is only a matter of time before legislation aimed at systems designers themselves is enforced, making us as IT architects liable for some of those architectural decisions I mentioned earlier. I imagine there are several lawyers engaged by the parties whose customers’ email addresses were obtained and whose trust and reputation with those customers may now be compromised. I wonder if some of those lawyers will be thinking about the design of such systems in the first place and, by implication, the people who designed those systems?

Open vs. Closed Architectures

There has been much Apple bashing in cyberspace as well as the ‘dead-wood’ parts of the press of late, to the extent that some people are now turning on those who own one of Apple’s wunder-devices (an iPad), accusing them of being “selfish elites“. Phew! I thought it was a typically British trait to knock anything and anyone that was remotely successful, but it now seems that the whole world has it in for Mr Jobs’ empire. Back in the pre-Google days of 1994 Umberto Eco declared that the Macintosh is Catholic and that DOS is Protestant. Indeed, the Macintosh is counter-reformist and has been influenced by the ratio studiorum of the Jesuits. It is cheerful, friendly, conciliatory; it tells the faithful how they must proceed step by step to reach — if not the kingdom of Heaven — the moment in which their document is printed.

The big gripe most people have with Apple is their closed architecture, which controls not only who is allowed to write apps for their OSs but also who can produce devices that actually run those OSs (er, that would be Apple). It’s one of life’s great anomalies as to why Apple is so successful in building products with closed architectures when almost everyone would agree that open architectures and systems are ultimately the way to go as, in the end, they lead to greater innovation, wider usage and, presumably, more profit for those involved. The classic case of an open architecture leading to widespread usage is that of the original IBM Personal Computer. Because IBM wanted to fast-track its introduction, many of the parts were, unusually for IBM, provided by third parties, including, most significantly, the processor (from Intel) and the operating system (from the fledgling Microsoft). This, together with the fact that the technical information on the innards of the computer was made publicly available, essentially made the IBM PC ‘open’. This more than anything gave it an unprecedented penetration into the marketplace, allowing many vendors to provide IBM PC ‘clones’.

There is of course a ‘dark side’ to all of this. Thousands of vendors all providing hardware add-ons and extensions as well as applications resulted in huge inter-working problems which, in the early days at least, required you to be something of a computer engineer if you wanted to get everything working together. This is where Apple stepped in. As Umberto Eco said, Apple guides the faithful every step of the way. What they sacrifice in openness and choice they gain in everything working out of the box, sometimes in three simple steps.

So, is open always best when it comes to architecture or does it sometimes pay to have a closed architecture? What does the architect do when faced with such a choice? Here’s my take:

  • Know your audience. The early PCs, like it or not, were bought by technophiles who enjoyed technology for the sake of technology. The early Macs were bought by people who just wanted to use computers to get the job done. In those days both had a market.
  • Know where you want to go. Apple stuck solidly with creating user-friendly (not to mention well-designed) devices that people would want to own and use. The plethora of PC providers (which there soon were) couldn’t by and large give a damn about design. They just wanted to sell as many devices as possible and let others worry about how to stitch everything together. This in itself generated a huge industry which, in a strange self-fulfilling way, led to more devices and world domination for the PC, and left Apple in a niche market. Openness certainly seemed to be paying.
  • Know how to capitalise on your architectural philosophy. Ultimately openness leads to commoditization. When anyone can do it, price dominates and the cheapest always wins. If you own the space then you control the price. Apple’s recent success has been not to capitalise on an open architecture but to capitalise on good design, which has enabled it to create high-value, desirable products, showing that good design trounces an open architecture.

So how about combining the utility of an open architecture with the significance of a well thought through architecture to create a great design? Which funnily enough is what Dan Pink meant by this:

Significance + Utility = Design

Huh, beaten to a good idea again!

It’s Only Television But I Like It

Yes, I know it’s a television program, and yes, I know they are playing up to the camera, and yes, I know we only see the ‘edited highlights’, but Jamie’s Dream School on Channel 4 last night was an exemplar of how to deliver motivational talks to an uninterested audience. As I discussed last time, the ‘teachers’ (actually people at the leading edge of their field) are truly inspirational, passionate individuals who use every trick in the book to engage with and inspire their students. Not only that, they are incredibly humble, as typified by one of the pupils asking Robert Winston if he “had ever cured anything”, to which he replied he “thought they had helped with some advances, yes”. Amid all this inspirational and motivational teaching you will see there is not a single PowerPoint slide in sight. It’s all about naked presenting (well, apart from the odd prop or two) and storytelling. I’ve recently been reading Nancy Duarte’s book Resonate, which looks at how storytelling as done by great writers and film-makers can be used by presenters to really engage with their audience. If you want a book that helps you with presentations and is something other than the boring ‘how to’ guides on structuring PowerPoint presentations, then it’s definitely worth a read.

So what’s this got to do with IT architecture? Nothing and everything! At one level architecture is just a pile of models and diagrams describing ways of solving business problems. However, architecture also needs to be ‘brought alive’ if the ideas it encompasses are to be explained and the costs of implementing it justified to non-technical people. Explaining and presenting architecture is probably one of the most important aspects of the architect’s role, and communication skills should definitely be up there as one of the key competencies possessed by architects. Without these, architectures will just remain a bunch of ideas gathering virtual dust in a modeling tool.

On Lego, Granularity, Jamie Oliver and Architecture

There’s a TV program running here in the UK called Jamie’s Dream School in which the chef, entrepreneur, restaurateur and Sainsbury’s promoter Jamie Oliver brings together some of Britain’s most inspirational individuals (Robert Winston, Simon Callow, Rankin and Rolf Harris to name but four) to see if they can persuade 20 young people who’ve left school with little to show for the experience to give education a second chance. The central theme seems to be one of hands-on student involvement and live demos, the more outrageous the better; the highlight so far being Robert Winston hacking away at rats and pigs with scalpels and a circular saw, resulting in several students vomiting in the playground. This set me thinking about how best to demonstrate software architecture to a bunch of students of a similar age to those in the Jamie Oliver program, for a talk I gave at a UK university last week. Much has been written about the analogy between LEGO (R) and services (see this article from ZapThink and another from ZDNet for example). Okay, it may not be quite as imaginative as pig carcasses being hacked about, but LEGO was the best I had at hand! Here’s how the demo works:

  1. First I give them my favourite definition of architecture, namely: Architects take existing components and assemble them in interesting and important ways. (Seth Godin).
  2. Then I invite an unsuspecting candidate to come and assemble the body (importantly excluding the wheels) of a car out of LEGO (actually Duplo as it’s bigger), making a big thing of tipping a bag of bricks out onto the table. This they usually do without too much hassle, the key learning point being that they have created an “interesting” construct out of “existing components”.
  3. I then ask them to add some wheels and tip out a bag of K’NEX (R). As I’m sure even non-parents know K’NEX and LEGO are essentially different “systems” and the two don’t (easily) connect to each other. This usually ends up in bemused looks and a good deal of fiddling around with wheels and bricks trying to figure out how to make the two systems connect to each other.

Depending on how much time and energy you have as well as the attention span of the students there are lots of great learning points to be made here. In order of increasing depth these are:

  1. LEGO (components) have a well defined interface and can easily be assembled in lots of interesting ways.
  2. K’NEX is a different system and has been designed with a different interface. K’NEX and LEGO were not designed to work with each other. One of the jobs of an architect is to watch out for incompatible interfaces and figure out ways of making them work together, possibly using a third-party product, e.g. Sellotape (R). I guess an extension to this demo could include a roll of it.
  3. It may be in the component provider’s interest to use different interfaces, as doing so results in vendor lock-in, which means you have to keep going back to that vendor for more components.
  4. Granularity (i.e. in the case of LEGO the number of “nobbles”) is important. Small bricks (few nobbles, fine-grained) may be very reusable but you need lots and lots of them to do anything interesting. Conversely LEGO have now taken to quite specialized pieces (not “bricks” any more but large-grained pieces) that perform one function well (the LEGO rocket for example) but cannot be reused so easily. The optimum for re-usability is somewhere in-between these two.
  5. LEGO may be aimed at children who, with relatively little expertise or training, may be able to assemble interesting things but they are not about to build LEGOland. For that you need an architect!
  6. Finally, if you are feeling really mean, disassemble the lovingly built construction of your student then ask her to re-build it in exactly the same way. Chances are that even for a relatively simple system they won’t be able to. What might have helped was some type of document that described what they had done.

I’m sure there are other interesting analogies to be drawn but I’ll finish by saying that this is not quite as trivial as it sounds. Not only was this a good learning exercise for my students I happen to know a client who is using building blocks like LEGO to help their architects architect systems. The key thing it helps demonstrate is the importance of collaboration in assembling such systems.

Skills for Building a Smarter Planet

This is the transcript of a talk I gave to a group of sixth formers, who are considering a career in IT, at a UK university this week. The theme was “What do IT architects do all day”; however, I expanded it into “What will IT architects be doing in the future?”. What I want to do in the next 30 minutes or so is not only tell you what I, as an IT architect, do but what I think you will be doing should you choose to take up a career as an IT architect, and what skills you will need to do the job. In particular I’d like to explain what I mean by this:

Today’s world is full of wicked problems. Solving these problems, and building a smarter planet needs new skills. I believe that IT architects need to be a versatile and adaptive breed of systems thinkers.

Here’s the best explanation I’ve seen of what architects do:

Architects take existing components and assemble them in interesting and important ways. (Seth Godin)

As an example of this consider something that we use every day: the (world-wide) web. Invented by Tim Berners-Lee just 20 short years ago, the web was basically assembled from three components that already existed: hypertext, internet protocols and what are referred to as markup languages. All these things existed; what Tim did was to assemble them in an “interesting” way. So what I do is use IT to try and solve interesting and important business problems by assembling (software) components. I’m not just interested in any problems though; the type of problems that interest me are the “wicked” variety. What do I mean by these?

Wicked problems are ones that you often don’t really understand until you’ve formulated a solution. It’s often not even possible to state clearly what the problem is, and because there is no clear statement of the problem there can be no clear solution, so you never actually know when you are finished. For wicked problems ‘finished’ usually means you have run out of time, money, patience or all three! Further, solutions to wicked problems are not “right” or “wrong”; they tend to be ‘better’, ‘worse’ or just ‘good enough’. Finally, every wicked problem is essentially novel and unique. Because there are usually so many factors involved, no two problems are ever the same and each solution needs a unique approach.

But there’s a problem! Here’s a headline from last year’s Independent newspaper: “Labour’s computer blunders cost £26bn”. What’s going on here? This is your and my money being wasted on failed IT projects. And it’s not just government projects that are failing. Here’s an estimate from the British Computer Society of how many IT projects are actually successful: 20%! How poor is that? IT projects ‘fail’ for many reasons but interestingly it’s rarely for purely technical reasons. More often than not it’s due to poor project and risk management, lack of effective stakeholder management or no clear senior management ownership. So we have a real problem here. As we’ll see in a minute, problems are not only getting harder to fix (more ‘wicked’) but our ability to solve them does not seem to be improving!

So what are these wicked problems I keep talking about? They are many and varied, but a great number of them are attributable to inefficiencies in the “systems” that run the world. Economists estimate that globally we waste $15 trillion of the world’s precious resources each year. Much – if not most – of this inefficiency can be attributed to the fact that we have optimized the way the world works within silos, with little regard for how the processes and systems that drive our planet interrelate. These complex, systemic inefficiencies are interwoven in the interactions among our planet’s core systems. No business, government or institution can solve these issues in isolation. To root out inefficiencies and reclaim a substantial portion of what is lost, businesses, industries, governments and cities will need to think in terms of systems, or more accurately, take a system-of-systems approach. This means we will need to collaborate at unprecedented levels. For example, no single organization owns the world’s food system, and no single entity can fix the world’s healthcare system. Success will depend upon understanding the full set of cause-and-effect relationships that link systems and using this knowledge to create greater synergy. Basically, many of the problems the world faces today are caused by the fact that our systems don’t talk to each other. What do I mean by this? Here’s a simple example to illustrate the point.

Imagine you are driving your car around town trying to find a parking space. You can be sure that somewhere in town there’s a parking meter looking for a car to park next to it. How do we marry your car with that parking meter? Actually the technology to do this pretty much exists already. However the challenge of actually fixing this problem stretches beyond just technology. A solution to this problem includes at least: intelligent sensors, communications, public and private finance, local government involvement, control and policing, as well as well-established open standards.
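
The technology part of this, at its simplest, is a matching problem. Here is a minimal sketch of that core idea (all names and data structures here are my own illustrative assumptions, not any real parking system’s API): pair a searching car with the nearest spot whose sensor reports it as vacant.

```python
import math

# Hypothetical sketch: match a car looking for parking with the nearest
# vacant, sensor-equipped spot. As noted above, the real problem also
# spans finance, policy, policing and open standards - none of which
# appear here.

def nearest_vacant_spot(car_pos, spots):
    """Return the closest spot whose sensor reports it as vacant, or None."""
    vacant = [s for s in spots if s["vacant"]]
    if not vacant:
        return None
    return min(vacant, key=lambda s: math.dist(car_pos, s["pos"]))

spots = [
    {"id": "meter-17", "pos": (0.0, 3.0), "vacant": True},
    {"id": "meter-42", "pos": (1.0, 1.0), "vacant": True},
    {"id": "meter-99", "pos": (0.5, 0.5), "vacant": False},  # occupied
]

print(nearest_vacant_spot((0.0, 0.0), spots)["id"])  # meter-42
```

Even in this toy form you can see where the non-technical questions intrude: who owns the sensor data, who runs the matching service, and what standard the car and the meter use to talk to each other.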

Like I said, from a pure technology point of view we are in pretty good shape to solve problems like this. We now have an unprecedented amount of instrumentation, interconnectedness and intelligence, such that organisations (and societies) can think and act in radically new ways. However, in order to solve problems like the parking one, as well as significantly more ‘wicked’ ones, I believe we need skills that stretch beyond the merely technological. If you are to help solve these problems then you need to be a versatile and adaptive systems thinker. A systems thinker is someone who not only uses her left-brained logical thinking capabilities but also her right-brained creative and artistic capabilities. Here are six attributes (from Dan Pink’s book A Whole New Mind) that a good systems thinker needs to adopt, which I think will help in solving some of the world’s wicked problems:

  • Design – It is no longer sufficient or acceptable to create a product or service that merely does the job. Today it is both economically critical and aesthetically rewarding to create something that is beautiful and emotionally engaging.
  • Story – We are living in a time of information overload. If you want your sales pitch or point of view to be heard above the cacophony of background noise that is out there you have to create a compelling narrative.
  • Symphony – We live in a world of silos. Siloed processes, siloed systems and siloed societies. Success in business and in life is about breaking down these silos and pulling all the pieces together. It’s about synthesis rather than analysis.
  • Empathy – Our capacity for logical thought has gone a very long way to creating the technological society we live in today. However, in a world of ubiquitous information available at the touch of a button, logic alone will no longer cut the mustard. In order to thrive we need to understand what makes our fellow humans tick, really get beneath their skin, and forge new relationships.
  • Play – In a world where we are all having to meet targets, pass tests and achieve the right grades in order to get on, it is easy to forget the importance of play. There is a lot of evidence of the benefits of play to our health and general well-being, not only outside work but also inside.
  • Meaning – We live in a world of material plenty but spiritual scarcity. Seeking a meaning in life that transcends “things” is vital if we are to achieve some kind of personal fulfilment.

A Gartner report published in 2005 predicted that by 2010 IT professionals would need to possess expertise in multiple domains. Technical aptitude alone will no longer be enough. IT professionals must prove they can understand business realities – industry, core processes, customer bases, regulatory environment, culture and constraints. Versatility will be crucial. It predicted that by 2011, 70 percent of leading-edge companies would seek and develop “versatilists” while deemphasizing specialists.

Versatilists are people whose numerous roles, assignments and experiences enable them to synthesise knowledge and context in ways that fuel business value. Versatilists play different roles than specialists or generalists. Specialists generally have deep technical skills and narrow scope, giving them expertise that is recognized by peers, but it is seldom known outside their immediate domains. Generalists have broad scope and comparatively shallow skills, enabling them to respond or act reasonably quickly, but often at a cursory level. Versatilists, in contrast, apply depth of skill to a rich scope of situations and experiences, building new alliances, perspectives, competencies and roles. They gain the confidence of peers and partners. To attain versatilist skills, IT professionals should:

  • Look outside the confines of current roles, regions, employers or business units. The more informed a professional is about a company, its industry segment and the forces that affect it, the greater the contextual grasp.
  • Lay out opportunities and assignments methodically. Focus on the areas and challenges that fall outside the comfort zone; those areas generally will be the areas of greatest growth.
  • Explore possibilities outside the world of corporate business. Not-for-profit ventures, startup companies, government agencies and consumer IT service providers offer powerful ways to bolster experiences, behavioral competencies or management skills.
  • Enroll in advanced degree programs or in qualified education courses to expand perspective.
  • Identify companies, projects, assignments, education and training that will increase professional value.

I believe we’ve only just begun to scratch the surface of what’s possible on a “smarter planet”. However if we are to really address the truly wicked problems that are out there in order to make our world a better, and maybe even a fairer place, we need people like you to make it happen.

Finally you might be tempted in these hard economic times when you are being asked to pay outrageous amounts for your education not to bother with university. However bear this in mind:

“Unskilled labor is what you call someone who merely has skills that most everyone else has. If it’s not scarce, why pay extra? Skills matter. The unemployment rate for US workers without a college education is almost triple that for those with one. Even the college rate is still too high, though.  On the other hand, the unemployment rate for skilled neurosurgeons, talented database designers and motivated recombinant DNA biologists is essentially zero, despite the high pay in all three fields. Unskilled now means not-specially skilled”.

The only real investment you have for the future is the piece of grey matter between your ears. Make sure you continue to nurture and nourish it throughout your life by stimulating both the left and right sides.

Thank you and good luck with whatever path you choose to take in life.

The Next Generation?

Demographers, social scientists and new media watchers are fond of dividing people into generations based on what recent (i.e. post-World War II) period of history they were born in. Whilst there are no consistent definitions of when these generations begin and end they roughly fall into these periods:

  • Baby-boomers: 1940 – 1960. Those born during the post–World War II demographic boom in births. This generation more than any other rejected the moral and religious beliefs of their parents and created their own sets of values. This is the generation that invented sex, drugs and rock’n’roll and is still largely the one ruling the roost, so to speak (President Obama, born in 1961, catches the tail end of this particular demographic).
  • Generation X (post boomers): 1960 – 1980. This term was apparently coined by the great Magnum photographer Robert Capa in the early 1950s. He used it as the title for a photo-essay about young men and women growing up immediately after the Second World War. Sometimes referred to as the “unknown” or “lost” generation, this group has signified people without identity who face an uncertain, ill-defined (and perhaps hostile) future. This is the generation that grew up during the fall of the Berlin Wall, the end of the Cold War and various economic crises (such as the 1979 oil crisis) and were most likely to be the children of divorced parents.
  • Generation Y (the Millennial generation): 1980 – 2000. This is the culturally liberal generation that witnessed the start and widespread adoption of the internet and are the children of baby-boomers. This is the generation that owns, and is most comfortable using, computers, mobile phones and MP3 players.

So what is the generation born during the last 10 years and possibly the next 10 to be called? The obvious name would be “Generation Z”, although this would mean we had run out of letters already and so would have problems naming the post-2020 generation. Rather than following the obvious trend, therefore, how about naming this upcoming generation, who will be entering the higher education system and workforce during the next 10 years, “Generation V”: the versatilist generation? These are the people, more than any others, who will need to adopt a whole new set of skills if they are to survive and prosper during their lifetimes. These are the ones who will be suffering the after-shocks of the baby-boom, X and Y generations and who will need to fix the wicked problems those generations have left in their wake. This is the generation that will probably have more jobs, in their lifetime, than the other three generations put together and who will, as Daniel Pink has suggested, have to survive in a world dominated by the three A’s:

  • Automation – Jobs can be done faster and more efficiently by computers.
  • Abundance – We have more stuff than we know what to do with and it is increasingly being produced at cheaper and cheaper rates.
  • Asia (or Africa) – More and more work is outsourced to these low cost economies.

The skills that this generation will need to adopt will be many and varied and include:

  • Objectively viewing experiences and roles, learning from these (failures as well as successes) and using this knowledge to gain new roles.
  • Looking outside the confines of current roles, regions, employers or business units. The more informed a professional is about a company, its industry segment and the forces that affect it, the greater chance will the person have to predict and survive economic downturns.
  • Laying out opportunities and assignments methodically. Focusing on the areas and challenges that fall outside the comfort zone; those areas generally will be the areas of greatest growth.
  • Exploring possibilities outside the world of large, corporate business. Charities, startup companies, government agencies, even your own web-startup offer new and interesting ways to build experiences, learn new skills and maybe even modify behaviours.
  • Enrolling in advanced education courses to expand perspective, preferably outside your current discipline and area of expertise.
  • Targeting companies, projects, assignments, education and training courses that will increase professional value and make you more marketable.

Sadly, Gartner seem to have coined the term “Generation V” already, where V is for virtual. A pity, as they also coined the term “virtualist”; a missed opportunity, I reckon.

Is Agile Architecture an Oxymoron?

Much has been written by many great minds on agile development processes and how (or indeed whether) architecture fits into such processes. People like:

  • Scott Ambler (Architecture Envisioning: An Agile Best Practice). “Some people will tell you that you don’t need to do any initial architecture modeling at all.  However, my experience is that doing some initial architectural modeling in an agile manner offers several benefits”.
  • Mike Cohn (Succeeding with Agile). “On an architecturally complicated or risky project, the architect will need to work closely with the product owner to educate the product owner about the architectural implications of items on the product backlog”.
  • Walker Royce (Top 10 Management Principles of Agile Software Delivery) : “Reduce uncertainties by addressing architecturally significant decisions first”.
  • Andrew Johnston (Role of the Agile Architect): “In an agile development the architect has the main responsibility to consider change and complexity while the other developers focus on the next delivery”.
  • Martin Fowler (Is Design Dead?):  “XP does give us a new way to think about effective design because it has made evolutionary design a plausible strategy again”.

I’ve been contemplating how the discipline of architecture, as well as the role of the architect, fits with the agile approach to developing systems whilst reviewing the systems development lifecycle (SDLC) for a number of clients of late. In my experience most people have an “either-or” approach when it comes to SDLCs. They either do waterfall or do agile and have some criteria which they use to decide between the two approaches on a project-by-project basis. Unfortunately these selection criteria are often biased toward the prejudices of the person writing them, and will push that person’s favourite approach.

Rather than treating agile and waterfall as mutually exclusive I would prefer to adopt a layered approach to defining SDLCs, as shown here.

The three layers have the following characteristics:

  1. Basic Process. Assume all projects adopt the simplest approach to delivery possible (but no simpler). For most software product development projects this will amount to an agile approach like Scrum, which uses iterations (sprints) and a simplified set of roles, namely: Product Owner, Scrum Master and Team, where the Team is made up of people adopting multiple roles (architect, programmer, tester etc). On such projects decide up front which artefacts you want to adopt. These don’t need to be heavyweight documents but could be contained in tools or captured as sketches. Here the team member performing the architect role needs to manage an “emergent architecture”. The role of architect may be shared rather than dedicated to a single individual.
  2. Complex Process. At the next level of complexity, where multiple products need to be built as part of a single program of work and have dependencies between them, some level of governance usually needs to be in place to ensure everything comes together at the right time and is of a consistent level of quality. At a micro-level this can be done using a scrum-of-scrums approach, where a twice- or thrice-weekly scrum brings together all the Scrum Masters. Here the architect role is cross-product, as it needs to ensure all products fit together (including, possibly, third-party products and packages). This may still be a shared role but is more likely to be a dedicated individual. This may involve some program-level checkpoints that at least ensure all iterations have created a shippable product that is ready to integrate or deploy. The architecture is not just emergent any more but may also need to be “envisioned” up front so everyone understands where their product fits with others.
  3. Complex Integration Process. The final level of complexity arises when not only are multiple products being developed but existing systems, which have complex (or poorly understood) interfaces, also need to be incorporated. Here the role of the architect is not only cross-product but cross-system, as she has to deal with the complexity of multiple systems, some of which may need to be changed by people other than the core development team. Here the level of ceremony, as well as the number of artefacts needed to control and manage the decisions around the complexity, will increase. The architecture is certainly not emergent any more and needs to be “envisioned” up front so everyone understands where their product fits and what interfaces they need to work with. This is something Scott Ambler suggests happens during “iteration zero”.
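
The decision logic behind the three layers above can be sketched in a few lines. This is purely illustrative (the function name, inputs and thresholds are my own assumptions, not part of any published method): pick the lightest layer that fits the project’s characteristics.

```python
# Hypothetical sketch: map a project's characteristics onto the three
# process layers described above. The inputs and thresholds are
# illustrative assumptions only.

def process_layer(num_products, has_complex_existing_interfaces):
    """Pick the lightest process layer that fits the project."""
    if has_complex_existing_interfaces:
        # Existing systems with complex or poorly understood interfaces:
        # cross-system architect, architecture envisioned up front
        # ("iteration zero").
        return "Complex Integration Process"
    if num_products > 1:
        # Multiple dependent products: scrum-of-scrums governance and a
        # cross-product architect, with some up-front envisioning.
        return "Complex Process"
    # Single product: emergent architecture, simplest thing that works.
    return "Basic Process"

print(process_layer(1, False))  # Basic Process
print(process_layer(3, False))  # Complex Process
print(process_layer(3, True))   # Complex Integration Process
```

The point of writing it this way round, checking for the heaviest case first but defaulting to the lightest, is that a project only ever earns extra ceremony; it never starts with it.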

Each of these layers is built on a common architectural as well as process “language” so everyone has a common understanding of terms used in the project. I’m much in agreement with Mike Cohn’s comment on his blog recently where, in reflecting on the tenth anniversary of the Agile Manifesto he says: “The next change I’d like to see (and predict will occur) over the next ten years also occurred in the OO world: We stop talking about agile” and goes on to say “Rather than “agile software development” it is just “software development”—and of course it’s agile”.

I would like to see an agile or lean based approach as the de facto standard, one that only adds additional artefacts or project checkpoints as needed, rather than thinking that every time we need to choose between an agile approach and a waterfall approach.