EA Wars

Lots of great discussion in the blogosphere right now on the relevance of Enterprise Architecture in the brave new world of the ‘extended’ enterprise, and whether architecture is something that is planned or ’emerges’. This is largely prompted, I suspect, by the good folk at ZapThink asking Why Nobody is Doing Enterprise Architecture (possibly to plant seeds of doubt in the minds of CIOs and send them rushing to one of their “licensed ZapThink architecture courses”). For a nice, succinct and totally dismissive riposte to the ZapThink article check out David Sprott’s blog entry here. For a more reasoned and skeptical discussion on whether emergent architecture actually exists (I think it does, see below) read Richard Veryard’s blog entry here. However, despite some real FUD’iness on the part of ZapThink, there are some elements in their argument that definitely ring true and which I have observed in a number of clients I have worked with over the last few years. In particular:

  • Emergence (i.e. the way complex systems and patterns arise out of a multiplicity of relatively simple interactions) is, like it or not, a definite factor in the architecture of modern-day enterprises. This is especially true when the human dimension is factored into the mix. The current Gen Y and upcoming Gen V are not going to hang around while the EA department figure out how to factor their 10th generation iPhone, which offers 3-D holographic body-time, into an EA blueprint. They are just going to bypass the current systems and use it regardless. The enterprise had better quickly figure out the implications of such devices (whatever they turn out to be) or risk becoming a technological backwater.
  • EA departments seem to very quickly become disjoint from both the business which they should be serving and the technicians whom they should be governing. One is definitely tempted to ask “who is governing the governors?” when it comes to the EA department. Accountability in many organisations definitely seems to be lacking. This feels to me like another example of the gapology that seems to be increasingly apparent in such organisations.
  • Even though we are better placed than ever to capture good methodological approaches to systems development, I still see precious little adoption of true systems development lifecycles (SDLCs) in organisations. Admittedly methods have had very bad press over the years: they are often seen as being an unnecessary overhead and, with the rise of agile, have been pushed into the background as something an organisation must have in order to be ISO 9000 compliant or whatever, while everyone really just gets on with it and ignores all that stuff.
  • Finally, as with many things in IT, the situation has been confused by having multiple and overlapping standards and frameworks in the EA space (TOGAF, Zachman and MODAF to name but three). Whilst none of these may be perfect I think the important thing is to go with one and adapt it to what works for your organisation. What we should not be doing is inventing more frameworks (and standards bodies to promote them). As with developing an EA itself, the approach an EA department should take to selecting an EA framework is to start small and grow on an as-needed basis.

Applying Architectural Tactics

The use of architectural tactics, as proposed by the Software Engineering Institute, provides a systematic way of dealing with a system’s non-functional requirements (sometimes referred to as the system’s quality attributes, or just qualities). These can be both runtime qualities such as performance, availability and security as well as non-runtime qualities such as maintainability, portability and so on. In my experience, dealing with both functional and non-functional requirements, as well as capturing them using a suitable modeling tool, is something that is not always handled very methodically. Here’s an approach that tries to enforce some architectural rigour using the Unified Modeling Language (UML) and any UML compliant modeling tool.

Architecturally, systems can be decomposed from an enterprise or system-wide view (i.e. meaning people, processes, data and IT systems), to an IT system view, to a component view and finally to a sub-component view, as shown going clockwise in Figure 1. These diagrams show how an example hotel management system (something I’ve used before to illustrate some architectural principles) might eventually be decomposed into components and sub-components.

Figure 1: System Decomposition

This decomposition typically happens by considering what functionality needs to be associated with each of the system elements at different levels of decomposition. So, as shown in Figure 1 above, first we associate ‘large-grained’ functionality (e.g. we need a hotel system) at the system level and gradually break this down to finer and finer grained levels until we have attributed all functionality across all components (e.g. we need a user interface component that handles the customer management aspects of the system).

Crucially from the point of view of deployment of components we need to have decomposed the system to at least that of the sub-component level in Figure 1 so that we have a clear idea of each of the types of component (i.e. do they handle user input or manage data etc) and know how they collaborate with each other in satisfying use cases. There are a number of patterns which can be adopted for doing this. For example the model-view-controller pattern as shown in Figure 2 is a way of ascribing functionality to components in a standard way using rules for how these components collaborate. This pattern has been used for the sub-component view of Figure 1.

Figure 2: Model-View-Controller Pattern
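The division of responsibilities in Figure 2 can be sketched in a few lines of code. This is a minimal illustration only, not the hotel system's actual design; all class and method names below are made up for the example:

```python
# Minimal model-view-controller sketch for the customer-management
# slice of a hotel system. All names here are illustrative.

class CustomerModel:
    """Model: owns customer data and nothing else."""
    def __init__(self):
        self._customers = {}

    def create(self, customer_id, name):
        self._customers[customer_id] = name

    def read(self, customer_id):
        return self._customers.get(customer_id)


class CustomerView:
    """View: renders whatever data it is handed; holds no state."""
    def render(self, name):
        return f"Customer: {name}" if name else "Customer not found"


class CustomerController:
    """Controller: mediates between the view and the model."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def show_customer(self, customer_id):
        return self.view.render(self.model.read(customer_id))


model = CustomerModel()
model.create(42, "A. Guest")
controller = CustomerController(model, CustomerView())
print(controller.show_customer(42))   # Customer: A. Guest
```

The point of the pattern is the rule for collaboration: the view never touches the model directly, so either side can be swapped out independently.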

So far we have shown how to decompose a system based on functional requirements and thinking about which components will realise those requirements. What about non-functional requirements though? Table 1 shows how non-functional requirements can be decomposed and assigned to architectural elements as they are identified. Initially non-functional requirements are stated at the whole system level but as we decompose into finer-grained architectural elements (AKA components) we can begin to think about how those elements support particular non-functional requirements also. In this way non-functional requirements get decomposed and associated with each level of system functionality. Non-functional requirements would ideally be assigned as attributes to each relevant component (preferably inside our chosen UML modelling tool) so they do not get lost or forgotten.

Table 1
System Element Non-Functional Requirement
Hotel System (i.e. including all actors and IT systems). The hotel system must allow customers to check-in 24 hours a day, 365 days a year. Note this is typically the level of precision at which non-functional requirements are initially stated. Further analysis is usually needed to provide measurable values.
Hotel Management System (i.e. the hotel IT system). The hotel management system must allow the front-desk clerk to check-in a customer 24 hours a day, 365 days a year with a 99.99% availability value.
Customer Manager (i.e. a system element within the hotel’s IT system). The customer manager system element (component) must allow customer details to be created, read or updated (but not deleted) 24 hours a day, 365 days a year with a 99.99% availability value.
Customer Manager Interface (i.e. the user interface that belongs to the Customer Manager system element). The customer manager interface must allow customer details to be created, read or updated (but not deleted) 24 hours a day, 365 days a year with a 99.99% availability value.
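The flow-down of requirements in Table 1 can be mimicked in a small modelling script. The sketch below is illustrative only: the `Component` class and its inheritance rule are assumptions made for the example, not a feature of any particular UML tool.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A system element with non-functional requirements attached
    as attributes, mirroring how Table 1 assigns them per level."""
    name: str
    nfrs: dict = field(default_factory=dict)    # e.g. {"availability": 0.9999}
    children: list = field(default_factory=list)

    def decompose(self, child):
        # A child element inherits the parent's NFRs unless it
        # overrides them, so requirements flow down the decomposition.
        child.nfrs = {**self.nfrs, **child.nfrs}
        self.children.append(child)
        return child

hotel = Component("Hotel Management System", {"availability": 0.9999})
mgr = hotel.decompose(Component("Customer Manager"))
ui = mgr.decompose(Component("Customer Manager Interface"))
print(ui.nfrs["availability"])   # 0.9999
```

Keeping the requirements on the elements themselves, rather than in a separate document, is what stops them getting lost or forgotten.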

Once it is understood what non-functional requirement each component needs to support we can apply the approach of architectural tactics proposed by the Software Engineering Institute (SEI) to determine how to handle those non-functional requirements.

An architectural tactic represents “codified knowledge” on how to satisfy non-functional requirements by applying one or more patterns or reasoning frameworks (for example queuing or scheduling theory) to the architecture. Tactics show how (the parameters of) a non-functional requirement (e.g. the required response time or availability) can be addressed through architectural decisions to achieve the desired capability.

In the example we are focusing on in Table 1 we need some tactics that allow the desired quality attribute of 99.99% availability (which corresponds to a downtime of 52 min, 34 sec per year) to be achieved by the customer manager interface. A detailed set of availability tactics can be found here but for the purposes of this example availability tactics can be categorized according to whether they address fault detection, recovery, or prevention. Here are some potential tactics for these:

  • Employing good software engineering practices for fault prevention such as code inspections, usability testing and so on to the design and implementation of the interface.
  • Deploying components on highly-available platforms which employ fault detection and recovery approaches such as system monitoring, active failover etc.
  • Developing a backup and recovery approach that allows the platform running the user interface to be replaced within the target availability times.
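The downtime figure quoted above falls straight out of the availability percentage. A quick sketch of the arithmetic:

```python
def annual_downtime(availability):
    """Return (minutes, seconds) of permitted downtime per year
    for a given availability fraction, e.g. 0.9999 for 'four nines'."""
    down_seconds = (1 - availability) * 365 * 24 * 3600
    minutes, seconds = divmod(down_seconds, 60)
    return int(minutes), round(seconds)

print(annual_downtime(0.9999))   # (52, 34)  i.e. 52 min 34 sec per year
print(annual_downtime(0.999))    # (525, 36) 'three nines' by comparison
```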

As this example shows, not all non-functional requirements can be realised suitably by a component alone; sometimes full-realisation can only be done when that component is placed (deployed) onto a suitable computer platform. Once we know what non-functional requirements need to be realised by what components we can then think about how to package these components together to be deployed onto the appropriate computer platform which supports those non-functional requirements (for example on a platform that will support 99.99% availability and so on). Figure 3 shows how this deployment can be modelled in UML adopting the Hot Standby Load Balancer pattern.

Figure 3: Deployment View

Here we have taken one component, the ‘Customer Manager’, and shown how it would be deployed with other components (a ‘Room Manager’ and a ‘Reservation Manager’) that support the same non-functional requirements onto two application server nodes. A third UML element, an artefact, packages together like components via a UML «manifest» relationship. It is the artefact that actually gets placed onto the nodes. An artefact is a standard UML element that “embodies or manifests a number of model elements. The artefact owns the manifestations, each representing the utilization of a packageable element”.

So far all of this has been done at a logical level; that is, there is no mention of technology. However moving from a logical level to a physical (technology-dependent) level is a relatively simple step. The packaging notion of an artefact can equally be used for packaging physical components; for example, in this case the three components shown in Figure 3 above could be Enterprise Java components or .NET components.
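One way to think about this packaging step is as grouping components by the non-functional requirements they share, so each resulting artefact can be placed on a platform that supports those requirements. The sketch below is purely illustrative; the first three component names echo Figure 3 and the ‘Report Generator’ is invented for contrast:

```python
from collections import defaultdict

# Components and their (illustrative) non-functional requirements.
components = {
    "Customer Manager":    {"availability": 0.9999},
    "Room Manager":        {"availability": 0.9999},
    "Reservation Manager": {"availability": 0.9999},
    "Report Generator":    {"availability": 0.99},   # less critical
}

# Components with identical NFRs manifest in the same artefact.
artefacts = defaultdict(list)
for name, nfrs in components.items():
    key = tuple(sorted(nfrs.items()))
    artefacts[key].append(name)

for key, members in artefacts.items():
    print(dict(key), "->", sorted(members))
```

Here the three ‘four nines’ components end up in one artefact destined for the highly-available platform, while the report generator can be deployed somewhere cheaper.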

This is a simple example to illustrate three main points:

  1. Architecting a system based on functional and non-functional requirements.
  2. Use of a standard notation (i.e. UML) and modelling tool.
  3. Adoption of tactics and patterns to show how a system’s qualities can be achieved.

None of it is rocket science but it’s something you don’t see done much.

Oops There Goes Our Reputation

I’m guessing that up to two weeks ago most people, like me, had never heard of a company called Epsilon. Now, unfortunately for them, too many people know of them for all the wrong reasons. If you sign up to any services from household names such as Marks and Spencer, Hilton, Marriott or McKinsey you will have probably had several emails in the last two weeks advising you of a security breach which led to “unauthorized entry into Epsilon’s email system”. Unfortunately because Epsilon is a marketing vendor that manages customer email lists for these and other well known household brands, chances are your email has been obtained by this unauthorised entry as well. Now, it just might be a pure coincidence, but in the last two weeks I have also received emails from the Chinese government inviting me to a conference on some topic I’ve never heard of, from Kofi Annan, ex-Secretary General of the United Nations, and from a lady in Nigeria asking for my bank account details so she can deposit $18.4M into the account so she can leave the country! According to the information on Epsilon’s web site the information that was obtained was limited to email addresses and/or customer names only. So, should we be worried by this and what are the implications for the architecture of such systems?

I think we should be worried for at least three reasons:

  1. Whilst the increased spam that is seemingly inevitable following an incident such as this is mildly annoying a deeper concern is how could the criminal elements who now have information on the places I do business on the web put this information together to learn more about me and possibly construct a more sophisticated phishing attack? Unfortunately it’s not only the good guys that have access to data analytics tools.
  2. Many people probably have a single password to access multiple web sites. The criminals who now have your email as well as knowledge of which sites you do business at only have to crack one of these and potentially have access to multiple sites, some of which may have more sensitive information.
  3. Finally how come information I trusted to well known (and by implication ‘secure’) brands and their web sites has been handed over to a third party without me even knowing about it? Can I trust those companies not to be doing this with more sensitive information, and should I be withdrawing my business from them? This is a serious breach of trust and I suspect that many of these brands’ own reputations will have been damaged.

So what are the impacts to us as IT architects in a case like this? Here are a few:

  1. As IT architects we make architectural decisions all the time. Some of these are relatively trivial (I’ll assign that function to that component etc) whereas others are not. Clearly decisions about which part of the system to entrust personal information to are not trivial. I always advocate documenting significant architectural decisions in a formal way where all the options you considered are captured as well as the rationale and implications behind the decision you made. As our systems get ever more complex and distributed the implications of particular decisions become harder to quantify. I wonder how many architects consider the implications to a company’s reputation of entrusting even seemingly low grade personal information to third parties?
  2. It is very likely that incidents such as this are going to result in increased legislation that covers personal information just like there is legislation on Payment Card Industry (PCI) standards. This will demand more architectural rigour as new standards essentially impose new constraints on how we design our systems.
  3. As we trundle slowly to a world where more and more of our data is to be held in the cloud using a so called multi-tenant deployment model it’s surely only a matter of time before unauthorised access to one of our cloud data stores will result in access to many other data sources and a wealth of our personal information. What is needed here is new thinking around patterns of layered security that are tried and tested and, crucially, which can be ‘sold’ to consumers of these new services so they can be reassured that their data is secure. As Software-as-a-Service (SaaS) takes off and new providers join the market we will increasingly need to be reassured they are to be trusted with our personal data. After all if we cannot trust existing, large corporations how can we be expected to trust new, small startups?
  4. Finally I suspect that it is only a matter of time before legislation aimed at systems designers themselves is enacted that makes us as IT architects liable for some of those architectural decisions I mentioned earlier. I imagine there are several lawyers engaged by the parties whose customers’ email addresses were obtained and whose trust and reputation with those customers may now be compromised. I wonder if some of those lawyers will be thinking about the design of such systems in the first place and, by implication, the people who designed those systems?

Open vs. Closed Architectures

There has been much Apple bashing in cyberspace as well as the ‘dead-wood’ parts of the press of late. To the extent that some people are now turning on those that own one of Apple’s wunder-devices (an iPad), accusing them of being “selfish elites“. Phew! I thought it was a typically British trait to knock anything and anyone that was remotely successful but it now seems that the whole world has it in for Mr Jobs’ empire. Back in the pre-Google days of 1994 Umberto Eco declared that

the Macintosh is Catholic and that DOS is Protestant. Indeed, the Macintosh is counter-reformist and has been influenced by the ratio studiorum of the Jesuits. It is cheerful, friendly, conciliatory; it tells the faithful how they must proceed step by step to reach — if not the kingdom of Heaven — the moment in which their document is printed.

The big gripe most people have with Apple is their closed architecture, which controls not only who is allowed to write apps for their OS’s but who can produce devices that actually run those OS’s (er, that would be Apple). It’s one of life’s great anomalies as to why Apple is so successful in building products with closed architectures when most everyone would agree that open architectures and systems are ultimately the way to go: in the end they lead to greater innovation, wider usage and, presumably, more profit for those involved. The classic case of an open architecture leading to wide-spread usage is that of the original IBM Personal Computer. Because IBM wanted to fast-track its introduction many of the parts were, unusually for IBM, provided by third parties including, most significantly, the processor (from Intel) and the operating system (from the fledgling Microsoft). This, together with the fact that the technical information on the innards of the computer was made publicly available, essentially made the IBM PC ‘open’. This more than anything gave it an unprecedented penetration into the marketplace, allowing many vendors to provide IBM PC ‘clones’.

There is of course a ‘dark side’ to all of this. Thousands of vendors all providing hardware add-ons and extensions as well as applications resulted in huge inter-working problems which, in the early days at least, required you to be something of a computer engineer if you wanted to get everything working together. This is where Apple stepped in. As Umberto Eco said, Apple guides the faithful every step of the way. What they sacrifice in openness and choice they gain in everything working out of the box, sometimes in three simple steps.

So, is open always best when it comes to architecture or does it sometimes pay to have a closed architecture? What does the architect do when faced with such a choice? Here’s my take:

  • Know your audience. The early PCs, like it or not, were bought by technophiles who enjoyed technology for the sake of technology. The early Macs were bought by people who just wanted to use computers to get the job done. In those days both had a market.
  • Know where you want to go. Apple stuck solidly with creating user-friendly (not to mention well designed) devices that people would want to own and use. The plethora of PC providers (which there soon were) couldn’t by and large give a damn about design. They just wanted to sell as many devices as possible and let others worry about how to stitch everything together. This in itself generated a huge industry which in a strange self-fulfilling way led to more devices and world domination of the PC, and left Apple in a niche market. Openness certainly seemed to be paying.
  • Know how to capitalise on your architectural philosophy. Ultimately openness leads to commoditization. When anyone can do it price dominates and the cheapest always wins. If you own the space then you control the price. Apple’s recent success has been not to capitalise on an open architecture but to capitalise on good design, which has enabled it to create high value, desirable products showing that good design trounces an open architecture.

So how about combining the utility of an open architecture with the significance of a well thought through architecture to create a great design? Which funnily enough is what Dan Pink meant by this:

Significance + Utility = Design

Huh, beaten to a good idea again!

On Lego, Granularity, Jamie Oliver and Architecture

There’s a TV program running here in the UK called Jamie’s Dream School where the chef, entrepreneur, restaurateur and Sainsbury’s promoter Jamie Oliver brings together some of Britain’s most inspirational individuals (Robert Winston, Simon Callow, Rankin and Rolf Harris to name but four) to see if they can persuade 20 young people who’ve left school with little to show for the experience to give education a second chance. The central theme seems to be one of hands-on student involvement and live demos, the more outrageous the better. The highlight so far being Robert Winston hacking away at rats and pigs with scalpels and a circular saw, resulting in several students vomiting in the playground. This set me thinking about how best to demonstrate software architecture to a bunch of students of a similar age to those in the Jamie Oliver program for a talk I gave at a UK university last week. Much has been written about the analogy between LEGO (R) and services (see this article from Zapthink and another from ZDNet for example). Okay it may not be quite as imaginative as pig carcasses being hacked about but LEGO was the best I had at hand! Here’s how the demo works:

  1. First I give them my favourite definition of architecture, namely: Architects take existing components and assemble them in interesting and important ways. (Seth Godin).
  2. Then I invite an unsuspecting candidate to come and assemble the body (importantly excluding the wheels) of a car out of LEGO (actually Duplo as it’s bigger), making a big thing of tipping a bag of bricks out onto the table. This they usually do without too much hassle, the key learning point being that they have created an “interesting” construct out of “existing components”.
  3. I then ask them to add some wheels and tip out a bag of K’NEX (R). As I’m sure even non-parents know K’NEX and LEGO are essentially different “systems” and the two don’t (easily) connect to each other. This usually ends up in bemused looks and a good deal of fiddling around with wheels and bricks trying to figure out how to make the two systems connect to each other.

Depending on how much time and energy you have as well as the attention span of the students there are lots of great learning points to be made here. In order of increasing depth these are:

  1. LEGO (components) have a well defined interface and can easily be assembled in lots of interesting ways.
  2. K’NEX is a different system and has been designed with a different interface. K’NEX and LEGO were not designed to work with each other. One of the jobs of an architect is to watch out for incompatible interfaces and figure out ways of making them work with each other. This might be done using a third-party product, e.g. Sellotape (R). I guess an extension to this demo could be a roll of this.
  3. It may be in the component provider’s interest to use different interfaces as it results in vendor lock-in, which means you have to keep going back to that vendor for more components.
  4. Granularity (i.e. in the case of LEGO the number of “nobbles”) is important. Small bricks (few nobbles, fine-grained) may be very reusable but you need lots and lots of them to do anything interesting. Conversely LEGO have now taken to quite specialized pieces (not “bricks” any more but large-grained pieces) that perform one function well (the LEGO rocket for example) but cannot be reused so easily. The optimum for re-usability is somewhere in-between these two.
  5. LEGO may be aimed at children who, with relatively little expertise or training, may be able to assemble interesting things but they are not about to build LEGOland. For that you need an architect!
  6. Finally, if you are feeling really mean, disassemble the lovingly built construction of your student then ask her to re-build it in exactly the same way. Chances are that even for a relatively simple system they won’t be able to. What might have helped was some type of document that described what they had done.
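For any programmers in the audience, the Sellotape trick in point 2 above is essentially the adapter pattern: wrap one system's interface so that it looks like the other's. The classes and the stud-per-millimetre conversion below are entirely made up for illustration:

```python
# Adapter pattern sketch: making K'NEX parts usable wherever a
# LEGO-style interface is expected. All names are illustrative.

class LegoBrick:
    def stud_count(self):
        return 4

class KnexRod:
    def length_mm(self):
        return 32

class KnexToLegoAdapter:
    """Presents a K'NEX rod through a LEGO-style interface."""
    def __init__(self, rod):
        self.rod = rod

    def stud_count(self):
        # Hypothetical conversion rule: one stud per 8 mm of rod.
        return self.rod.length_mm() // 8

def total_studs(parts):
    # Works on anything exposing the LEGO interface, adapted or not.
    return sum(p.stud_count() for p in parts)

parts = [LegoBrick(), KnexToLegoAdapter(KnexRod())]
print(total_studs(parts))   # 8
```

The client code (`total_studs`) never needs to know a K'NEX part is involved, which is exactly the point of the tactic.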

I’m sure there are other interesting analogies to be drawn but I’ll finish by saying that this is not quite as trivial as it sounds. Not only was this a good learning exercise for my students; I also happen to know a client who is using building blocks like LEGO to help their architects architect systems. The key thing it helps demonstrate is the importance of collaboration in assembling such systems.

Skills for Building a Smarter Planet

This is the transcript of a talk I gave to a group of sixth formers, who are considering a career in IT, at a UK university this week. The theme was “What do IT architects do all day” however I expanded it into “What will IT architects be doing in the future?”. What I want to do in the next 30 minutes or so is not only tell you what I, as an IT architect, do but what I think you will be doing should you choose to take up a career as an IT architect and what skills you will need to do the job. In particular I’d like to explain what I mean by this:

Today’s world is full of wicked problems. Solving these problems, and building a smarter planet needs new skills. I believe that IT architects need to be a versatile and adaptive breed of systems thinkers.

Here’s the best explanation I’ve seen of what architects do:

Architects take existing components and assemble them in interesting and important ways. (Seth Godin)

As an example of this consider something that we use every day, the (world-wide) web. Tim Berners-Lee invented the web just 20 short years ago, basically assembling it from three components that already existed: hypertext, internet protocols and what are referred to as markup languages. All these things existed; what Tim did was to assemble them in an “interesting” way. So what I do is to use IT to try and solve interesting and important business problems by assembling (software) components. I’m not just interested in any problems though; the type of problems that interest me are the “wicked” variety. What do I mean by these?

Wicked problems are ones that you often don’t really understand until you’ve formulated a solution to them. It’s often not even possible to really state what the problem is, and because there is no clear statement of the problem, there can be no clear solution, so you never actually know when you are finished. For wicked problems ‘finished’ usually means you have run out of time, money, patience or all three! Further, solutions to wicked problems are not “right” or “wrong”. Wicked problems tend to have solutions which are ‘better’, or maybe ‘worse’ or just ‘good enough’. Finally, every wicked problem is essentially novel and unique. Because there are usually so many factors involved in a wicked problem no two problems are ever the same and each solution needs a unique approach.

But there’s a problem! Here’s a headline from last year’s Independent newspaper: “Labour’s computer blunders cost £26bn”. What’s going on here? This is your and my money being wasted on failed IT projects. And it’s not just government projects that are failing. Here’s an estimate from the British Computer Society of how many IT projects are actually successful. 20%! How poor is that? IT projects ‘fail’ for many reasons but interestingly it’s rarely for just technical reasons. More often than not it’s due to poor project and risk management, lack of effective stakeholder management or no clear senior management ownership. So we have a real problem here. As we’ll see in a minute, problems are not only getting harder to fix (more ‘wicked’) but our ability to solve them does not seem to be improving!

So what are these wicked problems I keep talking about? They are many and varied, but most of them are attributable to inefficiencies that exist in the “systems” that exist in the world. Economists estimate that globally we waste $15 trillion of the world’s precious resources each year. Much – if not most – of this inefficiency can be attributed to the fact that we have optimized the way the world works within silos, with little regard for how the processes and systems that drive our planet interrelate. These complex, systemic inefficiencies are interwoven in the interactions among our planet’s core systems. No business, government or institution can solve these issues in isolation. To root out inefficiencies and reclaim a substantial portion of that which is lost, businesses, industries, governments and cities will need to think in terms of systems, or more accurately, a system of systems approach. This means we will need to collaborate at unprecedented levels. For example no single organization owns the world’s food system, and no single entity can fix the world’s healthcare system. Success will depend upon understanding the full set of cause-and-effect relationships that link systems and using this knowledge to create greater synergy. Basically many of the problems the world faces today are caused by the fact that our systems don’t talk to each other. What do I mean by this? Here’s a simple example to illustrate the point.

Imagine you are driving your car around town trying to find a parking space. You can be sure that somewhere in town there’s a parking meter looking for a car to park in it. How do we marry your car with that parking meter? Actually the technology to do this pretty much exists already. However the challenge of actually fixing this problem stretches beyond just technology. A solution to this problem includes at least: intelligent sensors, communications, public and private finance, local government involvement, control and policing as well as well established open standards.

Like I said, from a pure technology point of view we are in pretty good shape to solve problems like this. We now have an unprecedented amount of instrumentation, interconnectedness and intelligence, such that organisations (and societies) can think and act in radically new ways. However in order to solve problems like the parking one, as well as significantly more ‘wicked’ ones, I believe we need skills that stretch beyond the mere technological. If you are to help solve these problems then you need to be a versatile and adaptive systems thinker. A systems thinker is someone who not only uses her left-brained logical thinking capabilities but also her right-brained creative and artistic capabilities. Here are six attributes (from Dan Pink’s book A Whole New Mind) that a good systems thinker needs to adopt and which I think will help in solving some of the world’s wicked problems:

  • Design – It is no longer sufficient or acceptable to create a product or service that merely does the job. Today it is both economically critical as well as aesthetically rewarding to create something that is beautiful and emotionally engaging.
  • Story – We are living in a time of information overload. If you want your sales pitch or point of view to be heard above the cacophony of background noise that is out there you have to create a compelling narrative.
  • Symphony – We live in a world of silos. Siloed processes, siloed systems and siloed societies. Success in business and in life is about breaking down these silos and pulling all the pieces together. It’s about synthesis rather than analysis.
  • Empathy – Our capacity for logical thought has gone a very long way to creating the technological society we live in today. However in a world of ubiquitous information that is available at the touch of a button, logic alone will no longer cut the mustard. In order to thrive we need to understand what makes our fellow humans tick, really get beneath their skin, and forge new relationships.
  • Play – In a world where we are all having to meet targets, pass tests and achieve the right grades in order to get on, it is easy to forget the importance of play. There is a lot of evidence out there of the benefits of play to our health and general well-being, not only outside work but also inside.
  • Meaning – We live in a world of material plenty but spiritual scarcity. Seeking a meaning in life that transcends “things” is vital if we are to achieve some kind of personal fulfilment.

A Gartner report published in 2005 predicted that by 2010, IT professionals would need to possess expertise in multiple domains. Technical aptitude alone would no longer be enough: IT professionals would have to prove they could understand business realities – industry, core processes, customer bases, regulatory environment, culture and constraints. Versatility would be crucial. It also predicted that by 2011, 70 percent of leading-edge companies would seek and develop “versatilists” while deemphasizing specialists.

Versatilists are people whose numerous roles, assignments and experiences enable them to synthesise knowledge and context in ways that fuel business value. Versatilists play different roles than specialists or generalists. Specialists generally have deep technical skills and narrow scope, giving them expertise that is recognized by peers but seldom known outside their immediate domains. Generalists have broad scope and comparatively shallow skills, enabling them to respond or act reasonably quickly, but often at a cursory level. Versatilists, in contrast, apply depth of skill to a rich scope of situations and experiences, building new alliances, perspectives, competencies and roles. They gain the confidence of peers and partners. To attain versatilist skills, IT professionals should:

  • Look outside the confines of current roles, regions, employers or business units. The more informed a professional is about a company, its industry segment and the forces that affect it, the greater the contextual grasp.
  • Lay out opportunities and assignments methodically. Focus on the areas and challenges that fall outside the comfort zone; those areas generally will be the areas of greatest growth.
  • Explore possibilities outside the world of corporate business. Not-for-profit ventures, startup companies, government agencies and consumer IT service providers offer powerful ways to bolster experiences, behavioral competencies or management skills.
  • Enroll in advanced degree programs or in qualified education courses to expand perspective.
  • Identify companies, projects, assignments, education and training that will increase professional value.

I believe we’ve only just begun to scratch the surface of what’s possible on a “smarter planet”. However if we are to really address the truly wicked problems that are out there, in order to make our world a better, and maybe even fairer, place, we need people like you to make it happen.

Finally, in these hard economic times, when you are being asked to pay outrageous amounts for your education, you might be tempted not to bother with university. However bear this in mind:

“Unskilled labor is what you call someone who merely has skills that most everyone else has. If it’s not scarce, why pay extra? Skills matter. The unemployment rate for US workers without a college education is almost triple that for those with one. Even the college rate is still too high, though.  On the other hand, the unemployment rate for skilled neurosurgeons, talented database designers and motivated recombinant DNA biologists is essentially zero, despite the high pay in all three fields. Unskilled now means not-specially skilled”.

The only real investment you have for the future is the piece of grey matter between your ears. Make sure you continue to nurture and nourish it throughout your life by stimulating both the left and right sides.

Thank you and good luck with whatever path you choose to take in life.

Is Agile Architecture an Oxymoron?

Much has been written by many great minds on agile development processes and how (or indeed whether) architecture fits into such processes. People like:

  • Scott Ambler (Architecture Envisioning: An Agile Best Practice). “Some people will tell you that you don’t need to do any initial architecture modeling at all.  However, my experience is that doing some initial architectural modeling in an agile manner offers several benefits”.
  • Mike Cohn (Succeeding with Agile). “On an architecturally complicated or risky project, the architect will need to work closely with the product owner to educate the product owner about the architectural implications of items on the product backlog”.
  • Walker Royce (Top 10 Management Principles of Agile Software Delivery) : “Reduce uncertainties by addressing architecturally significant decisions first”.
  • Andrew Johnston (Role of the Agile Architect): “In an agile development the architect has the main responsibility to consider change and complexity while the other developers focus on the next delivery”.
  • Martin Fowler (Is Design Dead?):  “XP does give us a new way to think about effective design because it has made evolutionary design a plausible strategy again”.

I’ve been contemplating how the discipline of architecture, as well as the role of the architect, fits with the agile approach to developing systems whilst reviewing the systems development lifecycle (SDLC) for a number of clients of late. In my experience most people have an “either-or” approach when it comes to SDLCs. They either do waterfall or do agile, and have some criteria which they use to decide between the two approaches on a project-by-project basis. Unfortunately these selection criteria are often biased toward the prejudices of the person writing them, who will push their favourite approach.

Rather than treating agile and waterfall as mutually exclusive I would prefer to adopt a layered approach to defining SDLC’s as shown here.

The three layers have the following characteristics:

  1. Basic Process. Assume all projects adopt the simplest approach to delivery possible (but no simpler). For most software product development projects this will amount to an agile approach like Scrum, which uses iterations (sprints) and a simplified set of roles, namely: Product Manager, Scrum Master and Team, where Team is made up of people adopting multiple roles (architect, programmer, tester etc). On such projects decide up front which artefacts you want to adopt. These don’t need to be heavy-weight documents but could be contained in tools or as sketches. Here the team member performing the architect role needs to manage an “emergent architecture”. The role of architect may be shared rather than dedicated to a single individual.
  2. Complex Process. At the next level of complexity, where multiple products need to be built as part of a single program of work and all of these have dependencies, some level of governance usually needs to be in place to ensure everything comes together at the right time and is of a consistent level of quality. At a micro-level this can be done using a scrum-of-scrums approach, where a twice- or thrice-weekly scrum brings together all the Scrum Masters. Here the architect role is cross-product, as it needs to ensure all products fit together (including maybe third-party products and packages). This may still be a shared role but is more likely to be a dedicated individual. This may involve some program-level checkpoints that at least ensure all iterations have created a shippable product that is ready to integrate or deploy. The architecture is not just emergent any more but may also need to be “envisioned” up-front so everyone understands where their product fits with others.
  3. Complex Integration Process. The final level of complexity arises when not only are multiple products being developed but existing systems, which have complex (or poorly understood) interfaces, also need to be incorporated. Here the role of the architect is not only cross-product but cross-system, as she has to deal with the complexity of multiple systems, some of which may need to be changed by people other than the core development team. Here the level of ceremony, as well as the number of artefacts to control and manage the decisions around the complexity, will increase. The architecture is certainly not emergent any more but needs to be “envisioned” up-front so everyone understands where their product fits and also what interfaces they need to work with. This is something Scott Ambler suggests happens during “iteration zero”.
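To make the layering concrete, the selection logic could be sketched roughly as follows. This is purely an illustrative sketch: the function, the inputs and the layer names are my own assumptions about how the three layers above might be chosen, not part of any formal method.

```python
# Hypothetical sketch: choosing a process layer from project characteristics.
# The inputs and decision rules are illustrative assumptions only.
from enum import Enum

class ProcessLayer(Enum):
    BASIC = "Basic Process"                              # single product, Scrum-style
    COMPLEX = "Complex Process"                          # multi-product program, scrum-of-scrums
    COMPLEX_INTEGRATION = "Complex Integration Process"  # plus existing systems with complex interfaces

def select_layer(num_products: int, has_legacy_integration: bool) -> ProcessLayer:
    """Pick the simplest layer that fits the project (but no simpler)."""
    if has_legacy_integration:
        return ProcessLayer.COMPLEX_INTEGRATION
    if num_products > 1:
        return ProcessLayer.COMPLEX
    return ProcessLayer.BASIC

# A single-product build stays lightweight; a program that must integrate
# existing systems picks up the extra governance and up-front envisioning.
print(select_layer(1, False).value)  # Basic Process
print(select_layer(3, True).value)   # Complex Integration Process
```

The point of the sketch is simply that the default is the lightweight layer, and extra ceremony is added only when the project's complexity demands it.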

Each of these layers is built on a common architectural as well as process “language” so everyone has a common understanding of terms used in the project. I’m much in agreement with Mike Cohn’s comment on his blog recently where, in reflecting on the tenth anniversary of the Agile Manifesto he says: “The next change I’d like to see (and predict will occur) over the next ten years also occurred in the OO world: We stop talking about agile” and goes on to say “Rather than “agile software development” it is just “software development”—and of course it’s agile”.

I would like to see an agile or lean-based approach become the de facto standard, one that only adds additional artefacts or project checkpoints as needed, rather than assuming that every project must be either wholly agile or wholly waterfall.

Forget T-Shaped, We Need V-Shaped Architects

A recent blog from the Open Group discusses the benefits of so called “T-shaped people”. According to this blog, T-shaped people are what HR are looking for these days. To quote from the blog a T-shaped person is someone who: “combines the broad level of skills and knowledge (the top horizontal part of the T) with specialist skills in a specific functional area (the bottom, vertical part of the T). They are not generalists because they have a specific core area of expertise but are often also referred to as generalizing specialists as well as T-shaped people“. The picture below shows this.

Traditionally for software architects the specialism that T-shaped people have has come from their entry-level skills, the ones that got them into the profession in the first place. This is usually a skill in a particular programming language, a development approach (agile, Scrum or whatever) or other areas related to software development such as test or configuration management. As you progress through your career and begin to build on your skills (learning more programming languages, understanding more about design etc) you may add these other specialisms to the verticals in your T. This, at least, has traditionally been the approach. The problem is that in some organisations, in order to “progress” (i.e. earn more money), you almost need to know less about more. You need to generalise more, and more quickly. No one is going to employ you to be a Java programmer if your salary is ten times that of a Java programmer in India or China. This is not meant as a criticism of software professionals in India or China, by the way. It’s just the way of things. Soon people in India and China will be out-sourcing to lower-cost regions, and so the cycle will go on. It does however raise an interesting problem of how those core specialisms will be developed in people just entering the profession. I spent a good 15 years as a programmer before I moved into architecture and would like to think that what I learnt there gave me a good set of core, fundamental skills that I can still apply as an architect. I firmly believe that the fundamentals I learnt from programming (encapsulation, design by contract, the importance of loose coupling etc) never go out of fashion.

As I have blogged before, I believe that whilst good “generalizing specialists” can also make good architects, there is another dimension to what makes a true architect: one with the skills necessary to solve the really hard business as well as socio-political (e.g. global warming, global terrorism, resource shortages etc) problems that the world faces today. Gartner coined the term “versatilist” back in 2005 and, whilst this does not seem to have really taken off (there is a versatilist web site but it seems to be little used), I like the fact that the ‘V’ of versatilist makes a nice shorthand for what 21st-century IT architects need to be. V-shaped people are not just ones who have deep skills in specific functional areas but also have skills in other disciplines. Further, a good V-shaped person is one who has skills not just in technical disciplines but also in business and artistic disciplines. So why does this matter?

The concept of bringing interdisciplinary teams together to break down boundaries in solving difficult or wicked problems is not a new one. It is recognised that pooling different academic schools of thought can often throw up solutions to problems that any of the individual disciplines could not. It follows therefore that if an individual can be well rounded, and at least have some level of knowledge in an area completely outside his or her core discipline, then they too may be able to shed new light on difficult problems. This is what being a versatilist is about. As shown below, it’s not just about specialising in different functional areas within a discipline but also across disciplines. If these disciplines can be a mix of the arts as well as the sciences, exercising both right and left brains, then so much the better.

So how should versatilists develop their skills? Here are some suggestions I give to IT students when discussing how they might survive as professionals in the 21st century world of work:

  • Objectively view experiences and roles – When you have finished an assignment, note down what you learnt from it and what you could have done better, and maybe ask others what they thought of your performance.
  • Look further than current roles – Today you are working on a particular project; however, always have in mind what you want to do next and an idea of what you want to do after that. Don’t become stereotyped; prepare to move on even if you are in an area you know well.
  • Plan opportunities and assignments – This follows on from the last one. Make sure each assignment really builds on and develops your skills. Step out of your comfort zone in each new assignment.
  • Explore other possibilities – Never assume there is only one option. Think differently and look at alternatives. Like Paul Arden said, “Whatever You Think, Think The Opposite”.
  • Pursue lifelong learning – What it says: never stop exploring!
  • Identify companies that will increase professional value – Companies are out to get what they can from you. Make sure you do the same with them.

So as we enter the second decade of the 21st century, can we stop looking for T-shaped people and start the search for V-shaped people instead? These are the ones who will really make a difference and be able to address the really wicked problems that are out there.

Think Like An Architect

A previous entry suggested hiring an architect was a good idea because architects take existing components and assemble them in interesting and important ways. So how should you “think architecturally” in order to create things that are not only interesting but also solve practical, real-world problems? Architectural thinking is about balancing three opposing “forces”: what people want (desirability), what technology can provide (feasibility) and what can actually be built given the constraints of cost, resource and time (viability).

It is basically the role of the architect to help resolve these forces by assembling components “in interesting ways”. There is however a fourth aspect which is often overlooked but which is what separates great architecture from merely good architecture. That is the aesthetics of the architecture.

Aesthetics is what separates a MacBook from a Dell, the Millau Viaduct in France from the Yamba Dam Bridge in Japan and 30 St Mary Axe from the Ryugyong Hotel in North Korea. Aesthetics is about good design, which is what you get when you add ‘significance’ (aesthetic appeal) to ‘utility’ (something that does the job). IBM, the company I work for, is 100 years old this year (check out the centennial video here) and Thomas Watson Jr famously said that “good design is good business”. Watson knew what Steve Jobs, Tim Brown and many other creative designers know: aesthetics is not only good for the people that use or acquire these computers/buildings/systems, it’s also good for the businesses that create them. In a world of over-abundance, good design/architecture both differentiates companies and gives them a competitive advantage.

How Much Does Your Software Weigh, Mr Architect?

Three apparently unrelated events actually have a serendipitous connection, which has led to the title of this week’s blog. First off, Norman Foster (he of “Gherkin” and “Wobbly Bridge” fame) has had a film released about his life and work called How Much Does Your Building Weigh, Mr Foster? As a result there has been a slew of articles about both Foster and the film, including this one in the Financial Times. One of the things that comes across from both the interviews and the articles about Foster is the passion he has for his work. After all, if you are still working at 75 then you must like your job a little bit! One of the quotes that stands out for me is this one from the FT article:

“The architect has no power, he is simply an advocate for the client. To be really effective as an architect or as a designer, you have to be a good listener.”

How true. Too often we sit down with clients and jump in with solutions before we have really got to the bottom of what the problem is. It’s not just about listening to what the client says but also to what she doesn’t say. Sometimes people only say what they think you want to hear, not what they really feel. So it’s not just about listening but about developing empathy with the person you are architecting for. Related to this is not closing down discussions too early, before everything has been said, which brings me to the second event.

I’m currently reading Resonate by Nancy Duarte which is about how to put together presentations that really connect with your audience using techniques adopted by professional story tellers (like film makers for example). In Duarte’s book I came across the diagram below which Tim Brown also uses in his book Change by Design.

For me the architect sits above the dotted line in this picture, ensuring as many choices as possible get made and then making the decisions (or compromises) that strike the right balance between the sometimes opposing “forces” of the requirements that come from those multiple choices.

One of the big compromises that often needs to be made is how much I can deliver in the time available and, if it’s not everything, what gets dropped? Unless the timescale can change, it’s usually the odd bit of functionality (good if these functions can be deferred to the next release) or quality (not good under any circumstances). This leads me to the third serendipitous event of the week: discovering “technical debt”.

Slightly embarrassingly, I had not heard of the concept of technical debt before, even though it’s been around for a long time. It was originally proposed by Ward Cunningham in 1992, who said the following:

Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation.

Technical debt is a topic that has been taken up by the Software Engineering Institute (SEI) who are organising a workshop on the topic this year. One way of understanding technical debt is to see it as the gap between the current state of the system and what was originally envisaged by the architecture. Here, debt can be “measured” by the number of known defects and features that have not yet been implemented. Another aspect to debt however is the amount of entropy that has set in because the system has decayed over time (changes have been made that were not in line with the specified architecture). This is a more difficult thing to measure but has a definite cost in terms of ease of maintenance and general understandability of the system.
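As a rough illustration of that idea, a debt “score” might be sketched as below. The class, the field names and the weights are purely my own illustrative assumptions, not an established metric; the sketch just shows the three ingredients described above (known defects, unimplemented features, entropy) combining into one number.

```python
# Hypothetical sketch of a technical-debt "score": known defects plus
# unimplemented features plus entropy, each with an assumed weight.
from dataclasses import dataclass

@dataclass
class DebtLedger:
    known_defects: int     # gap between current state and envisaged architecture
    missing_features: int  # envisioned but not yet implemented
    entropy_changes: int   # changes made outside the specified architecture

    def score(self, defect_weight: float = 1.0,
              feature_weight: float = 2.0,
              entropy_weight: float = 3.0) -> float:
        """Crude weighted sum; entropy is weighted highest here because it is
        the hardest debt to measure and repay (it erodes maintainability and
        general understandability of the system)."""
        return (self.known_defects * defect_weight
                + self.missing_features * feature_weight
                + self.entropy_changes * entropy_weight)

ledger = DebtLedger(known_defects=12, missing_features=4, entropy_changes=7)
print(ledger.score())  # 12*1 + 4*2 + 7*3 = 41.0
```

Even a crude number like this, tracked release over release, would at least make the direction of travel visible: is the debt being repaid or is the interest compounding?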

Which leads to the title of this week’s blog. Clearly software (being ‘soft’) carries no weight (the machines it runs on do not count) but it can nonetheless carry a huge, and potentially damaging, weight in terms of the debt it may hold in unstructured, incorrect or hard-to-maintain code. Understanding the weight of this debt, and how to deal with it, should be part of the role of the architect. The weight of your software may not be measurable in kilograms but it surely has a weight in terms of the “debt owed”.