The Essence of Being an Architect

There are many skills frameworks out there that tell us what skills we should have for ‘doing architecture’. My company has (at least) one and I’m sure yours does as well. There are also organisations that specialise in creating such frameworks (check out the Skills Framework for the Information Age, for example). Whilst there are some very specific skills that software architects need (developing applications using the xyz programming language, building systems using a particular ERP package and so on), which come and go as technology evolves, there are some enduring skills which I believe all architects must acquire as they progress in their careers: what I refer to as the essence of being an architect. Seth Godin recently posted a blog entry called What’s high school for? in which he listed 10 things all schools should be teaching, things that should sit above any of the usual stuff kids get taught (maths, chemistry, history etc): a sort of list of meta-skills, if you like. Borrowing from, and extending, this list gives me my list of essential architect skills.

  1. How to focus intently on a problem until it’s solved. There is much talk these days about how the internet, the TV networks and the print media are leading to a dumbed-down society in which we have an inability to focus on anything for longer than 30 minutes. Today’s business problems are increasingly complex and often require prolonged periods of time to really focus on what the problem is before charging in with a (software) solution. Unfortunately the temptation is always to provide the cheapest or the quickest-to-market solution. You need to fight against these pressures and stay focused on the problem until it is solved.
  2. How to read critically. As architects we cannot hope to understand everything there is to know about every software product, technology or technique that is out there. Often we need to rely on what vendors tell us about their products. Clearly there is a danger here that they tell us what we, or our clients, want to hear, glossing over faults or features that are more ‘marketechture’ than architecture. Learn how to read vendor product descriptions and whitepapers with a critical eye and ask difficult questions.
  3. The power of being able to lead groups of peers without receiving clear delegated authority. The role of an architect is to build solutions by assembling components in new and interesting ways. You are the person who needs to both understand what the business wants and how to translate those ‘wants’ into technology. Business people, by and large, cannot tell you how to do that. You need to lead your peers (both business people and technologists) to arrive at an effective solution.
  4. How to persuasively present ideas in multiple forms, especially in writing and before a group. Obvious really: you can have the greatest idea in the world but if you cannot present it properly and effectively it will stay just that, an idea.
  5. Project management, self-management and the management of ideas, projects and people. How to manage your own and others’ time to stay focused and deliver what the client wants in a timely fashion.
  6. An insatiable desire (and the ability) to learn more. Forever! This job cannot be done without continuous learning and acquisition of knowledge. Everyone has their own learning style and preferences for how they acquire knowledge; find out what your style is and deploy it regularly. Don’t stick to IT: I’ve discussed the role of the versatilist extensively (see here for example). Be ‘V’ shaped not ‘T’ shaped.
  7. The self-reliance that comes from understanding that relentless hard work can be applied to solve problems worth solving. Belief in one’s ideas and the ability to deploy them when all around you are doubting you is probably one of the hardest skills to acquire. There is a fine balance between arrogance and self-belief. In my experience this is not an easily repeatable skill. Sometimes you will be wrong!
  8. Know how to focus on what is important and to ignore what is not. If you have not heard of Parkinson’s Law of Triviality take a look at it.
  9. Know who the real client is and focus on satisfying him/her/them. There can be lots of distractions in our working lives, and I’m not just talking about twittering, blogging (sic) and the rest of the social networking gamut. Projects can sometimes become too inward focused and lose track of what they are meant to be delivering. We live in a world where numbers have achieved ascendancy over purpose. We can sometimes spend too much time measuring, reviewing and meeting targets rather than actually doing. I love this quote from Deming: “If you give a manager a numerical target, he’ll make it, even if he has to destroy the company in the process”. There is little merit in a well executed project that no one wants the output from.
  10. Use software/system delivery lifecycle (SDLC) processes wisely. SDLCs are meant to be enablers but can end up being disablers! Always customise an SDLC to fit the project, not the other way around.

If all of this seems hard work that’s because it is. As Steven Pressfield says in his book The War of Art:

The essence of professionalism is the focus upon the work and its demands, while we are doing it, to the exclusion of all else.

Sketching with the UML

In his book UML Distilled – A Brief Guide to the Standard Object Modeling Language Martin Fowler describes three ways he sees UML being used:

  • UML as sketch: Developers use UML to communicate some aspect of a system. These sketches can be used in both forward (i.e. devising new systems) as well as reverse (i.e. understanding existing systems) engineering. Sketches are used to selectively talk about those parts of the system of interest. Not all of the system is sketched. Sketches are drawn using lightweight drawing tools (e.g. whiteboards and marker pens) in sessions lasting anything from a few minutes to a few hours. The notation and adherence to standards is non-strict. Sketching works best in a collaborative environment. Sketches are explorative.
  • UML as a blueprint: In forward engineering the idea is that developers create a blueprint that can be taken and built from (possibly with some more detailed design first). Blueprints are about completeness. The design/architecture should be sufficiently complete that all of the key design decisions have been made, leaving as little to chance as possible. In reverse engineering, blueprints aim to convey detailed information about the system, sometimes graphically. Blueprints require more sophisticated modelling tools rather than drawing tools. Blueprints are definitive.
  • UML as a programming language: Here developers draw UML diagrams that are compiled down to executable code. This mode requires the most sophisticated tooling of all. The idea of forward and reverse engineering does not exist because the model is the code. Model Driven Architecture (MDA) fits this space.

In practice there is a spectrum of uses of UML which I’ve shown below in Figure 1. In my experience very few organisations are up at level 10 and I would say most are in the range 3 – 5. I would classify these as being those folk who use UML tools to capture key parts of the system (either in a forward or reverse engineering way) and export these pictures into documents which are then reviewed and signed-off.

Figure 1

An interesting addition to the ‘UML as sketch’ concept is that at least two vendors that I know of (IBM and Sparx) are offering ‘sketching’ capabilities within their modeling tools. In Rational Software Architect the elements in a sketch include shapes such as rectangles, circles, cylinders, stick figures that represent people, and text labels. Check out this YouTube video for a demo. Unlike other types of models, sketches have only a few types of elements and only one type of relationship between elements. Also, sketches do not contain multiple diagrams; each sketch is an independent diagram. You can however move from a sketch to a more formal UML diagram or create a sketch out of an existing UML diagram, so allowing you to work with a diagram in a more abstract way.

Below in Figure 2 is an example of a sketch for my example Hotel Management System, that I’ve used a few times now to illustrate some architectural concepts, drawn using the current version of Rational Software Architect. I guess you might call this an Architecture Overview used to show the main system actors and architectural elements.

Figure 2

I guess the ability to be able to sketch with modeling tools could be a bit of a double-edged sword. On the one hand it means sketches are at least captured in a formal modeling environment, which means they can be kept in one centralised repository and maintained more effectively. It also means they can potentially be turned into more formal diagrams, thus providing a relatively automated way of moving along the scale shown in Figure 1. The downside might be that sketching is as far as people go and they never bother to provide anything more formal. I guess only time will tell whether this kind of capability gains much traction amongst developers and architects alike. For my part I would like to see sketching in this way as a formal part of a process which encourages architects and developers to create models using this approach, get them roughly right and then turn them into a more formal and detailed model.

Change Cases and the Limits of Testing

This recent blog entry from Seth Godin on “the culture of testing” set me thinking about software architecture and testing. Is it possible to ‘over test’ applications or systems? Is there a point at which you need to stop testing and let your software ‘go free’ so users can ‘complete’ testing themselves? Let’s be clear here:

  • Software needs to be tested rigorously. No one wants to fly on an airplane whose on-board flight control software has not been fully tested.
  • I’m also not talking about the well established practice of releasing beta versions of software where you get together a bunch of early adopters to complete testing for you and iron out the “last few bugs”.
  • Testing software against known requirements is most definitely a good thing to do and I’m not advocating just running your system tests against a subset of those functional and non-functional requirements.

Where it gets interesting is when you don’t overly constrain the architecture, so that users can take the resulting application or system and evolve it (test it) in new and interesting ways. When defining an architecture we usually talk about it having to address functional as well as non-functional requirements, but there is a third, often overlooked, class of requirement referred to as change cases or future requirements. Change cases are used to describe new potential requirements for a system or modifications to existing requirements. Change cases usually come from the specifier of the system (e.g. “in three years’ time we want to introduce a loyalty scheme for our regular guests”). Some change cases however are not envisaged in advance and it’s only when users of the application get hold of it that they explore and find new ways of using it that may not originally have been thought of. Such applications need to be carefully architected, and tested, so that these change cases can be discovered without, of course, breaking the application altogether.
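A change case can be recorded alongside the ordinary requirements so it is not forgotten when the architecture is defined. The fields below are one plausible shape, invented for illustration rather than taken from any standard template:

```python
from dataclasses import dataclass


@dataclass
class ChangeCase:
    """A potential future requirement, recorded so the architecture can allow for it."""
    description: str
    likelihood: str           # e.g. "high", "medium", "low"
    timeframe_years: int      # horizon in which the change might arrive
    impacted_elements: tuple  # architectural elements that would need to flex


# The loyalty-scheme example from the text, captured as a change case.
loyalty = ChangeCase(
    description="Introduce a loyalty scheme for regular guests",
    likelihood="high",
    timeframe_years=3,
    impacted_elements=("Customer Manager", "Reservation Manager"),
)
```

Listing the impacted elements up front gives the architect a checklist of which components must be designed for change.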

So, by all means ship systems or applications that do what they say on the tin, but also let users do other, possibly more interesting, things with them that may lead to new and innovative uses that the original specifiers of the software had not thought about.

Architectural Granularity

There’s an old joke about defining granularity:

Granularity is a bit like pornography, it’s hard to define but you know it when you see it.

The whole topic of granularity of software components is important to the Software Architect for a number of reasons:

  • If you are creating too many fine-grained components then you are probably not doing architecture but design.
  • As I discussed previously, getting components at the right level of granularity is important when it comes to placing those components (on nodes) and satisfying non-functional requirements such as availability and performance.
  • The granularity of components impacts performance. Too many fine-grained components will probably result in increased traffic on the network because the client using the component must make several queries to complete a task or get the information it needs.
  • Generally speaking, the coarser grained a component the more likely it is to have a direct correlation to something the business finds useful (e.g. a ‘Reservation’ component that handles all aspects of allowing a customer to reserve a room is more useful than one that ‘finds a vacant room’).

The terms “fine-grained”, “medium-grained” and “coarse-grained” are frequently used to describe architectural components (or indeed services) but there seems to be no common definition for these terms. This leads to confusion and ambiguity in their use. Whilst there is no agreed definition of what these terms mean there does seem to be a consensus that granularity is about more than just the number of operations an interface on a component has. Everware-CBDI (see their June 2005 edition which is only available via subscription) suggests that other factors might be (and I’m simplifying here):

  • The number of components (C) in the directed-acyclic-graph (DAG) that are invoked through a given operation on that component’s interface.
  • The function points (F) for each component.
  • The number of database tables (D) created, read, updated or deleted.

So one measure of granularity (G) might be:

G = C + F + D

Put simply, if a component is self-contained (C = 1), has a single function point (F = 1) and updates a single entry in a database (D = 1) it will have a granularity of ‘3’. Such a ‘component’ is going to be something like a class ‘Integer’ with a single operation such as ‘increment’ which adds one to the previous value. Whilst such a component may be infinitely reusable it is not particularly useful from an architectural (or business) standpoint. We could honestly call such a component ‘fine-grained’. At the other extreme a CRM system will probably have C, F and D each equal to several thousand giving it a granularity in the tens of thousands. Again, from an architectural standpoint, this is not very useful. There is not much we can do with such a ‘component’ other than observe it is (very) coarse-grained. I suggest that we architects are more likely to be interested in creating, and (re)using, components that sit somewhere in between these extremes (maybe in the low 100s using this measure of granularity).
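This measure is easy to sketch in code. The numbers below are invented purely to illustrate the three regimes just described; they are not taken from the Everware-CBDI paper:

```python
def granularity(components_invoked: int, function_points: int, db_tables_touched: int) -> int:
    """Crude granularity measure: G = C + F + D."""
    return components_invoked + function_points + db_tables_touched


# A trivial 'Integer' component with a single 'increment' operation:
# self-contained (C=1), one function point (F=1), one table updated (D=1).
fine_grained = granularity(1, 1, 1)             # G = 3: fine-grained

# A hypothetical 'Reservation' component in the useful middle ground.
mid_grained = granularity(12, 80, 25)           # G = 117: low hundreds

# A whole CRM system treated as a single 'component'.
coarse_grained = granularity(3000, 5000, 2000)  # G = 10000: (very) coarse-grained
```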

What we are actually talking about here is what ZapThink refer to as ‘functional granularity’ (as opposed to ‘interface granularity’), that is, granularity of the component itself rather than the granularity of any interfaces the component may expose. For the architect it is getting this functional granularity right that is most important. The coarser the (functional) granularity of the component the more likely it is to have a useful business context. So a ‘Reservation’ component that deals with all aspects of reserving a room for a customer (i.e. takes the dates, room preferences, hotel location, customer loyalty status etc) and finds a suitable room is a useful component at both an architectural level (i.e. it addresses the points I started with above) and a business level (i.e. it is understandable by the business and can therefore be mapped to business users’ requirements).

When decomposing systems into components, working out what level of granularity you need is a key aspect of creating the system’s architecture, so that you have the right number of components that are most useful in both an architectural and a business context.

EA Wars

Lots of great discussion in the blogosphere right now on the relevance of Enterprise Architecture in the brave new world of the ‘extended’ enterprise and whether architecture is something that is planned or ’emerges’. This is largely prompted, I suspect, by the good folk at ZapThink asking Why Nobody is Doing Enterprise Architecture (possibly to plant seeds of doubt in the minds of CIOs and send them rushing to one of their “licensed ZapThink architecture courses”). For a nice, succinct and totally dismissive riposte to the ZapThink article check out David Sprott’s blog entry here. For a more reasoned and skeptical discussion on whether emergent architecture actually exists (I think it does, see below) read Richard Veryard’s blog entry here. However, despite some real FUD’iness on the part of ZapThink, there are some elements in their argument that definitely ring true and which I have observed in a number of clients I have worked with over the last few years. In particular:

  • Emergence (i.e. the way complex systems and patterns arise out of a multiplicity of relatively simple interactions) is, like it or not, a definite factor in the architecture of modern-day enterprises. This is especially true when the human dimension is factored into the mix. The current Gen Y and upcoming Gen V are not going to hang around while the EA department figure out how to factor their 10th generation iPhone, which offers 3-D holographic body-time, into an EA blueprint. They are just going to bypass the current systems and use it regardless. The enterprise had better quickly figure out the implications of such devices (whatever they turn out to be) or risk becoming a technological backwater.
  • EA departments seem to very quickly become disjoint from both the business which they should be serving and the technicians whom they should be governing. One is definitely tempted to ask “who is governing the governors?” when it comes to the EA department. Accountability in many organisations definitely seems to be lacking. This feels to me like another example of the gapology that seems to be increasingly apparent in such organisations.
  • Even though we are better placed than ever to capture good methodological approaches to systems development, I still see precious little adoption of true systems development lifecycles (SDLCs) in organisations. Admittedly methods have had very bad press over the years: they are often seen as an unnecessary overhead which, with the rise of agile, has been pushed into the background, something an organisation must have in order to be ISO 9000 compliant or whatever, while everyone really just gets on with it and ignores all that stuff.
  • Finally, as with many things in IT, the situation has been confused by having multiple and overlapping standards and frameworks in the EA space (TOGAF, Zachman and MODAF to name but three). Whilst none of these may be perfect I think the important thing is to go with one and adapt it accordingly to what works for your organisation. What we should not be doing is inventing more frameworks (and standards bodies to promote them). As with developing an EA itself the approach an EA department should take to selecting an EA framework is to start small and grow on an as needs basis.

Applying Architectural Tactics

The use of architectural tactics, as proposed by the Software Engineering Institute, provides a systematic way of dealing with a system’s non-functional requirements (sometimes referred to as the system’s quality attributes, or just qualities). These can be both runtime qualities such as performance, availability and security as well as non-runtime qualities such as maintainability, portability and so on. In my experience, dealing with both functional and non-functional requirements, as well as capturing them using a suitable modeling tool, is something that is not always handled very methodically. Here’s an approach that tries to enforce some architectural rigour using the Unified Modeling Language (UML) and any UML compliant modeling tool.

Architecturally, systems can be decomposed from an enterprise or system-wide view (i.e. meaning people, processes, data and IT systems), to an IT system view, to a component view and finally to a sub-component view, as shown going clockwise in Figure 1. These diagrams show how an example hotel management system (something I’ve used before to illustrate some architectural principles) might eventually be decomposed into components and sub-components.

Figure 1: System Decomposition

This decomposition typically happens by considering what functionality needs to be associated with each of the system elements at different levels of decomposition. So, as shown in Figure 1 above, first we associate ‘large-grained’ functionality (e.g. we need a hotel system) at the system level and gradually break this down to finer and finer grained levels until we have attributed all functionality across all components (e.g. we need a user interface component that handles the customer management aspects of the system).

Crucially from the point of view of deployment of components we need to have decomposed the system to at least that of the sub-component level in Figure 1 so that we have a clear idea of each of the types of component (i.e. do they handle user input or manage data etc) and know how they collaborate with each other in satisfying use cases. There are a number of patterns which can be adopted for doing this. For example the model-view-controller pattern as shown in Figure 2 is a way of ascribing functionality to components in a standard way using rules for how these components collaborate. This pattern has been used for the sub-component view of Figure 1.

Figure 2: Model-View-Controller Pattern
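A minimal sketch of the pattern may help. The class and method names below are invented for the hotel example and do not come from any particular framework; the point is only the separation of responsibilities and the rules for collaboration:

```python
class CustomerModel:
    """Model: owns the business data and state."""
    def __init__(self):
        self._customers = {}

    def create(self, customer_id, name):
        self._customers[customer_id] = name

    def read(self, customer_id):
        return self._customers.get(customer_id)


class CustomerView:
    """View: renders model data for the user; holds no business state."""
    def render(self, name):
        return f"Customer: {name}"


class CustomerController:
    """Controller: translates user input into model updates and view refreshes."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def check_in(self, customer_id, name):
        self.model.create(customer_id, name)          # update the model...
        return self.view.render(self.model.read(customer_id))  # ...then refresh the view
```

Note that the view never touches the model directly; all collaboration is mediated by the controller, which is exactly the kind of rule the pattern imposes when ascribing functionality to sub-components.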

So far we have shown how to decompose a system based on functional requirements and thinking about which components will realise those requirements. What about non-functional requirements though? Table 1 shows how non-functional requirements can be decomposed and assigned to architectural elements as they are identified. Initially non-functional requirements are stated at the whole system level but as we decompose into finer-grained architectural elements (AKA components) we can begin to think about how those elements support particular non-functional requirements also. In this way non-functional requirements get decomposed and associated with each level of system functionality. Non-functional requirements would ideally be assigned as attributes to each relevant component (preferably inside our chosen UML modelling tool) so they do not get lost or forgotten.

Table 1
System Element Non-Functional Requirement
Hotel System (i.e. including all actors and IT systems). The hotel system must allow customers to check-in 24 hours a day, 365 days a year. Note this is typically the level of precision at which non-functional requirements are initially stated. Further analysis is usually needed to provide measurable values.
Hotel Management System (i.e. the hotel IT system). The hotel management system must allow the front-desk clerk to check-in a customer 24 hours a day, 365 days a year with a 99.99% availability value.
Customer Manager (i.e. a system element within the hotel’s IT system). The customer manager system element (component) must allow customer details to be created, read or updated (but not deleted) 24 hours a day, 365 days a year with a 99.99% availability value.
Customer Manager Interface (i.e. the user interface that belongs to the Customer Manager system element). The customer manager interface must allow customer details to be created, read or updated (but not deleted) 24 hours a day, 365 days a year with a 99.99% availability value.
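The decomposition in Table 1 can be mimicked in code: each element inherits its parent’s non-functional requirements as the starting point for its own, more precise version. This is an illustrative sketch only; the class names are invented and this is not a feature of any UML tool:

```python
from dataclasses import dataclass, field


@dataclass
class NonFunctionalRequirement:
    description: str
    availability: float  # e.g. 0.9999 for 99.99%


@dataclass
class SystemElement:
    name: str
    nfrs: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def decompose(self, child: "SystemElement") -> "SystemElement":
        """Pass the parent's NFRs down as the starting point for the child."""
        child.nfrs = list(self.nfrs) + child.nfrs
        self.children.append(child)
        return child


# The system-level NFR from Table 1, flowing down to a component.
hotel = SystemElement("Hotel Management System",
                      [NonFunctionalRequirement("24x7 check-in", 0.9999)])
customer_mgr = hotel.decompose(SystemElement("Customer Manager"))
```

Keeping the requirement attached to each element in this way is the programmatic equivalent of recording it as an attribute on the component in the modelling tool, so it does not get lost or forgotten.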

Once it is understood what non-functional requirement each component needs to support we can apply the approach of architectural tactics proposed by the Software Engineering Institute (SEI) to determine how to handle those non-functional requirements.

An architectural tactic represents “codified knowledge” on how to satisfy non-functional requirements by applying one or more patterns or reasoning frameworks (for example queuing or scheduling theory) to the architecture. Tactics show how (the parameters of) a non-functional requirement (e.g. the required response time or availability) can be addressed through architectural decisions to achieve the desired capability.

In the example we are focusing on in Table 1 we need some tactics that allow the desired quality attribute of 99.99% availability (which corresponds to a downtime of 52 min, 34 sec per year) to be achieved by the customer manager interface. A detailed set of availability tactics can be found here but for the purposes of this example availability tactics can be categorized according to whether they address fault detection, recovery, or prevention. Here are some potential tactics for these:

  • Employing good software engineering practices for fault prevention such as code inspections, usability testing and so on to the design and implementation of the interface.
  • Deploying components on highly-available platforms which employ fault detection and recovery approaches such as system monitoring, active failover etc.
  • Developing a backup and recovery approach that allows the platform running the user interface to be replaced within the target availability times.
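The downtime figure quoted above (52 min, 34 sec per year for 99.99% availability) follows directly from the availability percentage; a quick check:

```python
def annual_downtime_seconds(availability: float) -> float:
    """Allowed downtime per (non-leap) year for a given availability level."""
    seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000
    return (1 - availability) * seconds_per_year


minutes, seconds = divmod(round(annual_downtime_seconds(0.9999)), 60)
# 99.99% availability allows 52 minutes 34 seconds of downtime per year
```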

As this example shows, not all non-functional requirements can be suitably realised by a component alone; sometimes full realisation can only be achieved when that component is placed (deployed) onto a suitable computer platform. Once we know what non-functional requirements need to be realised by which components we can then think about how to package those components together to be deployed onto the appropriate computer platform which supports those non-functional requirements (for example a platform that will support 99.99% availability and so on). Figure 3 shows how this deployment can be modelled in UML adopting the Hot Standby Load Balancer pattern.

Figure 3: Deployment View

Here we have taken one component, the ‘Customer Manager’, and shown how it would be deployed with other components (a ‘Room Manager’ and a ‘Reservation Manager’) that support the same non-functional requirements onto two application server nodes. A third UML element, an artefact, packages together like components via a UML «manifest» relationship. It is the artefact that actually gets placed onto the nodes. An artefact is a standard UML element that “embodies or manifests a number of model elements. The artefact owns the manifestations, each representing the utilization of a packageable element”.

So far all of this has been done at a logical level; that is, there is no mention of technology. However moving from a logical level to a physical (technology-dependent) level is a relatively simple step. The packaging notion of an artefact can equally be used for packaging physical components; for example, in this case the three components shown in Figure 3 above could be Enterprise Java components or .NET components.

This is a simple example to illustrate three main points:

  1. Architecting a system based on functional and non-functional requirements.
  2. Use of a standard notation (i.e. UML) and modelling tool.
  3. Adoption of tactics and patterns to show how a system’s qualities can be achieved.

None of it is rocket science, but it is something you don’t see done much.

Oops There Goes Our Reputation

I’m guessing that up to two weeks ago most people, like me, had never heard of a company called Epsilon. Now, unfortunately for them, too many people know of them for all the wrong reasons. If you sign up to any services from household names such as Marks and Spencer, Hilton, Marriott or McKinsey you will have probably had several emails in the last two weeks advising you of a security breach which led to “unauthorized entry into Epsilon’s email system”. Unfortunately, because Epsilon is a marketing vendor that manages customer email lists for these and other well known household brands, chances are your email address has been obtained by this unauthorised entry as well. Now, it just might be a pure coincidence, but in the last two weeks I have also received emails from the Chinese government inviting me to a conference on some topic I’ve never heard of, from Kofi Annan, ex-Secretary General of the United Nations, and from a lady in Nigeria asking for my bank account details so that she can deposit $18.4M into the account and leave the country!

According to the information on Epsilon’s web site the information that was obtained was limited to email addresses and/or customer names only. So, should we be worried by this and what are the implications for the architecture of such systems?

I think we should be worried for at least three reasons:

  1. Whilst the increased spam that is seemingly inevitable following an incident such as this is mildly annoying a deeper concern is how could the criminal elements who now have information on the places I do business on the web put this information together to learn more about me and possibly construct a more sophisticated phishing attack? Unfortunately it’s not only the good guys that have access to data analytics tools.
  2. Many people probably have a single password to access multiple web sites. The criminals who now have your email as well as knowledge of which sites you do business at only have to crack one of these and potentially have access to multiple sites, some of which may have more sensitive information.
  3. Finally, how come information I trusted to well known (and by implication ‘secure’) brands and their web sites has been handed over to a third party without me even knowing about it? Can I trust those companies not to be doing this with more sensitive information, and should I be withdrawing my business from them? This is a serious breach of trust and I suspect that many of these brands’ own reputations will have been damaged.

So what are the impacts to us as IT architects in a case like this? Here are a few:

  1. As IT architects we make architectural decisions all the time. Some of these are relatively trivial (I’ll assign that function to that component etc) whereas others are not. Clearly decisions about which part of the system to entrust personal information to are not trivial. I always advocate documenting significant architectural decisions in a formal way where all the options you considered are captured as well as the rationale and implications behind the decision you made. As our systems get ever more complex and distributed the implications of particular decisions become harder to quantify. I wonder how many architects consider the implications for a company’s reputation of entrusting even seemingly low-grade personal information to third parties?
  2. It is very likely that incidents such as this are going to result in increased legislation that covers personal information just like there is legislation on Payment Card Industry (PCI) standards. This will demand more architectural rigour as new standards essentially impose new constraints on how we design our systems.
  3. As we trundle slowly to a world where more and more of our data is to be held in the cloud using a so called multi-tenant deployment model it’s surely only a matter of time before unauthorised access to one of our cloud data stores will result in access to many other data sources and a wealth of our personal information. What is needed here is new thinking around patterns of layered security that are tried and tested and, crucially, which can be ‘sold’ to consumers of these new services so they can be reassured that their data is secure. As Software-as-a-Service (SaaS) takes off and new providers join the market we will increasingly need to be reassured they are to be trusted with our personal data. After all if we cannot trust existing, large corporations how can we be expected to trust new, small startups?
  4. Finally, I suspect it is only a matter of time before legislation aimed at systems designers themselves makes us as IT architects liable for some of those architectural decisions I mentioned earlier. I imagine there are already several lawyers engaged by the parties whose customers’ email addresses were obtained and whose trust and reputation with those customers may now be compromised. I wonder if some of those lawyers will be thinking about the design of such systems in the first place and, by implication, the people who designed those systems?
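
To make the first point concrete, here is a minimal sketch in Python of the kind of structured record I have in mind when I talk about documenting significant architectural decisions: the question, every option considered, the rationale and the implications, all captured together. The class and field names are entirely my own invention for illustration, not any particular method or tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArchitecturalDecision:
    """A lightweight, formal record of one significant architectural decision."""
    identifier: str
    question: str                  # the problem this decision addresses
    options: List[str]             # every option that was considered
    chosen: str                    # the option that was selected
    rationale: str                 # why it was selected
    implications: List[str] = field(default_factory=list)  # consequences to track

# Example: the (non-trivial) decision about where personal data lives.
decision = ArchitecturalDecision(
    identifier="AD-017",
    question="Which component should hold customers' email addresses?",
    options=["In-house CRM database", "Third-party email marketing service"],
    chosen="Third-party email marketing service",
    rationale="Lower cost and faster time to market.",
    implications=[
        "Company reputation now depends on the third party's security.",
        "A breach at the provider exposes our customers' data.",
    ],
)

# A simple sanity check: the choice must be one of the options considered.
assert decision.chosen in decision.options
```

Even a record this small forces the awkward questions into the open: a reviewer scanning the implications list can see exactly what reputational risk was accepted, and why.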

Open vs. Closed Architectures

There has been much Apple bashing of late, in cyberspace as well as in the ‘dead-wood’ parts of the press, to the extent that some people are now turning on those who own one of Apple’s wunder-devices (an iPad), accusing them of being “selfish elites”. Phew! I thought it was a typically British trait to knock anything and anyone that was remotely successful, but it now seems the whole world has it in for Mr Jobs’ empire. Back in the pre-Google days of 1994 Umberto Eco declared that the Macintosh is Catholic and that DOS is Protestant:

Indeed, the Macintosh is counter-reformist and has been influenced by the ratio studiorum of the Jesuits. It is cheerful, friendly, conciliatory; it tells the faithful how they must proceed step by step to reach — if not the kingdom of Heaven — the moment in which their document is printed.

The big gripe most people have with Apple is their closed architecture, which controls not only who is allowed to write apps for their operating systems but also who can produce devices that actually run those operating systems (er, that would be Apple). It’s one of life’s great anomalies that Apple is so successful building products with closed architectures when almost everyone would agree that open architectures and systems are ultimately the way to go since, in the end, they lead to greater innovation, wider usage and, presumably, more profit for those involved. The classic case of an open architecture leading to widespread usage is that of the original IBM Personal Computer. Because IBM wanted to fast-track its introduction, many of the parts were, unusually for IBM, provided by third parties, including, most significantly, the processor (from Intel) and the operating system (from the fledgling Microsoft). This, together with the fact that the technical information on the innards of the computer was made publicly available, essentially made the IBM PC ‘open’. This more than anything gave it an unprecedented penetration into the marketplace, allowing many vendors to provide IBM PC ‘clones’.

There is of course a ‘dark side’ to all of this. Thousands of vendors all providing hardware add-ons and extensions, as well as applications, resulted in huge inter-working problems which, in the early days at least, required you to be something of a computer engineer if you wanted to get everything working together. This is where Apple stepped in. As Umberto Eco said, Apple guides the faithful every step of the way. What they sacrifice in openness and choice they gain in everything working out of the box, sometimes in three simple steps.

So, is open always best when it comes to architecture or does it sometimes pay to have a closed architecture? What does the architect do when faced with such a choice? Here’s my take:

  • Know your audience. The early PCs, like it or not, were bought by technophiles who enjoyed technology for the sake of technology. The early Macs were bought by people who just wanted to use computers to get the job done. In those days both had a market.
  • Know where you want to go. Apple stuck solidly with creating user-friendly (not to mention well-designed) devices that people would want to own and use. The plethora of PC providers (which there soon were) couldn’t, by and large, give a damn about design. They just wanted to sell as many devices as possible and let others worry about how to stitch everything together. This in itself generated a huge industry which, in a strangely self-fulfilling way, led to more devices and world domination for the PC, leaving Apple in a niche market. Openness certainly seemed to be paying.
  • Know how to capitalise on your architectural philosophy. Ultimately openness leads to commoditization. When anyone can do it, price dominates and the cheapest always wins. If you own the space then you control the price. Apple’s recent success has come not from capitalising on an open architecture but from capitalising on good design, which has enabled it to create high-value, desirable products, showing that good design trounces an open architecture.

So how about combining the utility of an open architecture with the significance of a well-thought-through architecture to create a great design? Which, funnily enough, is what Dan Pink meant by this:

Significance + Utility = Design

Huh, beaten to a good idea again!

It’s Only Television But I Like It

Yes, I know it’s a television program, and yes, I know they are playing up to the camera, and yes, I know we only see the ‘edited highlights’, but Jamie’s Dream School on Channel 4 last night was an exemplar of how to deliver motivational talks to an uninterested audience. As I discussed last time, the ‘teachers’ (actually people at the leading edge of their fields) are truly inspirational, passionate individuals who use every trick in the book to engage with and inspire their students. Not only that, they are incredibly humble, as typified by one of the pupils asking Robert Winston if he “had ever cured anything”, to which he replied that he “thought they had helped with some advances, yes”. As well as all this inspirational and motivational teaching, you will notice there is not a single PowerPoint slide in sight. It’s all about naked presenting (well, apart from the odd prop or two) and storytelling.

I’ve recently been reading Nancy Duarte’s book Resonate, which looks at how storytelling, as practised by great writers and film-makers, can be used by presenters to really engage with their audience. If you want a book that helps you with presentations and is something other than the boring ‘how to’ guides on structuring PowerPoint presentations then it’s definitely worth a read.

So what’s this got to do with IT architecture? Nothing and everything! At one level architecture is just a pile of models and diagrams describing ways of solving business problems. However, architecture also needs to be brought alive if the ideas it encompasses are to be explained, and the costs of implementing it justified, to non-technical people. Explaining and presenting architecture is probably one of the most important aspects of the architect’s role, and communication skills should definitely be up there among the key competencies possessed by architects. Without them, architectures will just remain a bunch of ideas gathering virtual dust in a modeling tool.

On Lego, Granularity, Jamie Oliver and Architecture

There’s a TV program running here in the UK called Jamie’s Dream School in which the chef, entrepreneur, restaurateur and Sainsbury’s promoter Jamie Oliver brings together some of Britain’s most inspirational individuals (Robert Winston, Simon Callow, Rankin and Rolf Harris, to name but four) to see if they can persuade 20 young people who’ve left school with little to show for the experience to give education a second chance. The central theme seems to be one of hands-on student involvement and live demos, the more outrageous the better. The highlight so far was Robert Winston hacking away at rats and pigs with scalpels and a circular saw, resulting in several students vomiting in the playground.

This set me thinking about how best to demonstrate software architecture to a bunch of students of a similar age to those in the Jamie Oliver program, for a talk I gave at a UK university last week. Much has been written about the analogy between LEGO (R) and services (see this article from Zapthink and another from ZDNet, for example). Okay, it may not be quite as imaginative as pig carcasses being hacked about, but LEGO was the best I had to hand! Here’s how the demo works:

  1. First I give them my favourite definition of architecture, namely: Architects take existing components and assemble them in interesting and important ways. (Seth Godin).
  2. Then I invite an unsuspecting candidate to come and assemble the body (importantly excluding the wheels) of a car out of LEGO (actually Duplo as it’s bigger), making a big thing of tipping a bag of bricks out onto the table. This they usually do without too much hassle, the key learning point being that they have created an “interesting” construct out of “existing components”.
  3. I then ask them to add some wheels and tip out a bag of K’NEX (R). As I’m sure even non-parents know, K’NEX and LEGO are essentially different “systems” and the two don’t (easily) connect to each other. This usually ends in bemused looks and a good deal of fiddling around with wheels and bricks, trying to figure out how to make the two systems connect.

Depending on how much time and energy you have as well as the attention span of the students there are lots of great learning points to be made here. In order of increasing depth these are:

  1. LEGO (components) have a well defined interface and can easily be assembled in lots of interesting ways.
  2. K’NEX is a different system and has been designed with a different interface. K’NEX and LEGO were not designed to work with each other. One of the jobs of an architect is to watch out for incompatible interfaces and figure out ways of making them work together, possibly using a third-party product, e.g. Sellotape (R). I guess an extension to this demo could be a roll of this.
  3. It may be in a component provider’s interest to use different interfaces, as the resulting vendor lock-in means you have to keep going back to that vendor for more components.
  4. Granularity (in the case of LEGO, the number of “nobbles”) is important. Small bricks (few nobbles, fine-grained) may be very reusable but you need lots and lots of them to do anything interesting. Conversely, LEGO has now taken to making quite specialised pieces (not “bricks” any more but coarse-grained pieces) that perform one function well (the LEGO rocket for example) but cannot be reused so easily. The optimum for reusability is somewhere in between these two.
  5. LEGO may be aimed at children who, with relatively little expertise or training, may be able to assemble interesting things but they are not about to build LEGOland. For that you need an architect!
  6. Finally, if you are feeling really mean, disassemble your student’s lovingly built construction and then ask her to rebuild it in exactly the same way. Chances are that even for a relatively simple system she won’t be able to. What might have helped was some type of document describing what she had done.
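
For the software-minded, learning points 1 and 2 map neatly onto the classic Adapter pattern: two component “systems” expose incompatible interfaces, and a third-party piece (the Sellotape of the software world) bridges them. Here is a minimal sketch in Python; all the class and method names are my own invention for illustration:

```python
class LegoBrick:
    """A LEGO-style component: connects via studs ('nobbles')."""
    def attach_to_studs(self, studs: int) -> str:
        return f"clicked onto {studs} studs"

class KnexWheel:
    """A K'NEX-style component: connects via rods, not studs."""
    def clip_to_rod(self, rod_id: str) -> str:
        return f"clipped to rod {rod_id}"

class SellotapeAdapter:
    """Third-party 'product' that lets a K'NEX wheel present a LEGO interface."""
    def __init__(self, wheel: KnexWheel):
        self.wheel = wheel

    def attach_to_studs(self, studs: int) -> str:
        # Translate the LEGO-style call into a K'NEX-style one.
        return self.wheel.clip_to_rod(f"taped-over-{studs}-studs")

# The car body only knows how to talk to things with a LEGO interface...
def fit_wheel(component) -> str:
    return component.attach_to_studs(4)

print(fit_wheel(LegoBrick()))                    # prints "clicked onto 4 studs"
print(fit_wheel(SellotapeAdapter(KnexWheel())))  # prints "clipped to rod taped-over-4-studs"
```

The car body never learns that K’NEX exists, which is exactly the point: the adapter absorbs the incompatibility so the rest of the system can stay ignorant of it.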

I’m sure there are other interesting analogies to be drawn, but I’ll finish by saying that this is not quite as trivial as it sounds. Not only was this a good learning exercise for my students, I also happen to know a client who is using building blocks like LEGO to help their architects architect systems. The key thing it helps demonstrate is the importance of collaboration in assembling such systems.