The Innovation Conundrum and Why Architecture Matters

A number of items in the financial and business news last week set me thinking about why architecture matters to innovation. Both IBM and Apple announced their second quarter results. IBM’s revenue for Q2 2011 was $26.7B, up 12% on the same quarter last year, and Apple’s revenue for the same quarter was $24.67B, an incredible 83% jump on the same quarter last year. As I’m sure everyone now knows, IBM is 100 years old this year whereas Apple is a mere 35 years old. It looks like both Apple and IBM will become $100B companies this year if all goes to plan (IBM having missed joining the $100B club by a mere $0.1B in 2010). Coincidentally a Forbes article also caught my eye. Forbes listed the top 100 innovative companies. Top of the list was salesforce.com, Apple were number 5 and IBM were, er, not in the top 100! So what’s going on here? How can a company that pretty much invented the mainframe and the personal computer, helped put a man on the moon, invented the scanning tunneling microscope and scratched the letters IBM onto a nickel crystal one atom at a time, and, most recently, took artificial intelligence a giant leap forward with Watson not be classed as innovative?

Perhaps the clue lies in how innovation is measured. The Forbes article measures innovation by an “innovation premium”, which it defines as:

A measure of how much investors have bid up the stock price of a company above the value of its existing business based on expectations of future innovative results (new products, services and markets).

So it would appear that, going by this definition of innovation, investors don’t expect IBM to bring any innovative products or services to market, whereas the world will no doubt be inundated with all sorts of shiny iThingys over the course of the next year or so. But is that really all there is to being innovative? I would venture not.

The final article that caught my eye was about Apple’s cash reserves. Depending on which source you read these stand at around $60B and, as anyone who has cash to invest knows, sitting on it is not the best way of getting good returns! Companies generally have a few options when they amass so much cash: pay out higher dividends to shareholders, buy back their own shares, invest more in R&D, or go on a buying spree and acquire companies that fill holes in their portfolio. Whilst the last of these is a good way of quickly entering markets a company may not be active in, it tends to backfire on the innovation premium, as mergers and acquisitions (M&A) are not, at least initially, seen as bringing anything new to market. M&A has been IBM’s approach over the last decade or so. As well as the big software brands like Lotus, Rational and Tivoli, IBM has more recently bought lots of smaller software companies such as Cast Iron Systems, SPSS and Netezza.

A potential problem with this approach is that people don’t want to buy a “bag of bits” and have to assemble their own solutions Lego style. What they want are business solutions that address the very real and complex (wicked, even) problems they face today. This is where the software architect comes into his or her own. The role of the software architect is to take existing components and assemble them “in interesting and important ways”. To that I would add innovative ways as well. Companies no longer want the same old solutions (ERP system, contact management system etc) but new and innovative systems that solve their business problems. This is why we have one of the more interesting jobs out there today!

A Service Based Development Process – Part 4

The first three of these blog posts (here, here and here) have looked at the process behind developing business processes and services that could be deployed into an appropriate environment, including a cloud (private, public or hybrid). In this final post I’ll take a look at how to make this ‘real’ by describing an architecture that could be used for developing and deploying services, together with some software products for realising that architecture. The diagram below shows both the development-time and the run-time logical architecture of a system that could be used for both developing and deploying business processes and services. It was created using the sketching capability of Rational Software Architect.

Here’s a brief summary of what each of the logical components in this architecture sketch does (i.e. its responsibilities):

  • SDLC Repository – The description of the SDLC goes here. That is, the work breakdown structure: a description of all the phases, activities and tasks, the work products to be created by each task, and the roles used to create them. This would be created and modified by the actor Method Author using an SDLC Developer tool. The repository would typically include guidance (examples, templates, guidelines etc) that shows how the SDLC is to be used and how to create work products.
  • SDLC Developer – The tool used by the Method Author to compose new processes or modify existing ones. This tool publishes the SDLC into the SDLC Repository.
  • Development Artefacts Repository – This is where the work products that are created on an actual project (i.e. ‘instances’ of the work products described in the SDLC) get placed.
  • Business Process Developer – The tool used to create and modify business processes.
  • IT Service Developer – The tool used to create and modify services.
  • Development Repository – This is where ‘code’ level artefacts get stored during development. This could be a subset of the Development Artefacts Repository.
  • Runtime Services Repository – Services get published here once they have been certified and can be released for general use.
  • Process Engine – Executes the business process.
  • Enterprise Service Bus – Runs the services and provides adapters to external or legacy systems.
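
To make the run-time half of this concrete, here’s a minimal sketch in Java of how the Process Engine, Enterprise Service Bus and Runtime Services Repository might collaborate. All of the names and signatures are hypothetical, purely to illustrate the responsibilities listed above:

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: a published, certified service and where to reach it.
    class ServiceDescriptor {
        final String name;
        final String endpoint;
        ServiceDescriptor(String name, String endpoint) {
            this.name = name;
            this.endpoint = endpoint;
        }
    }

    // Runtime Services Repository: services are published here once certified.
    class RuntimeServicesRepository {
        private final Map<String, ServiceDescriptor> services = new HashMap<>();
        void publish(ServiceDescriptor service) { services.put(service.name, service); }
        ServiceDescriptor lookup(String name) { return services.get(name); }
    }

    // Enterprise Service Bus: runs services and adapts to external/legacy systems.
    interface EnterpriseServiceBus {
        String invoke(ServiceDescriptor service, String request);
    }

    // Process Engine: executes a business process, one (simplified) step at a time.
    class ProcessEngine {
        private final RuntimeServicesRepository repository;
        private final EnterpriseServiceBus bus;

        ProcessEngine(RuntimeServicesRepository repository, EnterpriseServiceBus bus) {
            this.repository = repository;
            this.bus = bus;
        }

        String executeStep(String serviceName, String request) {
            ServiceDescriptor service = repository.lookup(serviceName);
            return bus.invoke(service, request);
        }
    }

The division of responsibilities is the point of the sketch: the engine knows about process steps, the repository knows what has been certified for use, and the bus knows how to reach things.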

Having described the logical components, the next step is to show how they can be realised using one or more vendors’ products. No surprise that I am going to show how they map to products from IBM’s portfolio; clearly your own particular requirements (including who’s on your preferred vendor list, of course) may dictate that you choose other vendors’ products. Nearly all the IBM product links allow you to download trial versions that you can use to try out this approach.

  • Rational Method Composer – This enables you to manage, author, evolve, measure and deploy effective processes (SDLCs) tailored to your project needs. It is based on Eclipse. Rational Method Composer allows publishing to a web site, so it effectively covers the needs of both the SDLC Repository and SDLC Developer components.
  • IBM Business Process Manager – This is the latest name for IBM’s combined development and runtime business process server. As well as a business process runtime, an ESB and a BPM repository, it also includes design tools for building processes and services. The Process Designer allows business authors to build fully executable BPMN processes that include user interfaces for human interaction. The Integration Designer enables IT developers to develop services that easily plug into processes to provide integration and routing logic, data transformation and straight-through BPEL subprocesses. See this whitepaper for more information or click here for the IBM edition of the book BPM for Dummies. IBM Business Process Manager realises the following components: Business Process Developer, IT Service Developer, Development Repository, Process Engine and Enterprise Service Bus.
  • WebSphere Service Registry and Repository – Catalogs and organizes assets and services, allowing customers to get a handle on what assets they have and making them easy to locate and distribute. It also enables policy management across the SOA lifecycle, spanning various policy domains including runtime policies as well as service governance policies. Included in the Advanced Lifecycle Edition is Rational Asset Manager, which provides lifecycle management capabilities to manage asset workflow from concept through development, build and deployment to retirement, as well as Build Forge integration. WebSphere Service Registry and Repository realises the Development Artefacts Repository as well as the Runtime Services Repository.

So, there it is: an approach for developing services, an initial architecture for developing and deploying both business processes and services, and some actual products to get you started. Please feel free to comment here or in any of my links if you have anything you’d like to say.

Open vs. Closed Architectures

There has been much Apple bashing in cyberspace, as well as in the ‘dead-wood’ parts of the press, of late. To the extent that some people are now turning on those that own one of Apple’s wunder-devices (an iPad), accusing them of being “selfish elites”. Phew! I thought it was a typically British trait to knock anything and anyone that was remotely successful but it now seems that the whole world has it in for Mr Jobs’ empire. Back in the pre-Google days of 1994 Umberto Eco declared that:

the Macintosh is Catholic and that DOS is Protestant. Indeed, the Macintosh is counter-reformist and has been influenced by the ratio studiorum of the Jesuits. It is cheerful, friendly, conciliatory; it tells the faithful how they must proceed step by step to reach — if not the kingdom of Heaven — the moment in which their document is printed.

The big gripe most people have with Apple is their closed architecture, which controls not only who is allowed to write apps for their OSs but also who can produce devices that actually run those OSs (er, that would be Apple). It’s one of life’s great anomalies that Apple is so successful at building products with closed architectures when almost everyone would agree that open architectures and systems are ultimately the way to go as, in the end, they lead to greater innovation, wider usage and, presumably, more profit for those involved. The classic case of an open architecture leading to widespread usage is that of the original IBM Personal Computer. Because IBM wanted to fast-track its introduction, many of the parts were, unusually for IBM, provided by third parties including, most significantly, the processor (from Intel) and the operating system (from the fledgling Microsoft). This, together with the fact that the technical information on the innards of the computer was made publicly available, essentially made the IBM PC ‘open’. This more than anything gave it an unprecedented penetration into the marketplace, allowing many vendors to provide IBM PC ‘clones’.

There is of course a ‘dark side’ to all of this. Thousands of vendors all providing hardware add-ons and extensions, as well as applications, resulted in huge inter-working problems which, in the early days at least, required you to be something of a computer engineer if you wanted to get everything working together. This is where Apple stepped in. As Umberto Eco said, Apple guides the faithful every step of the way. What they sacrifice in openness and choice they gain in everything working out of the box, sometimes in three simple steps.

So, is open always best when it comes to architecture or does it sometimes pay to have a closed architecture? What does the architect do when faced with such a choice? Here’s my take:

  • Know your audience. The early PCs, like it or not, were bought by technophiles who enjoyed technology for the sake of technology. The early Macs were bought by people who just wanted to use computers to get the job done. In those days both had a market.
  • Know where you want to go. Apple stuck solidly with creating user-friendly (not to mention well-designed) devices that people would want to own and use. The plethora of PC providers (which there soon were) couldn’t, by and large, give a damn about design. They just wanted to sell as many devices as possible and let others worry about how to stitch everything together. This in itself generated a huge industry which, in a strangely self-fulfilling way, led to more devices and to world domination for the PC, leaving Apple in a niche market. Openness certainly seemed to be paying.
  • Know how to capitalise on your architectural philosophy. Ultimately openness leads to commoditization. When anyone can do it, price dominates and the cheapest always wins. If you own the space then you control the price. Apple’s recent success has come not from capitalising on an open architecture but from capitalising on good design, which has enabled it to create high-value, desirable products, showing that good design trounces an open architecture.

So how about combining the utility of an open architecture with the significance of a well-thought-through architecture to create a great design? Which, funnily enough, is what Dan Pink meant by this:

Significance + Utility = Design

Huh, beaten to a good idea again!

Watson, Turing and Clarke

So what do these three have in common?

  • Thomas J. Watson Sr, CEO and founder of IBM (100 years old this year). Currently has a computer named after him.
  • Alan Turing, mathematician and computer scientist (100 years old next year). Has a famous test named after him.
  • Arthur C. Clarke, scientist and writer (100 years old in 2017). Has a set of laws named after him (and is also the creator of the fictional HAL computer in 2001: A Space Odyssey).

Unless you have moved into a hut deep in the Amazon rain forest you cannot have missed the publicity over IBM’s ‘Watson’ computer having competed in, and won, the American TV quiz show Jeopardy. I have to confess that until last week I’d not heard of Jeopardy, possibly because a) I’m not a fan of quizzes, b) I’m not American and c) I don’t watch that much television. To those as ignorant as me on these matters, the unique thing about Jeopardy is that contestants are presented with clues in the form of answers, and must phrase their responses in the form of a question.

This, it turns out, is what makes this particular quiz such a hard nut for a computer to crack. The clues in the ‘question’ rely on subtle meanings, puns and riddles; something humans excel at and computers do not. Unlike IBM’s previous game challenger Deep Blue, which defeated chess world champion Garry Kasparov, it’s not sufficient to rely on raw computing ‘brute force’; this time the computer has to interpret meaning and the nuances of human language. So has Watson achieved, met or passed the Turing test (which is basically a measure of whether a computer can demonstrate intelligence)?

The answer is almost certainly ‘no’. Turing’s test is a measure of a machine’s ability to exhibit human intelligence. The test, as originally proposed by Turing, was that a questioner should ask a series of questions of both a human being and a machine and see whether he can tell which is which from the answers they give. The idea being that if the two were indistinguishable then the machine and the human must both appear to be as intelligent as each other.

As far as I know Turing never stipulated any constraint on the range or type of questions that could be asked, which leads us to the nub of the problem. Watson is supremely good at answering Jeopardy-type questions, just as Deep Blue was good at playing chess. However, neither could do what the other does (at least not as well); each has been programmed for its given task. Given that Watson is actually a cluster of POWER7 servers, any suitably general-purpose computer that could win at Jeopardy, play chess and exhibit the full range of human emotions and frailties needed to fool a questioner would presumably occupy the area of several football pitches and consume the power of a small city.

That however misses the point completely. The ability of a computer to almost flawlessly answer a range of questions, phrased in a particular way, on a range of different subject areas, blindingly fast, has enormous potential in the fields of medicine, law and other disciplines where questions based on a huge foundation of knowledge built up over decades need to be answered quickly (for example in accident and emergency, where a quick diagnosis may literally be a matter of life and death). This indeed is one of IBM’s Smarter Planet goals.

Which brings us to Clarke’s third law, which states that “any sufficiently advanced technology is indistinguishable from magic”. This is surely something that is attributable to Watson. The other creation of Clarke, of course, is HAL, the computer aboard the spaceship Discovery One on a trip to Saturn, which becomes overwhelmed by guilt at having to keep secret the true nature of the spaceship’s mission and starts killing members of the crew. The point of Clarke’s story (or one of them) is that the downside of a computer that is indistinguishable from a human being is that the computer may also end up mimicking human frailties and weaknesses. Maybe it’s a good job Watson hasn’t passed Turing’s test then?

Think Like An Architect

A previous entry suggested hiring an architect was a good idea because architects take existing components and assemble them in interesting and important ways. So how should you “think architecturally” in order to create things that are not only interesting but also solve practical, real-world problems? Architectural thinking is about balancing three opposing “forces”: what people want (desirability), what technology can provide (feasibility) and what can actually be built given the constraints of cost, resource and time (viability).

It is basically the role of the architect to help resolve these forces by assembling components “in interesting ways”. There is however a fourth aspect, one which is often overlooked but which is what separates great architecture from merely good architecture: the aesthetics of the architecture.

Aesthetics is what separates a MacBook from a Dell, the Millau Viaduct in France from the Yamba Dam Bridge in Japan and 30 St Mary Axe from the Ryugyong Hotel in North Korea. Aesthetics is about good design, which is what you get when you add ‘significance’ (aesthetic appeal) to ‘utility’ (something that does the job). IBM, the company I work for, is 100 years old this year (check out the centennial video here) and Thomas Watson Jr famously said that “good design is good business”. Watson knew what Steve Jobs, Tim Brown and many other creative designers know: aesthetics is not only good for the people who use or acquire these computers/buildings/systems, it’s also good for the businesses that create them. In a world of over-abundance good design/architecture both differentiates companies and gives them a competitive advantage.

Software Development’s Best Kept Secret

A few people have asked what I meant in my previous entry when I said we should be “killing off the endless debates of agile versus waterfall”. Don’t get me wrong, I’m a big fan of doing development in as efficient a way as possible; after all, why would you want to be doing things in a ‘non-agile’ way! However I think the agile versus waterfall debate really does miss the point. If you have ever worked on anything but the most trivial of software development projects you will quickly realise that there is no such thing as a ‘one size fits all’ software delivery lifecycle (SDLC) process. Each project is different and each brings its own challenges in terms of the best way to specify, develop, deliver and run it. Which brings me to the topic of this entry, the snappily titled Software and Systems Process Engineering Metamodel or ‘SPEM’ (but not SSPEM).

SPEM is a standard owned by the Object Management Group (OMG), the body that also owns the Unified Modeling Language (UML), the Systems Modeling Language (SysML) and a number of other open standards. Essentially SPEM gives you the language (the metamodel) for defining software and system processes in a consistent and repeatable way. SPEM also allows vendors to build tools that automate the way processes are defined and delivered. Just as vendors have built system and software modeling tools based around UML, so too can they build delivery process modeling tools based around SPEM.

So what exactly does SPEM define and why should you be interested in it? For me there are two reasons why you should look at adopting SPEM on your next project.

  1. SPEM separates out what you create (i.e. the content) from how you create it (i.e. the process) whilst at the same time providing instructions for how to do these two things (i.e. guidance).
  2. SPEM (or at least tools that implement SPEM) allows you to create a customised process by varying what you create and when you create it.

Here’s a diagram to explain the first of these.

SPEM Method Framework
The SPEM Method Framework represents a consistent and repeatable approach to accomplishing a set of objectives based on a collection of well-defined techniques and best practices. The framework consists of three parts:
  • Content: represents the primary reusable building blocks of the method that exist outside of any predefined lifecycle. These are the work products that are created as a result of roles performing tasks.
  • Process: assembles method content into a sequence or workflow (represented by a work breakdown structure) used to organise the project and develop a solution. Process includes the phases that make up an end-to-end SDLC, the activities that phases are broken down into as well as reusable chunks of process referred to as ‘capability patterns’.
  • Guidance: is the ‘glue’ which supports content development and process execution. It describes techniques and best-practice for developing content or ‘executing’ a process.

As well as giving us the ‘language’ for building our own processes, SPEM also defines the rules for building those processes. For example, phases consist of other phases or activities, activities group tasks, tasks take work products as input and output other work products, and so on. A rough sketch of these rules in code is shown below.
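
Here’s a minimal sketch of that subset of the rules in Java. The class and field names are mine, purely for illustration; the real SPEM 2 metamodel is considerably richer than this:

    import java.util.ArrayList;
    import java.util.List;

    // Method content: what gets created, and by whom.
    class WorkProduct { String name; }
    class Role { String name; }

    // A task is performed by a role; it takes work products as input
    // and outputs other work products.
    class Task {
        Role performer;
        List<WorkProduct> inputs = new ArrayList<>();
        List<WorkProduct> outputs = new ArrayList<>();
    }

    // Process structure: activities group tasks...
    class Activity {
        List<Task> tasks = new ArrayList<>();
    }

    // ...and phases consist of other phases or activities.
    class Phase {
        List<Phase> phases = new ArrayList<>();
        List<Activity> activities = new ArrayList<>();
    }

    // Guidance (examples, templates, guidelines) supports both content and process.
    class Guidance { String kind; String description; }

Keeping the content classes (work products, roles, tasks) separate from the process classes (phases, activities) is exactly what lets a tool reassemble the same content into quite different lifecycles.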

This is all well and good, you might say, but I don’t want to have to laboriously build a whole process every time I want to run a project. This is where the second advantage of using SPEM comes in. A number of vendors (IBM and Sparx to name two) have built tools that not only automate the process of building a process but also contain one or more ‘ready-rolled’ processes to get you started. You can either use these ‘out of the box’, extend them by adding your own content, or start from scratch (not recommended for novices). What’s more, the Eclipse Foundation has developed an open source tool, called the Eclipse Process Framework (EPF), that not only gives you a tool for building processes but also comes with a number of existing processes, including OpenUP (an open version of the Rational Unified Process) as well as Scrum and DSDM.

If you download and install EPF together with the appropriate method libraries you can use these as the basis for creating your own processes. Here’s what EPF looks like when you open the OpenUP SDLC.

EPF and OpenUP

The above view shows the browsing perspective of EPF; however there is also an authoring perspective which allows you not only to reconfigure a process to suit your own project but also to add and remove content (i.e. roles, tasks and work products). Once you have made your changes you can republish the new process (as HTML) and anyone with a browser can then view the process together with all of its work products and, most crucially, the associated guidance (i.e. examples, templates, guidelines etc) that allows you to use the process in an effective way.

This is, I believe, the true power of using a tool like EPF (or IBM’s Rational Method Composer, which comes preloaded with the Rational Unified Process). You can take an existing SDLC (one you have created or one you have obtained from elsewhere) and customise it to meet the needs of your project. The amount of agility, the number of iterations and so on will depend on the intricacies of your project and not on what some method guru tells you you should be using!

By the way for an excellent introduction and overview of EPF see here and here. The Eclipse web site also contains a wealth of information on EPF. You can also download the complete SPEM 2 specification from the OMG web site here.

Architecture Drill Down in the UML

Solution Architects need to create models of the systems they are building for a number of reasons:

  • Models help to visualise the component parts, their relationships and how they will interact.
  • Models help stakeholders understand how the system will work.
  • Models, defined at the right level of detail, enable the implementers of the system to build the component parts in relative isolation provided the interfaces between the parts are well defined.

These models need to show different amounts of detail depending on who is looking at them and what sort of information you expect to get from them. Grady Booch says that “a good model excludes those elements that are not relevant to the given level of abstraction”. Every system can be described using different views and different models. Each model should be “a semantically closed abstraction of the system” (that is, complete at whatever level it is drawn). Ideally models will be both structural, emphasizing the organization of the system, and behavioral, emphasizing the dynamic aspects of the system.

To support different views and allow models to be created at different levels of abstraction I use a number of different diagrams, created using the Unified Modeling Language (UML) in an appropriate modeling tool (e.g. Rational Software Architect or Sparx Enterprise Architect). Using a process of “architecture drill-down” I can get both high level views as well as detailed views that are relevant to the level of abstraction I want to see. Here are the views and UML diagrams I create.

  • Enterprise View (created with a UML Package diagram). This sets the context of the whole enterprise and shows actors (users and other external systems) who interact with the enterprise.
  • System View (Package diagram). This shows the context of an individual system within the enterprise and shows internal workers and other internal systems.
  • Subsystem View (Package diagram). This shows the breakdown of one of the internal systems into subsystems and the dependencies between them.
  • Component View (Component diagrams and Sequence diagrams). This shows the relationships between components within the subsystems, both static dependency type relationships (through the UML Component diagram) as well as interactions between components (through the UML Sequence diagram).
  • Component Composition View (Composite Structure diagram). This shows the internal structure of a component and the interfaces it provides (see the short code sketch after this list).
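
The pay-off of the component-level views is the point made earlier: well-defined interfaces let the parts be built in relative isolation. Here’s a hypothetical Java fragment for the hotel system, with every name invented for illustration:

    // The interface is the contract the Component View captures.
    interface ReservationService {
        String reserveRoom(String guestName, String roomType);
    }

    // A component that depends only on the contract, so its implementers can
    // work in isolation from whoever builds the reservation component.
    class CheckInComponent {
        private final ReservationService reservations;

        CheckInComponent(ReservationService reservations) {
            this.reservations = reservations;
        }

        String checkInWalkInGuest(String guestName) {
            return reservations.reserveRoom(guestName, "standard");
        }
    }

    // A stand-in implementation, good enough to test CheckInComponent against
    // long before the real component exists.
    class StubReservationService implements ReservationService {
        public String reserveRoom(String guestName, String roomType) {
            return "RES-0001"; // canned confirmation code
        }
    }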

Note that a good tool will link all these together and ideally allow them to be published as HTML, allowing people without the tool to use them and also navigate through the different levels. Illustrative examples of the first three of the diagrams mentioned above are shown below. These give increasing levels of detail for a hypothetical hotel system. Click on the picture to get a bigger view.

Enterprise View
System View
Subsystem View

In the actual tool, Sparx Enterprise Architect in this case, each of these diagrams is linked, so when I click on the package in the first it opens up the second and so on. When published as HTML this “drill-down” is maintained as hyperlinks, allowing for easy navigation and review of the architecture.

Skills for Building a Smarter Planet: A Manifesto

IBM is currently doing a big advertising campaign called Smarter Planet. I recently put together a lecture for a group of university students which tried to weave together a number of themes I am currently interested in.

Here’s my manifesto:

In the 21st century IT professionals must adopt more of a systems thinking approach if they are to solve the wicked problems the world faces. If we are truly going to build a smarter planet then we need a new breed of versatilists who are able to solve these problems.

There are a number of themes here I plan to return to in future posts.

Does Architecture Matter Any More?

I’ve been reading up on the whole cloud/mashups/social computing thing and the above question occurred to me. Within the context of what are essentially several new architectural styles, what is the role of the architect in all of this and what exactly is the architecture he or she is trying to create? In attempting to answer this question my mind turned to an IT architecture course that I have been lucky enough to teach on a number of occasions both inside and outside IBM. The course is called Architectural Thinking. I imagine (hope) that some people reading this will have attended that class, and it occurred to me just how clever the people who created the first version of it back in 1999 were. The key word of course is thinking. It’s not a class about a particular style of architecture or a particular architectural process, and it is certainly (thankfully) not about any particular technology. The key part of the class is about how to think about problems and create architectures, often using an existing style or pattern, that solve a client’s business problem. The main axiom is that the architecture should drive the technology and not the other way round. In other words it’s about the fundamentals that never go out of style. Or, to paraphrase Grady Booch:

Architectural styles come and go but the fundamentals (crisp abstractions; clear separation of concerns; balanced distribution of responsibilities; simplicity) endure and never go out of style.

This course is available to clients outside of IBM, so if anyone is interested in running the class get in touch with me here.

Wot No Blogging?

I think I’ve only just realised how much of a commitment keeping a “proper” blog takes (that is, one that doesn’t just regurgitate information discovered elsewhere without adding some additional value but, better still, creates brand new content)! I’ve been distracted by other things of late (anyone looking at where I currently work and who is aware of what is happening in the UK economy at present may be able to guess what), but this month I intend to return with, I hope, a vengeance, and give myself the target of writing at least two meaningful, and hopefully valuable, entries each month. This one does not count by the way.