Five Software Architectures That Changed The World

Photo by Kobu Agency on Unsplash

“Software is the invisible thread and hardware is the loom on which computing weaves its fabric, a fabric that we have now draped across all of life”.

Grady Booch

Software, although an “invisible thread”, has certainly had a significant impact on our world and now pervades pretty much all of our lives. Some software, and in particular some software architectures, have had a significance beyond just the everyday and have truly changed the world.

But what constitutes a world changing architecture? For me it is one that meets all of the following:

  1. It must have had an impact beyond the field of computer science or a single business area and must have woven its way into people’s lives.
  2. It may not have introduced any new technology but may instead have used some existing components in new and innovative ways.
  3. The architecture itself may be relatively simple, but the way it has been deployed may be what makes it “world changing”.
  4. It has extended the lexicon of our language, either literally (as in “I tried googling that word”) or indirectly in what we do (e.g. the way we now use App stores to get our software).
  5. The architecture has emergent properties and has been extended in ways the architect(s) did not originally envisage.

Based on these criteria here are five architectures that have really changed our lives and our world.

World Wide Web
When Tim Berners-Lee published his innocuous-sounding paper Information Management: A Proposal in 1989, I doubt he could have had any idea what an impact his “proposal” was going to have. This was the paper that introduced us to what we now call the world wide web and has quite literally changed the world forever.

Apple’s iTunes
There has been much talk in cyberspace, and in the media in general, about the effect and impact Steve Jobs has had on the world. When Apple introduced the iPod in October 2001 it was, despite the usual cool Apple design makeover, just another MP3 player when all was said and done. What really made the iPod take off and changed everything was iTunes. It not only turned the music industry upside down and inside out but gave us the game-changing concept of the ‘App Store’ as a way of consuming digital media. The impact of this is still ongoing and is driving the whole idea of cloud computing and the way we will consume software.

Google
When Google was founded in 1998 it was just another company building a search engine. As Douglas Edwards says in his book I’m Feeling Lucky, “everybody and their brother had a search engine in those days”. When Sergey Brin was asked how he was going to make money (out of search) he said “Well…, we’ll figure something out”. Clearly, some 13 years later, they have figured out that something and become one of the fastest-growing companies ever. What Google did was not only create a better, faster, more complete search engine than anyone else but also figure out how to pay for it, and all the other Google applications, through advertising. They have created a new market and value network (in other words, a disruptive technology) that has changed the way we seek out and use information.

Wikipedia
Before Wikipedia there was a job called Encyclopedia Salesman: someone who went from door to door selling knowledge packed between bound leather covers. Now, such people have been banished to the great redundancy home in the sky along with typesetters and comptometer operators.

If you look Wikipedia up on Wikipedia you get the following definition:

Wikipedia is a multilingual, web-based, free-content encyclopedia project based on an openly editable model. The name “Wikipedia” is a portmanteau of the words wiki (a technology for creating collaborative websites, from the Hawaiian word wiki, meaning “quick”) and encyclopedia. Wikipedia’s articles provide links to guide the user to related pages with additional information.

From an architectural point of view Wikipedia is “just another wiki”; however, what it has brought to the world is community participation on a massive scale and an architecture to support that collaboration (400 million unique visitors monthly, with more than 82,000 active contributors working on more than 19 million articles in over 270 languages). Wikipedia clearly meets all of the above criteria (and more).

Facebook
To many people Facebook is social networking. Not only has it seen off all competitors, it makes it almost impossible for new ones to join. Whilst the jury is still out on Google+, it is difficult to see how it can ever reach the 800 million people Facebook has. Facebook is also the largest photo-storing site on the web and has developed its own photo storage system to store and serve its photographs. See this article on Facebook architecture as well as this presentation (slightly old now but interesting nonetheless).

I’d like to thank both Grady Booch and Peter Eeles for providing input to this post. Grady has been doing great work on software archeology and knows a thing or two about software architecture. Peter is my colleague at IBM as well as co-author on The Process of Software Architecting.

Emergent Architectures

I was slightly alarmed to read recently, in a document describing a particular adaptation of the unified process, that allowing architectures to ‘emerge’ was a poor excuse to avoid hard thinking and planning, and that emergent architectures, and anyone who advocates them, should be avoided. The term ‘emergent architecture’ was, I believe, first coined by Gartner (see here) and applied to Enterprise Architecture. Gartner identified a number of characteristics of emergent architectures, one of which was that they are non-deterministic. Traditionally, (enterprise) architects applied centralised decision-making to design outcomes. Using emergent architecture, they instead must decentralise decision-making to enable innovation.

Whilst emergent architectures certainly have their challenges, it is my belief that, if well managed, they can only be a good thing and should certainly not be discouraged. Indeed, I would say that emergence can be applied at the Solution Architecture level as well and is ideally suited to more agile approaches where everything is simply not known up front. The key thing in managing an emergent architecture is to capture architectural decisions as you go and to ensure the architecture adapts as a result of real business needs.
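That last point, capturing architectural decisions as you go, can be made concrete. Below is a minimal sketch in Python of an append-only decision log; the field names follow one common architecture decision record layout (they are not a standard), and the example decision is entirely invented:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ArchitecturalDecision:
    """One architectural decision, recorded at the moment it is made."""
    title: str
    status: str        # "proposed", "accepted" or "superseded"
    context: str       # the forces (business and technical) in play
    decision: str      # what was actually decided
    consequences: str  # what gets easier or harder as a result
    decided_on: date = field(default_factory=date.today)

@dataclass
class DecisionLog:
    """An append-only log kept alongside the emerging architecture."""
    decisions: list = field(default_factory=list)

    def record(self, adr: ArchitecturalDecision) -> None:
        self.decisions.append(adr)

    def accepted(self) -> list:
        # The decisions that currently shape the architecture.
        return [d for d in self.decisions if d.status == "accepted"]

log = DecisionLog()
log.record(ArchitecturalDecision(
    title="Expose the product catalogue as REST services",
    status="accepted",
    context="Web and mobile channels need the same product data.",
    decision="Publish one shared REST API rather than per-channel feeds.",
    consequences="A single contract to govern; all channels must adopt it.",
))
```

The point is less the data structure than the habit: decisions are appended as the architecture emerges, never silently rewritten, so the log itself becomes a record of why the architecture is the way it is.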

What Would Google Do?

Readers of this blog will know that one of my interests/research areas is how to effectively bring together left-brain (i.e. logical) and right-brain (i.e. creative) thinkers in order to drive creativity and generate new and innovative ideas to solve some of the world’s wicked problems. One of the books that has most influenced me in this respect is Daniel Pink’s A Whole New Mind – Why Right-Brainers Will Rule the Future. Together with a colleague I am developing the concept of the versatilist (first coined by Gartner) as a role that effectively brings together both right- and left-brain thinkers to solve some of the knotty business problems out there. As part of this we are developing a series of brain exercises that can be given to students on creative problem-solving courses to open up their minds and start them thinking outside the proverbial box. One of these exercises is called What Would Google Do?, the idea being to get them to take the non-conventional, Google, view of how to solve a problem. By way of an example, Douglas Edwards, in his book I’m Feeling Lucky – The Confessions of Google Employee Number 59, relates the following story about how Sergey Brin, co-founder of Google, proposed an innovative approach to marketing: “Why don’t we take the marketing budget and use it to inoculate Chechen refugees against cholera. It will help our brand awareness and we’ll get more new people to use Google.”

Just how serious Brin was being here we’ll never know but you get the general idea; no idea is too outrageous for folk in the Googleplex.

To further back up how serious Google are about creativity, their chairman, Eric Schmidt, delivered a “devastating critique of the UK’s education system and said the country had failed to capitalise on its record of innovation in science and engineering” at this year’s MacTaggart lecture in Edinburgh. Amongst other criticisms Schmidt aimed at the UK education system, he said that the country that invented the computer was “throwing away your great computer heritage by failing to teach programming in schools” and was flabbergasted to learn that today computer science isn’t even taught as standard in UK schools. Instead the IT curriculum “focuses on teaching how to use software, but gives no insight into how it’s made.” For those of us brought up in the UK at the time of the BBC Microcomputer, hopefully this guy will be the saviour of the current generation of programmers.

US readers of this blog should not feel too smug: check out this YouTube video from Dr. Michio Kaku, who gives an equally devastating critique of the US education system.

So, all in all, I think the world definitely needs more of a versatilist approach, not only in our education systems but also in the ways we approach problem solving in the workplace. Steve Jobs, the chief executive of Apple, who revealed last week that he was stepping down, once told the New York Times: “The Macintosh turned out so well because the people working on it were musicians, artists, poets and historians – who also happened to be excellent computer scientists”. Once again Apple got this right several years ago and is now reaping the benefits of that far-reaching, versatilist approach.

Creative Leaps and the Importance of Domain Knowledge

Sometimes innovation appears to come out of nowhere. Creative individuals, or companies, appear to be in touch with the zeitgeist of the times and develop a product (or service) that does not just satisfy an unknown need but may even create a whole new market that didn’t previously exist. I would put James Dyson (the bagless vacuum cleaner) as an example of the former and Steve Jobs/Apple (the iPad) as an example of the latter. Sometimes the innovation may even be a disruptive technology that creates a new market where one previously did not exist and may even destroy existing markets. Digital photography, and its impact on the 35mm film-producing companies (Kodak and Ilford), is a classic example of such a disruptive technology.

Most times, however, creativity comes from simply putting together existing components in new and interesting ways that meet a business need. For merely mortal software architects, if we are to do this we need not only a good understanding of what those components do but also of how the domain we are working in really, really works. You need not only to be curious about your domain (whether it be financial services, retail, public sector or whatever) but to be able to ask the hard questions that no one else thought, or bothered, to ask. Sometimes this means not following the herd and being fashionable but being completely unfashionable. As Paul Arden, the Creative Director of Saatchi and Saatchi, said in his book Whatever You Think, Think The Opposite:

People who create work that fashionable people emulate do the very opposite of what is in fashion. They create something unfashionable, out of time, wrong. Original ideas are created by original people, people who either through instinct or insight know the value of being different and recognise the commonplace as a dangerous place to be.

So do you want to be fashionable or unfashionable?

On Being an Effective Architect

In a previous blog entry I talked about the key skills architects need to develop to perform their craft, which I termed the essence of being an architect. Developing this theme slightly, I also think there is a process that architects need to follow if they are to be effective. Stripped down to its bare essentials, this process is as shown below.

This is not meant to be a process in the strict SDLC sense of the word but more of a meta-process that applies across any SDLC. Here’s what each of the steps means and some of the artefacts that would typically be created in each step (shown in italics).

  • Envision – Above all else the architect needs to define and maintain the vision of what the system is to be. This can be in the form of one or more free-format Architecture Overview diagrams and would certainly include having an overall System Context that defines the boundary around the system under development (SUD). The vision would also include capturing the key Architectural Decisions that have shaped the system to be the way it is and also refer to some of the key Architecture Principles that are being followed.
  • Realise – As well as having a vision of how the system will be, the architect must know how to realise her vision, otherwise the architecture will remain as mere paper or bits and bytes stored in a modeling tool. Realisation is about making choices of which technology will be used to implement the system. Choices considered may be captured as Architectural Decisions, and issues or risks associated with making such decisions captured in a RAID Log (Risks-Assumptions-Issues-Dependencies). Technical Prototypes may also be built to prove some of the technologies being selected.
  • Influence – Finally, architects need to be able to exert the required influence to carry through their vision and the realisation of that vision. Some people would refer to this as governance; however, for me, influence is a more subtle, background task that, if done well, is almost unnoticed but nonetheless has the desired effect. Influencing is about having and following a Stakeholder Management Plan and communicating your architecture frequently to your key stakeholders.
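As an illustration of the kind of working artefact mentioned under Realise, here is a minimal Python sketch of a RAID Log; the fields and the two example entries are invented for the purpose of the example:

```python
from dataclasses import dataclass, field
from enum import Enum

class Kind(Enum):
    RISK = "risk"
    ASSUMPTION = "assumption"
    ISSUE = "issue"
    DEPENDENCY = "dependency"

@dataclass
class RaidEntry:
    kind: Kind
    description: str
    owner: str
    open: bool = True  # closed entries stay in the log as an audit trail

@dataclass
class RaidLog:
    entries: list = field(default_factory=list)

    def add(self, entry: RaidEntry) -> None:
        self.entries.append(entry)

    def open_items(self, kind: Kind) -> list:
        # Everything of the given kind still needing attention.
        return [e for e in self.entries if e.kind is kind and e.open]

raid = RaidLog()
raid.add(RaidEntry(Kind.RISK,
                   "Chosen message broker unproven at the required volume",
                   "architect"))
raid.add(RaidEntry(Kind.DEPENDENCY,
                   "Identity service is delivered by another team",
                   "project manager"))
```

Whether it lives in a tool, a wiki or a simple structure like this matters less than that each item has an owner and is revisited as the project progresses.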

Each of these steps is performed many times, or even continuously, on a project. Influencing means listening as well, and this may lead to changes in your vision, thus starting the whole cycle again.

Adopting this approach is not guaranteed to give you perfect systems, as there are lots of other factors, as well as people, that come into play during a typical project. What it will do is give you a framework that allows you to bring some level of rigour to the way you develop your architectures and work with stakeholders on a project.

Happy Birthday WWW

Today is the 20th anniversary of the world wide web; or at least the anniversary of the first web page. On this day in 1991 Tim Berners-Lee posted a short summary of the World Wide Web project on the alt.hypertext newsgroup:

The World Wide Web (WWW) project aims to allow all links to be made to any information anywhere. […] The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data. Collaborators welcome!

He certainly found a lot of collaborators!

As I’ve said before I believe the WWW is one of the greatest feats of software architecture ever performed. Happy Birthday WWW!

The Tools We Use

Back in 1964 Marshall McLuhan said “We shape our tools and afterwards our tools shape us”. McLuhan was actually talking about the media when he said this, but much of what he said then has a great deal of relevance in today’s mixed-up media world too. It occurs to me that McLuhan’s tool quote equally applies to the tools we use, or misuse, as software architects. PowerPoint (or Keynote for that matter) has received pretty bad press over the years as being a tool that inhibits rather than enhances our creativity. Whilst this does not have to be the case, too many people take tools, such as PowerPoint, and use them in ways I’m pretty sure their creators never intended. Here are some common tool (mis)uses I’ve observed over the years (anti-patterns for tools, if you like):

  1. Spreadsheets as databases. Too many people seem to use spreadsheets as a sort of global repository for dumping ideas, data and information in general, because it gives them the ability to easily sort and categorise information. Spreadsheets are good at numbers and presenting analytical data but not at capturing textual information.
  2. Presentations as documents. Sometimes what started out as a presentation to illustrate a good idea seems to grow into a more detailed description of that idea and eventually turns into a full-blown specification! The excuse for doing this being “we can use this to present to the client as well as leaving it with them at the end of the project as the design of the system”. Bad idea!
  3. Presentations as a substitute for presenting. The best presenters present “naked”. Minimal presentations (where sometimes minimal = 0) where the presenter is at the fore and his or her slides are illustrating the key ideas is what presenting is or should be about. Did John F Kennedy, Winston Churchill or Martin Luther King rely on PowerPoint to get their big ideas across? I think not!
  4. Word processors as presentations. This is the opposite of number 2. Whilst not so common, people have been known, in my experience, to ‘present’ their documents on a screen in a meeting. It goes without saying, or should do, that 12pt (or less) text does not come across well on a screen.
  5. Word processors as web sites. Although most word processors have the capability of generating HTML, this is not a good reason for using them to build web sites. There are a multitude of free, open and paid-for tools that do a far better job of this.
  6. Emails as documents. This is a variant (generalisation) of one of my favourite [sic] anti-patterns. E-mails are one of the greatest sources of unstructured data in the world today. There must be, literally, terabytes of data stored using this medium that should otherwise be captured in a more readily consumable and accessible form. E-mails clearly have a place for forming ideas, but not for capturing outcomes and persisting those ideas so others can see them and learn from them.
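To illustrate the first anti-pattern, here is a small Python sketch (standard library only) of moving list-keeping out of a spreadsheet export and into an actual database, where it can be queried properly; the column names and data are made up for the example:

```python
import csv
import io
import sqlite3

# A CSV export of the kind of "spreadsheet as database" described above.
csv_export = io.StringIO(
    "name,category,notes\n"
    "Order service,integration,Talks to the legacy ERP\n"
    'Catalogue,web,"Read-mostly, cache heavily"\n'
)
rows = list(csv.DictReader(csv_export))

# The same information in a real database: typed columns, a key, and queries.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE component (name TEXT PRIMARY KEY, category TEXT, notes TEXT)"
)
conn.executemany("INSERT INTO component VALUES (:name, :category, :notes)", rows)

# A question the spreadsheet answers only by eyeballing and re-sorting.
web_components = conn.execute(
    "SELECT name FROM component WHERE category = ?", ("web",)
).fetchall()
```

The spreadsheet still has its place for the analytical view; the point is that the system of record should be somewhere with a schema and a query language.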

Oh Dear, Here We Go Again!

So, here we go again. The BBC today report that “IT giants are ‘ripping off’ Whitehall, say MPs”. As I presumably work for one of those “IT giants” I will attempt to comment on this in as impartial a way as is possible.

  • As long as we have ‘IT projects’ rather than ‘business improvement’ or ‘business change’ projects in government, or anywhere else come to that, we (and it is ‘we’ as tax payers) will continue to get ‘ripped off’. Buying IT because it is ‘sexy’ is always going to end in tears. IT is a tool that may or may not fix a business problem. Unless you understand the true nature of that business problem, throwing IT at it is doomed to failure. This is what software architects need to focus on. I’m coming to the conclusion that the best architects are actually technophobes rather than technophiles.
  • It’s not Whitehall that is being ‘ripped off’ here. It’s you and me as tax payers (assuming you live in the UK and pay taxes to the UK government, of course). Whether you work in IT or anywhere else, this affects you.
  • It’s not only understanding the requirements that is important, it’s also challenging those requirements, as well as the business case that led to them in the first place. I suspect that many, many projects have been dreamt up as someone’s fantasy, nice-to-have system rather than having any real business value.
  • Governments should be no different from anyone else when it comes to buying IT. If I’m in the market for a new laptop I usually spend a little time reading up on what other buyers think and generally make sure I’m not about to buy something that’s not fit for purpose. One of the criticisms levelled at government in this report is the “lack of IT skills in government and over-reliance on contracting out”. In other words, there are not enough experienced architects working in government who can challenge some of the assumptions and proposed solutions that come from vendors.
  • Both vendors and government departments need to learn how to make agile work on large projects. We have enough experience now to know that multi-year, multi-person, multi-million pound projects that aim to deliver ‘big-bang’ fashion just do not work. Bringing a more agile approach to the table, delivering a little but more often so users can verify and feedback on what they are getting for their money is surely the way to go. This approach depends on more trust between client and supplier as well as better and more continuous engagement throughout the project’s life.

The Innovation Conundrum and Why Architecture Matters

A number of items in the financial and business news last week set me thinking about why architecture matters to innovation. Both IBM and Apple announced their second-quarter results. IBM’s revenue for Q2 2011 was $26.7B, up 12% on the same quarter last year, and Apple’s revenue for the same quarter was $24.67B, an incredible 83% jump on the same quarter last year. As I’m sure everyone now knows, IBM is 100 years old this year whereas Apple is a mere 35 years old. It looks like both Apple and IBM will become $100B companies this year if all goes to plan (IBM having missed joining the $100B club by a mere $0.1B in 2010). Coincidentally, a Forbes article also caught my eye. Forbes listed the top 100 innovative companies. Top of the list was salesforce.com, Apple were number 5 and IBM were, er, not in the top 100! So what’s going on here? How can a company that pretty much invented the mainframe and the personal computer, helped put a man on the moon, invented the scanning electron microscope, scratched the letters IBM onto a nickel crystal one atom at a time and, most recently, took artificial intelligence a giant leap forward with Watson not be classed as innovative?

Perhaps the clue is in what the measure of innovation is. The Forbes article measures innovation by an “innovation premium” which it defines as:

A measure of how much investors have bid up the stock price of a company above the value of its existing business based on expectations of future innovative results (new products, services and markets).

So it would appear that, going by this definition of innovation, investors don’t expect IBM to bring any innovative products or services to market, whereas the world will no doubt be inundated with all sorts of shiny iThingys over the course of the next year or so. But is that really all there is to being innovative? I would venture not.

The final article that caught my eye was about Apple’s cash reserves. Depending on which source you read, this is around $60B, and as anyone who has any cash to invest knows, sitting on it is not the best way of getting good returns! Companies generally have a few options for what to do when they amass so much cash: pay out higher dividends to shareholders, buy back their own shares, invest more in R&D, or go on a buying spree and buy some companies that fill holes in their portfolio. Whilst the last of these is a good way of quickly entering markets companies may not be active in, it tends to backfire on the innovation premium, as mergers and acquisitions (M&A) are not, at least initially, seen as bringing anything new to market. M&A has been IBM’s approach over the last decade or so. As well as the big software brands like Lotus, Rational and Tivoli, IBM has more recently bought lots of smaller software companies such as Cast Iron Systems, SPSS and Netezza.

A potential problem with this approach is that people don’t want to buy a “bag of bits” and have to assemble their own solutions, Lego style. What they want are business solutions that address the very real and complex (wicked, even) problems they face today. This is where the software architect comes into his or her own. The role of the software architect is to take existing components and assemble them in “interesting and important ways”. To that I would add innovative ways as well. Companies no longer want the same old solutions (ERP system, contact management system etc.) but new and innovative systems that solve their business problems. This is why we have one of the more interesting jobs out there today!

A Service Based Development Process – Part 4

The first three of these blog posts (here, here and here) have looked at the process behind developing business processes and services that could be deployed into an appropriate environment, including a cloud (private, public or hybrid). In this final post I’ll take a look at how to make this ‘real’ by describing an architecture that could be used for developing and deploying services, together with some software products for realising that architecture. The diagram below shows both the development-time and run-time logical-level architecture of a system that could be used for both developing and deploying business processes and services. This has been created using the sketching capability of Rational Software Architect.

Here’s a brief summary of what each of the logical components in this architecture sketch do (i.e. their responsibilities):

  • SDLC Repository – The description of the SDLC goes here. That is the work breakdown structure, a description of all the phases, activities and tasks as well as the work products to be created by each task and also the roles used to create them. This would be created and modified by the actor Method Author using a SDLC Developer tool. The repository would typically include guidance (examples, templates, guidelines etc) that show how the SDLC is to be used and how to create work products.
  • SDLC Developer – The tool used by the Method Author to compose new, or modify existing, processes. This tool publishes the SDLC into the SDLC Repository.
  • Development Artefacts Repository – This is where the work products that are created on an actual project (i.e. ‘instances’ of the work products described in the SDLC) get placed.
  • Business Process Developer – The tool used to create and modify business processes.
  • IT Service Developer – The tool used to create and modify services.
  • Development Repository – This is where ‘code’ level artefacts get stored during development. This could be a subset of the Development Artefacts Repository.
  • Runtime Services Repository – Services get published here once they have been certified and can be released for general use.
  • Process Engine – Executes the business process.
  • Enterprise Service Bus – Runs the services and provides adapters to external or legacy systems.
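The runtime half of the list above can be sketched as code. The following is a deliberately simplified, in-memory Python model of the Runtime Services Repository, Enterprise Service Bus and Process Engine working together; the service names, the certification flag and the two-step process are all invented for illustration and have nothing to do with any particular product:

```python
class ServiceRepository:
    """Runtime services repository: only certified services are published."""
    def __init__(self):
        self._services = {}

    def publish(self, name, endpoint, certified):
        if not certified:
            raise ValueError(f"{name} has not been certified for release")
        self._services[name] = endpoint

    def lookup(self, name):
        return self._services[name]

class ServiceBus:
    """Enterprise service bus: routes invocations to published endpoints."""
    def __init__(self, repository):
        self._repository = repository

    def invoke(self, service, payload):
        endpoint = self._repository.lookup(service)
        return endpoint(payload)

class ProcessEngine:
    """Process engine: executes a business process as an ordered list of
    service calls, passing the payload from one step to the next."""
    def __init__(self, bus):
        self._bus = bus

    def run(self, process, payload):
        for step in process:
            payload = self._bus.invoke(step, payload)
        return payload

repo = ServiceRepository()
repo.publish("validate-order", lambda order: {**order, "valid": True},
             certified=True)
repo.publish("price-order", lambda order: {**order, "total": 42.0},
             certified=True)

engine = ProcessEngine(ServiceBus(repo))
result = engine.run(["validate-order", "price-order"], {"id": "A-1"})
```

The key design point the sketch tries to capture is the indirection: the process engine never calls a service directly but always goes via the bus, which in turn only knows about services the repository has certified and published.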

Having described the logical components, the next step is to show how they can be realised using one or more vendors’ products. No surprise that I am going to show how these map to products from IBM’s portfolio; however, your own particular requirements (including who’s on your preferred vendor list, of course) may dictate that you choose other vendors’ products. Nearly all the IBM product links allow you to download trial versions that you can use to try out this approach.

  • Rational Method Composer – This enables you to manage, author, evolve, measure and deploy effective processes (SDLCs) tailored to your project needs. It is based on Eclipse. Rational Method Composer allows publishing to a web site, so it effectively covers the needs of both the SDLC Repository and SDLC Developer components.
  • IBM Business Process Manager – This is the latest name for IBM’s combined development and runtime business process server. As well as a business process runtime, ESB and BPM repository, it also includes design tools for building processes and services. The Process Designer allows business authors to build fully executable BPMN processes that include user interfaces for human interaction. The Integration Designer enables IT developers to develop services that easily plug into processes to provide integration and routing logic, data transformation and straight-through BPEL subprocesses. See this whitepaper for more information or click here for the IBM edition of the book BPM for Dummies. IBM Business Process Manager realises the components: Business Process Developer, IT Service Developer, Development Repository, Process Engine and Enterprise Service Bus.
  • WebSphere Service Registry and Repository – Catalogs and organizes assets and services, allowing customers to get a handle on what assets they have and making them easy to locate or distribute. It also enables policy management across the SOA lifecycle, spanning various domains of policy, including runtime policies as well as service governance policies. Included in the Advanced Lifecycle Edition is Rational Asset Manager, which provides lifecycle management capabilities to manage asset workflow from concept, development, build and deployment through to retirement, as well as Build Forge integration. WebSphere Service Registry and Repository realises the Development Artefacts Repository as well as the Runtime Services Repository.

So, there it is. An approach for developing services as well as an initial architecture allowing for the development and deployment of both business processes and services together with some actual products to get you started. Please feel free to comment here or in any of my links if you have anything you’d like to say.