Reusable Assets for Architects

Architects are fond of throwing around terms which have mixed, ambiguous or largely non-existent formal definitions. Indeed, one of the great problems (still) of our profession is that people cannot agree on the meanings of many of the terms we use every day. There is no ‘common language’ that all architects speak. If you want to see some examples, look up terms like ‘enterprise architecture’ or ‘cloud computing’ on Wikipedia and then read what’s written on the ‘discussion’ page.

Three terms that often get misused, or used interchangeably, fall under the general category of reusable assets. A reusable asset is something which has been proven to be useful, in some form or another, in one project or architectural definition and could be reused elsewhere. The Object Management Group (OMG) defines a reusable asset as one that provides a solution to a problem for a given context (see the OMG Reusable Asset Specification). Those of you familiar with the classic Design Patterns book by the so-called “Gang of Four” will recognise elements of this definition from that book. Indeed, reusable assets are a generalisation of design patterns. Three reusable assets which are of particular use to an architect are:

  • Reference architectures
  • Application frameworks
  • Industry solutions

What do each of these mean, what’s the difference and when (or how) can they be used?

A reference architecture is a template which shows, usually at a logical level, a set of components and their relationships. Reference architectures are usually created based on perceived best practice at the time of their creation. This is a good thing (you get the latest thinking) but can also be bad (they can become dated). Reference architectures are usually associated with a particular domain, which could be a business domain (e.g. IBM’s Insurance Application Architecture, or IAA), an industry (such as a banking reference architecture) or a technology domain (e.g. cloud or SOA). Ideally a reference architecture will not preordain any technology and will allow multiple vendors’ products to be mapped to each of its components. Sometimes vendors use reference architectures as a way of positioning their tools or products as a cohesive set of products that work together.

An application framework represents the partial implementation of a specific area of a system or an application. Reference architectures may be composed of a number of application frameworks. Probably one of the best known application frameworks is Struts from the Apache open source organisation. Struts is a Java implementation of the Model-View-Controller pattern which can be ‘completed’ by developers for their own applications.
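The inversion of control that makes an application framework ‘completable’ can be sketched in a few lines of plain Java. This is emphatically not the real Struts API; all class and method names below are invented for illustration. The point is simply that the framework owns the request flow and the developer fills in a hook:

```java
import java.util.HashMap;
import java.util.Map;

// The "framework" half: generic, written once, reused across applications.
abstract class Controller {
    // Template method: the framework calls this for every request.
    public final String dispatch(Map<String, String> request) {
        String result = handle(request); // developer-supplied logic
        return render(result);           // framework-supplied view step
    }

    // The hook a developer must implement to "complete" the framework.
    protected abstract String handle(Map<String, String> request);

    private String render(String viewName) {
        return "rendering view: " + viewName;
    }
}

// The "application" half: what a developer writes for their own system.
class LoginController extends Controller {
    @Override
    protected String handle(Map<String, String> request) {
        return "admin".equals(request.get("user")) ? "welcome" : "login-failed";
    }
}

public class MvcSketch {
    public static void main(String[] args) {
        Map<String, String> request = new HashMap<>();
        request.put("user", "admin");
        System.out.println(new LoginController().dispatch(request));
    }
}
```

Note that the control flow runs from the framework down into the application code, not the other way round; that, rather than any particular class library, is what distinguishes a framework from a reference architecture on paper.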

Finally, an industry solution is a set of pre-configured (or configurable) software components designed to meet the specific business requirements of a particular industry. Industry solutions are usually created and sold by software vendors and are based on their own software products. However, the best solutions adhere to open standards and allow other vendors’ products to be used as well. Most organisations want to avoid vendor lock-in and are unlikely to take the “whole enchilada”. Industry solutions may be implementations of one or more reference architectures. For example, IBM’s Retail Industry Framework implements reference architectures from a number of domains (supply chain, merchandising, product management and so on).

Assets can be considered in terms of their granularity (size) and their level of articulation (implementation). Granularity relates both to the number of elements that comprise the asset and to the asset’s impact on the overall architecture. Articulation is concerned with the extent to which the asset can be considered complete. Some assets are specifications only; that is to say, they are represented in an abstract form, such as a model or document. Other assets are complete implementations and can be instantiated as is, without modification; such assets include components and existing applications. The diagram below places the three assets I’ve discussed above in terms of their granularity and articulation.

There are of course a whole range of other reusable assets: design patterns, idioms, components, complete applications and so on. These could be classified in a similar way. The ones above, however, are those that I think architects are most likely to find useful.

Plus Two More

In my previous post on five architectures that changed the world I left out a couple that didn’t fit my self-imposed criteria. Here, therefore, are two more, the first of which is a bit too techie to be a part of everyone’s lives but is nonetheless hugely important and the second of which has not changed the world yet but has pretty big potential to do so.

IBM System/360
Before the System/360 there was very little interchangeability between computers, even from the same manufacturer. Software had to be created afresh for each type of computer, making applications difficult to develop as well as maintain. The System/360 practically invented the concept of architecture as applied to computers: it had an architecture specification that made no assumptions about the implementation itself, but rather described the interfaces and the expected behaviour of an implementation. The System/360 was the first family of computers designed to cover the complete range of applications, from small to large, both commercial and scientific. Its development cost $5 billion back in 1964, roughly $34 billion in today’s money, and almost destroyed IBM.

Watson
Unless you are American you had probably never heard of the TV game show Jeopardy! before the start of 2011. Now we know that it is a show that “uses puns, subtlety and wordplay” that humans enjoy but which would tie computers up in knots. This, it turns out, was the challenge that David Ferrucci, the IBM scientist who led the four-year quest to build Watson, had set himself: to compete live against humans on the show.

IBM has “form” when it comes to building computers to play games. Its previous machine, Deep Blue, beat world chess champion Garry Kasparov in 1997, winning a six-game match by two wins to one with three draws. Chess, it turns out, is a breeze to play compared to Jeopardy! Here’s why.
Chess…

  • Has a finite, mathematically well-defined search space.
  • Has a large but limited number of moves and states.
  • Makes everything explicit, with unambiguous mathematical rules that computers love.

Games like Jeopardy!, however, play on the subtleties of human language, which is…

  • Ambiguous, contextual and implicit.
  • Grounded only in human cognition.
  • Capable of expressing the same meaning in a seemingly infinite number of ways.

According to IBM, Watson is “built on IBM’s DeepQA technology for hypothesis generation, massive evidence gathering, analysis, and scoring.” Phew! The point of Watson, however, is not its ability to play a game show but its potential to “weave its fabric” into the messiness of our human lives, where data is not kept in nice, ordered relational databases but is unstructured and seemingly unrelated, yet can sometimes yield new and undiscovered meaning. One obvious application is medical diagnosis, but it could also be used in a vast array of other situations, from help desks through to working out what benefits you are entitled to. So, not world changing yet, but definitely watch this space.

Five Software Architectures That Changed The World

Photo by Kobu Agency on Unsplash

“Software is the invisible thread and hardware is the loom on which computing weaves its fabric, a fabric that we have now draped across all of life”.

Grady Booch

Software, although an “invisible thread”, has certainly had a significant impact on our world and now pervades pretty much all of our lives. Some software, and in particular some software architectures, have had a significance beyond the everyday and have truly changed the world.

But what constitutes a world changing architecture? For me it is one that meets all of the following:

  1. It must have had an impact beyond the field of computer science or a single business area and must have woven its way into people’s lives.
  2. It may not have introduced any new technology but may instead have used some existing components in new and innovative ways.
  3. The architecture itself may be relatively simple, but the way it has been deployed may be what makes it “world changing”.
  4. It has extended the lexicon of our language, either literally (as in “I tried googling that word”) or indirectly in what we do (e.g. the way we now use app stores to get our software).
  5. The architecture has emergent properties and has been extended in ways the architect(s) did not originally envisage.

Based on these criteria here are five architectures that have really changed our lives and our world.

World Wide Web
When Tim Berners-Lee published his innocuous sounding paper Information Management: A Proposal in 1989 I doubt he could have had any idea what an impact his “proposal” was going to have. This was the paper that introduced us to what we now call the world wide web and has quite literally changed the world forever.

Apple’s iTunes
There has been much talk in cyberspace, and in the media in general, about the effect and impact Steve Jobs has had on the world. When Apple introduced the iPod in October 2001 it had the usual cool Apple design makeover but was, when all was said and done, just another MP3 player. What really made the iPod take off, and changed everything, was iTunes. It not only turned the music industry upside down and inside out but gave us the game-changing concept of the ‘App Store’ as a way of consuming digital media. The impact of this is still ongoing and is driving the whole idea of cloud computing and the way we will consume software.

Google
When Google was founded in 1998 it was just another company building a search engine. As Douglas Edwards says in his book I’m Feeling Lucky, “everybody and their brother had a search engine in those days”. When Sergey Brin was asked how he was going to make money (out of search) he said “Well…, we’ll figure something out”. Clearly, more than a decade later, they have figured out that something and become one of the fastest-growing companies ever. What Google did was not only create a better, faster, more complete search engine than anyone else but also figure out how to pay for it, and all the other Google applications, through advertising. They have created a new market and value network (in other words, a disruptive technology) that has changed the way we seek out and use information.

Wikipedia
Before Wikipedia there was a job called the Encyclopedia Salesman, who walked from door to door selling knowledge packed between bound leather covers. Now such people have been banished to the great redundancy home in the sky, along with typesetters and comptometer operators.

If you look up Wikipedia on Wikipedia you get the following definition:

Wikipedia is a multilingual, web-based, free-content encyclopedia project based on an openly editable model. The name “Wikipedia” is a portmanteau of the words wiki (a technology for creating collaborative websites, from the Hawaiian word wiki, meaning “quick”) and encyclopedia. Wikipedia’s articles provide links to guide the user to related pages with additional information.

From an architectural point of view Wikipedia is “just another wiki”; what it has brought to the world, however, is community participation on a massive scale and an architecture to support that collaboration (400 million unique visitors monthly, more than 82,000 active contributors, and more than 19 million articles in over 270 languages). Wikipedia clearly meets all of the above criteria (and more).

Facebook
To many people Facebook is social networking. Not only has it seen off all competitors, it makes it almost impossible for new ones to join. Whilst the jury is still out on Google+, it is difficult to see how it can ever reach the 800 million people Facebook has. Facebook is also the largest photo-storing site on the web and has developed its own photo-storage system to store and serve its photographs. See this article on Facebook’s architecture, as well as this presentation (slightly old now but interesting nonetheless).

I’d like to thank both Grady Booch and Peter Eeles for providing input to this post. Grady has been doing great work on software archaeology and knows a thing or two about software architecture. Peter is my colleague at IBM as well as my co-author on The Process of Software Architecting.

Emergent Architectures

I was slightly alarmed to read recently, in a document describing a particular adaptation of the Unified Process, that allowing architectures to ‘emerge’ was a poor excuse to avoid hard thinking and planning, and that emergent architectures, and anyone who advocates them, should be avoided.

The term ‘emergent architecture’ was, I believe, first coined by Gartner (see here) and applied to enterprise architecture. Gartner identified a number of characteristics of emergent architectures, one of which is that they are non-deterministic. Traditionally, (enterprise) architects applied centralised decision-making to design outcomes. Using emergent architecture, they instead must decentralise decision-making to enable innovation.

Whilst emergent architectures certainly have their challenges it is my belief that, if well managed, they can only be a good thing and should certainly not be discouraged. Indeed I would say that emergence could be applied at a Solution Architecture level as well and is ideally suited to more agile approaches where everything is simply not known up front. The key thing with managing an emergent architecture is to capture architectural decisions as you go and ensure the architecture adapts as a result of real business needs.

What Would Google Do?

Readers of this blog will know that one of my interests/research areas is how to effectively bring together left-brain (i.e. logical) and right-brain (i.e. creative) thinkers in order to drive creativity and generate new and innovative ideas to solve some of the world’s wicked problems. One of the books that has most influenced me in this respect is Daniel Pink’s A Whole New Mind – Why Right-Brainers Will Rule the Future. Together with a colleague I am developing the concept of the versatilist (a term first coined by Gartner) as a role that effectively brings together both right- and left-brain thinkers to solve some of the knotty business problems out there. As part of this we are developing a series of brain exercises that can be given to students on creative problem-solving courses to open up their minds and get them thinking outside the proverbial box. One of these exercises is called What Would Google Do?, the idea being to get them to take the non-conventional, Google, view of how to solve a problem. By way of an example, Douglas Edwards, in his book I’m Feeling Lucky – The Confessions of Google Employee Number 59, relates the following story about how Sergey Brin, co-founder of Google, proposed an innovative approach to marketing:

“Why don’t we take the marketing budget and use it to inoculate Chechen refugees against cholera. It will help our brand awareness and we’ll get more new people to use Google.”

Just how serious Brin was being here we’ll never know but you get the general idea; no idea is too outrageous for folk in the Googleplex.

To further back up how serious Google are about creativity, their chairman Eric Schmidt delivered a “devastating critique of the UK’s education system and said the country had failed to capitalise on its record of innovation in science and engineering” at this year’s MacTaggart Lecture in Edinburgh. Amongst other criticisms Schmidt aimed at the UK education system, he said that the country that invented the computer was “throwing away your great computer heritage by failing to teach programming in schools” and was flabbergasted to learn that today computer science isn’t even taught as standard in UK schools. Instead the IT curriculum “focuses on teaching how to use software, but gives no insight into how it’s made.” For those of us brought up in the UK at the time of the BBC Microcomputer, hopefully this guy will be the saviour of the current generation of programmers.

US readers of this blog should not feel too smug; check out this YouTube video from Dr. Michio Kaku, who gives an equally devastating critique of the US education system.

So, all in all, I think the world definitely needs more of a versatilist approach, not only in our education systems but also in the way we approach problem solving in the workplace. Steve Jobs, the chief executive of Apple who revealed last week that he was stepping down, once told the New York Times: “The Macintosh turned out so well because the people working on it were musicians, artists, poets and historians – who also happened to be excellent computer scientists”. Once again Apple got this right several years ago and is now reaping the benefits of that far-reaching, versatilist approach.

Creative Leaps and the Importance of Domain Knowledge

Sometimes innovation appears to come out of nowhere. Creative individuals, or companies, appear to be in touch with the zeitgeist of the times and develop a product (or service) that does not just satisfy an unmet need but may even create a whole new market that didn’t previously exist. I would put James Dyson (the bagless vacuum cleaner) as an example of the former and Steve Jobs/Apple (the iPad) as an example of the latter.

Sometimes the innovation may even be a disruptive technology that creates a new market where one previously did not exist, and may even destroy existing markets. Digital photography and its impact on the 35mm film producers (Kodak and Ilford) is a classic example of such a disruptive technology.

Most times, however, creativity comes from simply putting together existing components in new and interesting ways that meet a business need. For us mere mortal software architects, doing this requires not only a good understanding of what those components do but also of how the domain we are working in really, really works. You need not only to be curious about your domain (whether it be financial services, retail, the public sector or whatever) but to be able to ask the hard questions that no one else thought of or bothered to ask. Sometimes this means not following the herd: being not fashionable but completely unfashionable. As Paul Arden, the Creative Director of Saatchi and Saatchi, said in his book Whatever You Think, Think The Opposite:

People who create work that fashionable people emulate do the very opposite of what is in fashion. They create something unfashionable, out of time, wrong. Original ideas are created by original people, people who either through instinct or insight know the value of being different and recognise the commonplace as a dangerous place to be.

So do you want to be fashionable or unfashionable?

On Being an Effective Architect

In a previous blog entry I talked about the key skills architects need to develop to perform their craft, which I termed the essence of being an architect. Developing this theme slightly, I also think there is a process that architects need to follow if they are to be effective. Stripped down to its bare essentials, this process is shown below.

This is not meant to be a process in the strict SDLC sense of the word but more of a meta-process that applies across any SDLC. Here’s what each of the steps means, together with some of the artefacts that would typically be created in each step (shown in italics).

  • Envision – Above all else the architect needs to define and maintain the vision of what the system is to be. This can be in the form of one or more free-format Architecture Overview diagrams and would certainly include an overall System Context that defines the boundary around the system under development (SUD). The vision would also include capturing the key Architectural Decisions that have shaped the system to be the way it is, and would refer to some of the key Architecture Principles being followed.
  • Realise – As well as having a vision of how the system will be, the architect must know how to realise that vision, otherwise the architecture will remain mere paper, or bits and bytes stored in a modelling tool. Realisation is about making choices about which technologies will be used to implement the system. The choices considered may be captured as Architectural Decisions, with the issues or risks associated with making them captured in a RAID Log (Risks, Assumptions, Issues, Dependencies). Technical Prototypes may also be built to prove some of the technologies selected.
  • Influence – Finally, architects need to be able to exert the influence required to carry through their vision and its realisation. Some people would refer to this as governance; for me, however, influence is a more subtle, background task that, if done well, goes almost unnoticed but nonetheless has the desired effect. Influencing is about having and following a Stakeholder Management Plan and communicating your architecture frequently to your key stakeholders.
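To make the Envision and Realise artefacts a little more concrete, here is a minimal sketch of what a captured architectural decision might look like. The headings and content are entirely my own invention for illustration, not a prescribed standard:

```
Decision ID:   AD-007 (hypothetical)
Title:         Decouple order capture from fulfilment
Status:        Approved
Problem:       Order capture and fulfilment are tightly coupled, so a
               fulfilment outage blocks order capture.
Decision:      Connect the two via an asynchronous message queue.
Alternatives:  Direct synchronous calls (rejected: tight coupling);
               shared database (rejected: contention, hidden coupling).
Implications:  Eventual consistency between capture and fulfilment;
               operations team must monitor queue depth.
```

The value of such a record lies less in the format than in the habit: each decision states the problem, the choice made, the alternatives rejected and the consequences accepted.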

Each of these steps is performed many times, or even continuously, on a project. Influencing also means listening, and this may lead to changes in your vision, thus starting the whole cycle again.

Adopting this approach is not guaranteed to give you perfect systems, as there are lots of other factors, people included, that come into play during a typical project. What it will do is give you a framework that brings some level of rigour to the way you develop your architectures and work with stakeholders on a project.

Happy Birthday WWW

Today is the 20th anniversary of the world wide web; or at least the anniversary of the first web page. On this day in 1991 Tim Berners-Lee posted a short summary of the World Wide Web project on the alt.hypertext newsgroup:

The World Wide Web (WWW) project aims to allow all links to be made to any information anywhere. […] The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data. Collaborators welcome!

He certainly found a lot of collaborators!

As I’ve said before I believe the WWW is one of the greatest feats of software architecture ever performed. Happy Birthday WWW!

The Tools We Use

Back in 1964 Marshall McLuhan said, “We shape our tools and afterwards our tools shape us”. McLuhan was actually talking about the media, but much of what he said then has a great deal of relevance in today’s mixed-up media world too.

It occurs to me that McLuhan’s quote applies equally to the tools we use, or misuse, as software architects. PowerPoint (or Keynote, for that matter) has received pretty bad press over the years as a tool that inhibits rather than enhances our creativity. Whilst this does not have to be the case, too many people take tools such as PowerPoint and use them in ways I’m pretty sure their creators never intended. Here are some common tool (mis)uses I’ve observed over the years (anti-patterns for tools, if you like):

  1. Spreadsheets as databases. Too many people seem to use spreadsheets as a sort of global repository for dumping ideas, data and information in general, because they make it easy to sort and categorise information. Spreadsheets are good at numbers and presenting analytical data, not at capturing textual information.
  2. Presentations as documents. Sometimes what started out as a presentation to illustrate a good idea seems to grow into a more detailed description of that idea and eventually turns into a full-blown specification! The excuse for doing this being “we can use this to present to the client as well as leaving it with them at the end of the project as the design of the system”. Bad idea!
  3. Presentations as a substitute for presenting. The best presenters present “naked”. Minimal presentations (where sometimes minimal = 0), in which the presenter is at the fore and his or her slides illustrate the key ideas, are what presenting is, or should be, about. Did John F. Kennedy, Winston Churchill or Martin Luther King rely on PowerPoint to get their big ideas across? I think not!
  4. Word processors as presentations. This is the opposite of number 2. Whilst not so common, people have been known, in my experience, to ‘present’ their documents on a screen in a meeting. It goes without saying, or should do, that 12pt (or smaller) text does not come across well on a screen.
  5. Word processors as web sites. Although most word processors can generate HTML, this is not a good reason for using them to build web sites. There is a multitude of free, open and paid-for tools that do a far better job of this.
  6. Emails as documents. This is a variant (or rather a generalisation) of one of my favourite anti-patterns. E-mails are one of the greatest sources of unstructured data in the world today. There must be, literally, terabytes of data stored in this medium that should otherwise be captured in a more readily consumable and accessible form. E-mails clearly have a place for forming ideas, but not for capturing outcomes and persisting those ideas so others can see them and learn from them.

Oh Dear, Here We Go Again!

So, here we go again. The BBC today reports that “IT giants are ‘ripping off’ Whitehall, say MPs”. As I presumably work for one of those “IT giants”, I will attempt to comment on this as impartially as possible.

  • As long as we have ‘IT projects’ rather than ‘business improvement’ or ‘business change’ projects in government, or anywhere else come to that, we (and it is ‘we’ as taxpayers) will continue to get ‘ripped off’. Buying IT because it is ‘sexy’ is always going to end in tears. IT is a tool that may or may not fix a business problem. Unless you understand the true nature of that business problem, throwing IT at it is doomed to failure. This is what software architects need to focus on. I’m coming to the conclusion that the best architects are actually technophobes rather than technophiles.
  • It’s not Whitehall that is being ‘ripped off’ here. It’s you and me as taxpayers (assuming you live in the UK and pay taxes to the UK government, of course). Whether you work in IT or anywhere else, this affects you.
  • It’s not only understanding the requirements that is important; it’s also challenging those requirements, as well as the business case that led to them in the first place. I suspect that many, many projects have been dreamt up as someone’s fantasy, nice-to-have system rather than having any real business value.
  • Governments should be no different from anyone else when it comes to buying IT. If I’m in the market for a new laptop I usually spend a little time reading up on what other buyers think and generally make sure I’m not about to buy something that’s not fit for purpose. One of the criticisms levelled at government in this report is the “lack of IT skills in government and over-reliance on contracting out”. In other words, there are not enough experienced architects working in government who can challenge the assumptions and proposed solutions that come from vendors.
  • Both vendors and government departments need to learn how to make agile work on large projects. We have enough experience now to know that multi-year, multi-person, multi-million-pound projects that aim to deliver in ‘big-bang’ fashion just do not work. Bringing a more agile approach to the table, delivering a little but more often so users can verify and give feedback on what they are getting for their money, is surely the way to go. This approach depends on more trust between client and supplier, as well as better and more continuous engagement throughout the project’s life.