This article on five big IT trends is interesting and highlights the areas where you might want to improve your knowledge and skills if you want to remain in demand over the coming year or two. The trends are:
Mobile
Social Computing
Cloud
Consumerization
Big Data
After this bit of news I would also add cyber-security as a sixth!
A perfect storm is defined as: a critical or disastrous situation created by a powerful concurrence of factors. A perfect storm is certainly what RIM, makers of the BlackBerry, have been experiencing recently. For three days, starting on 10th October, a problem caused by a router in an unassuming two-storey building in Slough, UK affected almost every one of its users around the world. Not only were users unable to use Twitter or Facebook; more seriously, those who rely on their BlackBerrys for email to do their business may have lost valuable work. Whilst many commentators made light of the situation, because people could no longer tweet their every movement, there is a far more serious message here: as a civilisation we are now completely dependent on the software and hardware technology that runs our daily lives.
Here’s what Blackberry had to say on their service bulletin board on 11th October, mid-way through the crisis:
The messaging and browsing delays that some of you are still experiencing were caused by a core switch failure within RIM’s infrastructure. Although the system is designed to failover to a back-up switch, the failover did not function as previously tested…
Unfortunately for BlackBerry it was not only this technical and process failure that formed part of their perfect storm; two other factors, which they could not have hoped to predict, also occurred recently. One was the launch of Apple’s latest iPhone 4S, released the very same week as BlackBerry’s network failure. The other is the allegation that BlackBerrys, or more precisely the BlackBerry Messenger service (BBM), were implicated in the riots that took place in London and other UK cities in the summer. For many teens armed with a BlackBerry, BBM has replaced text messaging because it is free, instant and part of a much larger community than regular SMS. Also, unlike Twitter or Facebook, many BBM messages are untraceable by the authorities.
From an IT architecture point of view, the technical and process failure of such a crucial data centre clearly should not have been allowed to happen. In some ways BlackBerry has been a victim of its own success, with the number of users growing from 10 million in 2005 to 70 million now without a corresponding increase in network capacity and a fully functioning failover facility. However the more interesting, and in some ways more intractable, problem is the competitive, sociological and even ethical aspect of the situation. When Apple launched the first iPhone back in 2007 they changed forever the way people interacted with their phones. Some people have observed that the tactile way in which people “stroke” an iPhone, rather than jab at tiny buttons, has led to its more widespread adoption: clearly a case of getting the human-computer interface right paying great dividends. Who would have thought, however, that the very aspect that once made BlackBerrys so popular with business users (their security) could backfire on them in quite such a significant way? Architecture (and design) is not just about getting the right features at the right price; it is also about thinking through the likely impact of those features in contexts that may not initially have been envisaged.
Architects are fond of throwing around terms which have mixed, ambiguous or largely non-existent formal definitions. Indeed one of the great problems (still) of our profession is that people cannot agree on the meanings of many of the terms we use every day. There is no ‘common language’ that all architects speak. If you want to see some examples, look up terms like ‘enterprise architecture’ or ‘cloud computing’ on Wikipedia and then look at what’s written in the ‘discussion’ section. Three terms that often get misused, or are used interchangeably, fall under the general category of reusable assets. A reusable asset is something which has been proven to be useful, in some form or another, in one project or architectural definition and could be reused elsewhere. The Object Management Group (OMG) defines a reusable asset as one that: provides a solution to a problem for a given context. See the OMG Reusable Asset Specification. Those of you familiar with the classic Design Patterns book by the so-called “Gang of Four” will recognise elements of this definition from that book. Indeed reusable assets are a generalisation of design patterns. Three other reusable assets, which are of particular use to an architect, are:
Reference architectures
Application frameworks
Industry solutions
What do each of these mean, what’s the difference and when (or how) can they be used?
A reference architecture is a template which shows, usually at a logical level, a set of components and their relationships. Reference architectures are usually created based on perceived best practice at the time of their creation. This is both a good thing (you get the latest thinking) and a potentially bad one (they can become dated). Reference architectures are usually associated with a particular domain, which could be a business domain (e.g. IBM’s Insurance Application Architecture or IAA), an industry (such as a banking reference architecture) or a technology domain (e.g. cloud and SOA). Ideally reference architectures will not preordain any technology and will allow multiple vendors’ products to be mapped to each of the components. Sometimes vendors use reference architectures as a way of placing their tools or products into a cohesive set of products that work together.
An application framework represents the partial implementation of a specific area of a system or an application. Reference architectures may be composed of a number of application frameworks. Probably one of the best known application frameworks is Struts from the Apache open source organisation. Struts is a Java implementation of the Model-View-Controller pattern which can be ‘completed’ by developers for their own applications.
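The Model-View-Controller separation that frameworks like Struts implement can be sketched in a few lines. This is an illustrative toy, not Struts code: all class and method names here are invented for the example.

```python
class Model:
    """Holds application state and nothing else."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def items(self):
        return list(self._items)


class View:
    """Renders the model; knows nothing about input handling."""
    def render(self, model):
        return ", ".join(model.items())


class Controller:
    """Translates user actions into model updates, then asks the view to render."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def handle(self, action, payload=None):
        if action == "add":
            self.model.add(payload)
        return self.view.render(self.model)


controller = Controller(Model(), View())
controller.handle("add", "first item")
print(controller.handle("add", "second item"))  # first item, second item
```

The point of the split, in Struts as in this sketch, is that developers ‘complete’ the framework by supplying their own models and views while the controller plumbing stays generic.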
Finally, an industry solution is a set of pre-configured (or configurable) software components designed to meet the specific business requirements of a particular industry. Industry solutions are usually created and sold by software vendors and are based on their own software products. However the best solutions adhere to open standards and allow other vendors’ products to be used as well; most organisations want to avoid vendor lock-in and are unlikely to take the “whole enchilada”. Industry solutions may be implementations of one or more reference architectures. For example IBM’s Retail Industry Framework implements reference architectures from a number of domains (supply chain, merchandising and product management, and so on).
Assets can be considered in terms of their granularity (size) and their level of articulation (implementation). Granularity is related to both the number of elements that comprise the asset and the asset’s impact on the overall architecture. Articulation is concerned with the extent to which the asset can be considered complete. Some assets are specifications only, that is to say they are represented in an abstract form such as a model or document. Other assets are complete implementations and can be instantiated as is, without modification; such assets include components and existing applications. The diagram below places the three assets I’ve discussed above in terms of their granularity and articulation.
There are of course a whole range of other reusable assets: design patterns, idioms, components, complete applications and so on. These could be classified in a similar way. The above are the ones that I think architects are most likely to find useful however.
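As a rough illustration of classifying assets along these two dimensions, here is a toy sketch. The values assigned to each asset type are my own informal reading of the discussion above, not a formal taxonomy.

```python
# Each asset is classified by (granularity, articulation).
# Granularity: "low" / "medium" / "high"; articulation: how complete it is.
ASSETS = {
    "design pattern":         ("low",    "specification"),
    "application framework":  ("medium", "partial implementation"),
    "reference architecture": ("high",   "specification"),
    "industry solution":      ("high",   "implementation"),
}


def specifications_only(assets):
    """Return, sorted, the assets that exist only as abstract specifications."""
    return sorted(name for name, (_, articulation) in assets.items()
                  if articulation == "specification")


print(specifications_only(ASSETS))  # ['design pattern', 'reference architecture']
```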
In my previous post on five architectures that changed the world I left out a couple that didn’t fit my self-imposed criteria. Here, therefore, are two more, the first of which is a bit too techie to be a part of everyone’s lives but is nonetheless hugely important and the second of which has not changed the world yet but has pretty big potential to do so.
IBM System/360
Before the System/360 there was very little interchangeability between computers, even from the same manufacturer. Software had to be created separately for each type of computer, making applications difficult both to develop and to maintain. The System/360 practically invented the concept of architecture as applied to computers, in that it had an architecture specification that made no assumptions about the implementation itself but rather described the interfaces and expected behaviour of an implementation. The System/360 was the first family of computers designed to cover the complete range of applications, from small to large, both commercial and scientific. The development of the System/360 cost $5 billion back in 1964, that’s $34 billion in today’s money, and almost destroyed IBM.
Watson
Unless you are American you had probably never heard of the TV game show Jeopardy! up until the start of 2011. Now we know that it is a show that “uses puns, subtlety and wordplay” that humans enjoy but which computers get tied up in knots over. This, it turns out, was the challenge that David Ferrucci, the IBM scientist who led the four-year quest to build Watson, had set himself: to compete live against humans on the TV show.
IBM has “form” on building computers to play games! The previous one (Deep Blue) won a six-game match by two wins to one with three draws against world chess champion Garry Kasparov in 1997. Chess, it turns out, is a breeze to play compared to Jeopardy! Here’s why.
Chess…
•Is a finite, mathematically well-defined search space.
•Has a large but limited number of moves and states.
•Makes everything explicit, with unambiguous mathematical rules that computers love.
Games like Jeopardy!, however, play on the subtleties of human language, which…
•Is ambiguous, contextual and implicit.
•Is grounded only in human cognition.
•Can have a seemingly infinite number of ways to express the same meaning.
According to IBM, Watson is “built on IBM’s DeepQA technology for hypothesis generation, massive evidence gathering, analysis, and scoring.” Phew! The point of Watson, however, is not its ability to play a game show but its potential to “weave its fabric” into the messiness of our human lives, where data is not kept in nice, ordered relational databases but is unstructured and seemingly unrelated, yet can sometimes yield new and undiscovered meaning. One obvious application is medical diagnosis, but it could also be used in a vast array of other situations, from help desks through to working out which benefits you are entitled to. So, not world changing yet, but definitely watch this space.
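At its heart the DeepQA idea is: generate many candidate answers (hypotheses), gather evidence for each, then score and rank them. The toy sketch below illustrates only that shape; the candidates, evidence weights and scoring function are entirely made up and bear no relation to Watson’s actual algorithms.

```python
# Invented evidence weights for two candidate answers to a question.
EVIDENCE = {
    "Toronto": 0.2,
    "Chicago": 0.9,
}


def best_hypothesis(candidates, evidence):
    """Score each candidate by its accumulated evidence and return the top one."""
    scored = [(evidence.get(candidate, 0.0), candidate) for candidate in candidates]
    scored.sort(reverse=True)  # highest evidence score first
    return scored[0][1]


print(best_hypothesis(["Toronto", "Chicago"], EVIDENCE))  # Chicago
```

The hard part, of course, is everything this sketch hides: producing good candidates from ambiguous natural language and turning unstructured text into evidence scores in the first place.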
Software, although an “invisible thread” has certainly had a significant impact on our world and now pervades pretty much all of our lives. Some software, and in particular some software architectures, have had a significance beyond just the everyday and have truly changed the world.
But what constitutes a world changing architecture? For me it is one that meets all of the following:
It must have had an impact beyond the field of computer science or a single business area and must have woven its way into people’s lives.
It may not have introduced any new technology but may instead have used some existing components in new and innovative ways.
The architecture itself may be relatively simple, but the way it has been deployed may be what makes it “world changing”.
It has extended the lexicon of our language, either literally (as in “I tried googling that word”) or indirectly in what we do (e.g. the way we now use app stores to get our software).
The architecture has emergent properties and has been extended in ways the architect(s) did not originally envisage.
Based on these criteria here are five architectures that have really changed our lives and our world.
World Wide Web
When Tim Berners-Lee published his innocuous-sounding paper Information Management: A Proposal in 1989, I doubt he could have had any idea what an impact his “proposal” was going to have. This was the paper that introduced us to what we now call the world wide web and has quite literally changed the world forever.
Apple’s iTunes
There has been much talk in cyberspace, and in the media in general, about the effect and impact Steve Jobs has had on the world. When Apple introduced the iPod in October 2001, although it had the usual Apple cool design makeover, it was, when all was said and done, just another MP3 player. What really made the iPod take off and changed everything was iTunes. It not only turned the music industry upside down and inside out but gave us the game-changing concept of the ‘App Store’ as a way of consuming digital media. The impact of this is still ongoing and is driving the whole idea of cloud computing and the way we will consume software.
Google
When Google was founded in 1999 it was just another company building a search engine. As Douglas Edwards says in his book I’m Feeling Lucky “everybody and their brother had a search engine in those days”. When Sergey Brin was asked how he was going to make money (out of search) he said “Well…, we’ll figure something out”. Clearly 12 years later they have figured out that something and become one of the fastest growing companies ever. What Google did was not only create a better, faster, more complete search engine than anyone else but also figured out how to pay for it, and all the other Google applications, through advertising. They have created a new market and value network (in other words a disruptive technology) that has changed the way we seek out and use information.
Wikipedia
Before Wikipedia there was a job called Encyclopedia Salesman: someone who walked from door to door selling knowledge packed between bound leather covers. Now such people have been banished to the great redundancy home in the sky, along with typesetters and comptometer operators.
If you do a Wikipedia search on Wikipedia you get the following definition:
Wikipedia is a multilingual, web-based, free-content encyclopedia project based on an openly editable model. The name “Wikipedia” is a portmanteau of the words wiki (a technology for creating collaborative websites, from the Hawaiian word wiki, meaning “quick”) and encyclopedia. Wikipedia’s articles provide links to guide the user to related pages with additional information.
From an architectural point of view Wikipedia is “just another wiki”; what it has brought to the world, however, is community participation on a massive scale and an architecture to support that collaboration (400 million unique visitors monthly, and more than 82,000 active contributors working on more than 19 million articles in over 270 languages). Wikipedia clearly meets all of the above criteria (and more).
Facebook
To many people Facebook is social networking. Not only has it seen off all competitors, it makes it almost impossible for new ones to join. Whilst the jury is still out on Google+, it is difficult to see how it can ever reach the 800 million users Facebook has. Facebook is also the largest photo-storing site on the web and has developed its own photo storage system to store and serve its photographs. See this article on Facebook architecture as well as this presentation (slightly old now but interesting nonetheless).
I’d like to thank both Grady Booch and Peter Eeles for providing input to this post. Grady has been doing great work on software archeology and knows a thing or two about software architecture. Peter is my colleague at IBM as well as co-author on The Process of Software Architecting.
I was slightly alarmed to read recently, in a document describing a particular adaptation of the unified process, that allowing architectures to ‘emerge’ was a poor excuse to avoid hard thinking and planning, and that emergent architectures, and anyone who advocates them, should be avoided. The term ‘emergent architecture’ was, I believe, first coined by Gartner (see here) and applied to Enterprise Architecture. Gartner identified a number of characteristics of emergent architectures, one of which is that they are non-deterministic. Traditionally, (enterprise) architects applied centralised decision-making to design outcomes. Using emergent architecture, they must instead decentralise decision-making to enable innovation.
Whilst emergent architectures certainly have their challenges, it is my belief that, if well managed, they can only be a good thing and should certainly not be discouraged. Indeed I would say that emergence can be applied at the Solution Architecture level as well and is ideally suited to more agile approaches, where not everything is known up front. The key to managing an emergent architecture is to capture architectural decisions as you go and to ensure the architecture adapts as a result of real business needs.
In a previous blog entry I talked about the key skills architects needed to develop to perform their craft which I termed the essence of being an architect. Developing this theme slightly I also think there is a process that architects need to follow if they are to be effective. Stripped down to its bare essentials this process is as shown below.
This is not meant to be a process in the strict SDLC sense of the word but more of a meta-process that applies across any SDLC. Here’s what each of the steps means and some of the artefacts that would typically be created in each step (shown in italics).
Envision – Above all else the architect needs to define and maintain the vision of what the system is to be. This can be in the form of one or more, free-format Architecture Overview diagrams and would certainly include having an overall System Context that defined the boundary around the system under development (SUD). The vision would also include capturing the key Architectural Decisions that have shaped the system to be the way it is and also refer to some of the key Architecture Principles that are being followed.
Realise – As well as having a vision of how the system will be, the architect must know how to realise her vision, otherwise the architecture will remain as mere paper, or bits and bytes stored in a modelling tool. Realisation is about making choices of which technology will be used to implement the system. The choices considered may be captured as Architectural Decisions, and the issues or risks associated with making those decisions captured in a RAID Log (Risks-Assumptions-Issues-Dependencies). Technical Prototypes may also be built to prove some of the technologies being selected.
Influence – Architects also need to be able to exert the required influence to carry through their vision and its realisation. Some people would call this governance; for me, however, influence is a more subtle, background task that, if done well, goes almost unnoticed but nonetheless has the desired effect. Influencing is about having and following a Stakeholder Management Plan and communicating your architecture frequently to your key stakeholders.
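The Architectural Decisions mentioned in the Envision and Realise steps can be captured very simply as you go. The sketch below uses field names from common architecture-decision-record practice; it is my own illustration and is not tied to any particular tool or method.

```python
from dataclasses import dataclass


@dataclass
class ArchitecturalDecision:
    """One recorded decision, in the spirit of an architecture decision record."""
    title: str
    status: str        # e.g. "proposed", "accepted", "superseded"
    context: str       # the forces that made a decision necessary
    decision: str      # what was decided
    consequences: str  # what follows from the decision, good and bad


class DecisionLog:
    """A running log of decisions, kept alongside the architecture itself."""
    def __init__(self):
        self._decisions = []

    def record(self, decision):
        self._decisions.append(decision)

    def accepted(self):
        return [d.title for d in self._decisions if d.status == "accepted"]


log = DecisionLog()
log.record(ArchitecturalDecision(
    title="Use an ESB for legacy integration",
    status="accepted",
    context="Several legacy systems expose incompatible interfaces.",
    decision="Route all legacy calls through an enterprise service bus.",
    consequences="Single integration point, but the ESB becomes a dependency.",
))
print(log.accepted())  # ['Use an ESB for legacy integration']
```

Even a log this simple makes the vision auditable: when stakeholders challenge the architecture, the context and consequences of each decision are already written down.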
Each of these steps is performed many times, or even continuously, on a project. Influencing means listening as well, and this may lead to changes in your vision, thus starting the whole cycle again.
Adopting this approach is not guaranteed to give you perfect systems, as there are lots of other factors, people included, that come into play during a typical project. What it will do is give you a framework that allows you to bring some level of rigour to the way you develop your architectures and work with stakeholders on a project.
So, here we go again. The BBC today report that “IT giants are ‘ripping off’ Whitehall, say MPs”. As I presumably work for one of those “IT giants” I will attempt to comment on this in as impartial a way as is possible.
As long as we have ‘IT projects’ rather than ‘business improvement’ or ‘business change’ projects in government, or anywhere else come to that, we (and it is ‘we’ as tax payers) will continue to get ‘ripped off’. Buying IT because it is ‘sexy‘ is always going to end in tears. IT is a tool that may or may not fix a business problem. Unless you understand the true nature of that business problem throwing IT at it is doomed to failure. This is what software architects need to focus on. I’m coming to the conclusion that the best architects are actually technophobes rather than technophiles.
It’s not Whitehall that is being ‘ripped off’ here. It’s you and me as tax payers (assuming you live in the UK and pay taxes to the UK government, of course). Whether you work in IT or anywhere else, this affects you.
It’s not only understanding the requirements that is important; it’s also challenging those requirements, as well as the business case that led to them in the first place. I suspect that many, many projects have been dreamt up as someone’s fantasy, nice-to-have system rather than having any real business value.
Governments should be no different from anyone else when it comes to buying IT. If I’m in the market for a new laptop I usually spend a little time reading up on what other buyers think and generally make sure I’m not about to buy something that’s not fit for purpose. One of the criticisms levelled at government in this report is the “lack of IT skills in government and over-reliance on contracting out”. In other words, there are not enough experienced architects working in government who can challenge some of the assumptions and proposed solutions that come from vendors.
Both vendors and government departments need to learn how to make agile work on large projects. We have enough experience now to know that multi-year, multi-person, multi-million pound projects that aim to deliver in ‘big-bang’ fashion just do not work. Bringing a more agile approach to the table, delivering a little but more often so users can verify and feed back on what they are getting for their money, is surely the way to go. This approach depends on more trust between client and supplier, as well as better and more continuous engagement throughout the project’s life.
A number of items in the financial and business news last week set me thinking about why architecture matters to innovation. Both IBM and Apple announced their second-quarter results. IBM’s revenue for Q2 2011 was $26.7B, up 12% on the same quarter last year, and Apple’s revenue for the same quarter was $24.67B, an incredible 83% jump on the same quarter last year. As I’m sure everyone now knows, IBM is 100 years old this year whereas Apple is a mere 35 years old. It looks like both Apple and IBM will become $100B companies this year if all goes to plan (IBM having missed joining the $100B club by a mere $0.1B in 2010). Coincidentally, a Forbes article also caught my eye. Forbes listed the top 100 innovative companies. Top of the list was salesforce.com, Apple were number 5 and IBM were, er, not in the top 100! So what’s going on here? How can a company that pretty much invented the mainframe and the personal computer, helped put a man on the moon, invented the scanning tunnelling microscope and scratched the letters IBM onto a nickel crystal one atom at a time, and, most recently, took artificial intelligence a giant leap forward with Watson not be classed as innovative?
Perhaps the clue is in what the measure of innovation is. The Forbes article measures innovation by an “innovation premium” which it defines as:
A measure of how much investors have bid up the stock price of a company above the value of its existing business based on expectations of future innovative results (new products, services and markets).
So it would appear that, going by this definition of innovation, investors don’t expect IBM to bring any innovative products or services to market, whereas the world will no doubt be inundated with all sorts of shiny iThingys over the course of the next year or so. But is that really all there is to being innovative? I would venture not.
The final article that caught my eye was about Apple’s cash reserves. Depending on which source you read, these stand at around $60B and, as anyone who has any cash to invest knows, sitting on it is not the best way of getting good returns! Companies generally have a few options when they amass so much cash: pay out higher dividends to shareholders, buy back their own shares, invest more in R&D, or go on a buying spree and acquire companies that fill holes in their portfolio. Whilst acquisition is a good way of quickly entering markets a company may not be active in, it tends to backfire on the innovation premium, as mergers and acquisitions (M&A) are not, at least initially, seen as bringing anything new to market. M&A has been IBM’s approach over the last decade or so. As well as the big software brands like Lotus, Rational and Tivoli, IBM has more recently bought lots of smaller software companies such as Cast Iron Systems, SPSS and Netezza.
A potential problem with this approach is that people don’t want to buy a “bag of bits” and have to assemble their own solutions Lego style. What they want are business solutions that address the very real and complex (wicked, even) problems they face today. This is where the software architect comes into his or her own. The role of the software architect is to “take existing components and assemble them in interesting and important ways“. To that I would add innovative ways as well. Companies no longer want the same old solutions (ERP system, contact management system etc) but new and innovative systems that solve their business problems. This is why we have one of the more interesting jobs there is out there today!
The first three of these blog posts (here, here and here) have looked at the process behind developing business processes and services that could be deployed into an appropriate environment, including a cloud (private, public or hybrid). In this final post I’ll take a look at how to make this ‘real’ by describing an architecture that could be used for developing and deploying services, together with some software products for realising that architecture. The diagram below shows both the development-time and run-time logical architecture of a system that could be used for developing and deploying business processes and services. This has been created using the sketching capability of Rational Software Architect.
Here’s a brief summary of what each of the logical components in this architecture sketch do (i.e. their responsibilities):
SDLC Repository – The description of the SDLC goes here. That is the work breakdown structure, a description of all the phases, activities and tasks as well as the work products to be created by each task and also the roles used to create them. This would be created and modified by the actor Method Author using a SDLC Developer tool. The repository would typically include guidance (examples, templates, guidelines etc) that show how the SDLC is to be used and how to create work products.
SDLC Developer – The tool used by the Method Author to compose new, or modify existing, processes. This tool publishes the SDLC to the SDLC Repository.
Development Artefacts Repository – This is where the work products that are created on an actual project (i.e. ‘instances’ of the work products described in the SDLC) get placed.
Business Process Developer – The tool used to create and modify business processes.
IT Service Developer – The tool used to create and modify services.
Development Repository – This is where ‘code’ level artefacts get stored during development. This could be a subset of the Development Artefacts Repository.
Runtime Services Repository – Services get published here once they have been certified and can be released for general use.
Process Engine – Executes the business process.
Enterprise Service Bus – Runs the services and provides adapters to external or legacy systems.
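To make the runtime components a little more concrete, here is a deliberately simplified sketch of how a services repository and a process engine might interact: services are published to the repository once certified, and the engine looks them up by name as it executes each process step. Every name and interface here is illustrative and does not correspond to any product API.

```python
class RuntimeServicesRepository:
    """Holds certified services, keyed by name."""
    def __init__(self):
        self._services = {}

    def publish(self, name, service, certified=False):
        # Only certified services may be released for general use.
        if not certified:
            raise ValueError(f"service {name!r} must be certified before publishing")
        self._services[name] = service

    def lookup(self, name):
        return self._services[name]


class ProcessEngine:
    """Executes a business process, one service invocation per step."""
    def __init__(self, repository):
        self.repository = repository

    def execute(self, process):
        # A process here is just an ordered list of (service_name, input) steps.
        results = []
        for service_name, payload in process:
            service = self.repository.lookup(service_name)
            results.append(service(payload))
        return results


repository = RuntimeServicesRepository()
repository.publish("uppercase", str.upper, certified=True)
engine = ProcessEngine(repository)
print(engine.execute([("uppercase", "approve order")]))  # ['APPROVE ORDER']
```

The useful property this illustrates is indirection: the process refers to services only by name, so an implementation can be swapped in the repository without touching the process definition.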
Having described the logical components, the next step is to show how they can be realised using one or more vendors’ products. It is no surprise that I am going to show how these map to products from IBM’s portfolio; however, your own particular requirements (including who’s on your preferred vendor list, of course) may dictate that you choose other vendors’ products. Nearly all the IBM product links allow you to download trial versions that you can use to try out this approach.
Rational Method Composer – This enables you to manage, author, evolve, measure and deploy effective processes (SDLCs) tailored to your project needs. It is based on Eclipse. Rational Method Composer allows publishing to a web site so effectively covers the needs of both the SDLC Repository and SDLC Developer components.
IBM Business Process Manager – This is the latest name for IBM’s combined development and runtime business process server. As well as a business process runtime, ESB and BPM repository it also includes design tools for building processes and services. The Process Designer allows business authors to build fully executable BPMN processes that include user interfaces for human interaction. The Integration Designer enables IT developers to develop services that easily plug into processes to provide integration and routing logic, data transformation and straight-through BPEL subprocesses. See this whitepaper for more information or click here for the IBM edition of the book BPM for Dummies. IBM Business Process Manager realises the components: Business Process Developer, IT Service Developer, Development Repository, Process Engine and Enterprise Service Bus.
WebSphere Service Registry and Repository – Catalogues and organises assets and services, allowing customers to get a handle on what assets they have and making them easy to locate and distribute. It also enables policy management across the SOA lifecycle, spanning various policy domains including runtime policies as well as service governance policies. Included in the Advanced Lifecycle Edition is Rational Asset Manager, which provides lifecycle management capabilities to manage asset workflow from concept, development, build and deployment through to retirement, as well as Build Forge integration. WebSphere Service Registry and Repository realises the Development Artefacts Repository as well as the Runtime Services Repository.
So, there it is. An approach for developing services as well as an initial architecture allowing for the development and deployment of both business processes and services together with some actual products to get you started. Please feel free to comment here or in any of my links if you have anything you’d like to say.