The Fall and Rise of the Full Stack Architect


Almost three years ago to the day, I wrote a post here called Happy 2013 and Welcome to the Fifth Age! The ‘ages’ of (commercial) computing discussed there were:

  • First Age: The Mainframe Age (1960 – 1975)
  • Second Age: The Mini Computer Age (1975 – 1990)
  • Third Age: The Client-Server Age (1990 – 2000)
  • Fourth Age: The Internet Age (2000 – 2010)
  • Fifth Age: The Mobile Age (2010 – 20??)

One of the things I wrote in that article was this:

“Until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in.”

So, three years later, are we any closer to having a multi-platform technology? Where does cloud computing fit into all of this and is multi-platform technology making the world get more or less complex for us as architects?

In this post I argue that cloud computing is actually taking us to an age where, rather than having to spend our time dealing with the complexities of the different layers of architecture, we can be better utilised by focussing on delivering business value in the form of new and innovative services. In other words, rather than having to specialise as layer architects we can become full-stack architects who create value rather than unwanted or misplaced technology. Let’s explore this further.

The idea of the full stack architect.

Vitruvius, the Roman architect and civil engineer, defined the role of the architect thus:

“The ideal architect should be a [person] of letters, a mathematician, familiar with historical studies, a diligent student of philosophy, acquainted with music, not ignorant of medicine, learned in the responses of jurisconsults, familiar with astronomy and astronomical calculations.”

Vitruvius also believed that an architect should focus on three central themes when preparing a design for a building: firmitas (strength), utilitas (functionality), and venustas (beauty).

Vitruvian Man by Leonardo da Vinci

For Vitruvius then the architect was a multi-disciplined person, knowledgeable in both the arts and sciences. Architecture was not just about functionality and strength but beauty as well. If such a person actually existed then they had a fairly complete picture of the whole ‘stack’ of things that needed to be considered when architecting a new structure.

So how does all this relate to IT?

In the first age of computing (roughly 1960 – 1975) life was relatively simple. There was a mainframe computer hidden away in the basement of a company, managed by a dedicated team of operators who guarded their prized possession with great care and controlled who had access to it and when. What you could do with these systems was limited not only by cost and availability but also by the fact that their architectures were fixed and the choice of programming languages (Cobol, PL/I and assembler come to mind) to make them do things was also pretty limited. The architect (should such a role have actually existed then) had a fairly simple task: the options were relatively limited and the architectural decisions that needed to be made were correspondingly straightforward. Like Vitruvius’ architect, one could see that it was fairly straightforward to understand the full compute stack upon which business applications needed to run.

Indeed, as the understanding of these computing engines increased you could imagine that the knowledge of the architects and programmers who built systems around these workhorses of the first age reached something of a ‘plateau of productivity’*.

Architecture Stacks 3

However things were about to get a whole lot more complicated.

The fall of the full stack architect.

As IT moved into its second age and beyond (i.e. with the advent of mini computers, personal computers, client-server, the web and the early days of the internet) the breadth and complexity of the systems that were built increased. This is not just because of the growth in the number of programming languages, compute platforms and technology providers but also because each age has built another layer on the previous one. The computers from a previous age never go away; they just become the legacy that subsequent ages must deal with. Complexity has also increased because of the pervasiveness of computers. In the fifth age the number of people whose lives are now affected by these machines is orders of magnitude greater than it was in the first age.

All of this has led to niches and specialisms that were inconceivable in the early age of computing. As a result, architecting systems also became more complex, giving rise to what have been termed ‘layer’ architects whose specialities were application architecture, infrastructure architecture, middleware architecture and so on.

Architecture Stacks

Whole professions have been built around these disciplines leading to more and more specialisation. Inevitably this has led to a number of things:

  1. The need for communication between the disciplines (and for them to understand each other’s ‘language’).
  2. As more knowledge accrues in one discipline, and people specialise in it more, it becomes harder for inter-disciplinary understanding to happen.
  3. Architects became hyper-specialised in their own discipline (layer). This led to a kind of ‘peak of inflated expectations’* (at least amongst practitioners of each discipline) as to what they could achieve using the technology they were so well versed in, but something of a ‘trough of disillusionment’* for the business (who paid for those systems) when they did not deliver the expected capabilities and came in over cost and behind schedule.

Architecture Stacks 4

So what of the mobile and cloud age which we now find ourselves in?

The rise of the full stack architect.

As the stack we need to deal with has become more ‘cloudified’ and we have moved from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) it has become easier to understand the full stack as an architect. We can, to some extent, take for granted the lower, specialised parts of the stack and focus on the applications and data that are the differentiators for a business.

Architecture Stacks 2

We no longer have to worry about what type of server to use, or even what operating system or programming environment to select. Instead we can focus on what the business needs and how that need can be satisfied by technology. With the right tools and the right cloud platforms we can hopefully climb the ‘slope of enlightenment’ and reach a new ‘plateau of productivity’*.

Architecture Stacks 5
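To make this concrete, here is a minimal sketch of the kind of code a full-stack architect can now treat as the top of the stack on a PaaS: a small web service exposing a single business capability. It is illustrative only; the service, route and price data are invented for the example, and the assumption is that the platform itself supplies the operating system, runtime, routing and scaling underneath it.

```python
# A minimal, illustrative business service as it might be deployed to a PaaS.
# The platform is assumed to provide the OS, runtime, load balancing and scaling;
# reading the port from the PORT environment variable is a common PaaS convention.
import os
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative price list standing in for real business data.
PRICES = {"widget": 9.99, "gadget": 19.99}

@app.route("/quote/<product>")
def quote(product):
    # The business logic is the differentiator; infrastructure stays with the platform.
    return jsonify(product=product, price=PRICES.get(product))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

The point is not the few lines of Python but what is absent: no server sizing, no operating system choice, no clustering configuration. Those architectural decisions have moved down into the platform.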

As Neal Ford, Software Architect at Thoughtworks, says in this video:

“Architecture has become much more interesting now because it’s become more encompassing … it’s trying to solve real problems rather than play with abstractions.”

 

I believe that the fifth age of computing really has the potential to take us to a new plateau of productivity and hopefully allow all of us to be the kind of architects described by this great definition from the author, marketeer and blogger Seth Godin:

“Architects take existing components and assemble them in interesting and important ways.”

What interesting and important things are you going to do in this age of computing?

* Diagrams and terms borrowed from Gartner’s hype cycle.

The Wicked Problems of Government

The dichotomy of our age is surely that as our machines become more and more intelligent the problems that we need them to solve are becoming ever more difficult and intractable. They are truly wicked problems, nowhere more so than in our offices of power, where the addition of political and social ‘agendas’ would seem to make some of the problems we face even more difficult to address.

A Demonstration Against the Infamous ‘Poll Tax’

In their book The Blunders of Our Governments the authors Anthony King and Ivor Crewe recall some of the most costly mistakes made by British governments over the last three decades. These include policy blunders such as the so-called poll tax introduced by the Thatcher government in 1990, which led to rioting on the streets of many UK cities (above). Like the poll tax, many (in fact most) of the blunders recounted are not IT related; however, the authors do devote a whole chapter (chapter 13, rather appropriately) to the more egregious examples of successive governments’ IT blunders. These include:

  • The Crown Prosecution Service, 1989 – A computerised system for tracking prosecutions. Meant to be up and running by 1993-94, abandoned in 1997 following a critical report from the National Audit Office (NAO).
  • The Department of Social Security, 1994 – A system to issue pensions and child benefits using swipe cards rather than the traditional books which were subject to fraud and also inefficient. The government cancelled the project in 1999 after repeated delays and disputes between the various stakeholders and following another critical report by the NAO.
  • The Home Office (Immigration and Nationality Directorate), 1996 – An integrated casework system to deal with asylum, refugee and citizenship applications. The system was meant to be live by October of 1998 but was cancelled in 1999 at a cost to the UK taxpayer of at least £77 million. The backlog of asylum and citizenship cases which the system was meant to address had got worse, not better.

Whilst the authors don’t offer any cast-iron solutions to these problems they do highlight a number of factors these blunders had in common. Many of these were highlighted in a joint Royal Academy of Engineering and British Computer Society report published 10 years ago this month called The Challenges of Complex IT Projects. The major reasons found for why complex IT projects fail included:

  • Lack of agreed measures of success.
  • Lack of clear senior management ownership.
  • Lack of effective stakeholder management.
  • Lack of project/risk management skills.
  • Evaluation of proposals driven by price rather than business benefits.
  • Projects not broken into manageable steps.

In an attempt to address at least some of the issues around the procurement and operation of government IT systems (issues not restricted to the UK, of course), in particular citizen-facing services delivered over the internet, the coalition government that came to power in May 2010 commissioned a strategic review of its online delivery of public services by the UK Digital Champion, Martha Lane Fox. Her report, published in November 2010, recommended:

  • Provision of a common look and feel for all government departments’ transactional online services to citizens and business.
  • The opening up of government services and content, using application programme interfaces (APIs), to third parties (a rough sketch of what consuming such an API might look like follows this list).
  • Putting a new central team in Cabinet Office that is in absolute control of the overall user experience across all digital channels and that commissions all government online information from other departments.
  • Appointing a new CEO for digital in the Cabinet Office with absolute authority over the user experience across all government online services and the power to direct all government online spending.
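As a rough illustration of what the second recommendation means in practice, the sketch below shows a third party consuming a hypothetical government API. The endpoint URL and JSON fields are assumptions invented for the example; they do not describe any real government service.

```python
# Illustrative only: the endpoint and fields below are hypothetical, standing in
# for the kind of open, API-accessible government service the report recommends.
import requests

def fetch_service_status(service_id: str) -> dict:
    url = f"https://api.example.gov.uk/services/{service_id}/status"  # hypothetical endpoint
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    status = fetch_service_status("passport-applications")
    print(status.get("open"), status.get("average_wait_days"))
```

Once services are exposed in this way, third parties can build their own front ends and mash-ups on top of government content rather than waiting for each department to build every channel itself.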

Another government report, published in July of 2011 by the Public Administration Select Committee and entitled Government and IT – “a recipe for rip-offs” – time for a new approach, proposed 33 recommendations on how government could improve its woeful record for delivering IT. These included:

  • Developing a strategy to either replace legacy systems with newer, less costly systems, or open up the intellectual property rights to competitors.
  • Contracts to be broken up to allow for more effective competition and to increase opportunities for SMEs.
  • The Government must stop departments specifying IT solutions and ensure they specify what outcomes they wish to achieve.
  • Having a small group within government with the skills to both procure and manage a contract in partnership with its suppliers.
  • Senior Responsible Owners (SROs) should stay in post to oversee the delivery of the benefits for which they are accountable and which the project was intended to deliver.

At least partly as a result of these reports and their recommendations the Government Digital Service (GDS) was established in April 2011 under the leadership of Mike Bracken (previously Director of Digital Development at The Guardian newspaper). GDS works in three core areas:

  • Transforming 25 high volume key exemplars from across government into digital services.
  • Building and maintaining the consolidated GOV.UK website – which brings government services together in one place.
  • Changing the way government procures IT services.

To the large corporates that have traditionally provided IT software, hardware and services to government, GDS has had a big impact on how they do business. Not only does most business now have to be transacted through the government’s own CloudStore but GDS also encourages a strong bias in favour of:

  • Software built on open source technology.
  • Systems that conform to open standards.
  • Using the cloud where it makes sense to do so.
  • Agile based development.
  • Working with small to medium enterprises (SMEs) rather than the large corporates seen as “an oligarchy that is ripping off the government”.

There can be no doubt that the sorry litany of public sector IT project failures has, rightly or wrongly, caused the pendulum to swing strongly in the direction that favours the above approach when procuring IT. However, some argue that the pendulum has now swung a little too far. Indeed, the UK Labour party has launched its own digital strategy review led by shadow Cabinet Office minister Chi Onwurah. She talks about a need to be more context-driven, rather than transaction-focused, saying that while the GDS focus has been on redesigning 25 “exemplar” transactions, Labour feels this misses the complexity of delivering public services to the individual. Labour is also critical of the GDS’s apparent hostility to large IT suppliers, saying it is an “exaggeration” that big IT suppliers are “the bogeymen of IT”. While Labour supports competition and creating opportunities for SMEs, she said that large suppliers “shouldn’t be locked out, but neither should they be locked in”.

The establishment of the GDS has certainly provided a wake-up call for the large IT providers. However, and here I agree with the views expressed by Ms Onwurah, context is crucial and it’s far too easy to take an overly simplistic approach to trying to solve government IT issues. A good example of this is open source software. Open source software is certainly not free, and often not dramatically cheaper than proprietary software (which is often built using some elements of open source anyway) once support costs are taken into account. The more serious problem with open source is where the support for it comes from. As the recent Heartbleed security issue with OpenSSL has shown, there are dangers in entrusting mission-critical enterprise software to people who are not accountable (and may even be unknown).

One aspect of ‘solving’ wicked problems is to bring more of a multi-disciplinary approach to the table. I have blogged before about the importance of a versatilist approach in solving such problems. Like it or not, the world cannot be viewed in high-contrast black and white terms. One of the attributes of a wicked problem is that there is often no right or wrong answer, and addressing one aspect of the problem can often introduce other issues. Understanding context and making smart architecture decisions is one part of this. Another is whether the so-called SMAC (social, mobile, analytics and cloud) technologies can bring a radically new approach to the way government makes use of IT. That is something for discussion in future blog posts.

The Times They Are A-Changin’

Come senators, congressmen
Please heed the call
Don’t stand in the doorway
Don’t block up the hall
For he that gets hurt
Will be he who has stalled
There’s a battle outside and it is ragin’
It’ll soon shake your windows and rattle your walls
For the times they are a-changin’

So sang Bob Dylan in The Times They Are a-Changin’ from his third album of the same name, released in early 1964, which makes it 50 years old this year.

These are certainly epoch-changing times as we all try to understand the combined effect that social, mobile, analytics and cloud computing are going to have on the world and how we as software architects should react to them.

You may have noticed a lack of posts on this blog recently. This is partly due to my own general busyness but also due to the fact that I have been trying to understand and assimilate what impact these changes are likely to have on this profession of ours. Is it more of the same, just with the underlying technology changing (again), or is it really a fundamental change in the way the world is going to work from now on? Whichever it is, these are some of the themes I will be covering in upcoming posts in this (hopefully) reinvigorated blog.

I’d like to welcome you to my new place for Software Architecture Zen on the WordPress blogging platform. I’ve been running this blog over on Blogger for getting on for five years now but have decided this year to gradually move over here. I hope my readers will follow me here, but for now I aim to put posts in both places.

The Art of the Possible

This is an edited version of a talk I recently gave to a client. The full talk used elements of my “Let’s Build a Smarter Planet” presentation which you can find starting here.

The author, entrepreneur, marketer, public speaker and blogger Seth Godin has a wonderful definition for what architects do:

Architects take existing components and assemble them in interesting and important ways.

Software architects today have at their disposal a number of ‘large grain’ components, the elements of which we can assemble in a multitude of “interesting and important” ways to make fundamental changes to the world and truly build a smarter planet. These components are shown in the diagram below.

The authors Robert Scoble and Shel Israel in their book Age of Context describe the coming together of these components (actually their components are mobile, social, data, sensors and location) as a perfect storm, comparing them with the forces of nature that occasionally converge to whip up a fierce tropical storm.

Of course, like any technological development, there is a down side to all this. As Scoble and Israel point out in their book:

The more the technology knows about you, the more benefits you will receive. That can leave you with the chilling sensation that big data is watching you…

I’ve taken a look at some of this myself here.

Predicting the future is of course a notoriously tricky business. As the late, great science fiction author Arthur C. Clarke said:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

The future, even five years hence, is likely to be very different from what it is now, and predicting what might or might not be, even that far ahead, is not an exact science. Despite the perils of making such predictions, IBM Research’s so-called 5 in 5 predictions for this year describe five innovations that will change the way we live, from classrooms that learn to cyber guardians, within the next five years. Here are five YouTube videos that describe these innovations. Further information on 5 in 5 can be found here.

  1. The classroom will learn you.
  2. Buying local will beat online.
  3. Doctors will routinely use your DNA to keep you well.
  4. The city will help you live in it.
  5. A digital guardian will protect you online.

We already have the technology to make our planet ‘smarter’. How we use that technology is limited only by our imagination…

Happy 2013 and Welcome to the Fifth Age!

I would assert that the modern age of commercial computing began roughly 50 years ago with the introduction of the IBM 1401, one of the world’s first fully transistorized computers, announced in October of 1959. By the mid-1960s almost half of all computer systems in the world were 1401 type machines. During the subsequent 50 years we have gone through a number of different ages of computing, each corresponding to the major, underlying architecture which was dominant during that age or period. The ages with their (very) approximate time spans are:

  • Age 1: The Mainframe Age (1960 – 1975)
  • Age 2: The Mini Computer Age (1975 – 1990)
  • Age 3: The Client-Server Age (1990 – 2000)
  • Age 4: The Internet Age (2000 – 2010)
  • Age 5: The Mobile Age (2010 – 20??)

Of course, the technologies from each age have never completely gone away; they are just not the predominant driving IT force any more (there are still estimated to be some 15,000 mainframe installations world-wide, so mainframe programmers are not about to see the end of their careers any time soon). Equally, there are other technologies bubbling under the surface, running alongside and actually overlapping these major waves. For example, networking has evolved from providing the ability to connect a “green screen” to a centralised mainframe, and then a mini, to the ability to connect thousands, then millions and now billions of devices. The client-server age and internet age were dependent on cheap and ubiquitous desktop personal computers, whilst the current mobile age is driven by offspring of the PC, now unshackled from the desktop, which run the same applications (and much, much more) on smaller and smaller devices.

These ages are also characterized by what we might term a decoupling and democratization of the technology. The mainframe age saw the huge and expensive beasts locked away in corporate headquarters and only accessible by qualified members of staff of those companies. Contrast this to the current mobile age where billions of people have devices in their pockets that are many times more powerful than the mainframe computers of the first age of computing and which allow orders of magnitude increases in connectivity and access to information.

Another defining characteristic of each of these ages is the major business uses that the technology was put to. The mainframe age was predominantly about centralised systems running companies’ core business functions that were financially worthwhile to automate or manually complex to administer (payroll, core accounting functions and so on). The mobile age is characterised by mobile enterprise application platforms (MEAPs) and apps which are cheap enough to be used just once and sometimes perform only a single function or relatively few functions.

Given that each of the ages of computing to date has run for 10 – 15 years and the current mobile age is only two years old, what predictions are there for how this age might pan out, and what should we, as architects, be focusing on and thinking about? As you might expect at this time of year there is no shortage of analyst reports providing all sorts of predictions for the coming year. This joint Appcelerator/IDC Q4 2012 Mobile Developer Report particularly caught my eye as it polled almost 3000 Appcelerator Titanium developers on their thoughts about what is hot in the mobile, social and cloud space. The reason it is important to look at what platforms developers are interested in is, of course, that they can make or break whether those platforms grow and survive over the long term. Microsoft Windows and Apple’s iPhone both took off because developers flocked to those platforms and developed applications for them in preference to competing platforms (anyone remember OS/2?).

As you might expect, most developers’ preferences are to develop for the iOS platforms (iPhone and iPad), closely followed by Android phones and tablets, with nearly a third also developing using HTML5 (i.e. cross-platform). Windows phones and tablets are showing some increased interest but Blackberry’s woes would seem to be increasing, with a slight drop-off in developer interest in those platforms.

Nearly all developers (88.4%) expected that they would be developing for two or more OSes during 2013. Now that consumers have an increasing number of viable platforms to choose from, the ability to build a mobile app that is available cross-platform is a must for a successful developer.

Understanding mobile platforms and how they integrate with the enterprise is one of the top skills that will be needed over the next few years as the mobile age really takes off. (Consequently it is also going to require employers to work more closely with universities to ensure those skills are obtained.)

In many ways the fifth age of computing has actually taken us back several years (to pre-internet days) when developers had to support a multitude of operating systems and computer platforms. As a result many MEAP providers are investing in cross-platform development tools, such as IBM’s Worklight, which is also part of the IBM Mobile Foundation. This platform also adds intelligent end point management (addressing the issues of security, complexity and BYOD policies) together with an integration framework that enables companies to rapidly connect their hybrid world of public clouds, private clouds and on-premise applications.

For now then, at least until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in. Here’s to a complex 2013!

Disruptive Technologies, Smarter Cities and the New Oil

Last week I attended the Smart City and Government Open Data Hackathon in Birmingham, UK. The event was sponsored by IBM and my colleague Dr Rick Robinson, who writes extensively on Smarter Cities as The Urban Technologist, gave the keynote session to kick off the event. The idea of this particular hackathon was to explore ways in which various sources of open data, including the UK government’s own open data initiative, could be used in new and creative ways to improve the lives of citizens and make our cities smarter as well as generally better places to live in. There were some great ideas discussed, including how to predict future jobs as well as identifying citizens who had not claimed benefits to which they were entitled (those benefits then going back into the local economy through purchases of goods and services).

The phrase “data is the new oil” is by no means a new one. It was first used by Michael Palmer in 2006 in this article. Palmer says:

Data is just like crude. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.

Whilst this is a nice metaphor I think I actually prefer the slight adaptation proposed by David McCandless in his TED talk, The beauty of data visualization, where he coins the phrase “data is the new soil”. The reason is that data needs to be worked and manipulated, just like a good farmer looking after his land, to get the best out of it. In the case of the work done by McCandless this involves creatively visualizing data to show new understandings or interpretations and, as Hans Rosling says, to let the data set change your mind set.

Certainly one way data is most definitely not like oil is that it is increasing at exponential rates rather than rapidly diminishing. But it’s not only data. The new triumvirate of data, cloud and mobile is forging a whole new mega-trend in IT, nicely captured in this equation proposed by Gabrielle Byrne at the start of this video:

e = mc(imc)²

Where:

  • e is any enterprise (or city, see later)
  • m is mobile
  • c is cloud
  • imc is in-memory computing, or stream computing, the instant analysis of masses of fast-changing data (a small sketch of this idea follows the list)
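To give a flavour of what the ‘imc’ term means in practice, here is a small sketch of stream computing: readings are analysed as they arrive, with only a small rolling window held in memory, rather than being stored first and analysed later. All the names, values and thresholds are invented for the example.

```python
# Illustrative stream-computing sketch: analyse readings as they arrive,
# keeping only a small rolling window in memory. All values are invented.
from collections import deque

def rolling_alerts(readings, window=5, threshold=30.0):
    """Yield an alert whenever the rolling average of the last `window` readings exceeds `threshold`."""
    recent = deque(maxlen=window)
    for value in readings:
        recent.append(value)
        average = sum(recent) / len(recent)
        if average > threshold:
            yield f"ALERT: rolling average {average:.1f} exceeds {threshold}"

if __name__ == "__main__":
    sensor_feed = [22.0, 25.5, 31.0, 33.5, 36.0, 29.0, 41.0]  # stands in for an endless feed
    for alert in rolling_alerts(sensor_feed):
        print(alert)
```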

This new trend is characterized by a number of incremental innovations that have taken place in IT over previous years in each of the three areas nicely captured in the figure below.

Source: CNET – Where IT is going: Cloud, mobile and data

In his blog post: The new architecture of smarter cities, Rick proposes that a Smarter City needs three essential ‘ingredients’ in order to be really characterized as ‘smart’. These are:

  • Smart cities are led from the top
  • Smart cities have a stakeholder forum
  • Smart cities invest in technology infrastructure

It is this last attribute that, when built on a suitable cloud-mobility-data platform, promises to fundamentally change not only enterprises but also cities and even whole nations. However it’s not just any old platform that needs to be built. In this post I discussed the concept behind so-called disruptive technology platforms and the attributes they must have. Namely:

  • A well defined set of open interfaces.
  • A critical mass of both end users and service providers.
  • Both scaleable and extremely robust.
  • An intrinsic value which cannot be obtained elsewhere.
  • Allow users to interact amongst themselves, maybe in ways that were not originally envisaged.
  • Service providers must be given the right level of contract that allows them to innovate, but without actually breaking the platform.

So what might a disruptive technology platform for a whole city look like, and what innovations might it provide? As an example of such a platform IBM have developed something they call the Intelligent Operations Center or IOC. The idea behind the IOC is to use information from a number of city agencies and departments to make smarter decisions based on rules that can be programmed into the platform. The idea, then, is that the IOC will be used to anticipate problems, minimize the impact of disruptions to city services and operations, and assist in the mobilization of resources across multiple agencies. The IOC allows aggregated data to be visualized in ways that the individual data sets cannot be, and new insights to be obtained from that data.
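To give a feel for the kind of rule such a platform might evaluate, here is a simplified sketch of a cross-agency rule: two feeds that are unremarkable on their own combine to trigger an action. This is a general illustration of the idea, not the IOC’s actual programming model, and all the feeds, fields and thresholds are invented.

```python
# Simplified sketch of a city-operations rule: combine feeds from two agencies and
# raise actions when a cross-agency condition holds. Entirely illustrative; this is
# not IBM's IOC API, and all data and thresholds are invented for the example.

def evaluate_flood_rule(rainfall_mm_per_hour: float, drains_blocked: int) -> list:
    actions = []
    # Neither feed alone is alarming, but together they suggest localised flooding risk.
    if rainfall_mm_per_hour > 10 and drains_blocked > 20:
        actions.append("Dispatch drainage crews to the affected wards")
        actions.append("Notify the transport agency of likely road closures")
    return actions

if __name__ == "__main__":
    weather_feed = {"rainfall_mm_per_hour": 14.2}   # hypothetical weather agency feed
    highways_feed = {"drains_blocked": 27}          # hypothetical highways agency feed
    for action in evaluate_flood_rule(weather_feed["rainfall_mm_per_hour"],
                                      highways_feed["drains_blocked"]):
        print(action)
```

The value comes from the aggregation: neither agency’s data set on its own would have triggered the response.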

Platforms like the IOC are only the start of what is possible in a truly smart city. They are just beginning to make use of mobile technology, data in the cloud and huge volumes of fast-moving data analysed in real time. Whether these platforms turn out to be really disruptive remains to be seen, but if this really is the age of “new oil” then only the limits of our imagination restrict how we will use that data to give us valuable new insights into building smart cities.