You can always tell a technology has reached a certain level of maturity when it gets its own slot on the BBC Radio 4 news programme ‘Today’, which runs here in the UK every weekday morning from 6am to 9am.
Yesterday morning (Tuesday 19th January) saw the UK government’s Chief Scientific Advisor, Sir Mark Walport, talking about blockchain (AKA distributed ledger) technology and advocating its use for a variety of (government) services. The interview was to publicise a new government report on distributed ledger technology (the Blackett review), which you can find here.
The report has a number of recommendations including the creation of a distributed ledger demonstrator and calls for collaboration between industry, academia and government around standards, security and governance of distributed ledgers.
As you would expect, there are a number of startups as well as established companies working on applications of distributed ledger technology, including R3CEV, whose head of technology is Richard Gendal Brown, an ex-colleague of mine from IBM. Richard tweets on all things blockchain here and has a great blog on the subject here; if you want to understand blockchain, his writings on the topic here are a good place to start. If you want an extremely interesting weekend read on the current state of bitcoin and blockchain technology, this is a great article.
IBM, recognising the importance of this technology and the impact it could have on society, is throwing its weight behind the Linux Foundation’s project that aims to advance this technology following the open source model.
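If, like me, you understand things best from a worked example, the core idea behind a blockchain is simpler than the hype suggests: it is essentially an append-only list of records in which each entry carries a cryptographic hash of the entry before it, so any tampering with history is immediately detectable. Here is a minimal Python sketch of that idea; the function names and toy structure are purely my own illustration, not taken from any particular ledger implementation, and it ignores the ‘distributed’ part entirely.

```python
import hashlib
import json
import time

def hash_block(block):
    """Hash a block's contents deterministically (illustrative only)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions, previous_hash):
    """Create a block that chains back to the previous block via its hash."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }

# A tiny chain: a genesis block followed by one further block.
genesis = make_block(["genesis"], previous_hash="0" * 64)
block1 = make_block(["Alice pays Bob 5"], previous_hash=hash_block(genesis))

# Verification: recompute the hash of each block and check the links agree.
# If anyone alters the genesis block, the recomputed hash no longer matches
# the value stored in block1 and the tampering is exposed.
assert block1["previous_hash"] == hash_block(genesis)
```

What this little sketch leaves out is the hard part: consensus, that is getting many mutually distrusting nodes to agree on which chain of blocks is the authoritative one, which is where much of the engineering effort in these projects goes.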
From a software architecture perspective I think this topic is going to be huge and is ripe for some first-mover advantage. Those architects who can steal a lead on not only understanding but also explaining this technology are going to be in high demand, and if you can help with applying the technology in new and innovative ways you are definitely going to be a rockstar!
As software architects we often get wrapped up in ‘the moment’ and are so focused on the immediate project deliverables and achieving the next milestone or sale that we rarely step back to consider the bigger picture and the wider ethical implications of what we are doing. I doubt many of us really stop to ask whether the application or system we are contributing to is one we should be involved in, or indeed one that should be built at all.
To be clear, I’m not just talking here about software systems for the defence industry such as guided missiles, fighter planes or warships, which clearly have one very definite purpose. I’m assuming that people who do work on such systems have thought, at least at some point in their lives, about the implications of what they are doing and have justified it to themselves. Most often the justification will be something along the lines of these systems being used for defence: if we don’t have them, the bad guys will surely come and get us. After all, the doctrine of mutual assured destruction (MAD) fuelled the Cold War in this way for the best part of fifty years.
Instead, I’m talking about systems which, whilst perfectly innocuous on the face of it, over time grow into behemoths far bigger than was ever intended and evolve into something completely different from their original purpose.
Obviously the biggest system we are all dealing with, and the one which has had a profound effect on all of our lives, whether we work to develop it or just use it, is the World Wide Web.
The Web is now in its third decade, so it is well clear of those tumultuous teenage years of trying to figure out its purpose in life and should now be entering a period of growing maturity and understanding of where it fits in the world. It should be pretty much ‘grown up’ in fact. However, the problem with growing up is that in your early years at least you are greatly influenced, for better or worse, by your parents.
The Web’s most obvious parent is, of course, Sir Tim Berners-Lee, who wrote in his book Weaving the Web:

“I articulated the vision, wrote the first Web programs, and came up with the now pervasive acronyms URL, HTTP, HTML, and, of course, World Wide Web. But many other people, most of them unknown, contributed essential ingredients, in much the same, almost random fashion. A group of individuals holding a common dream and working together at a distance brought about a great change.”
One of the “unknown” people (at least outside the field of information technology) was Ted Nelson. Ted coined the term hypertext in his 1965 paper Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate and founded Project Xanadu (in 1960), in which all the world’s information could be published in hypertext and all quotes, references and so on would be linked to more information and to the original source of that information. Most crucially for Nelson, because every quotation had a link back to its source, the original author of that quotation could be compensated in some small way (i.e. using what we now term micro-payments). Berners-Lee borrowed Nelson’s vision for hypertext, which is what allows all the links you see in this post to work, but with one important omission.
Nelson himself has stated that some aspects of Project Xanadu are being fulfilled by the Web, but sees it as a gross over-simplification of his original vision:
“HTML is precisely what we were trying to PREVENT— ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management.”
The last of these omissions (i.e. no rights management) is possibly one of the greatest oversights in the otherwise beautiful idea of the Web. Why?
Jaron Lanier, the computer scientist, composer and author, explains the difference between the Web and what Nelson proposed in Project Xanadu in his book Who Owns the Future as follows:
“A core technical difference between a Nelsonian network and what we have become familiar with online is that [Nelson’s] network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. … Two-way linking would preserve context. It’s a small simple change in how online information should be stored that couldn’t have vaster implications for culture and the economy.”
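To make that difference concrete, here is a minimal sketch, entirely my own illustration rather than Nelson’s actual design, of one-way versus two-way links. With one-way links (the Web we have) the target of a link has no idea who points at it; with two-way links every node knows its referrers, which is precisely the context that would make attribution, and hence Nelson’s micro-payments, tractable.

```python
class Page:
    """A node in a hypothetical hypertext network (illustrative only)."""

    def __init__(self, title):
        self.title = title
        self.links_out = []  # pages this page points to
        self.links_in = []   # pages that point at this page (the two-way part)

    def link_to(self, other, two_way=True):
        """Link to another page, optionally recording the back-reference."""
        self.links_out.append(other)
        if two_way:
            other.links_in.append(self)  # the target now knows its referrer


original = Page("Original article")
quoting = Page("Post quoting the article")

# Web-style (one-way): the original has no record of being quoted.
quoting.link_to(original, two_way=False)
print([p.title for p in original.links_in])  # [] -- context is lost

# Xanadu-style (two-way): the original can enumerate everyone who cites it,
# which is the precondition for attributing (and compensating) its author.
quoting.link_to(original, two_way=True)
print([p.title for p in original.links_in])  # ['Post quoting the article']
```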
So what are the cultural and economic implications that Lanier describes?
In both Who Owns the Future and his earlier book You Are Not a Gadget, Lanier articulates a number of concerns about how technology, and more specifically certain technologists, are leading us down a road to a dystopian future where not only will most middle-class jobs be almost completely wiped out but we will all be subservient to a small number of what Lanier terms siren servers. Lanier defines a siren server as “an elite computer or coordinated collection of computers, on a network characterised by narcissism, hyper amplified risk aversion, and extreme information asymmetry”. He goes on to make the following observation about them:
“Siren servers gather data from the network, often without having to pay for it. The data is analysed using the most powerful available computers, run by the very best available technical people. The results of the analysis are kept secret, but are used to manipulate the rest of the world to advantage.”
Lanier’s two books tend to ramble a bit but nonetheless contain a number of important ideas.
Idea #1: This is the one stated above: because we essentially rushed into building the Web without thinking through the implications of what we were doing, we have built up a huge amount of technical debt which could well be impossible to eradicate.
Idea #2: The really big siren servers (i.e. Facebook, Google, Twitter et al) have encouraged us to upload the most intimate details of our lives and in return have given us an apparently ‘free’ service. This, however, has conditioned us to expect to pay nothing, or very little, for any service, which makes it difficult for the workers who create the now digitised information (e.g. journalists, photographers and musicians) to earn a decent living. That is ultimately an economically unsustainable situation, because once those information creators are put out of business, who will create original content? The world cannot run on Facebook posts and tweets alone. As the musician David Byrne says here:
“The Internet has laid out a cornucopia of riches before us. I can read newspapers from all over the world, for example—and often for free!—but I have to wonder if that feast will be short-lived if no one is paying for the production of the content we are gorging on.”
Idea #3: The world is becoming overly machine-centric and people are too ready to hand over a large part of their lives to the new tech elite. These new sirenic entrepreneurs, as Lanier calls them, not only know far too much about us but can use the data we provide to modify our behaviour. This may be done deliberately, as in the case of an infamous experiment carried out by Facebook, or in unintended ways we as a society are only just beginning to understand.
Idea #4: The siren servers are imposing a commercial asymmetry on all of us. When we used to buy our information packaged in a physical form it was ours to do with as we wished. If we wanted to share a book, give away a CD or even sell a valuable record for a profit we were perfectly at liberty to do so. Now that all information is digital, however, we can no longer do that. As Lanier says, “with an ebook you are no longer a first-class commercial citizen but instead have tenuous rights within someone else’s company store.” If you want to use a different reading device or connect over a different cloud, in most cases you will lose access to your purchase.
There can be little doubt that the Web has had a huge transformative impact on all of our lives in the 21st century. We now have access to more information than we could assimilate even the tiniest fraction of in a human lifetime. We can reach out to almost any citizen in almost any part of the world at any time of the day or night. We can perform commercial transactions faster than would have been thought possible even 25 years ago, and we have access to new tools and processes that genuinely are transforming our lives for the better. All of this, however, comes at a cost, even when access to these bounties is apparently free. As architects and developers who help shape this brave new world, should we not take responsibility not only to point out where we may be going wrong but also to suggest ways in which we could improve things? This is something I intend to look at in some future posts.
Almost three years ago to the day I wrote a post here called Happy 2013 and Welcome to the Fifth Age! The ‘ages’ of (commercial) computing discussed there were:
First Age: The Mainframe Age (1960 – 1975)
Second Age: The Mini Computer Age (1975 – 1990)
Third Age: The Client-Server Age (1990 – 2000)
Fourth Age: The Internet Age (2000 – 2010)
Fifth Age: The Mobile Age (2010 – 20??)
One of the things I wrote in that article was this:
“Until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in.”
So, three years later, are we any closer to having a multi-platform technology? Where does cloud computing fit into all of this and is multi-platform technology making the world get more or less complex for us as architects?
In this post I argue that cloud computing is actually taking us to an age where, rather than having to spend our time dealing with the complexities of the different layers of architecture, we can be better utilised by focussing on delivering business value in the form of new and innovative services. In other words, rather than having to specialise as layer architects we can become full-stack architects who create value rather than unwanted or misplaced technology. Let’s explore this further.
The idea of the full stack architect.
Vitruvius, the Roman architect and civil engineer, defined the role of the architect thus:
“The ideal architect should be a [person] of letters, a mathematician, familiar with historical studies, a diligent student of philosophy, acquainted with music, not ignorant of medicine, learned in the responses of jurisconsults, familiar with astronomy and astronomical calculations.”
Vitruvius also believed that an architect should focus on three central themes when preparing a design for a building: firmitas (strength), utilitas (functionality), and venustas (beauty).
Vitruvian Man by Leonardo da Vinci
For Vitruvius, then, the architect was a multi-disciplinary person, knowledgeable in both the arts and the sciences. Architecture was not just about functionality and strength but about beauty as well. If such a person actually existed then they had a fairly complete picture of the whole ‘stack’ of things that needed to be considered when architecting a new structure.
So how does all this relate to IT?
In the first age of computing (roughly 1960 – 1975) life was relatively simple. There was a mainframe computer hidden away in the basement of a company, managed by a dedicated team of operators who guarded their prized possession with great care and controlled who had access to it and when. You were limited in what you could do with these systems not only by cost and availability but also by the fact that their architectures were fixed and the choice of programming languages (Cobol, PL/I and assembler come to mind) to make them do things was also pretty limited. The architect (should such a role have actually existed then) had a fairly simple task, as their options were relatively limited and the number of architectural decisions that needed to be made was correspondingly small. Like Vitruvius’ architect, one could fairly easily understand the full compute stack upon which business applications needed to run.
Indeed, as the understanding of these computing engines increased you could imagine that the knowledge of the architects and programmers who built systems around these workhorses of the first age reached something of a ‘plateau of productivity’*.
However things were about to get a whole lot more complicated.
The fall of the full stack architect.
As IT moved into its second age and beyond (i.e. with the advent of mini computers, personal computers, client-server, the web and the early days of the internet), the breadth and complexity of the systems that were built increased. This is not just because of the growth in the number of programming languages, compute platforms and technology providers, but also because each age has built another layer on the previous one. The computers from a previous age never go away; they just become the legacy that subsequent ages must deal with. Complexity has also increased because of the pervasiveness of computers. In the fifth age the number of people whose lives are now affected by these machines is orders of magnitude greater than it was in the first age.
All of this has led to niches and specialisms that were inconceivable in the early age of computing. As a result, architecting systems also became more complex, giving rise to what have been termed ‘layer’ architects, whose specialities were application architecture, infrastructure architecture, middleware architecture and so on.
Whole professions have been built around these disciplines, leading to more and more specialisation. Inevitably this has had a number of consequences:
The need for communication between the disciplines (and for them to understand each other’s ‘language’).
As more knowledge accrues in one discipline, and people specialise in it more, it becomes harder for inter-disciplinary understanding to happen.
Architects became hyper-specialised in their own discipline (layer), leading to a kind of ‘peak of inflated expectations’* (at least amongst the practitioners of each discipline) as to what they could achieve using the technology they were so well versed in, but something of a ‘trough of disillusionment’* for the business (who paid for those systems) when the resulting systems did not deliver the expected capabilities and came in over cost and behind schedule.
So what of the mobile and cloud age which we now find ourselves in?
The rise of the full stack architect.
As the stack we need to deal with has become more ‘cloudified’ and we have moved from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) it has become easier to understand the full stack as an architect. We can, to some extent, take for granted the lower, specialised parts of the stack and focus on the applications and data that are the differentiators for a business.
We no longer have to worry about what type of server to use or even what operating system or programming environments have to be selected. Instead we can focus on what the business needs and how that need can be satisfied by technology. With the right tools and the right cloud platforms we can hopefully climb the ‘slope of enlightenment’ and reach a new ‘plateau of productivity’*.
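To give a concrete, if deliberately simplified, illustration of that shift: on a typical PaaS the unit of design is little more than the application code itself, and everything beneath it is the platform’s concern. The tiny service below is my own sketch (standard library Python, so it stays vendor-neutral); the comments note the decisions the architect no longer has to make.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import os

class QuoteService(BaseHTTPRequestHandler):
    """A trivially simple business service: all of the design effort goes
    into what it returns, none into where or how it runs."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Architects assemble existing components in interesting ways.\n")

# On a PaaS the platform injects the port and takes care of provisioning the
# servers, patching the operating system and runtime, and scaling the number
# of instances; none of these are architectural decisions any more.
port = int(os.environ.get("PORT", 8080))
HTTPServer(("", port), QuoteService).serve_forever()
```

Whether it is something this trivial or something far richer, the architectural conversation moves up the stack to what the service should do for the business rather than what it should run on.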
As Neal Ford, Software Architect at ThoughtWorks, says in this video:
“Architecture has become much more interesting now because it’s become more encompassing … it’s trying to solve real problems rather than play with abstractions.”
I believe that the fifth age of computing really has the potential to take us to a new plateau of productivity and hopefully allow all of us to be the kind of architect described by this great definition from the author, marketeer and blogger Seth Godin:
“Architects take existing components and assemble them in interesting and important ways.”
What interesting and important things are you going to do in this age of computing?
In his book Fearless Genius the photographer Doug Menuez has produced a photographic essay on the “digital revolution” that took place between 1985 and 2000 in Silicon Valley, the area of California some 50 miles south of San Francisco that is home to some of the world’s most successful technology companies.
Fearless Genius by Doug Menuez
You can see a review of this book on my other blog here. Whilst the book covers a number of technology companies that were re-shaping the world during that tumultuous period, it focuses pretty heavily on Steve Jobs during the time he had been forced out of Apple and was trying to build his NeXT computer.
Steve Jobs Enjoying a Joke
In this video Doug Menuez discusses his photojournalism work during the period that the book documents and at the end poses these three powerful questions:
Computers will gain consciousness – shouldn’t we be having a public dialogue about that?
On education – who will be the next Steve Jobs, and where will she come from?
Why are all investments today so short term?
Where Will the Next Steve Jobs Come From?
All of which are summed up in the following wonderful quote:
“If anything in the future is possible, how do we create the best possible future?”
Here in the UK we are about to have an election and choose our leader(s) for the next five years. I find it worrying that there has been practically no debate on the impact that technology is likely to have during this time and how, as citizens of this country, we can get involved in trying to “create the best possible future”.
One person making exactly this case is Martha Lane Fox, who said in her Richard Dimbleby Lecture:

“We’re still wasting colossal fortunes on bad processes and bad technologies. In a digital world, it is perfectly possible to have good public services, keep investing in frontline staff and spend a lot less money. Saving money from the cold world of paper and administration and investing more in the warm hands of doctors, nurses and teachers.”
Martha Lane Fox Delivering Her Richard Dimbleby Lecture
I urge everyone to take a look at both Doug’s and Martha’s inspirational talks and, if you are here in the UK, to go to change.org and sign the petition to “create a new institution and make Britain brilliant at the internet”, so that we have a crack at developing our own fearless genius like Steve Jobs, wherever she may now be living.
Please note that all images in this post, apart from the last one, are (c) Doug Menuez and used with permission of the photographer.