What Makes a Tech City? (Hint: It’s Not the Tech)

Matthew Boulton, James Watt and William Murdoch

The above photograph is of a statue in Centenary Square, Birmingham in the UK. The three figures in it, Matthew Boulton, James Watt and William Murdoch, were the tech pioneers of their day, living in and around Birmingham and belonging to a loosely knit group who referred to themselves as The Lunar Society. The history of the Lunar Society and the people involved has been captured in the book The Lunar Men by Jenny Uglow.

“Amid fields and hills, the Lunar men build factories, plan canals, make steam-engines thunder. They discover new gases, new minerals and new medicines and propose unsettling new ideas. They create objects of beauty and poetry of bizarre allure. They sail on the crest of the new. Yet their powerhouse of invention is not made up of aristocrats or statesmen or scholars but of provincial manufacturers, professional men and gifted amateurs – friends who meet almost by accident and whose lives overlap until they die.”

From The Lunar Men by Jenny Uglow

You don’t have to live in the UK to have heard that Birmingham, like many of the other great manufacturing cities of the Midlands and Northern England, has somewhat lost its way over the century or so since the Lunar Men were creating their “objects of beauty and poetry of bizarre allure”. It’s now sometimes hard to believe that these great cities were the powerhouses and engines of the industrial revolution that changed not just England but the whole world. This was neatly summed up by Steven Knight, creator of the BBC television programme Peaky Blinders, set in the lawless backstreets of Birmingham in the 1920s. In a recent interview in the Guardian, Knight says:

“It’s typical of Brum that the modern world was invented in Handsworth and nobody knows about it. I am trying to start a “Make it in Birmingham” campaign, to get high-tech industries – film, animation, virtual reality, gaming – all into one place, a place where people make things, which is what Birmingham has always been.”

Likewise, Andy Street, Managing Director of John Lewis and Chair of the Greater Birmingham & Solihull Local Enterprise Partnership, had this to say about Birmingham in his University of Birmingham Business School Advisory Board guest lecture last year:

“Birmingham was once a world leader due to our innovations in manufacturing, and the city is finally experiencing a renaissance. Our ambition is to be one of the biggest, most successful cities in the world once more.”

Andy Street CBE – MD of John Lewis

If Birmingham and cities like it, not just in England but around the world, are to become engines of innovation once again then they need to make a step change in how they go about doing that. The lesson to be learned from the Lunar Men is that they did not wait for grants from central Government or the European Union, or for some huge corporation to move in and take things in hand, but drove innovation from their own passion and inquisitiveness about how the world worked, or could work. They basically got together, decided what needed to be done and got on with it. They literally designed and built the infrastructure that was to form the foundations of innovation for the next 100 years.

Today we talk of digital innovation and how the industries of our era are disrupting traditional ones (many of them formed by the Lunar Men and their descendants) for better and for worse. Now every city wants a piece of that action and wants to emulate the shining light of digital innovation and disruption, Silicon Valley in California. Is that possible? According to the Medium post To Invent the Future, You Must Understand the Past, the answer is no. The post concludes by saying:

“…no one will succeed because no place else — including Silicon Valley itself in its 2015 incarnation — could ever reproduce the unique concoction of academic research, technology, countercultural ideals and a California-specific type of Gold Rush reputation that attracts people with a high tolerance for risk and very little to lose.”

So can this really be true? A high tolerance of risk (and failure) is certainly one of the traits that makes for a creative society, and no amount of tax breaks or university research programmes is going to fix that problem. Taking the example of the Lunar Men though, one thing that cities can do to disrupt themselves from within is to effect change from the bottom up rather than the top down. Cities are, after all, made up of citizens, and citizens are the very people who not only know what needs changing but are also best placed to bring about that change.


With this in mind, an organisation in Birmingham called Silicon Canal (see here if you want to know where that name comes from), of which I am a part, has created a white paper putting forward our ideas on how to build a tech and digital ecosystem in and around Birmingham. You can download a copy of the white paper here.

The paper not only identifies the problem areas but also suggests how things can be improved, proposing potential solutions to grow the tech ecosystem in the Greater Birmingham area so that it can compete on an international stage. Download the white paper and read it; if you are based in Birmingham, join in the conversation, and if you’re not, use the research it contains to look at your own city and how you can help change it for the better.

The paper was launched at an event this week in the new iCentrum building at Innovation Birmingham, a great space that is starting to address one of the issues highlighted in the white paper, namely bringing together two key elements of a successful tech ecosystem: established companies and entrepreneurs.

Another event taking place in Birmingham next month is TEDx Brum – The Power of US, which promises lots of inspiring talks by local people who are already effecting change from within.

As a final comment, if you’re still not sure that you have the power to make changes that make a difference, here are some words from the late Steve Jobs:

“Everything around you that you call life was made up by people that were no smarter than you and you can change it, you can influence it, you can build your own things that other people can use.”

Steve Jobs

From Turing to Watson (via Minsky)

This week (Monday 25th) I gave a lecture about IBM’s Watson technology platform to a group of first year students at Warwick Business School. My plan was to write up the transcript of that lecture, with links for references and further study, as a blog post. The following day, when I opened up my computer to start writing the post, I saw that, by a sad coincidence, Marvin Minsky, the American cognitive scientist and co-founder of the Massachusetts Institute of Technology’s AI laboratory, had died only the day before my lecture. Here is that blog post, now updated with some references to Minsky and his pioneering work on machine intelligence.

Marvin Minsky in a lab at MIT in 1968 (c) MIT

First though, let’s start with Alan Turing, sometimes referred to as “the founder of computer science”, who led the team that developed a machine to break the Nazis’ Enigma code, which was used to encrypt messages sent between units on the battlefield during World War 2. The work of Turing and his team was recently brought to life in the film The Imitation Game, starring Benedict Cumberbatch as Turing and Keira Knightley as Joan Clarke, the only female member of the code-breaking team.

Alan Turing

Sadly, instead of being hailed a hero, Turing was persecuted for his homosexuality and committed suicide in 1954, having undergone a course of hormonal treatment to reduce his libido rather than serve a term in prison. It seems utterly barbaric and unforgivable that such an action could have been brought against someone who did so much to affect the outcome of WWII. It took nearly 60 years for his conviction to be overturned when, on 24 December 2013, Queen Elizabeth II signed a pardon for Turing, with immediate effect.

In 1949 Turing became Deputy Director of the Computing Laboratory at Manchester University, working on software for one of the earliest computers. During this time he worked in the emerging field of artificial intelligence and proposed an experiment which became known as the Turing test, having observed that “a computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”

The idea of the test was that a computer could be said to “think” if a human interrogator could not tell it apart, through conversation, from a human being.

Turing’s test was supposedly ‘passed’ in June 2014 when a computer called Eugene fooled several of its interrogators into believing that it was a 13-year-old boy. There has been much discussion since as to whether this was a valid run of the test, with many arguing that the so-called “supercomputer” was nothing but a chatbot or a script made to mimic human conversation. In other words, Eugene could in no way be considered intelligent. Certainly not in the sense that Professor Marvin Minsky would have defined intelligence, at any rate.

In the early 1970s Minsky, working with the computer scientist and educator Seymour Papert, began developing the theory he later set out in his book The Society of Mind, which combined both of their insights from the fields of child psychology and artificial intelligence.

Minsky and Papert believed that there was no real difference between humans and machines. Humans, they maintained, are actually machines of a kind whose brains are made up of many semiautonomous but unintelligent “agents.” Their theory revolutionized thinking about how the brain works and how people learn.

Despite the wider availability of apparently intelligent machines, with programs like Apple’s Siri, Minsky maintained that there had been “very little growth in artificial intelligence” in the past decade, saying that current work had been “mostly attempting to improve systems that aren’t very good and haven’t improved much in two decades”.

Minsky also thought that large technology companies should not get involved in the field of AI, saying: “we have to get rid of the big companies and go back to giving support to individuals who have new ideas because attempting to commercialise existing things hasn’t worked very well.”

Whilst much of the early work researching AI certainly came out of organisations like Minsky’s AI lab at MIT, it seems slightly disingenuous to believe that the commercialisation of AI, as being carried out by companies like Google, Facebook and IBM, is not going to generate new ideas. The drive for commercialisation (and profit), just like war in Turing’s time, is after all one of the ways, at least in the capitalist world, that innovation is created.

Which brings me nicely to Watson.

IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. It is named after Thomas J. Watson, the first CEO of IBM, who led the company from 1914 – 1956.

Thomas J. Watson

IBM Watson was originally built to compete on the US television program Jeopardy! On 14th February 2011 IBM entered Watson into a special three-day version of the program in which the computer was pitted against two of the show’s all-time champions. Watson won by a significant margin. So what is the significance of a machine winning a game show, and why is this a “game changing” event in more than the literal sense of the term?

Today we’re in the midst of an information revolution. Not only is the volume of data and information we’re producing dramatically outpacing our ability to make use of it, but the sources and types of data that inform the work we do and the decisions we make are broader and more diverse than ever before. Although businesses are implementing more and more data-driven projects using advanced analytics tools, they’re still only reaching 12% of the data they have, leaving the other 88% to go to waste. That’s because this 88% of data is “invisible” to computers. It’s the data that is encoded in language and unstructured information: text in the form of books, emails, journals, blogs, articles and tweets, as well as images, sound and video. If we are to avoid such a “data waste” we need better ways to make use of that data and generate “new knowledge” around it. We need, in other words, to be able to discover new connections, patterns and insights in order to draw new conclusions and make decisions with more confidence and speed than ever before.

For several decades we’ve been digitizing the world, building networks to connect the world around us. Today those networks connect not just traditional structured data sources but also unstructured data from social networks and, increasingly, Internet of Things (IoT) data from sensors and other intelligent devices.

From Data to Knowledge

These additional sources of data mean that we’ve reached an inflection point: the sheer volume of information generated is so vast that we no longer have the ability to use it productively. The purpose of cognitive systems like IBM Watson is to process the vast amounts of information stored in both structured and unstructured formats and help turn it into useful knowledge.

There are three capabilities that differentiate cognitive systems from traditional programmed computing systems.

  • Understanding: Cognitive systems understand like humans do, whether that’s through natural language or the written word, vocal or visual.
  • Reasoning: They can understand not only information but also the underlying ideas and concepts. This reasoning ability can become more advanced over time. It’s the difference between the reasoning strategies we used as children to solve mathematical problems and the strategies we developed when we got into advanced math like geometry, algebra and calculus.
  • Learning: They never stop learning. As a technology, this means the system actually gets more valuable with time. They develop “expertise”. Think about what it means to be an expert – it’s not about executing a mathematical model. We don’t consider our doctors to be experts in their fields because they answer every question correctly. We expect them to be able to reason and be transparent about their reasoning, and to expose the rationale for why they came to a conclusion.

The idea of cognitive systems like IBM Watson is not to pit man against machine but rather to have both reasoning together. Humans and machines have unique characteristics and we should not be looking for one to supplant the other but for them to complement each other. Working together with systems like IBM Watson, we can achieve the kinds of outcomes that would never have been possible otherwise.

IBM is making the capabilities of Watson available as a set of cognitive building blocks delivered as APIs on its cloud-based, open platform Bluemix. This means you can build cognition into your digital applications, products, and operations, using any one or combination of a number of available APIs. Each API is capable of performing a different task, and in combination, they can be adapted to solve any number of business problems or create deeply engaging experiences.

So what Watson APIs are available? Currently there are around forty, which you can find here together with documentation and demos. Four examples of the Watson APIs you will find at this link are:

  • Dialog – Use natural language to automatically respond to user questions.
  • Visual Recognition – Analyses the contents of an image or video and classifies it by category.
  • Text to Speech – Synthesizes speech audio from an input of plain text.
  • Personality Insights – Understands someone’s personality from what they have written.

It’s never been easier to get started with AI by using these cognitive building blocks. I wonder what Turing would have made of this technology, and how soon someone will be able to piece together current and future cognitive building blocks to really pass Turing’s famous test?
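To make that a little more concrete, here is a minimal sketch of what calling one of these building blocks typically looks like. The endpoint URL, credentials and payload below are placeholders of my own rather than the real Bluemix service definitions, but the general shape (authenticate, send some content over HTTPS, get a JSON analysis back) holds for most of the Watson APIs.

```python
# A minimal, hypothetical sketch of calling a Watson-style REST API.
# The URL, credentials and payload below are placeholders, not the actual
# Bluemix service endpoints; consult the API documentation for the real
# paths and request formats.
import requests

WATSON_URL = "https://example.bluemix.net/personality-insights/api/v3/profile"  # placeholder
USERNAME = "service-username"   # in Bluemix, credentials come from binding the service
PASSWORD = "service-password"

def analyse_text(text):
    """POST a block of plain text and return the service's JSON analysis."""
    response = requests.post(
        WATSON_URL,
        auth=(USERNAME, PASSWORD),
        headers={"Content-Type": "text/plain"},
        data=text.encode("utf-8"),
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    profile = analyse_text("I enjoy building systems that other people can use.")
    print(profile)
```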

Blockchain in UK Government

You can always tell when a technology has reached a certain level of maturity: it gets its own slot on the BBC Radio 4 news programme ‘Today’, which runs here in the UK every weekday morning from 6am to 9am.

Yesterday (Tuesday 19th January) morning saw the UK government’s Chief Scientific Advisor, Sir Mark Walport, talking about blockchain (AKA distributed ledger) technology and advocating its use for a variety of (government) services. The interview was to publicise a new government report on distributed ledger technology (the Blackett review), which you can find here.

The report has a number of recommendations including the creation of a distributed ledger demonstrator and calls for collaboration between industry, academia and government around standards, security and governance of distributed ledgers.

As you would expect there are a number of startups as well as established companies working on applications of distributed ledger technology including R3CEV whose head of technology is Richard Gendal Brown, an ex-colleague of mine from IBM. Richard tweets on all things blockchain here and has a great blog on the subject here. If you want to understand blockchain you could take a look at Richard’s writings on the topic here. If you want an extremely interesting weekend read on the current state of bitcoin and blockchain technology this is a great article.
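For architects trying to get a feel for what a distributed ledger actually is, the core idea is small: an append-only chain of records in which each entry carries a cryptographic hash of the one before it, so history cannot be quietly rewritten. The sketch below is my own deliberately simplified, single-node illustration of that chaining; real platforms add peer-to-peer replication, consensus and smart contracts on top.

```python
# A toy, single-node illustration of the hash chaining at the heart of a
# blockchain / distributed ledger. Real systems add replication across peers
# and a consensus protocol; this only shows why tampering with an old record
# is detectable.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    """Add a new block that records the hash of the previous block."""
    previous = chain[-1] if chain else None
    block = {
        "index": len(chain),
        "data": data,
        "previous_hash": block_hash(previous) if previous else "0" * 64,
    }
    chain.append(block)
    return block

def verify(chain):
    """Check every block still points at the true hash of its predecessor."""
    return all(
        chain[i]["previous_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, {"from": "alice", "to": "bob", "amount": 10})
append_block(ledger, {"from": "bob", "to": "carol", "amount": 4})
print(verify(ledger))                 # True
ledger[0]["data"]["amount"] = 1000    # tamper with history...
print(verify(ledger))                 # ...and the chain no longer verifies: False
```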

IBM, recognising the importance of this technology and the impact it could have on society, is throwing its weight behind the Linux Foundation’s project that looks to advance this technology following the open source model.

From a software architecture perspective I think this topic is going to be huge and is ripe for some first mover advantage. Those architects who can steal a lead on not only understanding but also explaining this technology are going to be in high demand, and if you can help with applying the technology in new and innovative ways you are definitely going to be a rock star!

Did We Build the Wrong Web?

Photograph by the author

As software architects we often get wrapped up in ‘the moment’ and are so focused on the immediate project deliverables and achieving the next milestone or sale that we rarely step back to consider the bigger picture and wider ethical implications of what we are doing. I doubt many of us really think about whether the application or system we are contributing to is one we should be involved in, or indeed one that should be built at all.

To be clear, I’m not just talking here about software systems for the defence industry such as guided missiles, fighter planes or warships, which clearly have one very definite purpose. I’m assuming that people who work on such systems have thought, at least at some point in their life, about the implications of what they are doing and have justified it to themselves. Most times this will be something along the lines of these systems being used for defence, and that if we don’t have them the bad guys will surely come and get us. After all, the doctrine of mutual assured destruction (MAD) fuelled the cold war in this way for the best part of fifty years.

Instead, I’m talking about systems which, whilst on the face of it perfectly innocuous, over time grow into behemoths far bigger than was ever intended and evolve into something completely different from their original purpose.

Obviously the biggest system we are all dealing with, and the one which has had a profound effect on all of our lives, whether we work to develop it or just use it, is the World Wide Web.

The Web is now in its third decade, so it is well clear of those tumultuous teenage years of trying to figure out its purpose in life and should now be entering a period of growing maturity and understanding of where it fits in the world. It should be pretty much ‘grown up’ in fact. However, the problem with growing up is that in your early years at least you are greatly influenced, for better or worse, by your parents.

Sir Tim Berners-Lee, father of the web, in his book Weaving the Web says of its origin:

“I articulated the vision, wrote the first Web programs, and came up with the now pervasive acronyms URL, HTTP, HTML, and, of course, World Wide Web. But many other people, most of them unknown, contributed essential ingredients, in much the same, almost random fashion. A group of individuals holding a common dream and working together at a distance brought about a great change.”

One of the “unknown” people (at least outside the field of information technology) was Ted Nelson. Ted coined the term hypertext in his 1965 paper Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate and founded Project Xanadu (in 1960), in which all the world’s information could be published in hypertext and all quotes, references and so on would be linked to more information and to the original source of that information. Most crucially for Nelson, because every quotation had a link back to its source, the original author of that quotation could be compensated in some small way (i.e. using what we now term micro-payments). Berners-Lee borrowed Nelson’s vision for hypertext, which is what allows all the links you see in this post to work, but with one important omission.

Nelson himself has stated that some aspects of Project Xanadu are being fulfilled by the Web, but sees it as a gross over-simplification of his original vision:

“HTML is precisely what we were trying to PREVENT— ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management.”

The last of these omissions (i.e. no rights management) is possibly one of the greatest oversights in the otherwise beautiful idea of the Web. Why?

Jaron Lanier, the computer scientist, composer and author, explains the difference between the Web and what Nelson proposed in Project Xanadu in his book Who Owns the Future as follows:

“A core technical difference between a Nelsonian network and what we have become familiar with online is that [Nelson’s] network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. … Two-way linking would preserve context. It’s a small simple change in how online information should be stored that couldn’t have vaster implications for culture and the economy.”
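To see what a two-way link means in data structure terms, here is a small sketch of my own (an illustration, not Nelson’s or Lanier’s actual design). On the Web as built, a page only stores its outbound links; in a Nelsonian network the act of linking would also be recorded at the target, so the target always knows who references it.

```python
# A toy illustration (not Xanadu itself) of one-way versus two-way linking.
# On the Web as built, a document only knows where it points. In a Nelsonian
# network, creating a link also records it at the destination, preserving
# context and making back-references (and, in principle, attribution or
# micro-payments) possible.

class Node:
    def __init__(self, name):
        self.name = name
        self.links_out = set()   # what this node points at (the Web has only this)
        self.links_in = set()    # who points at this node (what Xanadu would add)

def link(source, target):
    """Create a two-way link: both ends know about it."""
    source.links_out.add(target.name)
    target.links_in.add(source.name)

blog_post = Node("my-blog-post")
original_article = Node("original-article")
link(blog_post, original_article)

# On the one-way Web, original_article has no idea it is being quoted.
# With two-way links the author can be found, and credited, from either end.
print(original_article.links_in)   # {'my-blog-post'}
```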

 

So what are the cultural and economic implications that Lanier describes?

In both Who Owns the Future and his earlier book You Are Not a Gadget Lanier articulates a number of concerns about how technology, and more specifically certain technologists, are leading us down a road to a dystopian future where not only will most middle class jobs be almost completely wiped out but we will all be subservient to a small number of what Lanier terms siren servers. Lanier defines a siren server as “an elite computer or coordinated collection of computers, on a network characterised by narcissism, hyper amplified risk aversion, and extreme information asymmetry”. He goes on to make the following observation about them:

“Siren servers gather data from the network, often without having to pay for it. The data is analysed using the most powerful available computers, run by the very best available technical people. The results of the analysis are kept secret, but are used to manipulate the rest of the world to advantage.”

Lanier’s two books tend to ramble a bit but nonetheless contain a number of important ideas.

Idea #1: This is the one stated above. Because we essentially rushed into building the Web without thinking through the implications of what we were doing, we have built up a huge amount of technical debt which could well be impossible to eradicate.

Idea #2: The really big siren servers (i.e. Facebook, Google, Twitter et al) have encouraged us to upload the most intimate details of our lives and in return given us apparently ‘free’ services. This, however, has encouraged us not to want to pay for services at all, or to pay very little for them, which makes it difficult for the workers who create the now digitised information (e.g. journalists, photographers and musicians) to earn a decent living. This is ultimately an economically unsustainable situation, because once those information creators are put out of business, who will create original content? The world cannot run on Facebook posts and tweets alone. As the musician David Byrne says here:

“The Internet has laid out a cornucopia of riches before us. I can read newspapers from all over the world, for example—and often for free!—but I have to wonder if that feast will be short-lived if no one is paying for the production of the content we are gorging on.”

Idea #3: The world is becoming overly machine centric and people are too ready to hand over a large part of their lives to the new tech elite. These new ‘sirenic entrepreneurs’, as Lanier calls them, not only know far too much about us but can use the data we provide to modify our behaviour. This may be deliberate, as in the case of an infamous experiment carried out by Facebook, or happen in unintended ways that we as a society are only just beginning to understand.

 

Idea #4: The siren servers are imposing a commercial asymmetry on all of us. When we used to buy our information packaged in a physical form it was ours to do with as we wished. If we wanted to share a book, give away a CD or even sell a valuable record for a profit we were perfectly at liberty to do so. Now that all information is digital, however, we can no longer do that. As Lanier says, “with an ebook you are no longer a first-class commercial citizen but instead have tenuous rights within someone else’s company store.” If you want to use a different reading device or connect over a different cloud, in most cases you will lose access to your purchase.

There can be little doubt that the Web has had a huge transformative impact on all of our lives in the 21st century. We now have access to more information than we could assimilate even the tiniest fraction of in a human lifetime. We can reach out to almost any citizen in almost any part of the world at any time of the day or night. We can perform commercial transactions faster than would have been thought possible even 25 years ago, and we have access to new tools and processes that genuinely are transforming our lives for the better. All of this, however, comes at a cost, even when access to these bounties is apparently free. As architects and developers who help shape this brave new world, should we not take responsibility not only to point out where we may be going wrong but also to suggest ways in which we could improve things? This is something I intend to look at in some future posts.

The Fall and Rise of the Full Stack Architect


Almost three years ago to the day on here I wrote a post called Happy 2013 and Welcome to the Fifth Age! The ‘ages’ of (commercial) computing discussed there were:

  • First Age: The Mainframe Age (1960 – 1975)
  • Second Age: The Mini Computer Age (1975 – 1990)
  • Third Age: The Client-Server Age (1990 – 2000)
  • Fourth Age: The Internet Age (2000 – 2010)
  • Fifth Age: The Mobile Age (2010 – 20??)

One of the things I wrote in that article was this:

“Until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in.”

So, three years later, are we any closer to having a multi-platform technology? Where does cloud computing fit into all of this and is multi-platform technology making the world get more or less complex for us as architects?

In this post I argue that cloud computing is actually taking us to an age where, rather than having to spend our time dealing with the complexities of the different layers of architecture, we can be better utilised focussing on delivering business value in the form of new and innovative services. In other words, rather than having to specialise as layer architects, we can become full-stack architects who create value rather than unwanted or misplaced technology. Let’s explore this further.

The idea of the full stack architect.

Vitruvius, the Roman architect and civil engineer, defined the role of the architect thus:

“The ideal architect should be a [person] of letters, a mathematician, familiar with historical studies, a diligent student of philosophy, acquainted with music, not ignorant of medicine, learned in the responses of juriconsults, familiar with astronomy and astronomical calculations.”

Vitruvius also believed that an architect should focus on three central themes when preparing a design for a building: firmitas (strength), utilitas (functionality), and venustas (beauty).

Vitruvian Man by Leonardo da Vinci

For Vitruvius, then, the architect was a multi-disciplined person, knowledgeable in both the arts and sciences. Architecture was not just about functionality and strength but beauty as well. If such a person actually existed then they had a fairly complete picture of the whole ‘stack’ of things that needed to be considered when architecting a new structure.

So how does all this relate to IT?

In the first age of computing (roughly 1960 – 1975) life was relatively simple. There was a mainframe computer hidden away in the basement of a company, managed by a dedicated team of operators who guarded their prized possession with great care and controlled who had access to it and when. You were limited in what you could do with these systems not only by cost and availability but also by the fact that their architectures were fixed and the choice of programming languages (Cobol, PL/I and assembler come to mind) to make them do things was also pretty limited. The architect (should such a role have actually existed then) had a fairly simple task, as their options were relatively limited and the number of architectural decisions that needed to be made was correspondingly small. Like Vitruvius’s architect, one could see that it would be fairly straightforward to understand the full compute stack upon which business applications needed to run.

Indeed, as the understanding of these computing engines increased, you could imagine that the knowledge of the architects and programmers who built systems around these workhorses of the first age reached something of a ‘plateau of productivity’*.


However things were about to get a whole lot more complicated.

The fall of the full stack architect.

As IT moved into its second age and beyond (i.e. with the advent of mini computers, personal computers, client-server, the web and the early days of the internet) the breadth and complexity of the systems that were built increased. This is not just because of the growth in the number of programming languages, compute platforms and technology providers but also because each age has built another layer on the previous one. The computers from a previous age never go away; they just become the legacy that subsequent ages must deal with. Complexity has also increased because of the pervasiveness of computers. In the fifth age the number of people whose lives are now affected by these machines is orders of magnitude greater than it was in the first age.

All of this has led to niches and specialisms that were inconceivable in the early age of computing. As a result, architecting systems also became more complex giving rise to what have been termed ‘layer’ architects whose specialities were application architecture, infrastructure architecture, middleware architecture and so on.


Whole professions have been built around these disciplines leading to more and more specialisation. Inevitably this has led to a number of things:

  1. The need for communication between the disciplines (and for them to understand each other’s ‘language’).
  2. As more knowledge accrues in one discipline, and people specialise in it more, it becomes harder for inter-disciplinary understanding to happen.
  3. Architects became hyper-specialised in their own discipline (layer), leading to a kind of ‘peak of inflated expectations’* (at least amongst practitioners of each discipline) as to what they could achieve using the technology they were so well versed in, but something of a ‘trough of disillusionment’* for the business (who paid for those systems) when they did not deliver the expected capabilities and came in over cost and behind schedule.


So what of the mobile and cloud age which we now find ourselves in?

The rise of the full stack architect.

As the stack we need to deal with has become more ‘cloudified’ and we have moved from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) it has become easier to understand the full stack as an architect. We can, to some extent, take for granted the lower, specialised parts of the stack and focus on the applications and data that are the differentiators for a business.


We no longer have to worry about what type of server to use or even what operating system or programming environments have to be selected. Instead we can focus on what the business needs and how that need can be satisfied by technology. With the right tools and the right cloud platforms we can hopefully climb the ‘slope of enlightenment’ and reach a new ‘plateau of productivity’*.


As Neal Ford, Software Architect at Thoughtworks, says in this video:

“Architecture has become much more interesting now because it’s become more encompassing … it’s trying to solve real problems rather than play with abstractions.”

 

I believe that the fifth age of computing really has the potential to take us to a new plateau of productivity and hopefully allow all of us to be the architects described by this great definition from the author, marketeer and blogger Seth Godin:

“Architects take existing components and assemble them in interesting and important ways.”

What interesting and important things are you going to do in this age of computing?

* Diagrams and terms borrowed from Gartner’s hype cycle.

Where Are the New Fearless Geniuses? 

In his book Fearless Genius the photographer Doug Menuez has produced a photographic essay on the “digital revolution” that took place between 1985 and 2000 in Silicon Valley, the area of California some 50 miles south of San Francisco that is home to some of the world’s most successful technology companies.

Fearless Genius by Doug Menuez

You can see a review of this book in my other blog here. Whilst the book covers a number of technology companies that were re-shaping the world during that tumultuous period, it focuses pretty heavily on Steve Jobs during the time he had been forced out of Apple and was trying to build his NeXT computer.

Steve Jobs Enjoying a Joke

In this video Doug Menuez discusses his photojournalism work during the period that the book documents and, at the end, poses these three powerful questions:

  1. Computers will gain consciousness – shouldn’t we be having a public dialogue about that?
  2. On education – who will be the next Steve Jobs, and where will she come from?
  3. Why are all investments today so short term?
Where Will the Next Steve Jobs Come From?

All of which are summed up in the following wonderful quote:

If anything in the future is possible, how do we create the best possible future?

Here in the UK we are about to have an election and choose our leader(s) for the next five years. I find it worrying that there has been practically no debate on the impact that technology is likely to have during this time and how, as citizens of this country, we can get involved in trying to “create the best possible future”.

Last month Baroness Martha Lane Fox gave the Richard Dimbleby Lecture, called Dot Everyone – Power, the Internet and You, which, in a similar way to what Doug Menuez is doing in the US, was a call to arms for all of us to become more involved in our digital future. As Ms. Lane Fox says:

We’re still wasting colossal fortunes on bad processes and bad technologies. In a digital world, it is perfectly possible to have good public services, keep investing in frontline staff and spend a lot less money. Saving money from the cold world of paper and administration and investing more in the warm hands of doctors, nurses and teachers.

Martha Lane Fox Delivering Her Richard Dimbleby Lecture

I urge everyone to take a look at both Doug and Martha’s inspirational talks and, if you are here in the UK, to go to change.org and sign the petition to “create a new institution and make Britain brilliant at the internet” and ensure we here in the UK have a crack at developing our own fearless genius like Steve Jobs, wherever she may now be living.

Please note that all images in this post, apart from the last one, are (c) Doug Menuez and used with permission of the photographer.

When a Bridge Falls Down is the Architect to Blame?

Here’s a question. When a bridge or building falls down, whose “fault” is it? Is it the architect who designed the bridge or building in the first place, the builders and construction workers who did not build it to spec, the testers for not testing the worst-case scenario, or the people who maintain or operate the building or bridge? How might we use disasters from the world of civil engineering to learn about better ways of architecting software systems? Here are four well known examples of architectural disasters (resulting in increasing loss of life) from the world of civil engineering:

  1. The Millennium Bridge in London, a steel suspension bridge for pedestrians across the Thames. Construction of the bridge began in 1998, with the opening on 10 June 2000. Two days after the bridge opened the participants in a charity walk felt an unexpected swaying motion. The bridge was closed for almost two years while modifications were made to eliminate the “wobble”, which was caused by a positive feedback phenomenon known as Synchronous Lateral Excitation. The natural sway motion of people walking caused small sideways oscillations, which in turn caused people on the bridge to sway in step, increasing the amplitude of the bridge oscillations and continually reinforcing the effect. Engineers added dampers to the bridge to prevent the horizontal and vertical movement. No people or animals were injured in this incident.
  2. The Tacoma Narrows Bridge was opened to traffic on July 1st 1940 and collapsed four months later. At the time of its construction the bridge was the third longest suspension bridge in the world. Even from the time the deck was built, it began to move vertically in windy conditions, which led to it being given the nickname Galloping Gertie. Several measures aimed at stopping the motion were ineffective, and the bridge’s main span finally collapsed under 40 mph wind conditions on November 7, 1940. No people were injured in this incident but a dog was killed.
  3. On 28 January 2006, the roof of one of the buildings at the Katowice International Fair collapsed in Katowice, Poland. There were 700 people in the hall at the time. The collapse was due to the weight of the snow on the roof. A later inquiry found numerous design and construction flaws that contributed to the speed of collapse. 65 people were killed when the roof collapsed.
  4. The twin towers of the World Trade Center (WTC) in downtown Manhattan collapsed on September 11 2001 when al-Qaeda terrorists hijacked two commercial passenger jets and flew them into the skyscrapers. A government report that looked at the collapse declared that the WTC design had been sound and attributed the collapses to “extraordinary factors beyond the control of the builders”. 2,752 people died, including all 157 passengers and crew aboard the two airplanes.

In at least one of these cases (the Katowice International Fair building) various people (including the designers) have been indicted for “directly endangering lives of other people” and face up to 12 years in prison. The building’s operator is also being charged with “gross negligence” for not removing snow quickly enough.

So what can we learn from these natural and man-made disasters and apply to the world of software architecture? In each of these cases the constructions were based on well known “patterns” (suspension bridges, trade halls and skyscrapers have all successfully been built before and have not collapsed). What was different in each of these cases was that the non-functional characteristics were not taken into account. In the case of the bridges, oscillations caused by external factors (people and winds) were not adequately catered for. In the case of the trade hall in Katowice, the building’s roof was not engineered to handle the additional weight caused by snow. Finally, in the case of the WTC, the impact of a modern passenger jet, fully laden with fuel, crashing into the building was simply not conceived of (although, interestingly, an “aircraft-impact analysis” involving the impact of a Boeing 707 at 600 mph was actually done, which concluded that although there would be “a horrendous fire” and “a lot of people would be killed,” the building itself would not collapse). Here are some lessons I would draw from these incidents and how we might relate them to the field of software architecture:

  1. Architects need to take into account all non-functional requirements. Obviously this is easier said than done. Who would have thought of such an unexpected event as a passenger jet crashing into a skyscraper? Actually, to their credit, the building’s architects did, but what they lacked was the ability to properly model the effect of such impacts on the structures, especially the effects of the fires.
  2. For complex systems, architects should build models covering all aspects of the architecture. Tools appropriate to the task should be deployed and the right “level” of modelling needs to be done. Prototyping as a means of testing new or interesting technical challenges should also be adopted.
  3. Designers should faithfully implement the architectural blueprints and the architect should remain on the project during the design and implementation phases to check their blueprints are implemented as expected.
  4. Testing should be taken into account early and thought given to how the non-functional characteristics can be tested. Real limits should be applied, taking into account the worst case (but realistic) scenario; see the sketch after this list for one way of automating such a check.
  5. Operations and maintenance should be involved from an early stage to make sure they are aware of the impact of unexpected events (for example a complete loss of all systems because of an aircraft crashing on the data centre) and have operational procedures in place to address such events.
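As a small illustration of point 4, here is a sketch of the kind of simple, automated non-functional check a team could run early and often: drive a service at an agreed worst-case (but realistic) load and fail loudly if response times exceed the limit. The endpoint, concurrency figure and threshold below are invented for illustration; real limits should come from your own non-functional requirements.

```python
# A minimal sketch of an automated non-functional ("worst case but realistic")
# check. The URL, request rate and latency threshold are illustrative
# placeholders; derive real limits from your own non-functional requirements.
import time
import concurrent.futures
import requests

ENDPOINT = "https://example.internal/api/health"  # hypothetical service under test
CONCURRENT_USERS = 50          # agreed worst-case realistic load
MAX_ACCEPTABLE_SECONDS = 2.0   # agreed response-time limit

def timed_request(_):
    """Issue one request and return how long it took."""
    start = time.monotonic()
    requests.get(ENDPOINT, timeout=MAX_ACCEPTABLE_SECONDS * 2)
    return time.monotonic() - start

def run_check():
    # Fire CONCURRENT_USERS requests at once and measure the slowest response.
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        durations = list(pool.map(timed_request, range(CONCURRENT_USERS)))
    worst = max(durations)
    print(f"worst response time under load: {worst:.2f}s")
    assert worst <= MAX_ACCEPTABLE_SECONDS, "non-functional limit exceeded"

if __name__ == "__main__":
    run_check()
```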

As a final, and sobering, footnote to the above, here’s a quote from a report produced by the British Computer Society and the Royal Academy of Engineering called The Challenges of Complex IT Projects.

Compared with other branches of engineering in their formative years, not enough people (are known to) die from software errors. It is much easier to hide a failed software project than a collapsing bridge or an exploding chemical plant.

The implications of this statement would seem to be that it’s only when software has major, and very public, failures that people will really take note and maybe start to address problems before they occur. There are plenty of learning points (anti-patterns) in other industries that we can learn from and should probably do so before we start having major software errors that cause loss of life.

You may be interested in the follow up to the above report which describes some success stories and why they worked (just in case you thought it was all bad).

 

Why Didn’t I Do That?

You know how annoying it is when someone does something that is so blindingly obvious in retrospect you ask yourself the question “why didn’t I do that”? I’m convinced that the next big thing is not going to be the invention of something radically new but rather a new use of some of the tools we already have. When Tim Berners-Lee invented the world-wide web he didn’t create anything new. Internet protocols, mark-up languages and the idea of hypertext already existed. He just took them and put them together in a radically new way. What was the flash of inspiration that led to this, and why did he do it and not someone else? After all, that is basically the job of a Solution Architect: to apply technology in new and innovative ways that address business problems. So why did Tim Berners-Lee invent the world-wide web and not you, I or any of the companies we work for? Here are some observations and thoughts.

  1. Tim had a clear idea of what he was trying to do. If you look at the paper Berners-Lee wrote, proposing what became the world-wide web, the first thing you’ll see is a very clear statement of what it is he’s trying to do. Here is his statement of the problem he’s trying to solve, together with an idea for the solution: “Many of the discussions of the future at CERN and the LHC era end with the questions – ‘Yes, but how will we ever keep track of such a large project?’ This proposal provides an answer to such questions. Firstly, it discusses the problem of information access at CERN. Then, it introduces the idea of linked information systems, and compares them with less flexible ways of finding information. It then summarises my short experience with non-linear text systems known as ‘hypertext’, describes what CERN needs from such a system, and what industry may provide. Finally, it suggests steps we should take to involve ourselves with hypertext now, so that individually and collectively we may understand what we are creating.” Conclusion: Having a very clear idea or vision of what it is you are trying to do helps focus the mind wonderfully and also helps to avoid woolly thinking. Even better is to give yourself a (realistic but aggressive) timescale in which to come up with a solution.
  2. Tim knew how to write a mean architecture document. The paper describing the idea behind what we now call “the web” (Information Management: A Proposal) is a masterpiece in understated simplicity. As well as the clear statement of what the problem is, the paper goes on to describe the requirements such an information management system should have, as well as the solution, captured in a few beautifully simple architecture overview diagrams. I think this paper is a lesson to all of us in what a good architectural deliverable should be.
  3. Tim didn’t give up. In his book Weaving the Web Berners-Lee describes how he had a couple of abortive attempts at convincing his superiors of the need for his proposal for an information management system. Conclusion: Having a great idea is one thing. If you can’t explain that idea to others who, for example, have the money to fund it, then you may as well not have that idea. Sometimes getting your explanation right takes time and a few attempts. The moral here is don’t give up. Learn from your failures and try again. It will test your perseverance and the faith you have in your idea, but that is probably what you need to convince yourself it’s worth doing.
  4. Tim prototyped. Part of how Tim convinced people of the worth of what he was doing was to build a credible prototype. Tim was a C programmer and used his NeXT computer to build a working system of what it was he wanted to do. He actively encouraged his colleagues to use his prototype to get them to buy into his idea. Having a set of users already in place who are convinced by what you are doing is one sure-fire way of promoting the worth of your new system.
  5. Tim gave it all away. In many ways this is the most incredible thing of all about what Tim Berners-Lee did with the web; he gave it all away. Imagine if he had patented his idea and taken a ‘cut’ which gave him 0.00001¢ every time someone did a search or hit a page (I don’t know if this is legally possible, I’m no lawyer, but you get the idea). He would be fabulously rich beyond our wildest dreams! And yet he (and indeed CERN) decided not to go down this path. This has surely got to be one of the most altruistic actions that anyone has ever taken.

10 Things I (Should Have) Learned in (IT) Architecture School

Inspired by this book, which I discovered in the Tate Modern book shop this week, I don’t (yet) have 101 things I can claim I should have learned in IT Architecture School, but these would certainly be my 10 things:

  1. The best architectures are full of patterns. This from Grady Booch. Whilst there is an increasing need to be innovative in the architectures we create we also need to learn from what has gone before. Basing architectures on well-tried and tested patterns is one way of doing this.
  2. Projects that develop IT systems rarely fail for technical reasons. In this report the reasons for IT project failures are cited, and practically all of them are down to human (communication) failures rather than real technical challenges. Learning point: effective IT architects need to have soft (people) skills as well as hard (technical) skills. See my thoughts on this here.
  3. The best architecture documentation contains multiple viewpoints. There is no single viewpoint that adequately describes an architecture. Canny architects know this and use viewpoint frameworks to organise and categorise these various viewpoints. Here’s a paper some IBM colleagues and I wrote a while ago describing one such viewpoint framework. You can also find out much more about this in the book I wrote with Peter Eeles last year.
  4. All architecture is design but not all design is architecture. Also from Grady. This is a tricky one and alludes to the thorny issue of “what is architecture” and “what is design”. The point is that the best practice of design (separation of concerns, design by contract, identification of clear component responsibilities etc.) is also the practice of good architecture; architecture’s focus, however, is on the significant elements that drive the overall shape of the system under development. For more on this see here.
  5. A project without a system context diagram is doomed to fail. Quite simply the system context bounds the system (or systems) under development and says what is in scope and what is out. If you don’t do this early you will spend endless hours later on arguing about this. Draw a system context early, get it agreed and print it out at least A2 size and pin it in highly visible places. See here for more discussion on this.
  6. Complex systems may be complicated but complicated systems are not necessarily complex. For more discussion on this topic see my blog entry here.
  7. Use architectural blueprints for building systems but use architectural drawings for communicating about systems. A blueprint is a formal specification of what is to be. This is best created using a formal modeling language such as UML or ArchiMate. As well as this we also need to be able to communicate our architectures to people who are not, or are at best only semi-, IT literate (often the people who hold the purse strings). Such communications are better done using drawings, created not with formal modeling tools but with drawing tools. It’s worth knowing the difference and when to use each.
  8. Make the process fit the project, not the other way around. I’m all for having a ‘proper’ software delivery life-cycle (SDLC) but the first thing I do when deploying one on a project is customise it to my own purposes. In software development, as in gentlemen’s suits, there is no “one size fits all”. Just as you might think you can pick up a suit at Marks and Spencer that fits perfectly (you can’t), you also cannot take an off-the-shelf SDLC that perfectly fits your project. Make sure you customise it so it does fit.
  9. Success causes more problems than failure. This comes from Clay Shirky’s new book Cognitive Surplus. See this link at TED for Clay’s presentation on this topic. You should also check this out to see why organisations learn more from failure than success. The point here is that you can analyse a problem to death and not move forward until you think you have covered every base, but you will always find some problem or another you didn’t expect. Although you might (initially) have to address more problems by not doing too much up-front analysis, in the long run you are probably going to be better off. Shipping early and benefitting from real user experience will inevitably mean you have more problems, but you will learn more from these than from trying to build the ‘perfect’ solution and running the risk of never shipping anything.
  10. Knowing how to present an architecture is as important as knowing how to create one. Although this is last, it’s probably the most important lesson you will learn. Producing good presentations that describe an architecture, that are targeted appropriately at stakeholders, is probably as important as the architecture itself. For more on this see here.

On Being an Artitect

My mum, who just turned 85 this month, mispronounces the word architect. She says “artitect”, where a “t” replaces the “ch”. I’ve tried to put her right on this a few times, but I’ve just finished reading the book by Seth Godin called “Linchpin – Are You Indispensable?” and decided that actually she’s probably been pronouncing the word right all along. The key bit she’s got right, and the rest of us haven’t, is the “art” bit. Let me explain why.

The thrust of Seth’s book is that to survive in today’s world of work you have to bring a completely different approach to the way you do that work. In other words, you have to be an artist. You have to create things that others can’t or won’t, because they just do what they are told rather than what they think could be the right creative approach to building something that is radically new. Before I proceed much further with this thread I guess we need to define what we mean by artist in this context. I like this from Seth’s book:

An artist is someone who uses bravery, insight, creativity and boldness to challenge the status-quo. And an artist takes it personally.

As to what artists create:

Art isn’t only a painting. Art is anything that is creative, passionate and personal.

I’d also add something like “and changes the world for the better” to that last statement otherwise I think that some fairly dodgy activities might pass for art as well (or maybe even that is my lizard brain kicking in, see below).

Of course that’s not to say that you shouldn’t learn the basics of your craft, whether you are a surgeon, a programmer or a barista in a coffee shop. Instead you should learn them but then forget them, because after that they will hold you back. Picasso was a great “classical” artist. In other words, he knew how to create art that would have looked perfectly respectable in the traditional parts of the world’s art galleries, where the work of the great masters, following a literal interpretation of the world, is displayed. However, once he had mastered that, he threw the rule book out completely and started to create art that no one else had dared to make, changing the art world forever.

So an artitect (rather than an architect) is someone who uses creativity, insight, breadth of vision and passion to create architectures (or even artitectures) that are new and different in some way, that meet the challenges laid down for them, and then some.

Here are the five characteristics that I see a good artitect as having:

  1. Artitects are always creating new “mixes”. Some of the best IT architects I know tell me how they are creating new solutions to problems by pulling together software components and making them work together in interesting and new ways. Probably one of the greatest IT architects of all time – Tim Berners-Lee, who invented the world-wide web – actually used a mix of three technologies and ideas that were already out there: markup languages, the transmission control protocol (TCP) and hypertext (see the sketch after this list for a toy illustration of that combination). What Tim did was to put them together in quite literally a world-changing way.
  2. Artitects don’t follow the process in the manual, instead they write the manual. If you find yourself climbing the steps that someone else has already carved out then guess what, you’ll end up in the same place as everyone else, not somewhere that’s new and exciting.
  3. Artitects look at problems in a radically different way to everyone else. They try to find a completely different viewpoint that others won’t have seen and to build a solution around that. I liken this to a great photograph that takes a view that others have seen a thousand times before and puts a completely different spin on it either by standing in a different place, using a different type of lens or getting creative in the photo-editing stage.
  4. Artitects are not afraid to make mistakes or to receive ridicule from their peers and colleagues. Instead they positively thrive on it. Today you will probably have tens or even hundreds of ideas for solutions to problems pop into your head and pop straight out again, because internally you are rejecting them as not being the “right approach”. What if, instead of allowing your lizard brain (that is, the part of your brain that evolved first and kept you safe on the savanna when you could easily get eaten by a sabre-toothed tiger) to have its say, you wrote those ideas down and actually tried out a few? Nine out of ten, or 99 out of 100, of them might fail, causing some laughter from your peers, but the one that doesn’t could be great! Maybe even the next world-wide web?
  5. Artitects are always seeking out new ideas and new approaches from unlikely places. They don’t just seek out inspiration from the usual places that their profession demands but go to places and look to meet people in completely different disciplines. For new ideas talk to “proper” artists, real architects or maybe even accountants!!!
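To make point 1 a little more concrete, here is a toy sketch of the three ingredients Berners-Lee mixed: a hypertext request carried over a TCP connection that returns a markup document containing further links. The host name is just a placeholder, and this is of course a modern illustration rather than anything resembling his original code.

```python
# A tiny illustration (not Berners-Lee's code) of the ingredients he combined:
# a hypertext transfer request carried over TCP (here via a raw socket) that
# returns a markup (HTML) document containing further links.
import socket

HOST = "example.org"   # placeholder host; any web server would do
REQUEST = (
    "GET / HTTP/1.1\r\n"          # the hypertext transfer part
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80)) as tcp:   # the TCP part
    tcp.sendall(REQUEST.encode("ascii"))
    response = b""
    while chunk := tcp.recv(4096):
        response += chunk

# The body that comes back is the markup part, full of <a href="..."> links.
print(response.decode("utf-8", errors="replace")[:500])
```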

Perhaps from now on we should all do a bit less architecture and a bit more artitecture?