Machines like us? – Part II

Brain image by Elisa from Pixabay. Composition by the author

[Creativity is] the relationship between a human being and the mysteries of inspiration.

Elizabeth Gilbert – Big Magic

Another week and another letter from a group of artificial intelligence (AI) experts and public figures expressing their concern about the risks of AI. This one has really gone mainstream, with Channel 4 News here in the UK making it the lead story on their 7pm broadcast. They even managed to get Max Tegmark, as well as Tony Cohn – professor of automated reasoning at the University of Leeds – onto the programme to discuss this "risk of extinction".

Whilst I am really pleased that the risks from AI are finally being discussed, we must be careful not to focus too much on the Terminator-like existential threat that some people are predicting if we don't mitigate those risks in some way. There are certainly scenarios in which an artificial general intelligence (AGI) could cause destruction on a large scale, but I don't believe these are imminent, or as likely as the death and destruction that could be caused by pandemics, climate change or nuclear war. Instead, some of the more likely negative impacts of AGI might be:

It's worth pointing out that none of the above scenarios involves AIs suddenly deciding for themselves that they are going to wreak havoc and destruction; each would involve humans somewhere in the loop initiating such actions.

It's also worth noting that fairly serious rebuttals are emerging to the general hysterical fear and paranoia being promulgated by the aforementioned letter. Marc Andreessen, for example, says that what "AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here".

Whilst it is possible that AI could be used as a force for good, is that, as Naomi Klein points out, really going to happen under our current economic system – a system built to maximize the extraction of wealth and profit for a small group of hyper-wealthy companies and individuals? Or is "AI – far from living up to all those utopian hallucinations – much more likely to become a fearsome tool of further dispossession and despoilation"? I wonder if this topic will be on the agenda for the proposed global AI safety summit in the autumn?

Whilst both sides of this discussion have valid arguments for and against AI, as discussed in the first of these posts, what I am more interested in is not whether we are about to be wiped out by AI but how we as humans can coexist with this technology. AI is not going to go away because of a letter written by a group of experts. It may get legislated against, but we still need to figure out how we are going to live with artificial intelligence.

In my previous post I discussed whether AI is actually intelligent as measured against Tegmark's definition of intelligence, namely the "ability to accomplish complex goals". This time I want to focus on whether AI machines can actually be creative.

As you might expect, just as with intelligence, there are many, many definitions of creativity. My current favourite is the one by Elizabeth Gilbert quoted above; however, no discussion of creativity can be had without mentioning the late Ken Robinson's definition: "Creativity is the process of having original ideas that have value".

In the above short video Robinson notes that imagination is what is distinctive about humanity. Imagination is what enables us to step outside our current space and bring to mind things that are not present to our senses. In other words, imagination is what helps us connect our past with the present and even the future. We have what is quite possibly (or possibly not) a unique ability among all the animals that inhabit the earth: the ability to imagine "what if". But to be creative you actually have to do something. It's no good being imaginative if you cannot turn those thoughts into actions that create something new (or at least different) that is of value.

Professor Margaret Ann Boden, who is Research Professor of Cognitive Science, defines creativity as "the ability to come up with ideas or artefacts that are new, surprising or valuable." I would couple this definition with a quote from the marketeer and blogger Seth Godin who, when discussing what architects do, says they "take existing components and assemble them in interesting and important ways". This too is an essential aspect of being creative: using what others have done and combining those things in different ways.

It's important to say, however, that humans don't just pass ideas around and recombine them – we also occasionally generate new ideas that are entirely left-field, through processes we do not understand.

Maybe part of the reason for this is because, as the writer William Deresiewicz says:

AI operates by making high-probability choices: the most likely next word, in the case of written texts. Artists—painters and sculptors, novelists and poets, filmmakers, composers, choreographers—do the opposite. They make low-probability choices. They make choices that are unexpected, strange, that look like mistakes. Sometimes they are mistakes, recognized, in retrospect, as happy accidents. That is what originality is, by definition: a low-probability choice, a choice that has never been made.

William Deresiewicz, Why AI Will Never Rival Human Creativity
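As an aside for the more technically minded, here is a minimal sketch of what "high-probability choices" means in practice. It is purely my own illustration, not Deresiewicz's, and the toy next-word distribution and its numbers are entirely made up; the point is simply that greedy selection always returns the most likely continuation, whereas the "original" choice, in his sense, sits in the tail of the distribution.

```python
# A made-up next-word distribution for the prompt "The sky was ..."
# (illustrative numbers only; a real language model has tens of
# thousands of candidate tokens at every step).
next_word_probs = {
    "blue": 0.62,
    "grey": 0.21,
    "dark": 0.12,
    "cloudless": 0.04,
    "the colour of television": 0.01,
}

# The high-probability choice: what greedy decoding would produce.
most_likely = max(next_word_probs, key=next_word_probs.get)

# A low-probability choice: the kind of 'unexpected' pick Deresiewicz
# associates with originality.
least_likely = min(next_word_probs, key=next_word_probs.get)

print(f"High-probability choice: {most_likely!r}")   # 'blue'
print(f"Low-probability choice:  {least_likely!r}")  # 'the colour of television'
```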

When we think of creativity, most of us associate it with some form of overt artistic pursuit such as painting, composing music, writing fiction, sculpting or photography. The act of being creative is much more than this, however. A person can be a creative thinker (and doer) even if they never pick up a paintbrush, a musical instrument or a camera. You are being creative when you decide on a catchy slogan for your product; you are being creative when you pitch your own idea for a small business; and most of all, you are being creative when you are presented with a problem and come up with a unique solution. Referring to the image at the top of my post, who is the more creative – Alan Turing, who invented a code-breaking machine that historians reckon shortened World War II by at least two years, saving millions of lives, or Picasso, whose painting Guernica expressed his outrage against war?

It is because of these very human aspects of what creativity is that AI will never be truly creative or rival our creativity. True creativity (not just a mashup of someone else's ideas) only has meaning if it has an injection of human experience, emotion, pain, suffering – call it what you will. When Nick Cave was asked what he thought of ChatGPT's attempt at writing a song in the style of Nick Cave, he answered this:

Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.

Nick Cave, The Red Hand Files

Imagination, intuition, influence and inspiration (the four I's of creativity) are all very human characteristics that underpin our creative souls. In a world where having original ideas sets humans apart from machines, thinking creatively is more important than ever, and educators have a responsibility to foster, not stifle, their students' creative minds. Unfortunately our current education system is not a great model for doing this. We have a system whose focus is on learning facts and passing exams, one that will never prepare people for meaningful jobs in which they work alongside machines that do the grunt work whilst they do what they do best – be CREATIVE. If we don't change this, the following may well become true:

In tomorrow’s workplace, either the human is telling the robot what to do or the robot is telling the human what to do.

Alec Ross, The Industries of the Future

What Have We Learnt from Ten Years of the iPhone?

Ten years ago this week (on 9th January 2007) the late Steve Jobs, then at the height of his powers at Apple, introduced the iPhone to an unsuspecting world. The history of that little device (which has got both smaller and bigger in the intervening ten years) is writ large over the entire Internet, so I'm not going to repeat it here. However, it's worth looking at the above video on YouTube, not just to remind yourself what a monumental moment in tech history this was, even though few of us realised it at the time, but also to see a masterclass in how to launch a new product.

Within two minutes of Jobs walking on stage he has the audience shouting and cheering as if he's a rock star rather than a CEO. At around 16:25, when he's unveiled his new baby and shows for the first time how to scroll through a list on a screen (hard to believe that ten years ago no one knew this was possible), they are practically eating out of his hand – and he still has over an hour to go!

This iPhone keynote, probably one of the most important in the whole of tech history, is a case study in how to deliver a great presentation. Indeed, Nancy Duarte, in her book Resonate, uses it as one of her case studies in how to "present visual stories that transform audiences". In the book she analyses the whole event to show how Jobs uses all of the classic techniques of storytelling: establish what is and what could be, build suspense, keep your audience engaged, make them marvel and finally show them a new bliss.

The iPhone product launch, though hugely important, is not what this post is about. Rather, it's about how, ten years later, the iPhone has kept pace with innovations in technology not only to remain relevant (and much copied) but also to continue to influence (for better and worse) the way people interact, communicate and indeed live. A number of enabling ideas and technologies, some introduced at launch and some since, have made this possible. What are they, what can we learn from the example set by Apple, and how can we improve on them?

Open systems generally beat closed systems

At its launch Apple had created a small set of native apps, and the making of apps was not open to third-party developers. According to Jobs, it was an issue of security. "You don't want your phone to be an open platform," he said. "You don't want it to not work because one of the apps you loaded that morning screwed it up. Cingular doesn't want to see their West Coast network go down because of some app. This thing is more like an iPod than it is a computer in that sense."

Jobs soon went back on that decision which is one of the factors that has led to the overwhelming success of the device. There are now 2.2 million apps available for download in the App Store with over 140 billion downloads made since 2007.

As has been shown time and time again, opening systems up and allowing access to third-party developers nearly always beats keeping them closed and locked down.

Open systems need easy-to-use ecosystems

Claiming your system is open does not mean developers will flock to extend it; they will only do so if it is both easy and potentially profitable. And the second of these is unlikely to happen unless the first enabler is put in place.

Today, with new systems being built around cognitive computing, the Internet of Things (IoT) and blockchain, companies both large and small are vying with each other to provide easy-to-use but secure ecosystems that allow these new technologies to flourish and grow, hopefully to the benefit of business and society as a whole. There will be casualties along the way, but this competition, and the recognition that these systems need to be built right rather than just being the right systems for the time, is what matters.

Open systems must not mean insecure systems

One of the reasons Jobs gave for not initially making the iPhone an open platform was his concern over security and the potential for hackers to break into those systems and wreak havoc. These concerns have not gone away; they have become even more prominent. IoT and artificial intelligence, when embedded in everyday objects like cars and kitchen appliances, as well as in our logistics and defence systems, have the potential to cause their own unique and potentially disastrous type of destruction.

The average cost of a data breach alone is estimated at $3.8 to $4 million, and that's before even considering the wider reputational loss companies face. Organisations need to monitor how security threats are evolving year on year and get well-informed insights about the impact they can have on their business and reputation.

Ethics matter too

With all the recent press coverage of how fake news may have affected the US election and may impact the upcoming German and French elections, as well as the implications of driverless cars making life-and-death decisions for us, the ethics of cognitive computing is becoming a more and more serious topic for public discussion, and potentially for government intervention.

In October last year the White House released a report called Preparing for the Future of Artificial Intelligence. The report looked at the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy, and made a number of recommendations for further action. These included:

  • Prioritising open training data and open data standards in AI.
  • Industry should work with government to keep government updated on the general progress of AI in industry, including the likelihood of milestones being reached.
  • The Federal government should prioritise basic and long-term AI research.

Partly in answer to the White House report, this week a group of private investors, including LinkedIn co-founder Reid Hoffman and eBay founder Pierre Omidyar, launched a $27 million research fund called the Ethics and Governance of Artificial Intelligence Fund. The group's purpose is to foster the development of artificial intelligence for social good by approaching technological developments with input from a diverse set of viewpoints, such as policymakers, faith leaders, and economists.

I have discussed before how transformative technologies like the world wide web have impacted all of our lives, and not always for the good. I hope that initiatives like that of the US government (which will hopefully continue under the new leadership) will enable a good and rational public discourse on how we allow these new systems to shape our lives for the next ten years and beyond.

What Makes a Tech City? (Hint: It’s Not the Tech)

Boulton, Watt and Murdoch
The above photograph is of a statue in Centenary Square, Birmingham in the UK. The three figures in it – Matthew Boulton, James Watt and William Murdoch – were the tech pioneers of their day, living in and around Birmingham and associated with a loosely knit group who referred to themselves as the Lunar Society. The history of the Lunar Society and the people involved has been captured in the book The Lunar Men by Jenny Uglow.

“Amid fields and hills, the Lunar men build factories, plan canals, make steam-engines thunder. They discover new gases, new minerals and new medicines and propose unsettling new ideas. They create objects of beauty and poetry of bizarre allure. They sail on the crest of the new. Yet their powerhouse of invention is not made up of aristocrats or statesmen or scholars but of provincial manufacturers, professional men and gifted amateurs – friends who meet almost by accident and whose lives overlap until they die.”

From The Lunar Men by Jenny Uglow

You don't have to live in the UK to have heard that Birmingham, like many of the other great manufacturing cities of the Midlands and Northern England, has somewhat lost its way in the two centuries or so since the Lunar Men were creating their "objects of beauty and poetry of bizarre allure". It's now sometimes hard to believe that these great cities were the powerhouses and engines of the industrial revolution that changed not just England but the whole world. This is something that was neatly summed up by Steven Knight, creator of the BBC television programme Peaky Blinders, set in the lawless backstreets of Birmingham in the 1920s. In a recent interview in the Guardian, Knight says:

“It’s typical of Brum that the modern world was invented in Handsworth and nobody knows about it. I am trying to start a “Make it in Birmingham” campaign, to get high-tech industries – film, animation, virtual reality, gaming – all into one place, a place where people make things, which is what Birmingham has always been.”

Likewise Andy Street, Managing Director of John Lewis and Chair of the Greater Birmingham & Solihull Local Enterprise Partnership, had this to say about Birmingham in his University of Birmingham Business School Advisory Board guest lecture last year:

“Birmingham was once a world leader due to our innovations in manufacturing, and the city is finally experiencing a renaissance. Our ambition is to be one of the biggest, most successful cities in the world once more.”

Andy Street CBE – MD of John Lewis

If Birmingham, and cities like it not just in England but around the world, are to become engines of innovation once again then they need to make a step change in how they go about doing that. The lesson to be learned from the Lunar Men is that they did not wait for grants from central government or the European Union, or for some huge corporation to move in and take things in hand; they drove innovation from their own passion and inquisitiveness about how the world worked, or could work. They basically got together, decided what needed to be done and got on with it. They literally designed and built the infrastructure that was to form the foundations of innovation for the next 100 years.

Today we talk of digital innovation and how the industries of our era are disrupting traditional ones (many of them formed by the Lunar Men and their descendants) for better and for worse. Now every city wants a piece of that action and wants to emulate the shining light of digital innovation and disruption, Silicon Valley in California. Is that possible? According to the Medium post To Invent the Future, You Must Understand the Past, the answer is no. The post concludes by saying:

“…no one will succeed because no place else — including Silicon Valley itself in its 2015 incarnation — could ever reproduce the unique concoction of academic research, technology, countercultural ideals and a California-specific type of Gold Rush reputation that attracts people with a high tolerance for risk and very little to lose.”

So can this really be true? High tolerance to risk (and failure) is certainly one of the traits that makes for a creative society. No amount of tax breaks or university research programmes is going to fix that problem. Taking the example of the Lunar Men though, one thing that cities can do to disrupt themselves from within is to effect change from the bottom up rather than the top down. Cities are made up of citizens after all and they are the very people that not only know what needs changing but also are best placed to bring about that change.

With this in mind, an organisation in Birmingham called Silicon Canal (see here if you want to know where that name comes from), of which I am a part, has created a white paper putting forward our ideas on how to build a tech and digital ecosystem in and around Birmingham. You can download a copy of the white paper here.

The paper not only identifies the problem areas but also suggests how things can be improved, proposing potential solutions to grow the tech ecosystem in the Greater Birmingham area so that it competes on an international stage. Download the white paper and read it; if you are based in Birmingham, join in the conversation, and if you're not, use the research contained within it to look at your own city and how you can help change it for the better.

This paper was launched at an event this week in the new iCentrum building at Innovation Birmingham, a great space that is starting to address one of the issues highlighted in the white paper, namely bringing together two key elements of a successful tech ecosystem: established companies and entrepreneurs.

Another event taking place in Birmingham next month is TEDx Brum – The Power of US, which promises lots of inspiring talks by local people who are already effecting change from within.

As a final comment, if you're still not sure that you have the power to make changes that make a difference, here are some words from the late Steve Jobs:

“Everything around you that you call life was made up by people that were no smarter than you and you can change it, you can influence it, you can build your own things that other people can use.”

Steve Jobs

Complexity is Simple

I was taken with this cartoon and the comments put up by Hugh Macleod last week over at his gapingvoid.com blog, so I hope he doesn't mind me reproducing it here.

Complexity is Simple (c) Hugh Macleod 2014

Complex isn’t complicated. Complex is just that, complex.

Think about an airplane taking off and landing reliably day after day. Thousands of little processes happening all in sync. Each is simple. Each adds to the complexity of the whole.

Complicated is the other thing, the thing you don’t want. Complicated is difficult. Complicated is separating your business into silos, and then none of those silos talking to each other.

At companies with a toxic culture, even what should be simple can end up complicated. That’s when you know you’ve really got problems…

I like this because it resonates perfectly with a blog post I put up almost four years ago called Complex Systems versus Complicated Systems, where I make the point that "whilst complicated systems may be complex (and exhibit emergent properties) it does not follow that complex systems have to be complicated". A good architecture avoids complicated systems by building them out of lots of simple components whose interactions can certainly create a complex system, but not one that needs to be overly complicated.
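To make the distinction a little more concrete, here is a toy sketch of my own (not Macleod's, and deliberately simplified): each component below is trivial on its own, and the overall behaviour comes from composing them rather than from any single piece being complicated.

```python
# Each component is simple, single-purpose and independently testable.
def strip(text: str) -> str:
    return text.strip()

def lower(text: str) -> str:
    return text.lower()

def words(text: str) -> list[str]:
    return text.split()

def count(items: list[str]) -> dict[str, int]:
    tally: dict[str, int] = {}
    for item in items:
        tally[item] = tally.get(item, 0) + 1
    return tally

# The 'complex' behaviour emerges from how the simple parts interact,
# not from any one of them being complicated.
def word_frequencies(text: str) -> dict[str, int]:
    return count(words(lower(strip(text))))

print(word_frequencies("  The quick brown fox jumps over the lazy dog the  "))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 1, 'jumps': 1, 'over': 1, 'lazy': 1, 'dog': 1}
```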

Discover Problems, Don’t Solve Them

A while ago I wrote a post called Bring me problems not solutions. An article by Don Peppers on LinkedIn called 'Class of 2013: You Can't Make a Living Just by Solving Problems' adds an interesting spin to this and piles even more pressure on those people entering the job market now, as well as those of us figuring out how to stay in it!

As we all know, Moore's Law says that the number of transistors on integrated circuits doubles approximately every two years. As this power has increased, the types of problems computers can solve have also increased exponentially. By the time today's graduates reach retirement age, say in 50 years' time (which itself might be getting further away, thus compounding the problem), computers will be several million times more powerful than they are today.
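As a rough back-of-the-envelope check on that "several million times" figure, assuming (purely for illustration) that the two-year doubling held steadily for the whole 50 years:

```python
# If capability doubled every 2 years for 50 years:
years = 50
doubling_period_years = 2
doublings = years // doubling_period_years   # 25 doublings
growth_factor = 2 ** doublings               # 2^25

print(f"{doublings} doublings -> about {growth_factor:,} times more powerful")
# 25 doublings -> about 33,554,432 times more powerful
```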

As Peppers says:

If you can state something as a technical problem that has a solution – a task to be completed – then eventually this problem can and will be solved by computer.

This was always the case; it's just that as computers are able to perform ever more calculations per second, the kinds of problems they can solve become more and more complex. Hence white-collar and skilled professional jobs will also become consumed by the ever-increasing power of the computer. Teachers, lawyers, doctors, financial analysts, traders and even those modern-day pariahs of our society, journalists and politicians, will continue to see their jobs become redundant.

So if the salaried jobs of even those of us who solve problems for a living continue to disappear, what's left? Peppers suggests there are two areas that computers will struggle with. One is dealing with interpersonal issues – people skills (darn it, those pesky HR types are going to be in work for a while longer). The other is not to focus on solving problems but on discovering them.

Discovering problems is something that computers find hard to do, and probably will continue to find hard. It's just too difficult to bound the requirements and define the tasks needed for discovering a problem. Discovering new problems has another name: it's also known as "creativity". Creativity involves finding and solving a problem that wasn't there before. How to be creative is a very profitable source of income for authors right now, with more and more books appearing on the subject every month. However, here's the irony: just as we are realising we need to foster creativity as a skill, we are quite literally turning the clock back on our children's innate abilities to be creative. As explained in this video (The Faustian Bargain), "the way we raise children these days is at odds with the way we've evolved to learn".

Sadly our politicians don't seem to get this. Here in the UK, the Secretary of State for Education, Michael Gove, doesn't understand creativity, and his proposed education reforms "fly in the face of all that we know about creativity and how best to nurture it". It seems the problem is not confined to the UK (or, probably, other Northern Hemisphere countries). In India the blogger and photographer Sumeet Moghe thinks that his daughter doesn't deserve school, and is struggling with what alternatives a concerned parent might provide.

So, what to do? Luckily there are people who realise the importance of a creative education, fostering a love of learning and nurturing the concept of lifelong learning. Sir Ken Robinson's TED talk on how schools kill creativity is one of the most watched presentations of all time. Watch this and other talks by Ken Robinson, as well as other TED talks that deal in matters of creativity. Learn what you can and get involved in the "creative life" as much as possible. If you live in a country that doesn't support creativity in education, then write to your elected representative and ask her or him what they, and the government they are a part of, are doing about it. For the sake of all of us this is a problem that is too important to let our leaders get away with not fixing.

I Think Therefore I Blog

I recently delivered a short presentation called "I Think Therefore I Blog". Whilst this does not specifically have anything to do with software architecture, I hope it might provide some encouragement to colleagues and others out there in the blogosphere as to why blogging can be good for you and why it's worth pursuing, sometimes in the face of no or very little feedback!

Reason #1: Blogging helps you think (and reflect)
The author Joan Didion once said, "I don't know what I think until I try to write it down." Amazon CEO Jeff Bezos preaches the value of writing long-form prose to clarify thinking. Blogging, as a form of self-expression (and I'm not talking about blogs that just post references to other material), forces you to think by writing down your arguments and assumptions. This is the single biggest reason to do it, and I think it alone makes it worth it.

You have a lot of opinions and I’m sure you hold some of them pretty strongly. Pick one and write it up in a post — I’m sure your opinion will change somewhat, or at least become more nuanced. Putting something down on ‘paper’ means a lot of the uncertainty and vagueness goes away leaving you to defend your position for yourself. Even if no one else reads or comments on your blog (and they often don’t) you still get the chance to clarify your thoughts in your own mind, and as you write, they become even clearer.

The more you blog, the better you become at writing for your audience, managing your arguments, defending your position, thinking critically. I find that if I don’t understand something very well and want to learn more about it, writing a blog post about that topic focuses my thinking and helps me learn it better.

Reason #2: Blogging enforces discipline
A blog is a broadcast, not a publication. It is not static. Like a shark, if it stops moving, it dies. If you want your blog to last and grow you need to write regularly; it therefore enforces some form of discipline on your life.

Although I don't always achieve this, I do find that writing a little and often is better than trying to write a whole post in one go. Start a post with an idea, write it down, then add to it as your thoughts develop; you'll soon have something you are happy with and are ready to publish. The key thing is to start as soon as you have an idea: capture it straight away, before you forget it, then expand on it.

Reason #3: Blogging gives you wings
If you persist with blogging, you will discover that you develop new and creative ways to articulate what you want to say. As I write, I often search for alternative ways to express myself. This can be through images, quotes, a retelling of old experiences through stories, videos, audio, or useful hyperlinks to related web resources.

You have many ways to convey your ideas, and you are only limited by your own imagination. Try out new ways of communicating and take risks. Blogging is the platform that allows you to be creative.

Reason #4: Blogging creates personal momentum
Blogging puts you out there, for all the world to see, to be judged and criticised for both your words and how you structure them. It's a bit intimidating, but I know the only way to become a better writer is to keep doing it.

Once you have started blogging, and you realise that you can actually do it, you will probably want to develop your skills further. Blogging can be time consuming, but the rewards are ultimately worth it. In my experience, I find myself breaking out of inertia and creating some forward movement in my thinking, especially when I blog about topics that are emotive, controversial or challenging. The photographer Henri Cartier-Bresson said "your first 10,000 photos are your worst"; a similar rule probably applies to blog posts!

I also believe blogging makes me better at my job. I can't share my expertise or ideas if I don't have any. My commitment to write 2-4 times per month keeps me motivated to experiment and discover new things that help me develop at work and personally.

Conversely, if I am not blogging regularly then I need to ask myself why that is. Is it because I’m not getting sufficient stimulus or ideas from what I am doing and if so what can I do to change that?

Reason #5: Blogging gives you (more) eminence
Those of us who work in the so-called knowledge economy need to build and maintain, for want of a better word, our 'eminence'. Eminence is defined as "a position of superiority, high rank or fame". What I mean by eminence here is having a position which others look to for guidance, expertise or inspiration. You are known as someone who can offer a point of view or an opinion. A blog gives you that platform and also allows you to engage in the real world.

So, there you have it, my reasons for blogging. As a postscript, I fortuitously came across this post as I was writing, which adds some perspective to the act of blogging. I suggest you give the post a read but here is a quote which gives a good summary:

…if you start blogging thinking that you’re well on your way to achieving Malcolm Gladwell’s career, you are setting yourself for disappointment. It will suck the enjoyment out of writing. Every completed post will be saddled with a lot of time staring at traffic stats that refuse to go up. It’s depressing.

I have to confess to doing the occasional bit of TSS (traffic stat staring) myself, but at the same time I have concluded there is no point in chasing the ratings, as they might have said in more traditional broadcast media. If you want to blog, do it for its own sake and (some of) the reasons above; don't do it because you think you will become famous and/or rich (though don't entirely close the door to that possibility).

Steal Like an Artist

David Bowie is having something of a resurgence this year. Not only has he released a critically acclaimed new album, The Next Day, there is also an exhibition of artefacts from his long career at the Victoria & Albert Museum in London. These include handwritten lyrics, original costumes, fashion, photography, film, music videos, set designs and Bowie's own instruments.

David Bowie was a collector. Not only did he collect, he also stole. As he said in a Playboy interview back in 1976:

The only art I’ll ever study is stuff that I can steal from.

He even steals from himself – check out the cover of his new album to see what I mean.

Austin Kleon has written a whole book on this topic, Steal Like an Artist, in which he makes the case that nothing is original and that nine times out of ten when someone says that something is new, it's just that they don't know the original sources involved. Kleon goes on to say:

What a good artist understands is that nothing comes from nowhere. All creative work builds on what came before. Nothing is completely original.

So what on earth has this got to do with software architecture?

Eighteen years ago one of the all-time great IT books was published. Design Patterns – Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides introduced the idea of patterns, originally a construct used by the building architect Christopher Alexander, to the IT world at large. As the authors say in the introduction to their book:

One thing expert designers know not to do is solve every problem from first principles. Rather, they reuse solutions that have worked for them in the past. When they find a good solution, they use it again and again. Such experience is part of what makes them experts.

So expert designers 'steal' work they have already used before. The idea of the Design Patterns book was to publish patterns that others had found to work for them so they could be reused (or stolen). The patterns in Design Patterns were small design elements that could be used when building object-oriented software. Although they included code samples, they were not directly reusable without adaptation and coding in your chosen programming language.
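To remind ourselves what that 'stealing' looks like at the code level, here is a minimal sketch of one of the book's patterns, Strategy, rendered in Python rather than the book's C++ and Smalltalk; the class and method names are my own, purely for illustration.

```python
from abc import ABC, abstractmethod

# Strategy: the reusable 'design element' is the structure, not the code itself.
class SortStrategy(ABC):
    @abstractmethod
    def sort(self, items: list) -> list: ...

class Ascending(SortStrategy):
    def sort(self, items: list) -> list:
        return sorted(items)

class Descending(SortStrategy):
    def sort(self, items: list) -> list:
        return sorted(items, reverse=True)

class Report:
    """Client code is written once against the abstraction and reused."""
    def __init__(self, strategy: SortStrategy):
        self.strategy = strategy

    def render(self, data: list) -> list:
        return self.strategy.sort(data)

print(Report(Ascending()).render([3, 1, 2]))   # [1, 2, 3]
print(Report(Descending()).render([3, 1, 2]))  # [3, 2, 1]
```

The point, as with the book itself, is that the solution being reused is the shape of the collaboration, which you then adapt to your own problem.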

Fast forward eighteen years and the concept of patterns is alive and well, but it has reached a new level of abstraction and therefore of reuse. Expert integrated systems like IBM's PureApplication System™ use patterns to provide fast, high-quality deployments of sophisticated environments that enable enterprises to get new business applications up and running as quickly as possible. Whereas the design patterns from the book by Gamma et al were design elements that could be used to craft complete programs, the PureApplication System patterns are collections of virtual images that form a complete system. For example, the Business Process Management (BPM) pattern includes an HTTP server, a clustered pair of BPM servers, a cluster administration server, and a database server. When an administrator deploys this pattern, all the inter-connected parts are created and ready to run together. Time to deploy such systems is reduced from days or even, in some cases, weeks to just hours.

Some may say that the creation and proliferation of such patterns is another insidious step towards the deskilling of our profession. If all it takes to deploy a complex BPM system is a few mouse clicks, then where does that leave those who once had to design such systems from scratch?

Going back to our art-stealing analogy, a good artist does not just steal the work of others and pass it off as their own (at least most of them don't); rather, they use the ideas contained in that work and build on them to create something new and unique (or at least different). Rather than having to create new stuff from scratch they adopt the ideas that others have come up with and then adapt them to make their own creations. These creations can then be used by others and further adapted, and so the whole thing becomes a sort of virtuous circle of adopt and adapt.

A good architect, just like a good artist, should not fear patterns but should embrace them, knowing that they free them up to focus on creating something that is new and of real (business) value. Building on the good work that others have done before us is something we should all be encouraged to do more of. As Salvador Dalí said:

Those who do not want to imitate anything, produce nothing.

Is the Raspberry Pi the New BBC Microcomputer?

There has been much discussion here in the UK over the last couple of years about the state of tech education and what should be done about it. The concern being that our schools are not doing enough to create the tech leaders and entrepreneurs of the future.

The current discussion kicked off in January 2011 when Microsoft's director of education, Steve Beswick, claimed that in UK schools there is much "untapped potential" in how teenagers use technology. Beswick said that a Microsoft survey had found that 71% of teenagers believed they learned more about information technology outside of school than in formal information and communication technology (ICT) lessons. An interesting observation, given that one of the criticisms often levelled at these ICT classes is that they just teach kids how to use Microsoft Office.

The discussion moved on in August 2011, this time at the Edinburgh International Television Festival, where Google chairman Eric Schmidt said he thought education in Britain was holding back the country's chances of success in the digital media economy. Schmidt said he was flabbergasted to learn that computer science was not taught as standard in UK schools, despite what he called the "fabulous initiative" in the 1980s when the BBC not only broadcast programmes for children about coding, but shipped over a million BBC Micro computers into schools and homes.

January 2012 saw even the Education Secretary, Michael Gove, say that the ICT curriculum was "a mess" and must be radically revamped to prepare pupils for the future (Gove suspended the ICT curriculum in September 2012). All well and good, but as some have commented, "not everybody is going to need to learn to code, but everyone does need office skills".

In May 2012 Schmidt was back in the UK again, this time at London’s Science Museum where he announced that Google would provide the funds to support Teach First – a charity which puts graduates on a six-week training programme before deploying them to schools where they teach classes over a two-year period.

So, what now? With the new ICT curriculum not due out until 2014, what are the kids who are about to start their GCSEs to do? Does it matter that they won't be able to learn ICT at school? The Guardian's John Naughton proposed a manifesto for teaching computer science in March 2012 as part of his paper's digital literacy campaign. As I've questioned before, should it be the role of schools to teach the very specific programming skills being proposed – skills that might be out of date by the time the kids learning them enter the workforce? Clearly something needs to be done otherwise, as my colleague Dr Rick Robinson says, where will the next generation of technology millionaires come from?

Whatever shape the new curriculum takes, one example (one that Eric Schmidt himself used) of a success story in the learning of IT skills is that of the now almost legendary BBC Microcomputer, a project started 30 years ago this year. For those too young to remember, or who were not around in the UK at the time, the BBC Microcomputer got its name from a project devised by the BBC to enhance the nation's computer literacy. The BBC wanted a machine around which they could base a series called The Computer Programme, showing how computers could be used, not just for computer programming but also for graphics, sound and vision, artificial intelligence and controlling peripheral devices. To support the series the BBC drew up a spec for a computer that could be bought by people watching the programme, so they could put into practice what they were watching. The machine was built by Acorn, and you can read the spec here.

The BBC Micro was not only a great success in terms of the television programme, it also helped spur on a whole generation of programmers. On turning the computer on you were faced with the screen on the right. The computer would not do anything unless you fed it instructions using the BASIC programming language, so you were pretty much forced to learn programming! I can vouch for this personally: although I had just entered the IT profession at the time, this was in the days of million-pound mainframes hidden away in backrooms, guarded jealously by teams of computer operators who only gave access via time-sharing for minutes at a time. Having your own computer which you could tap away on and get instant results was, for me, a revelation.

Happily it looks like the current gap in the IT curriculum may be about to be filled by the humble Raspberry Pi computer. The idea behind the Raspberry Pi came from a group of computer scientists at the University of Cambridge's Computer Laboratory back in 2006. As Eben Upton, founder and trustee of the Raspberry Pi Foundation, said:

Something had changed the way kids were interacting with computers. A number of problems were identified: the colonisation of the ICT curriculum with lessons on using Word and Excel, or writing webpages; the end of the dot-com boom; and the rise of the home PC and games console to replace the Amigas, BBC Micros, Spectrum ZX and Commodore 64 machines that people of an earlier generation learned to program on.

Out of this concern at the lack of programming and computer skills in today's youngsters was born the Raspberry Pi computer (see below), which began shipping in February 2012. Whilst the on-board processor and peripheral controllers of this credit-card-sized, $25 device are orders of magnitude more powerful than anything the BBC Micro or Commodore 64 had, in other ways this computer is even more basic than any of those machines. It comes with no power supply, screen, keyboard, mouse or even operating system (Linux can be installed via an SD card). There is quite a learning curve just to get up and running, although what the Raspberry Pi has going for it that the BBC Micro did not is the web, with its already large number of help pages, ideas for projects and even the odd Raspberry Pi Jam (get it?). Hopefully this means these ingenious devices will not become just another piece of computer kit lying around in our school classrooms.

The Computer Literacy Project (CLP), which was behind the idea of the original BBC Micro and "had the grand ambition to change the culture of computing in Britain's homes", produced a report in May of this year called The Legacy of the BBC Micro which, amongst other things, explores whether the CLP had any lasting legacy on the culture of computing in Britain. The full report can be downloaded here. One of the recommendations from the report is that "kit, clubs and formal learning need to be augmented by support for individual learners; they may be the entrepreneurs of the future". Thirty years ago this support was provided by the BBC as well as schools. Whether the same could be done today, in schools that seem to be largely results-driven and by a BBC that seems to be imploding in on itself, is difficult to tell.

And so to the point of this post: is the Raspberry Pi the new BBC Micro, in the sense of spurring on a generation of programmers who spread their wings and went on to create the tech boom (and let's not forget the odd bust) of the last 30 years? More to the point, is that what the world needs right now? Computers are getting far smarter "out of the box". IBM's recent announcements of its PureSystems brand promise a "smarter approach to IT" in terms of installation, deployment, development and operations. Who knows what stage so-called expert integrated systems will be at by the time today's students begin to hit the workforce in 5 to 10 years' time? Does the Raspberry Pi have a place in this world – a world where many, if not most, programming jobs continue to be shipped to low-cost regions, currently the BRIC and MIST countries and, soon I am sure, the largely untapped African sub-continent?

I believe that, to some extent, the fact that the Raspberry Pi is a computer and that yes, with a bit of effort, you can program it, is largely an irrelevance. What's important is that the Raspberry Pi ignites an interest in a new generation of kids that gets them away from just consuming computing (playing games, reading Facebook entries, browsing the web etc.) to actually creating something instead. It's this creative spark that is needed now and as we move forward: no matter what computing platforms we have in 5, 10 or 50 years' time, we will always need creative thinkers to solve the world's really difficult business and technical problems.

And by the way my Raspberry Pi is on order.

Bring Me Problems, Not Solutions

"Bring me solutions, not problems" is a phrase that the former British Prime Minister Margaret Thatcher was, apparently, fond of using. As I've pointed out before, the role of the architect is to "take existing components and assemble them in interesting and important ways". For the architect then, who wants to assemble components in interesting ways, problems are what are needed, not solutions – without problems to solve we have no job to do. Indeed, problem solving is what entrepreneurship is all about, and the ability to properly define the problem in the first place therefore becomes key to solving it.

Fundamentally the architect asks:

  1. What is the problem I am trying to solve?
  2. What solution can I construct that would address that problem?
  3. What technology (if any) should I apply in implementing that solution?

This approach is summed up in the following picture, a sort of meta-architecture process.

The key thing here, of course, is the effective use of technology. Sometimes that means not using technology at all, because a manual system is equally (cost) effective. One thing that architects should avoid at all costs is becoming over-enthusiastic about using too much of the wrong kind of technology. Adopting a sound architectural process, following well-understood architectural principles and using what others have done before – that is, applying architectural patterns – are ways to ensure we don't leap too quickly to a solution built on potentially the wrong technology.

For architects then, who are looking for their next interesting challenge, the cry should be “bring me problems, not solutions”.

Choosing What to Leave Out

In his book Steal Like an Artist, Austin Kleon makes this insightful statement:

In this age of information abundance and overload, those who get ahead will be the folks who figure out what to leave out, so they can concentrate on what’s really important to them. Nothing is more paralyzing than the idea of limitless possibilities. The idea that you can do anything is absolutely terrifying.

This resonates nicely with another article here on frugal engineering, or "designing more with less". In this article the authors (Nirmalya Kumar and Phanish Puranam) discuss how innovation is meeting the needs of the Indian marketplace, where consumers are both demanding and budget-constrained, and how "the beauty of the Indian market is that it pushes you in a corner…it demands everything in the world, but cheaper and smaller." The article also talks about "defeaturing" or "feature rationalization", or "ditching the junk DNA" that tends to accumulate in products over time.

As an example of this, the most popular mobile phone in India (and in fact, at one point, the bestselling consumer electronics device in the world) is the Nokia 1100. The reason for this device's popularity? Its stripped-down functionality (the ability to store multiple contact lists so it can be used by many users, the ability to enter a price limit for a call, and a built-in flashlight, radio and alarm) and low price point make it an invaluable tool for life in poor and underdeveloped economies such as rural India and South America.

For a software architect wishing to decide which components to build a system from, there can often be a bewildering set of choices. Not only do several vendors offer solutions that will address the needs, there are often many ways of doing the same thing, usually requiring the use of multiple, overlapping products from different vendors. All of this adds to the complexity of the final solution and can end up in a system that is both hard to maintain and difficult, if not impossible, to extend and enrich.

Going back to Austin Kleon's assertion above, the trick is to figure out what to leave out, focusing only on what is really important to the use of the system. In my experience this usually means that version 1.0 of anything is rarely going to be right, and it's not until version 2.0+ that the fog of complexity gradually begins to lift, allowing what really matters to shine through. Remember that one of my suggested architecture social objects is the Change Case. This is a good place to put those features of little immediate value, allowing you to come back at a later date and think about whether they are still needed. My guess is you will be surprised at how often the need for such features has passed.