Complexity is Simple

I was taken with this cartoon, and the comments Hugh Macleod put up alongside it last week over at his gapingvoid.com blog, so I hope he doesn’t mind me reproducing it here.

Complexity is Simple (c) Hugh Macleod 2014

Complex isn’t complicated. Complex is just that, complex.

Think about an airplane taking off and landing reliably day after day. Thousands of little processes happening all in sync. Each is simple. Each adds to the complexity of the whole.

Complicated is the other thing, the thing you don’t want. Complicated is difficult. Complicated is separating your business into silos, and then none of those silos talking to each other.

At companies with a toxic culture, even what should be simple can end up complicated. That’s when you know you’ve really got problems…

I like this because it resonates with a blog post I put up almost four years ago called Complex Systems versus Complicated Systems, where I make the point that “whilst complicated systems may be complex (and exhibit emergent properties) it does not follow that complex systems have to be complicated”. A good architecture avoids complicated systems by building them out of lots of simple components whose interactions may well create a complex system, but not one that needs to be overly complicated.
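As a throwaway illustration of that point (a minimal sketch of my own, not from either post), consider a system assembled from simple components: each function below is trivial on its own, and the overall behaviour emerges from their composition rather than from any one complicated piece.

```python
# Simple components composed into a larger whole: each step is trivial,
# yet the assembled pipeline can exhibit rich behaviour without any
# single piece being complicated.

def validate(order):
    """Simple component: reject obviously bad input."""
    if order["quantity"] <= 0:
        raise ValueError("quantity must be positive")
    return order

def price(order):
    """Simple component: compute a total from unit price and quantity."""
    order["total"] = order["unit_price"] * order["quantity"]
    return order

def ship(order):
    """Simple component: mark the order as ready to ship."""
    order["status"] = "ready_to_ship"
    return order

def pipeline(order, steps=(validate, price, ship)):
    """The whole is just the composition of the simple parts."""
    for step in steps:
        order = step(order)
    return order

print(pipeline({"quantity": 3, "unit_price": 9.99}))
```

Each piece can be understood, tested and replaced in isolation; any complexity lives in the interactions, not in any single component.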

Discover Problems, Don’t Solve Them

A while ago I wrote a post called Bring Me Problems, Not Solutions. An article by Don Peppers on LinkedIn called ‘Class of 2013: You Can’t Make a Living Just by Solving Problems’ adds an interesting spin to this and piles even more pressure on those people entering the job market now, as well as those of us figuring out how to stay in it!

As we all know, Moore’s Law says that the number of transistors on integrated circuits doubles approximately every two years. As this power has increased, the range of problems computers can solve has also increased exponentially. By the time today’s graduates reach retirement age, say in 50 years’ time (which itself might be getting further away, thus compounding the problem), computers will be several million times more powerful than they are today.
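A quick back-of-the-envelope check on that figure (my arithmetic, assuming the two-year doubling holds for the whole 50 years):

$$2^{50/2} = 2^{25} \approx 3.4 \times 10^{7}$$

That is a factor in the tens of millions, so “several million times” is, if anything, an understatement.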

As Peppers says:

If you can state something as a technical problem that has a solution – a task to be completed – then eventually this problem can and will be solved by computer.

This was always the case; it’s just that as computers are able to perform ever more calculations per second, the problems they can solve become ever more complex. Hence white-collar and skilled professional jobs will also become consumed by the ever-increasing power of the computer. Teachers, lawyers, doctors, financial analysts, traders and even those modern-day pariahs of our society, journalists and politicians, will continue to see their jobs become redundant.

So if the salaried jobs of even those of us who solve problems for a living continue to disappear, what’s left? Peppers suggests there are two potential areas that computers will struggle with. One is becoming very good at dealing with interpersonal issues – people skills (darn it, those pesky HR types are going to be in work for a while longer). The other is not to focus on solving problems but on discovering them.

Discovering problems is something that computers find hard to do, and probably will for some time yet. It’s just too difficult to bound the requirements and define the tasks needed to discover a problem. Discovering new problems has another name: “creativity”. Creativity involves finding and solving a problem that wasn’t there before. How to be creative is a very profitable subject for authors right now, with more and more books appearing on it every month. However, here’s the irony: just as we are realising that we need to foster creativity as a skill even more, we are turning the clock back on our children’s innate ability to be creative. As explained in this video (The Faustian Bargain), “the way we raise children these days is at odds with the way we’ve evolved to learn”.

Sadly our politicians don’t seem to get this. Here in the UK, the Secretary of State for Education, Michael Gove, doesn’t understand creativity and his proposed education reforms “fly in the face of all that we know about creativity and how best to nurture it”. Nor is the problem confined to the UK (or, probably, to other Northern Hemisphere countries). In India, the blogger and photographer Sumeet Moghe thinks that his daughter doesn’t deserve school and is struggling with what alternatives a concerned parent might provide.

So, what to do? Luckily there are people who realise the importance of a creative education, fostering a love of learning and nurturing the concept of lifelong learning. Sir Ken Robinson’s TED talk on how schools kill creativity is one of the most watched presentations of all time. Watch this and other talks by Ken Robinson, as well as other TED talks that deal in matters of creativity. Learn what you can and get involved in the “creative life” as much as possible. If you live in a country that doesn’t support creativity in education then write to your elected representative and ask her or him what they, and the government they are a part of, are doing about it. For the sake of all of us, this is a problem that is too important to let our leaders get away with not fixing.

I Think Therefore I Blog

I recently delivered a short presentation called “I Think Therefore I Blog”. Whilst this does not specifically have anything to do with software architecture, I hope it might provide some encouragement to colleagues and others out there in the blogosphere as to why blogging can be good for you and why it’s worth pursuing, sometimes in the face of no or very little feedback!

Reason #1: Blogging helps you think (and reflect)
The author Joan Didion once said, “I don’t know what I think until I try to write it down.” Amazon CEO Jeff Bezos preaches the value of writing long-form prose to clarify thinking. Blogging, as a form of self-expression (and I’m not talking about blogs that just post references to other material), forces you to think by writing down your arguments and assumptions. This is the single biggest reason to do it, and I think it alone makes it worth it.

You have a lot of opinions and I’m sure you hold some of them pretty strongly. Pick one and write it up in a post – I’m sure your opinion will change somewhat, or at least become more nuanced. Putting something down on ‘paper’ means a lot of the uncertainty and vagueness goes away, leaving you to defend your position for yourself. Even if no one else reads or comments on your blog (and they often don’t), you still get the chance to clarify your thoughts in your own mind, and as you write, they become even clearer.

The more you blog, the better you become at writing for your audience, managing your arguments, defending your position and thinking critically. I find that if I don’t understand something very well and want to learn more about it, writing a blog post about that topic focuses my thinking and helps me learn it better.

Reason #2: Blogging enforces discipline
A blog is a broadcast, not a publication. It is not static. Like a shark, if it stops moving, it dies. If you want your blog to last and grow you need to write regularly; it therefore enforces some form of discipline on your life.

Although I don’t always achieve this, I do find that writing a little, often, is better than trying to write a whole post in one go. Start a post with an idea, write it down, then add to it as your thoughts develop; you’ll soon have something you are happy with and are ready to publish. The key thing is to start as soon as you have an idea: capture it straight away before you forget it, then expand on it.

Reason #3: Blogging gives you wings
If you persist with blogging, you will discover that you develop new and creative ways to articulate what you want to say. As I write, I often search for alternative ways to express myself. This can be through images, quotes, a retelling of old experiences through stories, videos, audio, or useful hyperlinks to related web resources.

You have many ways to convey your ideas, and you are only limited by your own imagination. Try out new ways of communicating and take risks. Blogging is the platform that allows you to be creative.

Reason #4: Blogging creates personal momentum
Blogging puts you out there, for all the world to see, to be judged and criticized for both your words and how you structure them. It’s a bit intimidating, but I know the only way to become a better writer is to keep doing it.

Once you have started blogging, and you realise that you can actually do it, you will probably want to develop your skills further. Blogging can be time consuming, but the rewards are ultimately worth it. In my experience, I find myself breaking out of inertia to create some forward movement in my thinking, especially when I blog about topics that may be emotive, controversial or challenging. The photographer Henri Cartier-Bresson said “your first 10,000 photos are your worst”; a similar rule probably applies to blog posts!

I also believe blogging makes me better at my job. I can’t share my expertise or ideas if I don’t have any. My commitment to write 2-4 times per month keeps me motivated to experiment and discover new things that help me develop at work and personally.

Conversely, if I am not blogging regularly then I need to ask myself why that is. Is it because I’m not getting sufficient stimulus or ideas from what I am doing, and if so, what can I do to change that?

Reason #5: Blogging gives you (more) eminence
Those of us who work in the so-called knowledge economy need to build and maintain, for want of a better word, our ‘eminence’. Eminence is defined as being “a position of superiority, high rank or fame”. What I mean by eminence here is having a position that others look to for guidance, expertise or inspiration. You are known as someone who can offer a point of view or an opinion. A blog gives you that platform and also allows you to engage with the real world.

So, there you have it, my reasons for blogging. As a postscript, I fortuitously came across this post as I was writing, which adds some perspective to the act of blogging. I suggest you give the post a read but here is a quote which gives a good summary:

…if you start blogging thinking that you’re well on your way to achieving Malcolm Gladwell’s career, you are setting yourself up for disappointment. It will suck the enjoyment out of writing. Every completed post will be saddled with a lot of time staring at traffic stats that refuse to go up. It’s depressing.

I have to confess to doing the occasional bit of TSS (traffic stat staring) myself, but at the same time I have concluded there is no point in chasing the ratings, as they might have said in more traditional broadcast media. If you want to blog, do it for its own sake and for (some of) the reasons above; don’t do it because you think you will become famous and/or rich (though don’t entirely close the door to that possibility).

Steal Like an Artist

David Bowie is having something of a resurgence this year. Not only has he released a critically acclaimed new album, The Next Day, there is also an exhibition of artefacts from his long career at the Victoria & Albert Museum in London. These include handwritten lyrics, original costumes, fashion, photography, film, music videos, set designs and Bowie’s own instruments.

David Bowie was a collector. Not only did he collect, he also stole. As he said in a Playboy interview back in 1976:

The only art I’ll ever study is stuff that I can steal from.

He even steals from himself; check out the cover of his new album to see what I mean.

Austin Kleon has written a whole book on this topic, Steal Like an Artist, in which he makes the case that nothing is original and that nine times out of ten, when someone says that something is new, it’s just that they don’t know the original sources involved. Kleon goes on to say:

What a good artist understands is that nothing comes from nowhere. All creative work builds on what came before. Nothing is completely original.

So what on earth has this got to do with software architecture?

Eighteen years ago one of the all-time great IT books was published. Design Patterns – Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides introduced the idea of patterns, originally a construct used by the building architect Christopher Alexander, to the IT world at large. As the authors say in the introduction to their book:

One thing expert designers know not to do is solve every problem from first principles. Rather, they reuse solutions that have worked for them in the past. When they find a good solution, they use it again and again. Such experience is part of what makes them experts.

So expert designers ‘steal’ work they have already used before. The idea of the Design Patterns book was to publish patterns that others had found to work for them so they could be reused (or stolen). The patterns in Design Patterns were small design elements that could be used when building object-oriented software. Although they included code samples, the patterns were not directly reusable without adaptation, and coding, in your chosen programming language.
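To give a flavour of what such a pattern looks like, here is a minimal sketch of one of the book’s twenty-three patterns, Observer, written here in Python for brevity (the book’s own samples were in C++ and Smalltalk):

```python
# A minimal sketch of the Observer pattern: a subject notifies a set of
# observers of changes without knowing anything about what they do.

class Subject:
    """Holds a list of observers and notifies them of events."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer.update(event)


class LoggingObserver:
    """One concrete observer; anything with an update() method will do."""
    def update(self, event):
        print(f"observed: {event}")


subject = Subject()
subject.attach(LoggingObserver())
subject.notify("state changed")  # prints: observed: state changed
```

The pattern itself is the idea – decoupling the subject from its observers – and, as the authors note, it still has to be adapted and re-coded for each new context.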

Fast forward eighteen years and the concept of patterns is alive and well, but it has reached a new level of abstraction and therefore of reuse. Expert integrated systems like IBM’s PureApplication System™ use patterns to provide fast, high-quality deployments of sophisticated environments that enable enterprises to get new business applications up and running as quickly as possible. Whereas the design patterns from the book by Gamma et al were design elements that could be used to craft complete programs, the PureApplication System patterns are collections of virtual images that form a complete system. For example, the Business Process Management (BPM) pattern includes an HTTP server, a clustered pair of BPM servers, a cluster administration server, and a database server. When an administrator deploys this pattern, all the inter-connected parts are created and ready to run together. The time to deploy such systems is reduced from days or, in some cases, weeks to just hours.
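To illustrate the idea (and only the idea – this is a hypothetical Python sketch, not the actual PureApplication System tooling or its pattern format), a deployment pattern is essentially a declarative description of a topology that the platform knows how to instantiate:

```python
# A hypothetical sketch of a deployment pattern as data: the BPM topology
# described above, expressed declaratively. The field names and the toy
# deploy() function are illustrative only, not a real product API.

bpm_pattern = {
    "name": "business-process-management",
    "nodes": [
        {"role": "HTTP server", "image": "http", "count": 1},
        {"role": "BPM server (clustered)", "image": "bpm", "count": 2},
        {"role": "cluster administration server", "image": "admin", "count": 1},
        {"role": "database server", "image": "db", "count": 1},
    ],
}

def deploy(pattern):
    """Walk the pattern and 'provision' each node from its virtual image."""
    for node in pattern["nodes"]:
        for i in range(node["count"]):
            print(f"provisioning {node['role']} #{i + 1} from image '{node['image']}'")

deploy(bpm_pattern)
```

The point is the shift in the unit of reuse: from a design idea you re-code each time to a whole topology you instantiate.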

Some may say that the creation and proliferation of such patterns is another insidious step towards the deskilling of our profession. If all it takes to deploy a complex BPM system is just a few mouse clicks, then where does that leave those who once had to design such systems from scratch?

Going back to our art-stealing analogy, a good artist does not just steal the work of others and pass it off as their own (at least most of them don’t); rather, they use the ideas contained in that work and build on them to create something new and unique (or at least different). Rather than having to create new stuff from scratch, they adopt the ideas that others have come up with and then adapt them to make their own creations. These creations can themselves be used by others and further adapted, so the whole thing becomes a sort of virtuous circle of adopt and adapt.

A good architect, just like a good artist, should not fear patterns but should embrace them, knowing that they free him up to focus on creating something that is new and of real (business) value. Building on the good work that others have done before us is something we should all be encouraged to do more of. As Salvador Dalí said:

Those who do not want to imitate anything, produce nothing.

Is the Raspberry Pi the New BBC Microcomputer?

There has been much discussion here in the UK over the last couple of years about the state of tech education and what should be done about it. The concern being that our schools are not doing enough to create the tech leaders and entrepreneurs of the future.

The current discussion kicked off in January 2011 when Microsoft’s director of education, Steve Beswick, claimed that in UK schools there is much “untapped potential” in how teenagers use technology. Beswick said that a Microsoft survey had found that 71% of teenagers believed they learned more about information technology outside of school than in formal information and communication technology (ICT) lessons. An interesting observation given that one of the criticisms often levelled at these ICT classes is that they just teach kids how to use Microsoft Office.

The discussion moved on in August 2011, this time at the Edinburgh International Television Festival, where Google chairman Eric Schmidt said he thought education in Britain was holding back the country’s chances of success in the digital media economy. Schmidt said he was flabbergasted to learn that computer science was not taught as standard in UK schools, despite what he called the “fabulous initiative” in the 1980s when the BBC not only broadcast programmes for children about coding, but shipped over a million BBC Micro computers into schools and homes.

January 2012 saw even the Education Secretary, Michael Gove, admit that the ICT curriculum was “a mess” and must be radically revamped to prepare pupils for the future (Gove suspended the ICT curriculum in September 2012). All well and good, but as some have commented, “not everybody is going to need to learn to code, but everyone does need office skills”.

In May 2012 Schmidt was back in the UK again, this time at London’s Science Museum where he announced that Google would provide the funds to support Teach First – a charity which puts graduates on a six-week training programme before deploying them to schools where they teach classes over a two-year period.

So, what now? With the new ICT curriculum not due out until 2014, what are the kids who are about to start their GCSEs to do? Does it matter that they won’t be able to learn ICT at school? The Guardian’s John Naughton proposed a manifesto for teaching computer science in March 2012 as part of his paper’s digital literacy campaign. As I’ve questioned before, should it be the role of schools to teach the very specific programming skills being proposed; skills that might be out of date by the time the kids learning them enter the workforce? Clearly something needs to be done otherwise, as my colleague Dr Rick Robinson says, where will the next generation of technology millionaires come from?

Whatever shape the new curriculum takes, one example (one that Eric Schmidt himself used) of a success story in the learning of IT skills is that of the now almost legendary BBC Microcomputer, a project started 30 years ago this year. For those too young to remember, or who were not around in the UK at the time, the BBC Microcomputer got its name from a project devised by the BBC to enhance the nation’s computer literacy. The BBC wanted a machine around which they could base a series called The Computer Programme, showing how computers could be used, not just for computer programming but also for graphics, sound and vision, artificial intelligence and controlling peripheral devices. To support the series the BBC drew up a spec for a computer that could be bought by people watching the programme so they could put into practice what they were watching. The machine was built by Acorn, and you can read its spec here.

The BBC Micro was not only a great success in terms of the television programme, it also helped spur on a whole generation of programmers. On turning the computer on you were faced with the screen shown on the right. The computer would not do anything unless you fed it instructions using the BASIC programming language, so you were pretty much forced to learn programming! I can vouch for this personally: although I had just entered the IT profession at the time, this was in the days of million-pound mainframes hidden away in back rooms, jealously guarded by teams of computer operators who only gave access via time-sharing, minutes at a time. Having your own computer which you could tap away on and get instant results was, for me, a revelation.

Happily it looks like the current gap in the IT curriculum may be about to be filled by the humble Raspberry Pi computer. The idea behind the Raspberry Pi came from a group of computer scientists at Cambridge University’s Computer Laboratory back in 2006. As Eben Upton, founder and trustee of the Raspberry Pi Foundation, said:

Something had changed the way kids were interacting with computers. A number of problems were identified: the colonisation of the ICT curriculum with lessons on using Word and Excel, or writing webpages; the end of the dot-com boom; and the rise of the home PC and games console to replace the Amigas, BBC Micros, Spectrum ZX and Commodore 64 machines that people of an earlier generation learned to program on.

Out of this concern at the lack of programming and computer skills in today’s youngsters was born the Raspberry Pi computer (see below), which began shipping in February 2012. Whilst the on-board processor and peripheral controllers on this credit-card-sized, $25 device are orders of magnitude more powerful than anything the BBC Micro or Commodore 64 machines had, in other ways this computer is even more basic than any of those machines. It comes with no power supply, screen, keyboard, mouse or even operating system (Linux can be installed via an SD card). There is quite a learning curve just to get up and running, although what the Raspberry Pi has going for it that the BBC Micro did not is the web, with its already large number of help pages, ideas for projects and even the odd Raspberry Pi Jam (get it?). Hopefully this means these ingenious devices will not become just another piece of computer kit lying around in our school classrooms.

The Computer Literacy Project (CLP), which was behind the idea of the original BBC Micro and “had the grand ambition to change the culture of computing in Britain’s homes”, produced a report in May of this year called The Legacy of the BBC Micro which, amongst other things, explores whether the CLP had any lasting effect on the culture of computing in Britain. The full report can be downloaded here. One of the recommendations of the report is that “kit, clubs and formal learning need to be augmented by support for individual learners; they may be the entrepreneurs of the future”. 30 years ago this support was provided by the BBC as well as schools. Whether the same could be done today, in schools that seem to be largely results-driven and by a BBC that seems to be imploding in on itself, is difficult to tell.

And so to the point of this post: is the Raspberry Pi the new BBC Micro, able to spur on a generation of programmers who spread their wings and go on to create a tech boom (and, let’s not forget, the odd bust) like that of the last 30 years? More to the point, is that what the world needs right now? Computers are getting far smarter “out of the box”. IBM’s recent announcements of its PureSystems brand promise a “smarter approach to IT” in terms of installation, deployment, development and operations. Who knows what stage so-called expert integrated systems will be at by the time today’s students begin to hit the workforce in 5–10 years’ time? Does the Raspberry Pi have a place in this world? A world where many, if not most, programming jobs continue to be shipped to low-cost regions – currently the BRIC and MIST countries and soon, I am sure, the largely untapped African continent.

I believe that, to some extent, the fact that the Raspberry Pi is a computer that, yes, with a bit of effort, you can program, is largely an irrelevance. What’s important is that the Raspberry Pi ignites an interest in a new generation of kids and gets them away from just consuming computing (playing games, reading Facebook entries, browsing the web, etc.) to actually creating something instead. It’s this creative spark that is needed now and as we move forward: no matter what computing platforms we have in 5, 10 or 50 years’ time, we will always need creative thinkers to solve the world’s really difficult business and technical problems.

And by the way my Raspberry Pi is on order.

Bring Me Problems, Not Solutions

“Bring me solutions, not problems” is a phrase that the former British Prime Minister Margaret Thatcher was, apparently, fond of using. As I’ve pointed out before, the role of the architect is to “take existing components and assemble them in interesting and important ways”. For the architect then, who wants to assemble components in interesting ways, problems are what are needed, not solutions – without problems to solve we have no job to do. Indeed problem solving is what entrepreneurship is all about, and the ability to properly define the problem in the first place therefore becomes key to solving it.

Fundamentally the architect asks:

  1. What is the problem I am trying to solve?
  2. What solution can I construct that would address that problem?
  3. What technology (if any) should I apply in implementing that solution?

This approach is summed up in the following picture, a sort of meta-architecture process.

The key thing here, of course, is the effective use of technology. Sometimes that means not using technology at all because a manual system is equally (cost) effective. One thing that architects should avoid at all costs is becoming over-enthusiastic about using too much of the wrong kind of technology. Adopting a sound architectural process, following well understood architectural principles and using what others have done before, that is, applying architectural patterns, are ways to ensure we don’t leap too quickly to a solution built on potentially the wrong technology.

For architects then, who are looking for their next interesting challenge, the cry should be “bring me problems, not solutions”.

Choosing What to Leave Out

In his book Steal Like an Artist, Austin Kleon makes this insightful statement:

In this age of information abundance and overload, those who get ahead will be the folks who figure out what to leave out, so they can concentrate on what’s really important to them. Nothing is more paralyzing than the idea of limitless possibilities. The idea that you can do anything is absolutely terrifying.

This resonates nicely with another article here on frugal engineering, or “designing more with less”. In this article the authors (Nirmalya Kumar and Phanish Puranam) discuss how innovation is meeting the needs of the Indian marketplace, where consumers are both demanding and budget-constrained, and how “the beauty of the Indian market is that it pushes you in a corner…it demands everything in the world, but cheaper and smaller.” The article also talks about “defeaturing” or “feature rationalization” – ditching the “junk DNA” that tends to accumulate in products over time.

As an example of this, the most popular mobile phone in India (and in fact, at one point, the bestselling consumer electronics device in the world) is the Nokia 1100. The reason for this device’s popularity? Its stripped-down functionality (the ability to store multiple contact lists so it can be shared by many users, the ability to enter a price limit for a call, and a built-in flashlight, radio and alarm) and low price point make it an invaluable tool for life in poor and underdeveloped economies such as rural India and South America.

For a software architect wishing to decide which components to build a system from, there can often be a bewildering set of choices. Not only do several vendors offer solutions that will address the needs, there are often many ways of doing the same thing, usually requiring the use of multiple, overlapping products from different vendors. All of this adds to the complexity of the final solution and can end up in a system that is hard to maintain as well as difficult, if not impossible, to extend and enrich.

Going back to Austin Kleon’s assertion above, the trick is to figure out what to leave out, focusing only on what is really important to the use of the system. In my experience this usually means that version 1.0 of anything is rarely going to be right, and it’s not until version 2.0+ that the fog of complexity gradually begins to lift, allowing what really matters to shine through. Remember that one of my suggested architecture social objects is the Change Case. This is a good place to put those features of little immediate value, allowing you to come back at a later date and consider whether they are still needed. My guess is you will be surprised at how often the need for such features has passed.

Why We Need STEM++ Graduates

The need for more STEM (that’s Science, Technology, Engineering and Maths) skills seems to be on the agenda more and more these days. There is a strong feeling that the so-called developed nations have depended too much on financial and other services to grow their economies and as a result have “lost” the ability to design, develop and manufacture goods, largely because we are not producing enough STEM graduates to do this.

Whilst I would see software as falling fairly and squarely into the STEM skillset (even if it is also used to underpin nearly all of the modern financial services industry), as this blog post by Jessica Benjamin from IBM points out, STEM skills alone won’t solve the really hard problems that are out there. With respect to the particular problems around big data, Jessica succinctly says:

All the skills it takes to tell a good story, to compose a complete orchestra, are the skills it takes to put the pieces of this big data world together. If data is just data until it’s information, what’s a lot of information without the thought and skill of pulling all the chords together?

The need for right- as well as left-brained thinkers to solve the world’s really, really hard business problems is something that has been recognised for some time now by several prominent business leaders. Indeed the intersection of technology (left-brained) and design (right-brained) has certainly played a large part in what has made technology companies like IBM and Apple successful.

So we need not just STEM skills but STEM++ skills, where the addition of “righty” skills like arts, humanities and design helps us build not just a smarter world but one that is better to live in. For more on this check out my other (joint) blog, The Versatilist Way.

Giving Users What They Want (Maybe)

Tradition has it that users come up with a set of requirements which architects and designers take and turn into “a solution”: that is, a combination of bespoke and off-the-shelf hardware and software components, assembled in such a way that they address all the requirements (non-functional as well as functional). Another point of view is that users don’t actually know what they want and therefore need to be guided toward solutions they never knew they needed or indeed knew were possible. Two famous proponents of this approach were Henry Ford, who supposedly said:

If I had asked people what they wanted, they would have said faster horses.

which is debunked here, and of course Steve Jobs and Apple, whose “Eureka” moments continue to give us gadgets we never knew we needed. As Adrian Slywotzky points out here, however, the magic that Jobs and Apple seem to regularly perform is actually based on highly focused and detailed business design, continuous refinement through prototyping and a manic attention to the customer experience. In other words, it really is 90% perspiration and 10% inspiration.

One thing that both Henry Ford and Steve Jobs/Apple did have in common was a deep understanding of the technology in their chosen fields of expertise and, more importantly, of where that technology was heading.

If, as an architect, you are to have a sensible conversation with users (AKA customers, clients, stakeholders et al) about how to create an architecture that addresses their needs, you not only need a good understanding of their business, you also need a deep understanding of what technology is available and where that technology is heading. This is a tall order for one person’s brain, which is why the job of an architect is uniquely fascinating (but also hard work). It’s also why, if you’re good at it, you’ll be in demand for a while yet.

Remember that, even though they may not know it, users are looking to you to guide them not only on what the knowns are but also on what the unknowns are. In other words, it’s your job to understand the art of the possible, not just the art of the common-or-garden.

On Being a Versatilist

For some time now I’ve been discussing the idea of a versatilist: someone whose numerous roles, assignments and experiences (across business, science and the arts) enable them to synthesize knowledge in new and exciting ways that may not have been possible if only one of these viewpoints were taken. I truly believe that this is such an interesting and fruitful topic that I have teamed up with David Evans, Principal Consultant and Owner at Koan Solutions Ltd, and created a new blog on the topic of versatilism and what being a versatilist means. Check out our new blog, The Versatilist Way, and give us your thoughts and ideas.