The Future of Software (Architecture)

If you hang out on the internet for long enough, you begin to pick up memes about what the future (of virtually anything) might be like. Bearing in mind this cautionary quote from Jay Rosen:

Nothing is the future of anything. And so every one of your “No, X is not the future of Y” articles is as witless as the originating hype.

Here is a meme I am detecting about the future of software, based on a number of factors, each of which has two opposing forces, either one of which may win out. Individually, none of these factors is particularly new; together, however, they may well be part of a perfect storm that is about to mark a radical shift in our industry. First, here are the five factors.

Peer-to-peer rather than hierarchical.
Chris Ames over at 8bit recently posted this article on The Future of Software, which really caught my attention. Once upon a time (i.e. prior to about 2006) software was essentially delivered as packages: you either took the whole enchilada or none of it at all. You bought, for example, the Microsoft Office suite and all its components, and they all played together in the way Microsoft demanded. There was a hierarchy within that suite that allowed all the parts to work together and provided a common look and feel, but it was essentially a monolithic package: you bought all the features even though you might only use 10% of them. As Ames points out, however, the future is loosely coupled, with specialty components (you can call them ‘apps’ if you wish) providing relatively simple functions (that is, doing one thing but doing it really, really well) and exposing well-defined programming interfaces.

This is really peer-to-peer. Tasks are partitioned between the applications and no one application has any greater privilege than another. More significantly, different providers can develop and distribute these applications, so no single vendor dominates the market. This also allows newer and smaller specialist companies to build and develop such components/applications.
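To make the idea concrete, here is a minimal sketch in Python of two single-purpose components composed only through their well-defined interfaces; the component names and implementations are invented for illustration, not taken from any particular vendor.

```python
from typing import Protocol


class Shortener(Protocol):
    """Well-defined interface: do one thing (shorten a URL) and do it well."""
    def shorten(self, url: str) -> str: ...


class Notifier(Protocol):
    """Well-defined interface: do one thing (deliver a message) and do it well."""
    def send(self, recipient: str, message: str) -> None: ...


class SimpleShortener:
    """One hypothetical provider's implementation of the Shortener interface."""
    def shorten(self, url: str) -> str:
        return "https://sho.rt/" + str(abs(hash(url)) % 100000)


class ConsoleNotifier:
    """Another hypothetical provider's implementation of the Notifier interface."""
    def send(self, recipient: str, message: str) -> None:
        print(f"to {recipient}: {message}")


def share_link(url: str, recipient: str, shortener: Shortener, notifier: Notifier) -> None:
    # Neither component has greater privilege than the other; either can be
    # swapped for a competing implementation without touching the rest.
    notifier.send(recipient, shortener.shorten(url))


share_link("https://example.com/a/very/long/path", "alice", SimpleShortener(), ConsoleNotifier())
```

The point of the sketch is simply that each component does one small thing behind a published interface, and the composition, not a monolithic suite, provides the overall function.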

A craft rather than an engineering discipline.
Since I started in this industry back in the early 80s (and probably way before that) the argument has raged as to whether the ‘industry’ of software development is an engineering discipline or a craft (or even an art). When I was at university (in the late 70s) software engineering did not exist as a field of study in any significant way and computer science was in its infancy. Whilst the industry has gone through several reincarnations over the years, and multiple attempts at enforcing an engineering discipline, the somewhat chaotic nature of the internet, where, let’s face it, a lot of software development is done, makes it feel more like a craft than ever before. This is fed by the fact that essentially anyone with a computer and a good idea (and a lot of stamina) can, it seems, set up shop and make, if not a fortune, at least a fairly decent living.

Free rather than paid for.
Probably the greatest threat any of us feels at present (at least those of us who work in creating digitised information in one form or another) is the fact that it seems someone, somewhere is prepared to do what you do, if not for free, then at least for a very small amount of money. Open source software is a good example. At the point of procurement (PoP) it is ‘free’ (and yes, I know total cost of ownership is another matter), something that many organisations are recognising and taking advantage of, including the UK government, whose stated strategy is to use open source code by default. At the same time many new companies, funded by venture capitalists hungry to find the next Facebook or Twitter, are pouring money, mainly dollars, into new ventures which, at least on the face of it, are offering a plethora of new capabilities for nothing. Check out these guys if you want a summary of the services/capabilities out there and how to join them all together.

Distributed in the cloud rather than packaged as an application.
I wanted to try not to get into talking about technology here, especially the so-called SMAC (Social, Mobile, Analytics and Cloud) paradigm. Cloud, both as a technology and a concept, is too important to ignore for these purposes, however. In many ways software, or at least software distribution, has come full circle with the advent of cloud. Once, all software was only available in a centrally managed data centre. The personal computer temporarily set it free, but cloud has now brought software back under some form of central control, albeit in a far more widely available form. Of course there are lots of questions around cloud computing still (Is it secure? What happens if the cloud provider goes out of business? What happens if my data gets into the wrong hands?). I think cloud is a technology that is here to stay and definitely a part of my meme.

Developed in a cooperative, agile way rather than a collaborative, process-driven (waterfall) one.
Here’s a nice quote from the web anthropologist, futurist and author Stowe Boyd that perfectly captures this: 

“In the collaborative business, people affiliate with coworkers around shared business culture and an approved strategic plan to which they subordinate their personal aims. But in a cooperative business, people affiliate with coworkers around a shared business ethos, and each is pursuing their own personal aims to which they subordinate business strategy. So, cooperatives are first and foremost organized around cooperation as a set of principles that circumscribe the nature of loose connection, while collaboratives are organized around belonging to a collective, based on tight connection. Loose, laissez-faire rules like ‘First, do no harm’, ‘Do unto others’, and ‘Hear everyone’s opinion before binding commitments’ are the sort of rules (unsurprisingly) that define the ethos of cooperative work, and which come before the needs and ends of any specific project.”

Check out the Valve model as an example of a cooperative.

So, if you were thinking of starting (or re-starting) a career in software (engineering, development, architecture, etc.), what does this meme mean to you? Here are a few thoughts:

  • Whilst we should not write off the large software product vendors and package suppliers, there is no doubt they are going to be in for a rough ride over the next few years whilst they adjust their business models to take into account the pressures from open source and the distribution mechanism brought on by the cloud. If you want to be a part of solving those conundrums then such companies will be an “interesting” place to be.
  • If you like the idea of cooperation and the concept of a shared business ethos, rather than collaboration, then searching out, or better still starting, a company that adheres to such a model might be the way to go. It will be interesting to see how well the concept of the cooperative scales though.
  • It seems as if we are finally nearing the nirvana of true componentisation (that is, software as units of functionality with a well-defined interface). The promised simplicity that can be offered through APIs, as shown in the diagram above, is pretty much with us (remember that the components shown in that diagram are all from different vendors). Individuals as well as small-medium enterprises (SMEs) that can take advantage of this paradigm are set to benefit from a new component gold rush.
  • On the craft/art versus engineering discipline of software, the need for systems that are highly resilient and available 99.9999% of the time has never been greater as these systems run more and more of our lives. Traditionally these have been systems developed by core teams following well tried and tested engineering disciplines. However, as the internet has enabled teams to become more distributed, where such systems are developed becomes less of an issue, and the two paradigms need not be mutually exclusive. Open source software is clearly a good example of applications that are developed in this distributed, craft-like way but to high degrees of ‘workmanship’. The problem with such software tends not to be in initial development but in ongoing maintenance and upgrades, which is where larger companies can step in and provide that extra level of assurance.

If these five factors really are part of a meme (that is a cultural unit for carrying ideas) then I expect it to evolve and mutate. Please feel free to be involved in that evolutionary process.

I Think Therefore I Blog

I recently delivered a short presentation called “I Think Therefore I Blog”. Whilst this does not specifically have anything to do with software architecture, I hope it might provide some encouragement to colleagues and others out there in the blogosphere as to why blogging can be good for you and why it’s worth pursuing, sometimes in the face of no or very little feedback!

Reason #1: Blogging helps you think (and reflect)
The author Joan Didion once said, “I don’t know what I think until I try to write it down.” Amazon CEO Jeff Bezos preaches the value of writing long-form prose to clarify thinking. Blogging, as a form of self-expression (and I’m not talking about blogs that just post references to other material), forces you to think by writing down your arguments and assumptions. This is the single biggest reason to do it, and I think it alone makes it worth it.

You have a lot of opinions and I’m sure you hold some of them pretty strongly. Pick one and write it up in a post — I’m sure your opinion will change somewhat, or at least become more nuanced. Putting something down on ‘paper’ means a lot of the uncertainty and vagueness goes away leaving you to defend your position for yourself. Even if no one else reads or comments on your blog (and they often don’t) you still get the chance to clarify your thoughts in your own mind, and as you write, they become even clearer.

The more you blog, the better you become at writing for your audience, managing your arguments, defending your position, thinking critically. I find that if I don’t understand something very well and want to learn more about it, writing a blog post about that topic focuses my thinking and helps me learn it better.

Reason #2: Blogging enforces discipline
A blog is a broadcast, not a publication. It is not static. Like a shark, if it stops moving, it dies. If you want your blog to last and grow you need to write regularly; it therefore enforces some form of discipline on your life.

Although I don’t always achieve this, I do find that writing a little, often, is better than trying to write a whole post in one go. Start a post with an idea, write it down, then add to it as your thoughts develop; you’ll soon have something you are happy with and are ready to publish. The key thing is to start as soon as you have an idea: capture it straight away before you forget it, then expand on it.

Reason #3: Blogging gives you wings
If you persist with blogging, you will discover that you develop new and creative ways to articulate what you want to say. As I write, I often search for alternative ways to express myself. This can be through images, quotes, a retelling of old experiences through stories, videos, audio, or useful hyperlinks to related web resources.

You have many ways to convey your ideas, and you are only limited by your own imagination. Try out new ways of communicating and take risks. Blogging is the platform that allows you to be creative.

Reason #4: Blogging creates personal momentum
Blogging puts you out there, for all the world to see, to be judged and criticized for both your words and how you structure them. It’s a bit intimidating, but I know the only way to become a better writer is to keep doing it.

Once you have started blogging, and you realise that you can actually do it, you will probably want to develop your skills further. Blogging can be time consuming, but the rewards are ultimately worth it. In my experience, I find myself breaking out of inertia to create some forward movement in my thinking, especially when I blog about topics that may be emotive, controversial or challenging. The photographer Henri Cartier-Bresson said “your first 10,000 photos are your worst”; a similar rule probably applies to blog posts!

I also believe blogging makes me better at my job. I can’t share my expertise or ideas if I don’t have any. My commitment to write 2-4 times per month keeps me motivated to experiment and discover new things that help me develop at work and personally.

Conversely, if I am not blogging regularly then I need to ask myself why that is. Is it because I’m not getting sufficient stimulus or ideas from what I am doing, and if so, what can I do to change that?

Reason #5: Blogging gives you (more) eminence
Those of us who work in the so-called knowledge economy need to build and maintain, for want of a better word, our ’eminence’. Eminence is defined as being “a position of superiority, high rank or fame”. What I mean by eminence here is having a position which others look to for guidance, expertise or inspiration. You are known as someone who can offer a point of view or an opinion. A blog gives you that platform and also allows you to engage in the real world.

So, there you have it, my reasons for blogging. As a postscript to this, I fortuitously came across this post as I was writing, which adds some kind of perspective to the act of blogging. I suggest you give the post a read, but here is a quote which gives a good summary:

…if you start blogging thinking that you’re well on your way to achieving Malcolm Gladwell’s career, you are setting yourself for disappointment. It will suck the enjoyment out of writing. Every completed post will be saddled with a lot of time staring at traffic stats that refuse to go up. It’s depressing.

I have to confess to doing the occasional bit of TSS (traffic stat staring) myself, but at the same time I have concluded there is no point in chasing the ratings, as they might have said in more traditional broadcast media. If you want to blog, do it for its own sake and (some of) the reasons above; don’t do it because you think you will become famous and/or rich (though don’t entirely close the door to that possibility).

This is for Everyone

Twenty years ago today, on 30th April 1993, CERN published a brief statement that made World Wide Web technology available on a royalty-free basis and changed the world forever. Here’s the innocuous piece of paper that shows this and that truly allowed Tim Berners-Lee, at the fantastic London 2012 Olympics opening ceremony, to claim “this is for everyone”. Over the past twenty years the web has become embedded in all of our lives in ways which most of us could never have dreamed of, and has probably given many of us in the software industry quite a secure (and for some, lucrative) living during that time. How fitting, then, that yesterday, almost 20 years to the day since CERN’s historic announcement, IBM announced a new appliance called IBM MessageSight, designed to help organizations manage and communicate with the billions of mobile devices and sensors found in systems such as automobiles, traffic management systems, smart buildings and household appliances: the so-called Internet of Things.

I’ve no idea what this announcement means in terms of capabilities, other than what is available in the press release; however, it is comforting to note that foundational to IBM MessageSight is its support of MQTT, which was recently proposed to become an OASIS standard, providing a lightweight messaging transport for communication in machine-to-machine (M2M) and mobile environments. Today, more than ever, enterprises and governments are demanding compliance with open standards rather than proprietary ones, so it is good to see that platforms such as MessageSight will be adhering to such standards.
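To show how lightweight MQTT is for this kind of machine-to-machine messaging, here is a minimal sketch using the open source Eclipse Paho Python client (paho-mqtt 1.x style API). The broker address, topic name and payload fields are assumptions made for the example and have nothing to do with MessageSight itself.

```python
# Minimal MQTT publish/subscribe sketch (pip install paho-mqtt).
# Broker host, topic and payload fields are illustrative only.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"          # hypothetical broker address
TOPIC = "vehicles/1234/telemetry"      # hypothetical topic for one connected car


def on_message(client, userdata, msg):
    # Called for every message received on the subscribed topic.
    reading = json.loads(msg.payload)
    print(f"{msg.topic}: speed={reading['speed_kph']} kph")


client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)

# Publish one small telemetry reading; the payload is only a few bytes,
# which is what makes MQTT suitable for constrained devices and networks.
client.publish(TOPIC, json.dumps({"speed_kph": 52, "fuel_pct": 73}))

client.loop_forever()
```

The protocol itself imposes very little overhead per message, which is why it suits sensors and devices on unreliable or low-bandwidth networks.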

A Step Too Far?

The trouble with technology, especially it seems computer technology, is that it keeps “improving”. I’ve written before about the ethics of the job that we as software architects do and whether or not we should always accept what we do without asking questions, not least of which should be: is this technology I am building, or being asked to build, a step too far?

Three articles have caught my eye this week which have made me ponder this question again.

The first is from the technology watcher and author Nicholas Carr, who talks about the Glass Collective, an investment syndicate made up of three companies (Google Ventures, Andreessen Horowitz and Kleiner Perkins Caufield & Byers) whose collective aim is to provide seed funding to entrepreneurs in the Glass ecosystem to help jump-start their ideas. For those not in the know about Glass, it is, according to the Google blog, all about “getting technology out of the way” and has the aim of building technology that is “seamless, beautiful and empowering“. Glass’s first manifestation is to be Internet-connected glasses that take photos, record video and offer hands-free Internet access right in front of a user’s eyes.

Clearly the type of augmented reality that Glass opens up could have huge educational benefits (think of walking around a museum or art gallery and getting information on what you are looking at piped right to you as you look at different works of art) as well as very serious privacy implications. For another view on this read the excellent blog post from my IBM colleague Rick Robinson on privacy in digital cities.

In his blog post Carr refers to a quote from Marshall McLuhan, made a half century ago and now seeming quite prescient:

Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.

The next thing to catch my eye (or actually several thousand things) was around the whole sorry tale of the Boston bombings. This post in particular, from the Wall Street Journal, discusses the role of Boston’s so-called fusion center that “helps investigators scour for connections among potential suspects, by mining hundreds of law enforcement sources around the region, ranging from traffic violations, to jail records and criminal histories, along with public data like property records.”

Whilst I doubt anyone would question the validity of using data in this way to track down people who have performed atrocities such as we saw in Boston, it does highlight just how much data is now collected on us and about us, much of which we have no control over being broadcast to the world.

Finally, on a much lighter note, we learn that the contraceptive maker Durex has released their “long distance, sexy time fundawear“. I’ll let you watch the first live trial video of this at your leisure (warning, not entirely work safe) but let’s just say here that it adds a whole new dimension to stroking the screen on your smartphone. Whilst I guess this one has no immediate privacy issues (providing the participants don’t wear their Google Glass at the same time as playing in their fundawear, at least), it does raise some interesting questions about how much we will let technology impinge on the most intimate parts of our lives.

So where does this latest foray of mine into digital privacy take us and what conclusions, if any, can we draw? Back in 2006 IBM Fellow and Chief Scientist Jeff Jonas posted an entry on his blog called Responsible Innovation: Designing for Human Rights in which he asks two questions: what if we are creating technologies that fly in the face of the Universal Declaration of Human Rights, and what if systems are designed without the essential characteristics needed to support basic privacy and civil liberties principles?

Jeff argues that if technologies could play a role in any of the arrest, detention, exile, interference, attacks or deprivation mentioned in the Universal Declaration of Human Rights then they must support disclosure of the source upon which such invasions are predicated. He suggests that systems that could affect one’s privacy or civil liberties should have a number of design characteristics built in that allow for some level of auditability as well as ensuring accuracy of the data they hold: characteristics such as every data point being associated with its data source, and every data point being associated with its author. Given this was written in 2006, when Facebook was only two years old and still largely confined to use in US universities, this is a hugely prescient and thoughtful piece of insight (which is why Jeff is an IBM Fellow, of course).
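As a minimal sketch of what one of those design characteristics might look like in practice, here is a hypothetical record structure (the field names are my own, not Jonas’s) in which provenance travels with every data point, so that any assertion about a person can be traced back to its source and author.

```python
# Illustrative only: a data point that carries its provenance with it,
# so any decision based on it can be audited back to source and author.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class DataPoint:
    subject: str        # who or what the assertion is about
    attribute: str      # e.g. "address", "employer"
    value: str
    source: str         # the system or dataset the value came from
    author: str         # the person or process that recorded it
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


point = DataPoint(
    subject="person:12345",
    attribute="address",
    value="10 Example Street",
    source="property-records",   # hypothetical source system
    author="batch-import-07",    # hypothetical authoring process
)

# An auditor can now ask: where did this assertion come from, and who made it?
print(f"{point.attribute} asserted by {point.author} from {point.source}")
```

The design choice is simply that provenance is mandatory at the level of the individual data point, not bolted on afterwards at the level of the whole dataset.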

So, there’s an idea! New technologies, when they come along, should be examined to ensure they have built-in safeguards so that the rights granted to us all in the Universal Declaration of Human Rights are not infringed or taken away from us. How would this be done and, more importantly of course, what bodies or organisations would we empower to ensure such safeguards were both effective and enforceable? No easy or straightforward answers here, but certainly a topic for some discussion I believe.

Steal Like an Artist

David Bowie is having something of a resurgence this year. Not only has he released a critically acclaimed new album, The Next Day, there is also an exhibition of the artefacts from his long career at the Victoria & Albert museum in London. These include handwritten lyrics, original costumes, fashion, photography, film, music videos, set designs and Bowie’s own instruments.

David Bowie was a collector. Not only did he collect, he also stole. As he said in a Playboy interview back in 1976:

The only art I’ll ever study is stuff that I can steal from.

He even steals from himself; check out the cover of his new album to see what I mean.

Austin Kleon has written a whole book on this topic, Steal Like an Artist, in which he makes the case that nothing is original and that nine out of ten times when someone says that something is new, it’s just that they don’t know the original sources involved. Kleon goes on to say:

What a good artist understands is that nothing comes from nowhere. All creative work builds on what came before. Nothing is completely original.

So what on earth has this got to do with software architecture?

Eighteen years ago one of the all-time great IT books was published. Design Patterns – Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides introduced the idea of patterns, originally a construct used by the building architect Christopher Alexander, to the IT world at large. As the authors say in the introduction to their book:

One thing expert designers know not to do is solve every problem from first principles. Rather, they reuse solutions that have worked for them in the past. When they find a good solution, they use it again and again. Such experience is part of what makes them experts.

So expert designers ‘steal’ work they have already used before. The idea of the Design Patterns book was to publish patterns that others had found to work for them so they could be reused (or stolen). The patterns in Design Patterns were small design elements that could be used when building object-oriented software. Although they included code samples, they were not directly reusable without adaptation, and coding, in a chosen programming language.
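To illustrate what ‘stealing’ a published pattern looks like in practice, here is a minimal sketch of the Observer pattern from the book, adapted to Python rather than the C++ and Smalltalk of the original; the class names and the stock-ticker scenario are my own invention.

```python
# A small adaptation of the Observer pattern: a subject notifies registered
# observers of state changes without knowing what they do with the notification.
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer.update(event)


class StockTicker(Subject):
    def set_price(self, symbol, price):
        # A state change in the subject triggers notification of all observers.
        self.notify({"symbol": symbol, "price": price})


class PriceLogger:
    def update(self, event):
        print(f"LOG: {event['symbol']} is now {event['price']}")


class PriceAlert:
    def __init__(self, threshold):
        self.threshold = threshold

    def update(self, event):
        if event["price"] > self.threshold:
            print(f"ALERT: {event['symbol']} above {self.threshold}")


ticker = StockTicker()
ticker.attach(PriceLogger())
ticker.attach(PriceAlert(threshold=100))
ticker.set_price("IBM", 105.5)
```

The pattern itself is the stolen part; the adaptation (the language, the names, the problem it is applied to) is where the designer adds something of their own.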

Fast forward eighteen years and the concept of patterns is alive and well but has reached a new level of abstraction and therefore reuse. Expert Integrated Systems like IBM’s PureApplication System™ use patterns to provide fast, high-quality deployments of sophisticated environments that enable enterprises to get new business applications up and running as quickly as possible. Whereas the design patterns from the book by Gamma et al were design elements that could be used to craft complete programs, the PureApplication System patterns are collections of virtual images that form a complete system. For example, the Business Process Management (BPM) pattern includes an HTTP server, a clustered pair of BPM servers, a cluster administration server, and a database server. When an administrator deploys this pattern, all the inter-connected parts are created and ready to run together. Time to deploy such systems is reduced from days or even, in some cases, weeks to just hours.
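Purely as an illustration of the idea (this is not the actual PureApplication System pattern format), a deployable pattern of this kind can be thought of as a declarative description of a whole topology, which the platform then instantiates as a set of inter-connected virtual images. The sketch below uses the parts of the BPM example above, with made-up image names.

```python
# Illustrative only: a "pattern" as a declarative topology description that a
# platform could instantiate in one step. Image names are hypothetical.
bpm_pattern = {
    "name": "business-process-management",
    "nodes": [
        {"role": "http-server",   "image": "http-server", "count": 1},
        {"role": "bpm-server",    "image": "bpm-server",  "count": 2},  # clustered pair
        {"role": "cluster-admin", "image": "bpm-admin",   "count": 1},
        {"role": "database",      "image": "db-server",   "count": 1},
    ],
    "connections": [
        ("http-server", "bpm-server"),
        ("cluster-admin", "bpm-server"),
        ("bpm-server", "database"),
    ],
}


def deploy(pattern):
    """Pretend deployment: each node would become a provisioned virtual image."""
    for node in pattern["nodes"]:
        for i in range(node["count"]):
            print(f"provisioning {node['role']} #{i + 1} from image '{node['image']}'")
    for src, dst in pattern["connections"]:
        print(f"wiring {src} -> {dst}")


deploy(bpm_pattern)
```

The shift in abstraction is the point: the unit of reuse is no longer a fragment of a program but a description of an entire running system.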

Some may say that the creation and proliferation of such patterns is another insidious step to the deskilling of our profession. If all it takes to deploy a complex BPM system is just a few mouse clicks then where does that leave those who once had to design such systems from scratch?

Going back to our art stealing analogy, a good artist does not just steal the work of others and pass it off as their own (at least most of them don’t); rather, they use the ideas contained in that work and build on them to create something new and unique (or at least different). Rather than having to create new stuff from scratch they adopt the ideas that others have come up with, then adapt them to make their own creations. These creations themselves can then be used by others and further adapted, and thus the whole thing becomes a sort of virtuous circle: adopt, then adapt.

A good architect, just like a good artist, should not fear patterns but should embrace them, knowing that they free him up to focus on creating something that is new and of real (business) value. Building on the good work that others have done before us is something we should all be encouraged to do more of. As Salvador Dalí said:

Those who do not want to imitate anything, produce nothing.

Yet More Architecting

Previously  I have discussed the use of the word ‘architecting’ and whether it is a valid word when describing the thing that architects do.

One of the people who commented on that blog entry informed me that the IEEE have updated the architecture standard IEEE 1471, which describes the architecture of a software-intensive system, to ISO/IEC/IEEE 42010:2011, Systems and software engineering — Architecture description. They have also slightly updated the definition of the word architecting to:

The process of conceiving, defining, expressing, documenting, communicating, certifying proper implementation of, maintaining and improving an architecture throughout a system’s life cycle (i.e., “designing”).

Interesting that they have added that last bit in brackets, “i.e., designing”. I always fall back on the words used by Grady Booch to resolve that other ongoing discussion about whether the word architecture is valid at all in describing what we do: “all architecture is design but not all design is architecture”.

A Tale of Two Presentations

Popular consensus would seem to have it that the 2007 presentation by Steve Jobs at MacWorld where he unveiled the iPhone is one of the all-time best business presentations ever. Not just in terms of the delivery but also in terms of the impact it had on the world.

As a stark contrast, according to Ron Galloway in the Huff Post Business Blog, a recent presentation by Sony introducing the PS4 will likely go down as one of the worst business presentations ever. I’ve not seen the Sony presentation, but according to Wired they held reporters hostage for two hours and never actually showed them the new console, just the controller, revealing very little about what the new console would be like.

Amazing that a company as large and influential as Sony can make so many fundamental presentation mistakes, but a salutary lesson to us all, I think.

There is some very good presentation advice at the end of the Huff Post blog by the way. So useful it’s worth cutting out and sticking to your presentation notes.

  1. Respect your audience and their time.
  2. Get on stage.
  3. Make your assertion.
  4. Support it with visual evidence.
  5. Repeat your assertion.
  6. Leave the stage.

A Tale of Two Cities

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair…

So begins A Tale of Two Cities, written by Charles Dickens in 1859. The novel depicts the plight of the French peasantry, demoralized by their aristocracy in the years leading up to the revolution, and draws many unflattering social parallels with life in London during the same time period.

This week I’ve come across two very interesting and contrasting views of what smarter cities might look like, which could well be summed up by the opening words of Dickens’ novel. One very new (February 2013) and one quite old (September 2008), they offer respectively a utopian and a dystopian view of the future of our cities.

The first view comes from the engineering and architectural consultancy Arup. Their internal think-tank, Foresight + Innovation, has produced a report called It’s Alive – Can you imagine the urban building of the future? In the report the author, Josef Hargrave, imagines what life will be like in 2050 if, as is predicted, 75% of the planet’s 9 billion population are living in cities. Hargrave asks:

As city living takes center stage, what will we come to expect from the design and function of urban structures and buildings?

In Hargrave’s vision of future cities:

  • The buildings in our cities will be manipulated in real time, and the components they are made from will be part of the internet of things. They will be flexible structures whose components can be upgraded and rearranged over time.
  • Buildings will understand an individual’s personal preferences, possibly at the level of their genetic composition.
  • Buildings will be more akin to living organisms and react to external conditions through a series of feedback loops. They will function as a “synthetic and highly sensitive nervous system”.
  • Buildings will not only be made from sustainable resources but will become an integral component of urban food production, containing areas for food production as well as bio-fuel cells that provide energy for the building.
  • Buildings will be integrated with the systems around them (green spaces, public transport and smart energy grids).

All of the above are obviously going to require a smart infrastructure of sensors generating data that can be analysed in real time and reacted to, both by the building’s systems and by the individuals who live and work in them. A nice job not only for the building architect but also for the IT architect who needs to design those systems and make sure they all work together.
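As a rough sketch of the kind of feedback loop such a “nervous system” implies (the sensor names, readings and thresholds here are invented for illustration), the essential shape is: read the sensors, analyse the readings, and actuate the building’s systems in response.

```python
# Illustrative only: one feedback loop for a sensor-equipped building.
# Sensor names, simulated readings and thresholds are invented for the example.
import random
import time


def read_sensors():
    # In a real building these readings would come from networked devices,
    # the "internet of things" components the report describes.
    return {"occupancy": random.randint(0, 400), "co2_ppm": random.randint(400, 1400)}


def decide(readings):
    actions = []
    if readings["co2_ppm"] > 1000:
        actions.append("increase ventilation")
    if readings["occupancy"] < 20:
        actions.append("dim lighting in unoccupied zones")
    return actions


def actuate(actions):
    for action in actions:
        print(f"building control: {action}")


for _ in range(3):        # three iterations for the demo
    actuate(decide(read_sensors()))
    time.sleep(1)         # a real control loop would run continuously
```

Scale that loop up across thousands of sensors and many interacting systems and the integration and real-time analytics work becomes exactly the IT architect’s problem described above.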

The other future vision I stumbled across this week is not quite as reassuring or cozy. Written in 2008, The Internet of Things – A critique of the ambient technology and the all seeing network of RFID is a series of essays describing a slightly more alarming world in which large numbers of interconnected devices (AKA the Internet of Things) are used for the surreptitious monitoring of the earth’s citizens.

Mark Weiser, Chief Scientist at Xerox PARC and the so-called father of ubiquitous computing, once said:

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.

As the name of this paper suggests, it largely focuses on the threat of ubiquitous RFID devices. At the time the paper was written, smartphones like the iPhone, introduced one year earlier, were only just taking off and the tracking and monitoring capabilities of these devices were in their infancy. The paper provides a series of warnings about what might happen when computers disappear completely and really become fully integrated into our lives.

For example, at one level there might be benefits from tracking John, who goes to shop A and buys object B, then visits shop C and buys object D, because we know those are also the ingredients for making a bomb. For some governments, however, if shop A happens to be the offices of an “illegal” human rights organisation and shop C is actually an outside public space where an organised march is taking place, arresting John may be for a different purpose.

I guess the second city tale could be summed up by asking: when the environment becomes the interface, where is the off switch?

Whichever vision comes to pass (and it is most likely to be some combination of the two), as technologists we have it within our power to shape our future for the good, not the worse. In the United Kingdom, where I live, we sometimes work ourselves into a bit of a frenzy over the machinations of government and industry, whether it be the latest sex scandal, expenses misconduct or banking wrongdoing. Compared with many countries, however, we do have a relatively free press through which we eventually learn of these scandals. We also have unfettered access to the internet and tools like this where we can make our voices heard. It is incumbent on us all, therefore, to express concerns where they are valid and to make sure our governments and business leaders are held to account and use technology wisely. I certainly know which of these two cities I would rather live in.

Ten Things Users Don’t Care About

I recently came across a blog post called Things users don’t care about at the interface and product design blog bokardo. It struck me this was the basis of a good list of things that end users of the systems we architect may also not care about, and might therefore help us focus on the things that matter in a system development project. Here, then, is my list of ten things users don’t (or shouldn’t) care about:

  1. How long you spent on it. Of course, if you spent so long you didn’t actually deliver anything this is another problem. However users still won’t care, it’s just they’ll never know they are missing something (unless your competitor beat you to it).
  2. How hard it was to implement. You may be immensely proud of how your team overcame tremendous odds to solve that really tricky programming problem. However all users are concerned about is whether the thing actually works and makes their lives a little easier. Sometimes just good enough is all that is required.
  3. How clean your architecture is. As architects we strive for purity in our designs and love to follow good architectural principles. Of course these are important because good architectural practice usually leads to more robust and resilient systems. The message here is not to go too overboard on this and don’t strive for architectural purity over the ability to ship something.
  4. How extensible it is. Extensibility (and here we can add a number of other non-runtime qualities such as scalability, portability, testability etc.) is often something we sweat a lot over when designing a system. These are things that are important to people who need to maintain and run the system, but not to end users, who just want to use the system to get their job done and go home at a reasonable time! Although we might like to place great emphasis on the longevity our systems might have (which these qualities often ensure), sometimes technology just marches on and makes these systems redundant before they ever get the chance to be upgraded. The message here is that although these qualities are important, they need to be put into the broader perspective of the likely lifetime of the systems we are building.
  5. How amazing the next version will be. Ah yes, there will always be another version that really makes life easier and does what was probably promised in the first place! The fact is there will be no “next version” if version 1.0 does not do enough to win over hearts and minds (which actually does not always have to be that much).
  6. What you think they should be interested in. As designers of systems we often give users what we think they would be interested in rather than what they actually want. Be careful here, you have to be very lucky or very prescient or like the late Steve Jobs to make this work.
  7. How important this is to you. Remember all those sleepless nights you spent worrying over that design problem that would not go away? Well, guess what, once the system is out there no one cares how important that was to you. See item 2.
  8. What development process you followed. The best development process is the one that ships your product in a reasonably timely fashion and within the budget that was set for the project. How agile it is or what documents do or don’t get written does not matter to the humble user.
  9. How much money was spent in development. Your boss, your company or your client cares very much about this, but the financial cost of a system is something that users don’t see and most times could not possibly comprehend. Spend your time wisely in focusing on what will make a difference to the user’s experience and let someone else sweat the financial stuff.
  10. The prima donna(s) who worked on the project. Most of us have worked with such people. The ones who, having worked on one or two successful projects, think they are ready to project-manage the next moon landing, or design the system that will solve world hunger, or can turn out code faster than Mark Zuckerberg on steroids. What’s important on a project is team effort, not individuals with overly-inflated egos. Make use of these folk when you can but don’t let them overpower the others and decimate team morale.

Happy 2013 and Welcome to the Fifth Age!

I would assert that the modern age of commercial computing began roughly 50 years ago with the introduction of the IBM 1401, one of the world’s first fully transistorized computers, announced in October 1959. By the mid-1960s almost half of all computer systems in the world were 1401-type machines. During the subsequent 50 years we have gone through a number of different ages of computing, each corresponding to the major underlying architecture that was dominant during that age or period. The ages, with their (very) approximate time spans, are:

  • Age 1: The Mainframe Age (1960 – 1975)
  • Age 2: The Mini Computer Age (1975 – 1990)
  • Age 3: The Client-Server Age (1990 – 2000)
  • Age 4: The Internet Age (2000 – 2010)
  • Age 5: The Mobile Age (2010 – 20??)

Of course, the technologies from each age have never completely gone away; they are just not the predominant driving IT force any more (there are still an estimated 15,000 mainframe installations world-wide, so mainframe programmers are not about to see the end of their careers any time soon). Equally, there are other technologies bubbling under the surface, running alongside and actually overlapping these major waves. For example, networking has evolved from providing the ability to connect a “green screen” to a centralised mainframe, and then a mini, to the ability to connect thousands, then millions and now billions of devices. The client-server and internet ages were dependent on cheap and ubiquitous desktop personal computers, whilst the current mobile age is driven by offspring of the PC, now unshackled from the desktop, which run the same applications (and much, much more) on smaller and smaller devices.

These ages are also characterized by what we might term a decoupling and democratization of the technology. The mainframe age saw the huge and expensive beasts locked away in corporate headquarters and only accessible by qualified members of staff of those companies. Contrast this to the current mobile age where billions of people have devices in their pockets that are many times more powerful than the mainframe computers of the first age of computing and which allow orders of magnitude increases in connectivity and access to information.

Another defining characteristic of each of these ages is the major business uses the technology was put to. The mainframe age was predominantly about centralised systems running companies’ core business functions that were financially worthwhile to automate or manually complex to administer (payroll, core accounting functions etc.). The mobile age is characterised by mobile enterprise application platforms (MEAPs) and apps which are cheap enough to be used just once and sometimes perform a single or relatively small number of functions.

Given that each of the ages of computing to date has run for 10 – 15 years and the current mobile age is only two years old, what predictions are there for how this age might pan out, and what should we, as architects, be focusing on and thinking about? As you might expect at this time of year, there is no shortage of analyst reports providing all sorts of predictions for the coming year. This joint Appcelerator/IDC Q4 2012 Mobile Developer Report particularly caught my eye as it polled almost 3000 Appcelerator Titanium developers on their thoughts about what is hot in the mobile, social and cloud space. The reason it is important to look at what platforms developers are interested in is, of course, that they can make or break whether those platforms grow and survive over the long term. Microsoft Windows and Apple’s iPhone both took off because developers flocked to those platforms and developed applications for them in preference to competing platforms (anyone remember OS/2?).

As you might expect, most developers’ preference is to develop for the iOS platforms (iPhone and iPad), closely followed by Android phones and tablets, with nearly a third also developing using HTML5 (i.e. cross-platform). Windows phones and tablets are showing some increased interest, but Blackberry’s woes would seem to be increasing, with a slight drop-off in developer interest in those platforms.

Nearly all developers (88.4%) expected that they would be developing for two or more OSes during 2013. Now that consumers have an increasing number of viable platforms to choose from, the ability to build a mobile app that is available cross-platform is a must for a successful developer.

Understanding mobile platforms and how they integrate with the enterprise is one of the top skills that will be needed over the next few years as the mobile age really takes off. (Consequently, it is also going to require employers to work more closely with universities to ensure those skills are obtained.)

In many ways the fifth age of computing has actually taken us back several years (pre-internet age) to when developers had to support a multitude of operating systems and computer platforms. As a result many MEAP providers are investing in cross-platform development tools, such as IBM’s Worklight, which is also part of the IBM Mobile Foundation. This platform also adds intelligent end point management (addressing the issues of security, complexity and BYOD policies) together with an integration framework that enables companies to rapidly connect their hybrid world of public clouds, private clouds, and on-premise applications.

For now then, at least until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in. Here’s to a complex 2013!