The Future of Software (Architecture)

If you hang out on the internet for long enough you begin to pick up memes about what the future (of virtually anything) might be like. Bear in mind, though, this cautionary quote from Jay Rosen:

Nothing is the future of anything. And so every one of your “No, X is not the future of Y” articles is as witless as the originating hype.

Here is a meme I am detecting about the future of software, based on a number of factors, each of which has two opposing forces, either one of which may win out. Individually, none of these factors is particularly new; together, however, they may well be part of a perfect storm that is about to mark a radical shift in our industry. First, here are the five factors.

Peer to peer rather than hierarchical.
Chris Ames over at 8bit recently posted this article on The Future of Software which really caught my attention. Once upon a time (i.e. prior to about 2006) software was essentially delivered as packages. You either took the whole enchilada or none of it at all. You bought, for example, the Microsoft Office suite and all its components, and they all played together in the way Microsoft demanded. There was a hierarchy within that suite that allowed all the parts to work together and provided a common look and feel, but it was essentially a monolithic package: you bought all the features even though you might only use 10% of them. As Ames points out, however, the future is loosely coupled, with specialty components (you can call them 'apps' if you wish) providing relatively simple functions (that is, doing one thing but doing it really, really well) that also provide well defined programming interfaces.

This is really peer-to-peer. Tasks are partitioned between the applications and no one application has any greater privilege than another. More significantly, different providers can develop and distribute these applications, so no single vendor dominates the market. This also allows newer and smaller specialist companies to build and develop such components/applications.
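To make the "do one thing well behind a well defined interface" idea concrete, here is a minimal sketch in Python (every name in it is hypothetical) of two peer components composed purely through the interfaces they expose:

    # A minimal sketch of 'do one thing well' components behind small,
    # well defined interfaces; all names here are hypothetical.
    from typing import Protocol

    class Spellchecker(Protocol):
        def check(self, text: str) -> list[str]: ...

    class WordCounter(Protocol):
        def count(self, text: str) -> int: ...

    class SimpleCounter:
        def count(self, text: str) -> int:
            return len(text.split())

    class NoopSpellchecker:
        def check(self, text: str) -> list[str]:
            return []  # a real component would return the misspelt words

    # The composing function treats both components as peers: neither knows
    # about, nor is privileged over, the other, and either could come from
    # a different vendor so long as it honours the interface.
    def report(text: str, checker: Spellchecker, counter: WordCounter) -> str:
        return f"{counter.count(text)} words, {len(checker.check(text))} errors"

    print(report("the quick brown fox", NoopSpellchecker(), SimpleCounter()))

Swapping in another vendor's spellchecker is then just a matter of passing a different object that honours the same interface; no hierarchy, and no monolithic suite.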

A craft rather than an engineering discipline.
Since I started in this industry back in the early 80s (and probably way before that) the argument has raged as to whether the 'industry' of software development is an engineering discipline or a craft (or even an art). When I was at university (late 70s) software engineering did not exist as a field of study in any significant way and computer science was in its infancy. Whilst the industry has gone through several reincarnations over the years, and multiple attempts at enforcing an engineering discipline, the somewhat chaotic nature of the internet, where, let's face it, a lot of software development is done, makes it feel more like a craft than ever before. This is fed by the fact that essentially anyone with a computer and a good idea (and a lot of stamina) can, it seems, set up shop and make, if not a fortune, at least a fairly decent living.

Free rather than paid for.
Probably the greatest threat any of us feels at present (at least those of us whose work involves creating digitised information in one form or another) is that someone, somewhere is prepared to do what you do, if not for free, then at least for a very small amount of money. Open source software is a good example. At the point of procurement (PoP) it is 'free' (and yes, I know total cost of ownership is another matter), something that many organisations are recognising and taking advantage of, including the UK government, whose stated strategy is to use open source code by default. At the same time many new companies, funded by venture capitalists hungry to find the next Facebook or Twitter, are pouring money, mainly dollars, into new ventures which, at least on the face of it, are offering a plethora of new capabilities for nothing. Check out these guys if you want a summary of the services/capabilities out there and how to join them all together.

Distributed in the cloud rather than packaged as an application.
I wanted to try not to get into talking about technology here, especially the so-called SMAC (Social, Mobile, Analytics and Cloud) paradigm. Cloud, both as a technology and a concept, is too important to ignore for these purposes, however. In many ways software, or at least software distribution, has come full circle with the advent of cloud. Once, all software was only available in a centrally managed data centre. The personal computer temporarily set it free, but cloud has now brought software back under some form of central control, albeit in a far more widely available form. Of course there are lots of questions around cloud computing still (Is it secure? What happens if the cloud provider goes out of business? What happens if my data gets into the wrong hands?). I think cloud is a technology that is here to stay and definitely a part of my meme.

Developed in a cooperative, agile way rather than a collaborative, process driven (waterfall) one.
Here’s a nice quote from the web anthropologist, futurist and author Stowe Boyd that perfectly captures this: 

“In the collaborative business, people affiliate with coworkers around shared business culture and an approved strategic plan to which they subordinate their personal aims. But in a cooperative business, people affiliate with coworkers around a shared business ethos, and each is pursuing their own personal aims to which they subordinate business strategy. So, cooperatives are first and foremost organized around cooperation as a set of principles that circumscribe the nature of loose connection, while collaboratives are organized around belonging to a collective, based on tight connection. Loose, laissez-faire rules like ‘First, do no harm’, ‘Do unto others’, and ‘Hear everyone’s opinion before binding commitments’ are the sort of rules (unsurprisingly) that define the ethos of cooperative work, and which come before the needs and ends of any specific project.”

Check out the Valve model as an example of a cooperative.

So, if you were thinking of starting (or re-starting) a career in software (engineering, development, architecture, etc), what does this meme mean to you? Here are a few thoughts:

  • Whilst we should not write off the large software product vendors and package suppliers, there is no doubt they are going to be in for a rough ride over the next few years whilst they adjust their business models to take into account the pressures from open source and the distribution mechanisms brought on by the cloud. If you want to be a part of solving those conundrums then spending time working for such companies will be "interesting".
  • If you like the idea of cooperation and the concept of a shared business ethos, rather than collaboration, then searching out, or better still starting, a company that adheres to such a model might be the way to go. It will be interesting to see how well the concept of the cooperative scales though.
  • It seems as if we are finally nearing the nirvana of true componentisation (that is, software as units of functionality with a well defined interface). The promised simplicity that can be offered through APIs, as shown in the diagram above, is pretty much with us (remember the components shown in that diagram are all from different vendors). Individuals as well as small-medium enterprises (SMEs) that can take advantage of this paradigm are set to benefit from a new component gold rush.
  • On the craft/art versus engineering discipline of software, the need for systems that are highly resilient and available 99.9999% of the time has never been greater as these systems run more and more of our lives. Traditionally these have been systems developed by core teams following well tried and tested engineering disciplines. However, as the internet has enabled teams to become more distributed, where such systems are developed becomes less of an issue, and these two paradigms need not be mutually exclusive. Open source software is clearly a good example of applications that are developed locally but to high degrees of 'workmanship'. The problem with such software tends not to be in initial development but in ongoing maintenance and upgrades, which is where larger companies can step in and provide that extra level of assurance.

If these five factors really are part of a meme (that is, a cultural unit for carrying ideas) then I expect it to evolve and mutate. Please feel free to be involved in that evolutionary process.

Yet More Architecting

Previously I have discussed the use of the word 'architecting' and whether it is a valid word when describing the thing that architects do.

One of the people who commented on that blog entry informed me that the IEEE have updated the architecture standard IEEE 1471, which describes the architecture of a software-intensive system, to ISO/IEC/IEEE 42010:2011, Systems and software engineering — Architecture description. They have also slightly updated the definition of the word architecting to:

The process of conceiving, defining, expressing, documenting, communicating, certifying proper implementation of, maintaining and improving an architecture throughout a system’s life cycle (i.e., “designing”).

Interesting that they have added that last bit in brackets: "i.e., designing". I always fall back on the words used by Grady Booch to resolve that other ongoing discussion about whether the word architecture is valid at all in describing what we do: "all architecture is design but not all design is architecture".

Ten Things Users Don't Care About

I recently came across a blog post called Things users don't care about at the interface and product design blog bokardo. It struck me that this was the basis of a good list of things the end users of the systems we architect may also not care about, and that it might therefore help us focus on the things that matter in a system development project. Here then, is my list of ten things users don't (or shouldn't) care about:

  1. How long you spent on it. Of course, if you spent so long you didn't actually deliver anything, that is another problem. However, users still won't care; it's just that they'll never know they are missing something (unless your competitor beat you to it).
  2. How hard it was to implement. You may be immensely proud of how your team overcame tremendous odds to solve that really tricky programming problem. However, all users are concerned about is whether the thing actually works and makes their lives a little easier. Sometimes just good enough is all that is required.
  3. How clean your architecture is. As architects we strive for purity in our designs and love to follow good architectural principles. Of course these are important because good architectural practice usually leads to more robust and resilient systems. The message here is not to go overboard: don't strive for architectural purity over the ability to ship something.
  4. How extensible it is. Extensibility (and here we can add a number of other non-runtime qualities such as scalability, portability, testability etc.) is often something we sweat a lot over when designing a system. These things are important to the people who need to maintain and run the system, but not to end users, who just want to use the system to get their job done and go home at a reasonable time! Although we might like to place great emphasis on the longevity our systems might have (which these qualities often ensure), sometimes technology just marches on and makes these systems redundant before they ever get the chance to be upgraded. The message here is that although these qualities are important, they need to be put into the broader perspective of the likely lifetime of the systems we are building.
  5. How amazing the next version will be. Ah yes, there will always be another version that really makes life easier and does what was probably promised in the first place! The fact is there will be no “next version” if version 1.0 does not do enough to win over hearts and minds (which actually does not always have to be that much).
  6. What you think they should be interested in. As designers of systems we often give users what we think they would be interested in rather than what they actually want. Be careful here, you have to be very lucky or very prescient or like the late Steve Jobs to make this work.
  7. How important this is to you. Remember all those sleepless nights you spent worrying over that design problem that would not go away? Well, guess what: once the system is out there no one cares how important that was to you. See item 2.
  8. What development process you followed. The best development process is the one that ships your product in a reasonably timely fashion and within the budget that was set for the project. How agile it is or what documents do or don’t get written does not matter to the humble user.
  9. How much money was spent in development. Your boss, your company and your client care very much about this, but the financial cost of a system is something that users don't see and most times could not possibly comprehend. Spend your time wisely, focusing on what will make a difference to the user's experience, and let someone else sweat the financial stuff.
  10. The prima donna(s) who worked on the project. Most of us have worked with such people. The ones who, having worked on one or two successful projects, think they are ready to project manage the next moon landing or design the system that will solve world hunger or can turn out code faster than Mark Zuckerberg on steroids. What's important on a project is team effort, not individuals with overly-inflated egos. Make use of these folk when you can but don't let them overpower the others and decimate team morale.

Happy 2013 and Welcome to the Fifth Age!

I would assert that the modern age of commercial computing began roughly 50 years ago with the introduction of the IBM 1401, one of the world's first fully transistorized computers, announced in October 1959. By the mid-1960s almost half of all computer systems in the world were 1401-type machines. During the subsequent 50 years we have gone through a number of different ages of computing, each corresponding to the major underlying architecture that was dominant during that period. The ages, with their (very) approximate time spans, are:

  • Age 1: The Mainframe Age (1960 – 1975)
  • Age 2: The Mini Computer Age (1975 – 1990)
  • Age 3: The Client-Server Age (1990 – 2000)
  • Age 4: The Internet Age (2000 – 2010)
  • Age 5: The Mobile Age (2010 – 20??)

Of course, the technologies from each age have never completely gone away; they are just no longer the predominant driving IT force (there are still estimated to be some 15,000 mainframe installations world-wide, so mainframe programmers are not about to see the end of their careers any time soon). Equally, there are other technologies bubbling under the surface, running alongside and actually overlapping these major waves. For example, networking has evolved from providing the ability to connect a "green screen" to a centralised mainframe, and then a mini, to the ability to connect thousands, then millions and now billions of devices. The client-server and internet ages were dependent on cheap and ubiquitous desktop personal computers, whilst the current mobile age is driven by offspring of the PC, now unshackled from the desktop, which run the same applications (and much, much more) on smaller and smaller devices.

These ages are also characterized by what we might term a decoupling and democratization of the technology. The mainframe age saw the huge and expensive beasts locked away in corporate headquarters and only accessible by qualified members of staff of those companies. Contrast this to the current mobile age where billions of people have devices in their pockets that are many times more powerful than the mainframe computers of the first age of computing and which allow orders of magnitude increases in connectivity and access to information.

Another defining characteristic of each of these ages is the major business uses that the technology was put to. The mainframe age was predominantly about centralised systems running companies' core business functions that were financially worthwhile to automate or manually complex to administer (payroll, core accounting functions etc). The mobile age is characterised by mobile enterprise application platforms (MEAPs) and apps which are cheap enough to be used just once and sometimes perform only a single function or relatively few functions.

Given that each of the ages of computing to date has run for 10–15 years and the current mobile age is only two years old, what predictions are there for how this age might pan out, and what should we, as architects, be focusing on and thinking about? As you might expect at this time of year there is no shortage of analyst reports providing all sorts of predictions for the coming year. This joint Appcelerator/IDC Q4 2012 Mobile Developer Report particularly caught my eye as it polled almost 3,000 Appcelerator Titanium developers on their thoughts about what is hot in the mobile, social and cloud space. The reason it is important to look at what platforms developers are interested in is, of course, that they can make or break whether those platforms grow and survive over the long term. Microsoft Windows and Apple's iPhone both took off because developers flocked to those platforms and developed applications for them in preference to competing platforms (anyone remember OS/2?).

As you might expect, most developers' preference is to develop for the iOS platforms (iPhone and iPad), closely followed by Android phones and tablets, with nearly a third also developing using HTML5 (i.e. cross-platform). Windows phones and tablets are showing some increased interest, but BlackBerry's woes would seem to be increasing, with a slight drop-off in developer interest in those platforms.

Nearly all developers (88.4%) expected that they would be developing for two or more OSes during 2013. Now that consumers have an increasing number of viable platforms to choose from, the ability to build a mobile app that is available cross-platform is a must for a successful developer.

Understanding mobile platforms and how they integrate with the enterprise is one of the top skills that will be needed over the next few years as the mobile age really takes off. (Consequently it is also going to require employers to work more closely with universities to ensure those skills are obtained.)

In many ways the fifth age of computing has actually taken us back several years (pre-internet age) when developers had to support a multitude of operating systems and computer platforms. As a result many MEAP providers are investing in cross platform development tools, such as IBM’s Worklight which is also part of the IBM Mobile Foundation. This platform also adds intelligent end point management (that addresses the issues of security, complexity and BYOD policies) together with an integration framework that enables companies to rapidly connect their hybrid world of public clouds, private clouds, and on-premise applications.

For now then, at least until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in. Here’s to a complex 2013!

Do Smarter Cities Make their Citizens Smarter?

This is an update of a post I originally put in my blog The Versatilist Way. I've removed the reference to the now discredited book by Jonah Lehrer (though I believe the basic premise of what he was saying about "urban friction" remains true) and added some additional references to research by the clinical psychologist Professor Ian Robertson.

My IBM colleague, Dr. Rick Robinson, blogs here as The Urban Technologist, where he writes about emergent technology and smarter cities. This particular post from Rick, called Digital Platforms for Smarter City Market-Making, discusses how encouraging organic growth of small to medium enterprises (SMEs) in cities not only helps with the economic revival of some of our run-down inner-city areas but also means those SMEs are less likely to up roots and move to another area when better tax or other incentives are on offer. As Rick says:

By building clusters of companies providing related products and services with strong input/output linkages, cities can create economies that are more deeply rooted in their locality.

Examples include Birmingham's Jewellery Quarter, which has a cluster of designers, manufacturers and retailers who also work with Birmingham City University's School of Jewellery and Horology. Linkages with local colleges and universities are another way of reinforcing the locality of SMEs. Of course, just because we classify an enterprise as being 'small to medium' or 'local' does not mean that, because of the internet, it cannot have a global reach. These days even small, 'mom and pop' businesses can be both local and global.

Another example of generating organic growth is the so called Silicon Roundabout area of Shoreditch, Hoxton and Old Street in London which now counts some 3,200 firms and over 48,000 jobs. See here for a Demos report on this called A Tale of Tech City.

Clearly generating growth in our cities, as a way of improving both the economy and the general livelihoods of their citizens, should be considered a good thing, especially if that growth can be in new business areas which help to replace our dying manufacturing industries and reduce our dependency on the somewhat 'toxic' financial services industry. However, it turns out that encouraging this kind of clustering of people also has a positive feedback effect, which means that groups of people together achieve more than just the sum of the individuals.

In 2007 the British theoretical physicist Geoffrey West and colleagues published a paper called Growth, innovation, scaling, and the pace of life in cities. The paper described the results of an analysis of a huge amount of urban data from cities around the world. Data included everything from the number of coffee shops in urban areas, personal income and number of murders to the walking speed of pedestrians. West and his team analysed all of this data and discovered that the rhythm of cities could be described by a few simple equations – the equivalent of Newton's laws of motion for cities, if you like. These laws can be used to predict the behavior of our cities. One of the equations that West and his team discovered concerned the measurement of socioeconomic variables such as number of patents, per-capita income etc. It turns out that any such variable that can be measured in cities scales with an exponent of about 1.15. In other words, moving to a city of 1 million inhabitants results, on average, in 15% more patents, 15% more income and so on per person than living in a city of five hundred thousand. This phenomenon is referred to as "superlinear scaling" – as cities get bigger, everything starts to accelerate. This applies to any city, anywhere in the world, from Manhattan to London to Hong Kong to Sydney.
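As a back-of-the-envelope check, here is a minimal sketch in Python, assuming the commonly quoted form of the law (total output Y = Y0 * N^1.15), of what the exponent implies when a city doubles in size. Note that the exact per-person gain from a doubling works out at about 11%; the popular "15%" figure reads the 0.15 in the exponent loosely.

    # Superlinear urban scaling, Y = Y0 * N**beta with beta ~ 1.15 for
    # socioeconomic outputs (patents, income, etc.) - a sketch, not a model.
    BETA = 1.15

    def total_output(population: int, y0: float = 1.0) -> float:
        """Total socioeconomic output of a city of the given population."""
        return y0 * population ** BETA

    small, big = 500_000, 1_000_000
    gain = (total_output(big) / big) / (total_output(small) / small) - 1
    print(f"Per-person gain from doubling city size: {gain:.1%}")
    # Prints ~11.0%: doubling the population more than doubles total output
    # (2**1.15 is about 2.22x), so each person produces roughly 11% more.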

So what is it about cities that appears to make their citizens smarter the bigger they grow? More to the point, what do we mean by a smarter city in this context?

IBM defines a smarter city as one that:

Makes optimal use of all the interconnected information available today to better understand and control its operations and optimize the use of limited resources.

Whilst it would seem to make sense that an optimised and better connected (city) infrastructure that ensures information flows freely and efficiently would make such cities work better and improve the use of limited resources, could this also enable the citizens themselves to be more creative? In other words, do smarter cities produce smarter citizens? Some research by the clinical psychologist Professor Ian Robertson indicates that not only might this be the case but also, more intriguingly, that citizens who live in vibrant and culturally diverse cities might actually live longer.

In this blog post Professor Robertson suggests that humming metropolises like New York, London or Sydney, through what he refers to as the three E's, provide their citizens with stimulation that affects the chemicals in the brain, making them smarter as well as reducing their chances of developing ageing diseases like Alzheimer's. These three E's are:

  • Excitation – the constant novelty that big cities provide, whether it be the construction of the next architecturally significant building, a new theater production or an art gallery show, creates a stimulating environment which has been shown to develop better memory and even lead to the growth of new brain cells.
  • Expectation – when there is a mix of cultures and ages it seems that older people don't think of themselves as old; instead they discard the preconceived notions of what people of a certain age are supposed to do and act like people of a much younger age.
  • Empowerment – by definition, people who stay or live in cities tend to be wealthier. Again, research has shown that money, power and success change brain functions and make people mentally sharper, more motivated and bolder.

If this is correct, and the three E's found in big cities really do make us both smarter and longer-lived, then the challenge of this century must be to make our cities smarter, so they can sustain bigger populations living healthy and productive lives, which in turn has a positive feedback effect on the cities themselves. Maybe there really is a reason to love our cities after all?

The Moral Architect

I started my career in the telecommunications division of the General Electric Company (GEC) as a software engineer designing digital signalling systems for Private Branch Exchanges based on the Digital Private Network Signalling System. As part of that role I represented GEC on the working party that defined the DPNSS standard, which was owned by British Telecom. I remember at one of the meetings the head of the working party, whose name I unfortunately forget, posed the question: what would have happened if regimes such as those of Nazi Germany or the Stalinist Soviet Union had had access to the powerful (sic) technology we were developing? When I look back at that time (the early 80s) such "powerful technology" looks positively antiquated – we were actually talking about little more than the ability to know who was calling whom using calling line identification! However, that question was an important one to ask and is now one we should be asking more than ever.

One of the roles of the architect is to ask the questions that others tend to either forget about or purposely don't ask because the answer is "too hard". Questions like:

  • So you expect 10,000 people to use your website but what happens if it really takes off and the number of users is 10 or 100 times that?
  • So you’re giving your workforce mobile devices that can be used to access your sales systems; what happens when one of your employees leaves their tablet on a plane/train/taxi?
  • So we are buying database software from a new vendor who will help us migrate from our old systems but what in-house skills do we have to manage and operate this new software?
  • Etc

In many ways these are the easy questions. For a slightly harder one, consider this question posed by Nicholas Carr in this blog post.

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

Pity the poor architect who has to design for that particular use case (and probably several hundred others not yet thought of)! Whilst this might seem to be some way off, the future, as they say, is actually a lot closer than you think. As Carr points out, the US Department of Defense has just issued guidelines designed to:

Minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.

Guidelines which presumably software architects and designers, amongst others, need to get their heads around.

For anyone who has even the remotest knowledge of the genre of science fiction this is probably going to sound familiar. As far back as 1942 the author Isaac Asimov formulated his famous three laws of robotics which current and future software architects may well be minded to adopt as an important set of architectural principles. These three laws, as stated in Asimov’s 1942 short story Runaround, are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As stated here these laws are beautifully concise and unambiguous; the devil, of course, will be in the implementation. Asimov himself went on to make quite a career of writing stories that tussled with some of the ambiguities that could arise from the conflicts between these laws.
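Purely as a thought experiment – a toy sketch, not a serious safety design, with every name in it hypothetical – the strict precedence ordering of the laws might look like prioritised constraint checks. The genuinely hard part, deciding what counts as "harm", is hidden in the facts attached to each action, which is exactly where Asimov found his plots and where Carr's bridge scenario lives:

    # Toy sketch: Asimov's three laws as strictly ordered constraint checks.
    # Each candidate action is tagged with the facts the laws care about;
    # in reality, deriving those facts is the genuinely hard problem.
    def permitted(facts: dict) -> bool:
        if facts.get("harms_human", False):
            return False  # First Law outranks everything
        if facts.get("disobeys_order", False):
            return False  # Second Law yields to the First
        if facts.get("endangers_self", False):
            return False  # Third Law yields to both (nuances ignored here)
        return True

    # Carr's bridge scenario: both options harm a human, so the laws simply
    # deadlock - neither action is permitted - and the architect is left
    # holding the problem.
    options = {
        "swerve off the bridge": {"harms_human": True, "endangers_self": True},
        "carry straight on": {"harms_human": True},
    }
    for action, facts in options.items():
        print(f"{action}: {'permitted' if permitted(facts) else 'forbidden'}")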

So back to the point of this blog. As our systems become ever more complex and encroach on more and more of our everyday lives, are ethical or moral requirements such as these going to be another set of things that software architects need to deal with? I would say absolutely yes. More than ever we need to understand the impact on humanity not just of those systems we are building but also of those systems (and tools) we are using every day. As Douglas Rushkoff says in his book Program or be Programmed:

If you don’t know what the software you’re using is for, then you’re not using it but being used by it.

In a recent blog post Seth Godin poses a number of questions about what freedom in a digital world really means. Many of these are difficult moral questions with no easy answer, and yet the systems we are building now, today, are implicitly or explicitly embedding assumptions around some of these questions whether we like it or not. One could argue that we should always question whether a particular system should be built at all (just because we can do something does not necessarily mean we should) but often by the time you realise you should be asking such questions it's already too late. Many of the systems we have today were not built as such, but rather grew or emerged. Facebook may have started out as a means of connecting college friends but now it's a huge interconnected world of relationships and likes and dislikes and photographs and timelines and goodness knows what else that can be 'mined' for all sorts of purposes not originally envisaged.

One of the questions architects and technologists alike must surely be asking is how much mining (of personal data) it is right to do. Technology exists to track our digital presence wherever we go, but how much should we be making use of that data, and to what end? The story of how the US retailer Target found out a teenage girl was pregnant before her father did has been doing the rounds for a while now. Apart from the huge embarrassment to the girl and her family this story probably had a fairly harmless outcome, but what if that girl had lived in a part of the world where such behavior was treated with less sympathy?

It is of course up to each of us to decide what sort of systems we are or are not prepared to work on in order to earn a living. Each of us must make a moral and ethical judgment based on our own values and beliefs. We should also take care in judging others who create systems we do not agree with or think are "wrong". What is important, however, is to always question the motives and reasons behind those systems, to be very clear why you are doing what you are doing, and to be able to sleep easy having made your decision.

Architects Don’t Code

WikiWikiWeb is one of the first wiki experiences I, and I suspect many people of a certain age, had. WikiWikiWeb was created by Ward Cunningham for the Portland Pattern Repository, a fantastic source of informal guidance and advice by experts on how to build software. It contains a wealth of patterns (and antipatterns) on pretty much any software topic known to man, and a good few that are fast disappearing into the mists of time (TurboPascal anyone?).

For a set of patterns related to the topics I cover in this blog, go to the search page, type 'architect' into the search field and browse through some of the 169 (as of this date) topics found. I was doing just this the other day and came across the ArchitectsDontCode pattern (or possibly antipattern). The problem statement for this pattern is as follows:

The System Architect responsible for designing your system hasn’t written a line of code in two years. But they’ve produced quite a lot of ISO9001-compliant documentation and are quite proud of it.

The impact of this is given as:

A project where the code reflects a design that the SystemArchitect never thought of because the one he came up with was fundamentally flawed and the developers couldn’t explain why it was flawed since the SystemArchitect never codes and is uninterested in implementation details.

Hmmm, pretty damning for System Architects. Just for the record such a person is defined here as being:

[System Architect] – A person with the leading vision, the overall comprehension of how the hardware, software, and network fit together.

The meaning of job titles can of course vary massively from one organisation to another. What matters is the role itself and what that person does in the role. It is often the case that any role with 'architect' in the title is much denigrated by developers, especially in my experience on agile projects, who see such people as an overhead who contribute nothing to a project but reams of documents, or, worse, UML models, that no one reads.

Sometimes software developers, by which I mean people who actually write code for a living, can take a somewhat parochial view of the world of software. In the picture below their world is often constrained to the middle Application layer; that is to say they are developing application software, maybe using two or three programming languages, with a quite clear boundary and set of requirements (or at least requirements that can be fairly easily agreed through appropriate techniques). Such software may of course run into tens of thousands of lines of code and have several tens of developers working on it. There needs, therefore, to be someone who maintains an overall vision of what this application should do. Whether that person has the title of Application Architect, Lead Programmer or Chief Designer does not really matter; it is the fact that they look after the overall integrity of the application that matters. Such a person on a small team may indeed do some of the coding, or at least be very familiar with the current version of whatever programming language is being deployed.

In the business world of bespoke applications, as opposed to 'shrink-wrapped' applications, things are a bit more complicated. New applications need to communicate with legacy software and often require middleware to aid that communication. Information will exist in a multitude of databases and may need some form of extract, transform and load (ETL) and master data management (MDM) tools to get access to and use that information, as well as analytics tools to make sense of it. Finally there will be business processes that exist or need to be built which will coordinate and orchestrate activities across a whole series of new and legacy applications as well as manual processes. All of these require software or tooling of some sort and similarly need someone to maintain overall integrity. This I see as being the domain, or area of concern, of the Software Architect. Does such a person still code on the project? Well maybe, but on the typical projects I see it is unlikely such a person has much time for this activity. That's not to say, however, that she doesn't need some level of (current) knowledge of how all the parts fit together and what they do. No mean task on a large business system.

Finally all this software (business processes, data, applications and middleware) has to be deployed onto actual hardware (computers, networks and storage). Whilst the choice and selection of such hardware may fall to another specialist role (sometimes referred to as an Infrastructure or Technical Architect), there is another level of overall system integrity that needs to be maintained. Such a role is often called the System Architect or maybe Chief Architect. At this stage it is possible that the background of such a person has never involved coding to any great degree, so they are unlikely to write any code on a project, and quite rightly so! This is often not just a technical role that worries about systems development but also a people role that worries about satisfying the numerous stakeholders that such large projects have.

Where you choose to sit in the above layered model and what role you take will of course depend on your experience and personal interests. All roles are important, and each must work with the others if systems that depend on so many moving parts are to be delivered on time and on budget.

Disruptive Technologies, Smarter Cities and the New Oil

Last week I attended the Smart City and Government Open Data Hackathon in Birmingham, UK. The event was sponsored by IBM, and my colleague Dr Rick Robinson, who writes extensively on Smarter Cities as The Urban Technologist, gave the keynote session to kick off the event. The idea of this particular hackathon was to explore ways in which various sources of open data, including the UK government's own open data initiative, could be used in new and creative ways to improve the lives of citizens and make our cities smarter, as well as generally better places to live in. There were some great ideas discussed, including how to predict future jobs as well as identifying citizens who had not claimed benefits to which they were entitled (those benefits then going back into the local economy through purchases of goods and services).

The phrase "data is the new oil" is by no means a new one. It was first used by Michael Palmer in 2006 in this article. Palmer says:

Data is just like crude. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.

Whilst this is a nice metaphor I think I actually prefer the slight adaptation proposed by David McCandless in his TED talk The beauty of data visualization, where he coins the phrase "data is the new soil". The reason being that data needs to be worked and manipulated, just like a good farmer looking after his land, to get the best out of it. In the case of the work done by McCandless this involves creatively visualizing data to show new understandings or interpretations and, as Hans Rosling says, to let the data set change your mind set.

Certainly one way data is most definitely not like oil is that it is increasing at exponential rates rather than rapidly diminishing. But it's not only data. The new triumvirate of data, cloud and mobile is forging a whole new mega-trend in IT, nicely captured in this equation proposed by Gabrielle Byrne at the start of this video:

e = mc(imc)²

Where:

  • e is any enterprise (or city, see later)
  • m is mobile
  • c is cloud
  • imc is in-memory computing, or stream computing: the instant analysis of masses of fast-changing data

This new trend is characterized by a number of incremental innovations that have taken place in IT over previous years in each of the three areas nicely captured in the figure below.

Source: CNET – Where IT is going: Cloud, mobile and data

In his blog post The new architecture of smarter cities, Rick proposes that a Smarter City needs three essential 'ingredients' in order to be really characterized as 'smart'. These are:

  • Smart cities are led from the top
  • Smart cities have a stakeholder forum
  • Smart cities invest in technology infrastructure

It is this last attribute that, when built on a suitable cloud-mobility-data platform, promises to fundamentally change not only enterprises but also cities and even whole nations. However, it's not just any old platform that needs to be built. In this post I discussed the concept behind so-called disruptive technology platforms and the attributes they must have. Namely, such a platform must:

  • Provide a well defined set of open interfaces.
  • Attract a critical mass of both end users and service providers.
  • Be both scalable and extremely robust.
  • Offer an intrinsic value which cannot be obtained elsewhere.
  • Allow users to interact amongst themselves, maybe in ways that were not originally envisaged.
  • Give service providers the right level of contract that allows them to innovate, but without actually breaking the platform.

So what might a disruptive technology platform for a whole city look like, and what innovations might it provide? As an example of such a platform IBM have developed something they call the Intelligent Operations Center, or IOC. The idea behind the IOC is to use information from a number of city agencies and departments to make smarter decisions based on rules that can be programmed into the platform. The IOC can then be used to anticipate problems, to minimize the impact of disruptions to city services and operations, and to assist in the mobilization of resources across multiple agencies. The IOC allows aggregated data to be visualized in ways that the individual data sets cannot, and new insights to be obtained from that data.
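I have no visibility of how the IOC implements this internally, but the general pattern just described – events from multiple agencies flowing into a shared model, with declarative rules triggering cross-agency responses – can be sketched in a few lines of Python (all names and rules below are hypothetical, not the IOC's actual API):

    # Hypothetical sketch of the cross-agency event/rule pattern described
    # above; the event shape and rules are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class CityEvent:
        agency: str    # e.g. "water", "transport", "emergency"
        kind: str      # e.g. "flood_warning", "road_closure"
        severity: int  # 1 (low) to 5 (critical)

    # Each rule pairs a predicate with the response to mobilise if it fires.
    RULES = [
        (lambda e: e.kind == "flood_warning" and e.severity >= 4,
         "alert transport and emergency services; pre-position crews"),
        (lambda e: e.kind == "road_closure",
         "reroute buses and update traveller information feeds"),
    ]

    def dispatch(event: CityEvent) -> list[str]:
        """Return every cross-agency response triggered by this event."""
        return [response for test, response in RULES if test(event)]

    print(dispatch(CityEvent("water", "flood_warning", severity=5)))

The value, as with the IOC, comes from the aggregation: a single flood warning can trigger responses in agencies that never see each other's raw data.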

Platforms like the IOC are only the start of what is possible in a truly smart city. They are just beginning to make use of mobile technology, data in the cloud and huge volumes of fast moving data that is analysed in real time. Whether these platforms turn out to be truly disruptive remains to be seen, but if this really is the age of "new oil" then only the limits of our imagination restrict how we will use that data to gain valuable new insights into building smart cities.

Is “Architect” a Job or a Role?

This is one of the questions posed in the SATURN 2011 keynote The Intimate Relationship Between Architecture and Code: Architecture Experiences of a Playing Coach by Dave Thomas. The article is a series of observations, presumably made by the author and colleagues, on the view of architects as seen by developers on agile projects, and it is fairly damning of architects and what they do. I'd urge you to read the complete list but a few highlights are:

  • Part of the problem is us [architects]: disparate views from the ivory tower, dismissal of new approaches, faith in models not in code.
  • Enterprise architect = an oxymoron? Takes up so much time.
  • Models are useful, but they’re not the architecture. Diagrams usually have no semantics, no language, and therefore tell us almost nothing.
  • Components are a good idea. Frameworks aren’t. They are components that were not finished.
  • The identification of high-value innovation opportunities is a key architect responsibility.
  • Is “architect” a job or a role? It’s not a job, it’s a role. They need to be able to understand the environment, act as playing coaches, and read code.

Addressing each of these comments probably justifies a blog post in its own right; however, I believe the final comment, and the title of this post, gets to the heart of the problem. Being an architect is not, or should not be, a job but a role. Whatever type of architect you are (enterprise, application, infrastructure) it is important, actually vital, to have an understanding of technology that goes beyond the words and pictures we use to describe it. You occasionally need to roll up your sleeves and "get down and dirty", whether that be in speaking to users to understand their business needs, designing web sites, writing code or installing and configuring hardware. In other words you should be adept at other roles that support and reinforce your role as an architect.

Unfortunately, in many organisations, treating 'architect' as a role rather than a job title is difficult. Architects are seen as occupying more senior positions which bring higher salaries and therefore cannot justify the time it takes to practice with technology rather than just talking about it or drawing pretty pictures using fancy modeling tools. As discussed elsewhere, there is no easy path to mastering any subject; rather, it takes regular and continued practice. If you are passionate about what you do you need to carve out time in your day to practice, as well as to keep up to date with what is new and what is current. The danger we all face is that we spend too much time oiling the machine rather than using the finite number of brain cycles we have each day making a difference and making real change that matters.

Architect or Architecting?

A discussion has arisen on one of the IBM forums about whether the verb that describes what architects do (as in "to architect" or "architecting") is valid English or not. The recommendation in the IBM word usage database has apparently always been that when you need a verb to describe what an architect does you should use "design", "plan" or "structure". Needless to say this has generated quite a bit of comment (145 at the last count) including:

  • Police are policing, judges are judging, dancers are dancing, why then aren’t architects architecting?
  • Architects are not “architecting” because they design.
  • I feel a need to defend the term 'architecting'. Engineers do engineering, architects do architecting. We have the role of software or system architect and the term describes what they do. There is a subtle but useful distinction between a software designer and a software architect that was identified about 30 years ago by the then IBMer Fred Brooks in his foundational text, The Mythical Man Month.
  • From a grammatical point of view use of “architecting” as a verb or gerund is as poor as using leverage as a verb… and as far as meaning is concerned, as poor as any platitude used when knowledge of precise content and detail is lacking.

As someone who has co-authored a book called The Process of Software Architecting I should probably declare more than a passing interest in this, and I feel that the verb 'architecting', or 'to architect', is perfectly valid. Whether it is strictly correct English or not I will leave to others far better qualified to pass judgment. My defence of using architect as a verb is that there is a, sometimes subtle, difference between architecture and design (Grady Booch says "all architecture is design but not all design is architecture") and although architects do perform elements of design, that is not all they do. I, for one, would not wish to see the two confused.

The definition of architecting we use in The Process of Software Architecting comes from the IEEE standard 1471-2000, which defines architecting as:

The activities of defining, documenting, maintaining, improving, and certifying proper implementation of an architecture.

As a related aside on whether adding 'ing' to a noun to turn it into a verb is correct English or not, it is interesting to see that the 'verbing' of nouns is picking up pace at the London Olympics, where we now seem to have 'medaling' and 'platforming' entering the English language.