Do Smarter Cities Make their Citizens Smarter?

This is an update of a post I originally published on my blog The Versatilist Way. I've removed the reference to the now discredited book by Jonah Lehrer (though I believe the basic premise of what he was saying about "urban friction" remains true) and added some additional references to research by the clinical psychologist Professor Ian Robertson.

My IBM colleague, Dr. Rick Robinson, blogs here as The Urban Technologist, where he writes about emergent technology and smarter cities. One of Rick's posts, Digital Platforms for Smarter City Market-Making, discusses how encouraging the organic growth of small to medium enterprises (SMEs) in cities not only helps with the economic revival of some of our run-down inner-city areas but also means those SMEs are less likely to uproot and move to another area when better tax or other incentives are on offer. As Rick says:

By building clusters of companies providing related products and services with strong input/output linkages, cities can create economies that are more deeply rooted in their locality.

Examples include Birmingham's Jewellery Quarter, which has a cluster of designers, manufacturers and retailers who also work with Birmingham City University's School of Jewellery and Horology. Linkages with local colleges and universities are another way of reinforcing the locality of SMEs. Of course, just because we classify an enterprise as 'small to medium' or 'local' does not mean that, thanks to the internet, it cannot have a global reach. These days even small, 'mom and pop' businesses can be both local and global.

Another example of generating organic growth is the so-called Silicon Roundabout area of Shoreditch, Hoxton and Old Street in London, which now counts some 3,200 firms and over 48,000 jobs. See here for a Demos report on this called A Tale of Tech City.

Clearly, generating growth in our cities as a way of improving both the economy and the general livelihoods of their citizens should be considered a good thing, especially if that growth can be in new business areas that help replace our dying manufacturing industries and reduce our dependency on the somewhat 'toxic' financial services industry. However, it turns out that encouraging this kind of clustering of people also has a positive feedback effect, which means that groups of people together achieve more than the sum of what the individuals could achieve alone.

In 2007 the British theoretical physicist Geoffrey West and colleagues published a paper called Growth, innovation, scaling, and the pace of life in cities. The paper described the results of an analysis of a huge amount of urban data from cities around the world: everything from the number of coffee shops in urban areas and personal income to the number of murders and even the walking speed of pedestrians. West and his team analysed all of this data and discovered that the rhythm of cities could be described by a few simple equations – the equivalent, if you like, of Newton's laws of motion for cities. These laws can be used to predict the behavior of our cities. One of the equations West and his team discovered concerned socioeconomic variables such as the number of patents, per-capita income and so on. It turns out that any such variable that can be measured in cities scales with an exponent of 1.15. In other words, moving to a city of 1 million inhabitants results, on average, in 15% more patents, 15% more income and so on than living in a city of five hundred thousand. This phenomenon is referred to as "superlinear scaling" – as cities get bigger, everything starts to accelerate. This applies to any city, anywhere in the world, from Manhattan to London to Hong Kong to Sydney.
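Stated compactly (this is my own restatement of the relation described above, not a formula quoted from the paper), the scaling law relates a socioeconomic output Y to city population N as:

```latex
% Superlinear scaling law: Y_0 is a baseline constant, N is the city's
% population and \beta \approx 1.15 is the measured exponent for
% socioeconomic quantities such as patents, wages and income.
Y(N) = Y_0 \, N^{\beta}, \qquad \beta \approx 1.15
```

Because β > 1 the growth is superlinear: doubling a city's population slightly more than doubles its total output (2^1.15 ≈ 2.22), since per-capita output scales as N^(β−1) = N^0.15.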

So what is it about cities that appears to make their citizens smarter the bigger they grow? More to the point, what do we mean by a smarter city in this context?

IBM defines a smarter city as one that:

Makes optimal use of all the interconnected information available today to better understand and control its operations and optimize the use of limited resources.

Whilst it would seem to make sense that an optimised and better connected (city) infrastructure, one that ensures information flows freely and efficiently, would make such cities work better and improve the use of limited resources, could it also enable the citizens themselves to be more creative? In other words, do smarter cities produce smarter citizens? Some research by the clinical psychologist Professor Ian Robertson indicates that not only might this be the case but also, more intriguingly, that citizens who live in vibrant and culturally diverse cities might actually live longer.

In this blog post Professor Robertson suggests that humming metropolises like New York, London or Sydney, through what he refers to as the three E's, provide their citizens with stimulation that affects the chemicals in the brain, making them smarter as well as reducing their chances of developing aging diseases like Alzheimer's. These three E's are:

  • Excitation – the constant novelty that big cities provide, whether it be the construction of the next architecturally significant building, a new theater production or an art gallery show, creates a stimulating environment which has been shown to develop better memory and even lead to the growth of new brain cells.
  • Expectation – when there is a mix of cultures and ages it seems that older people don't think of themselves as old; instead they discard the preconceived notions of what people of a certain age are supposed to do and act like people of a much younger age.
  • Empowerment – people who live in cities tend, by definition, to be wealthier. Again, research has shown that money, power and success change brain functions and make people mentally sharper, more motivated and bolder.

If this is correct, and the three E's found in big cities really do make us smarter and longer lived, then the challenge of this century must be to make our cities smarter still, so they can sustain bigger populations living healthy and productive lives, which in turn has a positive feedback effect on the cities themselves. Maybe there really is a reason to love our cities after all?

The Moral Architect

I started my career in the telecommunications division of the General Electric Company (GEC) as a software engineer designing digital signalling systems for Private Branch Exchanges based on the Digital Private Network Signalling System (DPNSS). As part of that role I represented GEC on the working party that defined the DPNSS standard, which was owned by British Telecom. I remember at one of the meetings the head of the working party, whose name I unfortunately forget, posed the question: what would have happened if regimes such as those of Nazi Germany or the Stalinist Soviet Union had had access to the powerful (sic) technology we were developing? When I look back at that time (the early 80s) such "powerful technology" looks positively antiquated – we were actually talking about little more than the ability to know who was calling whom using calling line identification! However, that question was an important one to ask and is one we should be asking more than ever today.

One of the roles of the architect is to ask the questions that others tend to either forget about or purposely don't ask because the answer is "too hard". Questions like:

  • So you expect 10,000 people to use your website but what happens if it really takes off and the number of users is 10 or 100 times that?
  • So you’re giving your workforce mobile devices that can be used to access your sales systems, what happens when one of your employees leaves their tablet on a plane/train/taxi?
  • So we are buying database software from a new vendor who will help us migrate from our old systems but what in-house skills do we have to manage and operate this new software?
  • Etc

In many ways these are the easy questions. For a slightly harder one, consider this question posed by Nicholas Carr in this blog post.

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

Pity the poor architect who has to design for that particular use case (and probably several hundred others not yet thought of)! Whilst this might seem to be some way off, the future, as they say, is actually a lot closer than you think. As Carr points out, the US Department of Defense has just issued guidelines designed to:

Minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.

Guidelines which presumably software architects and designers, amongst others, need to get their heads around.

For anyone who has even the remotest knowledge of the science fiction genre this is probably going to sound familiar. As far back as 1942 the author Isaac Asimov formulated his famous three laws of robotics, which current and future software architects may well be minded to adopt as an important set of architectural principles. These three laws, as stated in Asimov's 1942 short story Runaround, are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As stated here these laws are beautifully concise and unambiguous; the devil, of course, will be in the implementation. Asimov himself went on to make quite a career of writing stories that tussled with some of the ambiguities arising from the conflicts between these laws.
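To see why the implementation is the hard part, consider the laws as a prioritised set of constraints over the predicted outcomes of candidate actions. The toy Python sketch below (all names are my own illustrative invention, not from any real robotics framework) shows that ordering the laws is trivial; the genuinely hard, and ambiguous, part is predicting the consequences that feed into them:

```python
# Illustrative sketch only: Asimov's three laws treated as prioritised
# architectural constraints over the predicted outcomes of actions.

from dataclasses import dataclass

@dataclass
class Outcome:
    """The predicted consequences of one candidate action."""
    description: str
    harms_human: bool = False      # First Law (includes harm by inaction)
    disobeys_order: bool = False   # Second Law
    harms_robot: bool = False      # Third Law

def violation_rank(o: Outcome) -> int:
    # Lower rank = worse: the highest-priority law the action violates.
    if o.harms_human:
        return 0
    if o.disobeys_order:
        return 1
    if o.harms_robot:
        return 2
    return 3  # violates nothing

def choose_action(candidates: list[Outcome]) -> Outcome:
    # Pick the action whose worst violation is least severe. Ordering the
    # laws is easy; deciding the booleans for each outcome is the hard part.
    return max(candidates, key=violation_rank)

swerve = Outcome("swerve off the bridge", harms_robot=True, harms_human=True)
brake = Outcome("emergency brake", harms_robot=True)
print(choose_action([swerve, brake]).description)  # -> "emergency brake"
```

Even here, Carr's self-driving car dilemma resists the scheme: when every candidate action harms a human, ranking violations tells you nothing about which harm to prefer.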

So back to the point of this blog. As our systems become ever more complex and encroach on more and more of our everyday lives, are ethical or moral requirements such as these going to be another set of things that software architects need to deal with? I would say absolutely yes. More than ever we need to understand the impact on humanity not just of the systems we are building but also of the systems (and tools) we are using every day. As Douglas Rushkoff says in his book Program or be Programmed:

If you don’t know what the software you’re using is for, then you’re not using it but being used by it.

In a recent blog post Seth Godin poses a number of questions about what freedom in a digital world really means. Many of these are difficult moral questions with no easy answer, and yet the systems we are building now, today, are implicitly or explicitly embedding assumptions around some of these questions whether we like it or not. One could argue that we should always question whether a particular system should be built at all (just because we can do something does not necessarily mean we should) but often, by the time you realise you should be asking such questions, it's already too late. Many of the systems we have today were not built as such, but rather grew or emerged. Facebook may have started out as a means of connecting college friends but now it's a huge interconnected world of relationships and likes and dislikes and photographs and timelines and goodness knows what else that can be 'mined' for all sorts of purposes not originally envisaged.

One of the questions architects and technologists alike must surely be asking is: how much mining (of personal data) is it right to do? Technology exists to track our digital presence wherever we go, but how much should we be making use of that data, and to what end? The story of how the US retailer Target found out a teenage girl was pregnant before her father did has been doing the rounds for a while now. Apart from the huge embarrassment to the girl and her family this story probably had a fairly harmless outcome; however, what if that girl had lived in a part of the world where such behavior was treated with less sympathy?

It is of course up to each of us to decide what sort of systems we are or are not prepared to work on in order to earn a living. Each of us must make a moral and ethical judgment based on our own values and beliefs. We should also take care in judging others who create systems we do not agree with or think are "wrong". What is important, however, is to always question the motives and the reasons behind those systems, to be very clear why you are doing what you are doing, and to be able to sleep easy having made your decision.

Is the Raspberry Pi the New BBC Microcomputer?

There has been much discussion here in the UK over the last couple of years about the state of tech education and what should be done about it. The concern being that our schools are not doing enough to create the tech leaders and entrepreneurs of the future.

The current discussion kicked off in January 2011 when Microsoft's director of education, Steve Beswick, claimed that in UK schools there is much "untapped potential" in how teenagers use technology. Beswick said that a Microsoft survey had found that 71% of teenagers believed they learned more about information technology outside of school than in formal information and communication technology (ICT) lessons. An interesting observation given that one of the criticisms often leveled at these ICT classes is that they just teach kids how to use Microsoft Office.

The discussion moved on in August 2011, this time at the Edinburgh International Television Festival, where Google chairman Eric Schmidt said he thought education in Britain was holding back the country's chances of success in the digital media economy. Schmidt said he was flabbergasted to learn that computer science was not taught as standard in UK schools, despite what he called the "fabulous initiative" in the 1980s when the BBC not only broadcast programmes for children about coding, but shipped over a million BBC Micro computers into schools and homes.

January 2012 saw even the education secretary, Michael Gove, say that the ICT curriculum was "a mess" and must be radically revamped to prepare pupils for the future (Gove suspended the ICT curriculum in September 2012). All well and good, but as some have commented, "not everybody is going to need to learn to code, but everyone does need office skills".

In May 2012 Schmidt was back in the UK again, this time at London’s Science Museum where he announced that Google would provide the funds to support Teach First – a charity which puts graduates on a six-week training programme before deploying them to schools where they teach classes over a two-year period.

So, what now? With the new ICT curriculum not due out until 2014, what are the kids who are about to start their GCSEs to do? Does it matter that they won't be able to learn ICT at school? The Guardian's John Naughton proposed a manifesto for teaching computer science in March 2012 as part of his paper's digital literacy campaign. As I've questioned before, should it be the role of schools to teach the very specific programming skills being proposed; skills that might be out of date by the time the kids learning them enter the workforce? Clearly something needs to be done otherwise, as my colleague Dr Rick Robinson asks, where will the next generation of technology millionaires come from?

Whatever shape the new curriculum takes, one example (one that Eric Schmidt himself used) of a success story in the learning of IT skills is that of the now almost legendary BBC Microcomputer, a project started 30 years ago this year. For those too young to remember, or who were not around in the UK at the time, the BBC Microcomputer got its name from a project devised by the BBC to enhance the nation's computer literacy. The BBC wanted a machine around which it could base a series called The Computer Programme, showing how computers could be used, not just for computer programming but also for graphics, sound and vision, artificial intelligence and controlling peripheral devices. To support the series the BBC drew up a spec for a computer that could be bought by people watching the programme, so they could actually put into practice what they were watching. The machine was built by Acorn, the spec of which you can read here.

The BBC Micro was not only a great success in terms of the television programme, it also helped spur on a whole generation of programmers. On turning the computer on you were faced with the screen on the right. The computer would not do anything unless you fed it instructions using the BASIC programming language, so you were pretty much forced to learn programming! I can vouch for this personally: although I had just entered the IT profession at the time, this was in the days of million-pound mainframes hidden away in back rooms, guarded jealously by teams of computer operators who only gave access via time-sharing for minutes at a time. Having your own computer which you could tap away on and get instant results was, for me, a revelation.

Happily it looks like the current gap in the IT curriculum may be about to be filled by the humble Raspberry Pi computer. The idea behind the Raspberry Pi came from a group of computer scientists at the University of Cambridge's Computer Laboratory back in 2006. As Eben Upton, founder and trustee of the Raspberry Pi Foundation, said:

Something had changed the way kids were interacting with computers. A number of problems were identified: the colonisation of the ICT curriculum with lessons on using Word and Excel, or writing webpages; the end of the dot-com boom; and the rise of the home PC and games console to replace the Amigas, BBC Micros, Spectrum ZX and Commodore 64 machines that people of an earlier generation learned to program on.

Out of this concern at the lack of programming and computer skills in today's youngsters was born the Raspberry Pi computer (see below), which began shipping in February 2012. Whilst the on-board processor and peripheral controllers on this credit-card-sized, $25 device are orders of magnitude more powerful than anything the BBC Micro or Commodore 64 machines had, in other ways this computer is even more basic than any of those machines. It comes with no power supply, screen, keyboard, mouse or even operating system (Linux can be installed via an SD card). There is quite a learning curve just to get up and running, although what the Raspberry Pi has going for it that the BBC Micro did not is the web, with its already large number of help pages, ideas for projects and even the odd Raspberry Pi Jam (get it?). Hopefully this means these ingenious devices will not become just another piece of computer kit lying around in our school classrooms.
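Fittingly, the 'Pi' in Raspberry Pi is said to be a nod to Python, the Foundation's recommended first language for learners. The modern equivalent of being dropped at the BBC Micro's BASIC prompt might be a first session something like this (a toy example of my own, not taken from the Foundation's materials):

```python
# A first program in the spirit of the BBC Micro's BASIC prompt: read a
# name, count down, say hello. Nothing here is Pi-specific; any Python 3
# interpreter will run it.

name = input("What is your name? ")
for n in range(10, 0, -1):
    print(n)
print(f"Hello, {name} - welcome to programming!")
```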

The Computer Literacy Project (CLP), which was behind the idea of the original BBC Micro and "had the grand ambition to change the culture of computing in Britain's homes", produced a report in May of this year called The Legacy of the BBC Micro which, amongst other things, explores whether the CLP had any lasting legacy on the culture of computing in Britain. The full report can be downloaded here. One of the recommendations of the report is that "kit, clubs and formal learning need to be augmented by support for individual learners; they may be the entrepreneurs of the future". 30 years ago this support was provided by the BBC as well as schools. Whether the same could be done today, with schools that seem to be largely results driven and a BBC that seems to be imploding in on itself, is difficult to tell.

And so to the point of this post: is the Raspberry Pi the new BBC Micro, the machine that spurred on a generation of programmers who spread their wings and went on to create the tech boom (and let's not forget the odd bust) of the last 30 years? More to the point, is that what the world needs right now? Computers are getting far smarter "out of the box". IBM's recent announcements of its PureSystems brand promise a "smarter approach to IT" in terms of installation, deployment, development and operations. Who knows what stage so-called expert integrated systems will be at by the time today's students begin to hit the workforce in 5-10 years' time? Does the Raspberry Pi have a place in this world? A world where many, if not most, programming jobs continue to be shipped to low-cost regions: currently the BRIC and MIST countries and soon, I am sure, the largely untapped African continent.

I believe that, to some extent, the fact that the Raspberry Pi is a computer and that yes, with a bit of effort, you can program it, is largely an irrelevance. What's important is that the Raspberry Pi ignites an interest in a new generation of kids that gets them away from just consuming computing (playing games, reading Facebook entries, browsing the web and so on) to actually creating something instead. It's this creative spark that is needed now and as we move forward because, no matter what computing platforms we have in 5, 10 or 50 years' time, we will always need creative thinkers to solve the world's really difficult business and technical problems.

And by the way my Raspberry Pi is on order.

Architects Don’t Code

WikiWikiWeb was one of the first wiki experiences I, and I suspect many people of a certain age, had. WikiWikiWeb was created by Ward Cunningham for the Portland Pattern Repository, a fantastic source of informal guidance and advice by experts on how to build software. It contains a wealth of patterns (and antipatterns) on pretty much any software topic known to man, and a good few that are fast disappearing into the mists of time (TurboPascal anyone?).

For a set of patterns related to the topics I cover in this blog, go to the search page, type 'architect' into the search field and browse through some of the 169 (as of this date) topics found. I was doing just this the other day and came across the ArchitectsDontCode pattern (or possibly antipattern). The problem statement for this pattern is as follows:

The System Architect responsible for designing your system hasn’t written a line of code in two years. But they’ve produced quite a lot of ISO9001-compliant documentation and are quite proud of it.

The impact of this is given as:

A project where the code reflects a design that the SystemArchitect never thought of because the one he came up with was fundamentally flawed and the developers couldn’t explain why it was flawed since the SystemArchitect never codes and is uninterested in implementation details.

Hmmm, pretty damning for System Architects. Just for the record such a person is defined here as being:

[System Architect] – A person with the leading vision, the overall comprehension of how the hardware, software, and network fit together.

The meaning of job titles can of course vary massively from one organisation to another. What matters is the role itself and what the person does in that role. It is often the case that any role with 'architect' in the title is much denigrated by developers (especially, in my experience, on agile projects) who see such people as an overhead, contributing nothing to a project but reams of documents or, worse, UML models that no one reads.

Sometimes software developers, by which I mean people who actually write code for a living, can take a somewhat parochial view of the world of software. In the picture below their world is often constrained to the middle Application layer; that is to say they are developing application software, maybe using two or three programming languages, with a quite clear boundary and set of requirements (or at least requirements that can be fairly easily agreed through appropriate techniques). Such software may of course run into tens of thousands of lines of code and have several tens of developers working on it. There therefore needs to be someone who maintains an overall vision of what the application should do. Whether that person has the title of Application Architect, Lead Programmer or Chief Designer does not really matter; it is the fact that they look after the overall integrity of the application that matters. On a small team such a person may indeed do some of the coding, or at least be very familiar with the current version of whatever programming language is being deployed.

In the business world of bespoke applications, as opposed to 'shrink-wrapped' applications, things are a bit more complicated. New applications need to communicate with legacy software and often require middleware to aid that communication. Information will exist in a multitude of databases and may need some form of extract, transform and load (ETL) and master data management (MDM) tooling to get access to and use that information, as well as analytics tools to make sense of it. Finally there will be business processes that exist or need to be built which will coordinate and orchestrate activities across a whole series of new and legacy applications as well as manual processes. All of these require software or tooling of some sort and similarly need someone to maintain overall integrity. This I see as being the domain, or area of concern, of the Software Architect. Does such a person still code on the project? Well, maybe, but on the typical projects I see it is unlikely such a person has much time for this activity. That's not to say, however, that she doesn't need some level of (current) knowledge of how all the parts fit together and what they do. No mean task on a large business system.

Finally all this software (business processes, data, applications and middleware) has to be deployed onto actual hardware (computers, networks and storage). Whilst the choice and selection of such hardware may fall to another specialist role (sometimes referred to as an Infrastructure or Technical Architect), there is another level of overall system integrity that needs to be maintained. Such a role is often called the System Architect or maybe Chief Architect. At this stage it is possible that the background of such a person has never involved coding to any great degree, so they are unlikely to write any code on a project, and quite rightly so! This is often not just a technical role that worries about systems development but also a people role that worries about satisfying the numerous stakeholders that such large projects have.

Where you choose to sit in the above layered model, and what role you take, will of course depend on your experience and personal interests. All roles are important and each must work with the others if systems that depend on so many moving parts are to be delivered on time and on budget.

Bring Me Problems, Not Solutions

"Bring me solutions, not problems" is a phrase that the former British Prime Minister Margaret Thatcher was, apparently, fond of using. As I've pointed out before, the role of the architect is to "take existing components and assemble them in interesting and important ways". For the architect then, who wants to assemble components in interesting ways, problems are what are needed, not solutions – without problems to solve we have no job to do. Indeed, problem solving is what entrepreneurship is all about, and the ability to properly define the problem in the first place therefore becomes key to solving it.

Fundamentally the architect asks:

  1. What is the problem I am trying to solve?
  2. What solution can I construct that would address that problem?
  3. What technology (if any) should I apply in implementing that solution?

This approach is summed up in the following picture, a sort of meta-architecture process.

The key thing here of course is the effective use of technology. Sometimes that means not using technology at all, because a manual system is equally (cost) effective. One thing architects should avoid at all costs is becoming over-enthusiastic about using too much of the wrong kind of technology. Adopting a sound architectural process, following well understood architectural principles and using what others have done before (that is, applying architectural patterns) are all ways to ensure we don't leap too quickly to a solution built on potentially the wrong technology.

For architects then, who are looking for their next interesting challenge, the cry should be “bring me problems, not solutions”.

The Art of What’s Possible (and What’s Not)

One of the things Apple are definitely good at is giving us products we didn’t know we needed (e.g. the iPad). Steve Jobs, who died a year ago this week, famously said “You’ve got to start with the customer experience and work back to the technology — not the other way around”  (see this video at around 1:55 as well as this interview with Steve Jobs in Wired).

The subtle difference from the "normal" requirements gathering process here is that, rather than asking what the customer wants, you are looking at the customer experience you want to create and then trying to figure out how available technology can realise that experience. In retrospect, we can all see why a device like the iPad is so useful (movies and books on the go, a cloud-enabled device that lets you move data between it and other devices, mobile web on a screen you can actually read, etc.). Chances are, however, that it would have been very difficult to elicit a set of requirements from someone that would have ended up describing such a device.

Jobs goes on to say “you can’t start with the technology and try to figure out where you’re going to try and sell it”. In many ways this is a restatement of the well known “golden hammer” anti-pattern (to a man with a hammer, everything appears as a nail) from software development, the misapplication of a favored technology, tool or concept in solving a problem.

Whilst all this is true and would seem to make sense, at least as far as Apple is concerned, there is still another subtlety at play when building truly successful products that people didn’t know they wanted. As an illustration of this consider another, slightly more infamous Apple product, the Newton Message Pad.

In many ways the Newton was an early version of the iPad or iPhone (see above for the two side by side), some 25 years ahead of its time. One of its goals was to "reinvent personal computing". There were many reasons why the Newton did not succeed (including its large, clunky size and poor handwriting recognition system) however one of them must surely have been that the device was just too far ahead of the technology available at the time in terms of processing power, memory, battery life and display technology. Sometimes ideas can be really great but the technology is just not there to support them.

So, whilst Jobs was right in saying you cannot start with the technology and then decide how to sell it, equally you cannot start with an idea if the technology is not there to support it, as was the case with the Newton. So what does this mean for architects?

A good understanding of technology, how it works and how it can be used to solve business problems is, of course, a key skill of any architect; equally important, however, is an understanding of what is not possible with current technology. It is sometimes too easy to be seduced by technology and to overstate what it is capable of. Looking out for this, especially when there is pressure on to close a sale, is something we must all do, and we must be forceful in calling it out when we think something is not possible.

Disruptive Technologies, Smarter Cities and the New Oil

Last week I attended the Smart City and Government Open Data Hackathon in Birmingham, UK. The event was sponsored by IBM, and my colleague Dr Rick Robinson, who writes extensively on Smarter Cities as The Urban Technologist, gave the keynote session to kick off the event. The idea of this particular hackathon was to explore ways in which various sources of open data, including the UK government's own open data initiative, could be used in new and creative ways to improve the lives of citizens and make our cities smarter, as well as generally better places to live in. There were some great ideas discussed, including how to predict future jobs and how to identify citizens who have not claimed benefits to which they are entitled (those benefits then going back into the local economy through purchases of goods and services).

The phrase "data is the new oil" is by no means a new one. It was first used by Michael Palmer in 2006 in this article. Palmer says:

Data is just like crude. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.

Whilst this is a nice metaphor, I think I actually prefer the slight adaptation proposed by David McCandless in his TED talk The beauty of data visualization, where he coins the phrase "data is the new soil". The reason being that data, like a good farmer's land, needs to be worked and manipulated to get the best out of it. In the case of the work done by McCandless this involves creatively visualizing data to show new understandings or interpretations and, as Hans Rosling says, to let the data set change your mind set.

Certainly one way data is most definitely not like oil is that it is increasing at exponential rates rather than rapidly diminishing. But it's not only data. The new triumvirate of data, cloud and mobile is forging a whole new mega-trend in IT, nicely captured in this equation proposed by Gabrielle Byrne at the start of this video:

e = mc(imc)²

Where:

  • e is any enterprise (or city, see later)
  • m is mobile
  • c is cloud
  • imc is in-memory computing, or stream computing: the instant analysis of masses of fast-changing data

This new trend is characterized by a number of incremental innovations that have taken place in IT over recent years in each of the three areas, as captured in the figure below.

Source: CNET – Where IT is going: Cloud, mobile and data

In his blog post: The new architecture of smarter cities, Rick proposes that a Smarter City needs three essential ‘ingredients’ in order to be really characterized as ‘smart’. These are:

  • Smart cities are led from the top
  • Smart cities have a stakeholder forum
  • Smart cities invest in technology infrastructure

It is this last ingredient that, when built on a suitable cloud-mobility-data platform, promises to fundamentally change not only enterprises but also cities and even whole nations. However, it's not just any old platform that needs to be built. In this post I discussed the concept behind so-called disruptive technology platforms and the attributes they must have. Namely:

  • A well defined set of open interfaces.
  • A critical mass of both end users and service providers.
  • Both scalable and extremely robust.
  • An intrinsic value which cannot be obtained elsewhere.
  • Allow users to interact amongst themselves, maybe in ways that were not originally envisaged.
  • Service providers must be given the right level of contract that allows them to innovate, but without actually breaking the platform.

So what might a disruptive technology platform for a whole city look like, and what innovations might it provide? As an example of such a platform IBM have developed something they call the Intelligent Operations Center or IOC. The idea behind the IOC is to use information from a number of city agencies and departments to make smarter decisions based on rules that can be programmed into the platform (a toy illustration of this idea is sketched below). The idea, then, is that the IOC can be used to anticipate problems, minimize the impact of disruptions to city services and operations, and assist in the mobilization of resources across multiple agencies. The IOC allows aggregated data to be visualized in ways that the individual data sets cannot be, and new insights to be obtained from that data.
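The following sketch illustrates, in miniature, what "rules programmed into the platform" might mean in practice. It is purely illustrative: the event fields, rule and recommendations are my own invention and bear no relation to the IOC's actual interfaces:

```python
# Illustrative only: a toy cross-agency rule engine in the spirit of the
# IOC description above; none of these names come from the real product.

from dataclasses import dataclass
from typing import Callable

@dataclass
class CityEvent:
    agency: str      # e.g. "water", "transport", "police"
    kind: str        # e.g. "flood_warning", "road_closure"
    severity: int    # 1 (low) to 5 (critical)
    location: str

# A rule maps an incoming event to zero or more recommended actions.
Rule = Callable[[CityEvent], list[str]]

def flood_rule(event: CityEvent) -> list[str]:
    # Cross-agency rule: a severe water event triggers transport and
    # emergency-service responses.
    if event.kind == "flood_warning" and event.severity >= 3:
        return [f"close roads near {event.location}",
                f"alert emergency services in {event.location}"]
    return []

def process(event: CityEvent, rules: list[Rule]) -> list[str]:
    # Aggregate the recommendations of every rule that fires.
    return [action for rule in rules for action in rule(event)]

print(process(CityEvent("water", "flood_warning", 4, "Digbeth"), [flood_rule]))
```

The value of a platform like this comes from the aggregation: a rule can correlate events from agencies that would otherwise never see each other's data.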

Platforms like the IOC are only the start of what is possible in a truly smart city. They are just beginning to make use of mobile technology, data in the cloud and huge volumes of fast-moving data analysed in real time. Whether these platforms turn out to be truly disruptive remains to be seen, but if this really is the age of "new oil" then only the limits of our imagination restrict how we will use that data to gain valuable new insights into building smart cities.

Is “Architect” a Job or a Role?

This is one of the questions posed in the SATURN 2011 keynote The Intimate Relationship Between Architecture and Code: Architecture Experiences of a Playing Coach by Dave Thomas. The article is a series of observations, presumably made by the author and his colleagues, on architects as seen by developers on agile projects, and is fairly damning of architects and what they do. I'd urge you to read the complete list but a few highlights are:

  • Part of the problem is us [architects] disparate views from ivory tower, dismissal of new approaches, faith in models not in code.
  • Enterprise architect = an oxymoron? Takes up so much time.
  • Models are useful, but they’re not the architecture. Diagrams usually have no semantics, no language, and therefore tell us almost nothing.
  • Components are a good idea. Frameworks aren’t. They are components that were not finished.
  • The identification of high-value innovation opportunities is a key architect responsibility.
  • Is “architect” a job or a role? It’s not a job, it’s a role. They need to be able to understand the environment, act as playing coaches, and read code.

Addressing each of these comments probably justifies a blog post in its own right; however, I believe the final comment, and the title of this post, gets to the heart of the problem. Being an architect is not, or should not be, a job but a role. Whatever type of architect you are (enterprise, application, infrastructure) it is important, actually vital, to have an understanding of technology that goes beyond the words and pictures we use to describe it. You occasionally need to roll up your sleeves and "get down and dirty", whether that be in speaking to users to understand their business needs, designing web sites, writing code or installing and configuring hardware. In other words you should be adept at other roles that support and reinforce your role as an architect.

Unfortunately, in many organisations, treating 'architect' as a role rather than a job title is difficult. Architects are seen as occupying more senior positions which bring higher salaries and therefore cannot justify the time it takes to practice with technology rather than just talking about it or drawing pretty pictures using fancy modeling tools. As discussed elsewhere, there is no easy path to mastering any subject; rather it takes regular and continued practice. If you are passionate about what you do you need to carve out time in your day to practice, as well as to keep up to date with what is new and what is current. The danger we all face is that we spend too much time oiling the machine rather than using the finite number of brain cycles we have each day making a difference and making real change that matters.

Architect or Architecting?

A discussion has arisen on one of the IBM forums about whether the verb that describes what architects do (as in "to architect" or "architecting") is valid English or not. The recommendation in the IBM word usage database has apparently always been that when you need a verb to describe what an architect does, use "design", "plan" or "structure". Needless to say this has generated quite a bit of comment (145 at the last count), including:

  • Police are policing, judges are judging, dancers are dancing, why then aren’t architects architecting?
  • Architects are not “architecting” because they design.
  • I feel a need to defend the term 'architecting'. Engineers do engineering, architects do architecting. We have the role of software or system architecture and the term describes what they do. There is a subtle but useful distinction between a software designer and a software architect that was identified about 30 years ago by the then IBMer Fred Brooks in his foundational text, The Mythical Man Month.
  • From a grammatical point of view use of “architecting” as a verb or gerund is as poor as using leverage as a verb… and as far as meaning is concerned, as poor as any platitude used when knowledge of precise content and detail is lacking.

As someone who has co-authored a book called The Process of Software Architecting, I should probably declare more than a passing interest in this, and I feel that the verb 'architecting' or 'to architect' is perfectly valid. Whether it is strictly correct English or not I will leave to others far better qualified to pass judgment on. My defence of using architect as a verb is that there is a sometimes subtle difference between architecture and design (Grady Booch says "all architecture is design but not all design is architecture") and, although architects do perform elements of design, that is not all they do. I, for one, would not wish to see the two confused.

The definition of architecting we use in The Process of Software Architecting comes from the IEEE standard 1471-2000, which defines architecting as:

The activities of defining, documenting, maintaining, improving, and certifying proper implementation of an architecture.

As a related aside on whether adding 'ing' to a noun to turn it into a verb is correct English or not, it is interesting to see that the 'verbing' of nouns is picking up pace at the London Olympics, where we now seem to have 'medaling' and 'platforming' entering the English language.

Architecting Disruptive Technology Platforms

Bob Metcalfe, the founder of 3Com and co-inventor of Ethernet, has said:

Be prepared to learn how the growth of exponential and disruptive technologies will impact your industry, your company, your career and your life.

The term disruptive technology has been widely used as a synonym of disruptive innovation, but the latter is now preferred, because market disruption has been found to be a function usually not of technology itself but rather of its changing application.

Wikipedia defines a disruptive innovation (a term first coined by Clayton Christensen) as:

An innovation that helps create a new market and value network, and eventually goes on to disrupt an existing market and value network (over a few years or decades), displacing an earlier technology.

Examples of disruptive innovations (and what they have disrupted/displaced) are:

  • Digital media (CDs/DVDs)
  • Desktop publishing (traditional publishing)
  • Digital photography (chemical/film photography)
  • LCD televisions (CRT televisions)
  • Wikipedia (traditional encyclopedias)
  • Tablet computers (personal computers, maybe)

The above are all examples of technologies/innovations that have disrupted existing business models, or even whole industries. However there is another class of disruptive innovation which not only disrupts a market but creates a whole new ecosystem upon which a new industry can be built. Examples are the likes of Facebook, Twitter and iTunes. What these provide as well is a platform upon which providers, complementors, users and suppliers co-exist to support, nurture and grow the ecosystem of the platform: a disruptive technology platform (DTP). Here's a system context diagram for such a platform.

The four actors in this system context play the following roles:

  • Provider – Develops and provides the core platform. Providers need to ensure the platform exposes interfaces (that Complementors can use) and also ensure standards are defined that allow the platform to grow in a controlled way.
  • Complementor – Supplements the platform with new features, services and products that increase the value of the platform to End Users (and draw more of them in to use the platform).
  • End User – As well as performing the obvious 'using the platform' role, End Users drive demand that Complementors help fulfill. There are also likely to be more End Users if there are more Complementors providing new features. A well architected platform also allows End Users to interact with each other.
  • Supplier – Usually enters into a contract with the core platform provider to supply a known product, service or technology. Probably not innovating in the same way as a Complementor would.

If we use Facebook (the platform) as a real instance of the above, then the Provider is Facebook (the company), who have created a platform that is extensible through a well defined set of interfaces. Complementors are the many third parties who have developed new features to extend the underlying platform (e.g. Airbnb and The Guardian). End Users are, of course, the 800 million or so people who have Facebook accounts. Suppliers would be the companies who, for example, provide the hardware and software infrastructure upon which Facebook runs.
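To make the Provider/Complementor relationship concrete, here is a minimal sketch of the pattern: the provider publishes a small, stable open interface; complementors extend the platform through it; and the provider retains control over what gets admitted. All the names are hypothetical, an illustration of the pattern rather than any real platform's API:

```python
# Hypothetical sketch of the provider/complementor pattern described above.

from abc import ABC, abstractmethod

class Extension(ABC):
    """The open interface a Complementor must implement."""
    @abstractmethod
    def name(self) -> str: ...
    @abstractmethod
    def handle(self, request: str) -> str: ...

class Platform:
    def __init__(self) -> None:
        self._extensions: dict[str, Extension] = {}

    def register(self, ext: Extension) -> bool:
        # The Provider's 'contract': extensions are vetted before they
        # can add value to (or break) the platform.
        if ext.name() in self._extensions:
            return False  # reject name clashes; real vetting would do more
        self._extensions[ext.name()] = ext
        return True

    def serve(self, ext_name: str, request: str) -> str:
        return self._extensions[ext_name].handle(request)

class NewsFeed(Extension):
    """A Complementor's contribution, built against the open interface."""
    def name(self) -> str:
        return "news"
    def handle(self, request: str) -> str:
        return f"Top stories about {request}"

platform = Platform()
platform.register(NewsFeed())
print(platform.serve("news", "smarter cities"))
```

The design choice worth noting is that the Complementor never touches the platform's internals; it only sees the published interface, which is what lets the ecosystem grow without the Provider losing control.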

Of course, just because you are providing a new technology platform does not mean it will automatically be a disruptive one. Looking at the technology platforms currently out there that have disrupted, or are in the process of disrupting, businesses or whole industries, we can however see some common themes. Here are some of them (in no particular order of priority):

  • A DTP has a well defined set of open interfaces which complementors can use, possibly in ways not originally envisaged by the platform provider.
  • The DTP needs to build up a critical mass of both end users and complementors, each of which feeds off the other in a positive feedback loop so the platform grows.
  • The DTP must be both scalable and extremely robust.
  • The DTP must provide an intrinsic value which cannot be obtained elsewhere or, if it can, must give additional benefits which make users come to the DTP rather than go elsewhere. Providing music on iTunes at a low enough cost, and easily enough to obtain, that users are not tempted by free file-sharing sites is an example.
  • End users must be allowed to interact amongst themselves, again in ways that may not have been originally envisaged.
  • Complementors must be provided with the right level of contract that allows them to innovate, but without actually breaking the platform (Apple’s contract to App store developers is an example). The DTP provider needs to retain some level of control.

These are just some of the attributes I would expect a DTP to have; there must be more. Feel free to comment and provide some observations on what you think constitutes a DTP.