Let’s Build a Smarter Planet – Part II

This is the second part of the transcript of a lecture I recently gave at the University of Birmingham in the UK.

In Part I of this set of four posts I tried to give you a flavour of what IBM is and what it is trying to do to make our planet smarter. So, I hear you ask, what do you actually do towards this effort? Well, I’ll tell you what my job entails, but first here’s a cautionary tale. During my last year at this university (1979) I took a module in astrophysics. One day I was sitting with my tutor and he somewhat randomly asked me how much data I thought the world would ever need. Bear in mind that at this time the web, in the form that Tim Berners-Lee envisioned it, was still a good ten years away, and Facebook, Twitter and the like were even further off than that. Off the top of my head I reckoned that probably the most data you would ever expect to store would be the personal details of every person on the planet (so basic personal details plus financial details etc.) and maybe the same again for companies, government departments and other institutions. Say 100 KB of data per person and 1 GB per institution.

So, by my reckoning, assuming there were around 5 billion people on the planet back in 1980 and perhaps a million institutions, that would equate to 5 billion x 100 KB plus 1 million x 1 GB, or roughly 1.5 PB of storage, and that would have been all we needed, ever!! How wrong can you be*?

Now, incredibly, we create 2.5 quintillion (a quintillion is a ‘1’ followed by 18 zeroes) bytes of data every day. That’s over 1,600 times my ‘forever’ figure, created every single day! There can be no doubt we live in a world that is awash with data. Some commentators have said that data is the new oil, there to be exploited and commercialised in an endless number of ways. So how do we make sense of this sea of data?

So why am I telling you all of this, and what has it got to do with what I do? Here’s what I believe, as the American writer and entrepreneur Nancy Duarte puts it:

Technology is meaningless until you understand how humans use it and benefit from it.

My mistake back in 1980 was not understanding how humans would embrace technology over the coming decades and use it in ways then completely unimaginable. I was only thinking in terms of my 1980s ‘box’, where data was largely text-based and restricted to what computers were then doing with information, not what they could do with it.

And that’s the key thing about the role of a software architect, at least in IBM. It’s about helping people understand technology, and helping them use it in new and interesting ways that are beneficial to them and, hopefully, the planet. It’s about how to connect people with information.

Here’s the formal definition of an architect from the Institute of Electrical and Electronics Engineers (IEEE):

[An architect is] the person, team or organisation responsible for systems architecture.

No offence to them but pretty boring and unclear I’d say.

Here’s what Rob Daly thinks an architect is.

ar-chi-tect  \är-ke-,tekt\  n. One who believes that conception comes before erection.

He is of course right. One of the things an architect needs to ensure is that you don’t rush to the wrong decision about how to build something. A great many projects have failed because they were ill-conceived or poorly planned, sometimes with disastrous consequences, as seems to be the case with President Obama’s new healthcare insurance website.

So what do architects really do? A couple of years ago a colleague from IBM and I wrote a book on this very subject, and here is a list of the capabilities we believe architects need.

  • The architect is a technical leader: As well as having technical skills, the architect exhibits leadership qualities. Leadership can be characterized in terms of both position in the organization, and also in terms of the qualities that the architect exhibits.
  • The architect role may be fulfilled by a team: There is a difference between a role and a person. One person may fulfill many roles. Given the requirement for a very broad set of skills in an architect, it is often the case that the architect role is fulfilled by more than one person.
  • The architect understands the software development process: Most architects will have been a developer at some point and should have a good appreciation of the need to define and endorse best practices used on the project. More specifically, the architect should have an appreciation of the software development process, since it is this process that ensures that all of the members of the team work in a coordinated manner.
  • The architect has knowledge of the business domain: As well as having a grasp of software development, it is also highly desirable (some would say essential) for the architect to have an understanding of the business domain so that they can act as an intermediary between stakeholders and users who understand the business, and the development team who may be more familiar with technology.
  • The architect has technology knowledge: Certain aspects of architecting clearly require a knowledge of technology and an architect should therefore have a certain level of technology skills. However, they do not need to be technology experts as such and need only be concerned with the significant elements of a technology, and not the detail.
  • The architect has design skills: Although architecting is not confined solely to design (as we have seen, the architect is also involved in requirements tasks, for example), it is clearly the core aspect of architecting. The architecture embodies key design decisions, so the architect should possess strong design skills.
  • The architect has programming skills: The developers on the project represent one of the most important groups that the architect must interact with. After all, it is their work products that ultimately deliver the working executable software. The communication between the architect and the developers can only be effective if the architect is appreciative of the developers’ work. Therefore, the architect should have a certain level of programming skills, even if they do not necessarily write code on the project, and those skills need to be kept up to date with the technologies being used.
  • The architect is a good communicator: Of all of the “soft skills” associated with the architect, communication is the most important. There are a number of dimensions to effective communication, and the architect needs to be proficient in all of them. Specifically, the architect should have effective verbal, written and presentation skills. Also, the communication should be two-way. The architect should be a good listener and observer also.
  • The architect makes decisions: An architect that is unable to make decisions in an environment where much is unknown, where there is insufficient time to explore all alternatives, and where there is pressure to deliver, is unlikely to survive. Unfortunately, such environments are often the norm rather than the exception, and successful architects acknowledge the situation, rather than trying to change it. Even though the architect may consult others when making decisions and foster an environment where others are included in the decision-making, it is still their responsibility to make the appropriate decisions and these are not always proven to be right.
  • The architect is aware of organizational politics: Successful architects are not only concerned with technology. They are also politically astute, and are conscious of where the power in an organization resides. This knowledge is used to ensure that the right people are communicated with, and that support for their project is aired in the right circles. To ignore organizational politics is, quite simply, naïve.
  • The architect is a negotiator: Given the many dimensions of architecting, the architect interacts with many stakeholders. Some of these interactions require negotiation skills. For example, a particular focus for the architect is to minimize risk as early as possible in the project, since this has a direct correspondence to the time it takes to stabilize the architecture.

One of the things I think you’ll notice from this list is that actually very few of them are to do with technology. Sure, you need to understand and keep up with technology but you also need these other attributes as well.

Part III of this talk is here.

*As an interesting aside, the cost of 1 GB of storage in 1980 was $193,000; it’s now around $0.05.
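Those two prices make it easy to put numbers on the change. Here’s a quick back-of-the-envelope check in Python, using only the figures quoted above (the 2.5 quintillion bytes-per-day figure is the one from the main text):

```python
GB = 10**9  # decimal gigabyte, as used in storage pricing

price_1980 = 193_000.0  # USD per GB of storage in 1980 (quoted above)
price_now = 0.05        # USD per GB today (quoted above)

# How much cheaper is a gigabyte now?
ratio = price_1980 / price_now
print(f"1 GB is roughly {ratio:,.0f} times cheaper today")

# What would one day of the world's data (2.5 quintillion bytes)
# have cost to store at 1980 prices?
daily_bytes = 2.5e18
cost_1980 = (daily_bytes / GB) * price_1980
print(f"One day of data at 1980 prices: ${cost_1980:,.0f}")
```

That works out at a price drop of nearly four million times, and close to half a quadrillion dollars to store a single day’s data at 1980 prices.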

Let’s Build a Smarter Planet – Part I

This is the abridged transcript of a talk I gave at the University of Birmingham earlier this month. The talk was aimed at graduates and tried to explain why working at a company like IBM is about more than just IT. Even though it’s abridged, with embedded videos and graphics it’s still pretty long, so I’ve split it into four parts corresponding to the sections of the talk, which were:

  1. What is IBM (and why is it building a smarter planet)?
  2. What do I do at IBM?
  3. What might you do at IBM?
  4. Why does IBM need people like you?

Here’s Part I.

Hello everyone, my name is Peter Cripps. I work for IBM as a software architect and I’m here today to talk to you about how IBM is building a smarter planet, the role we play as IBM’ers in doing that and what opportunities there are for people like you to become involved.

Before we get going let me ask you a few questions:

  • Who’s used any IBM software in the last week?
  • How about this year?
  • How about ever?

Okay, that’s sort of what I expected the answer would be. I ask this question a lot when I speak to people like you and I always get a similar response. So why is it, I wonder, that a company like IBM (actually the fifth largest information technology company by revenue in 2012, ahead of Microsoft, Google and Dell, and the second largest software company in the world, behind only Microsoft) makes products that no one thinks they use?

Well, let me tell you: I can almost guarantee that most, if not all, of you will have used some IBM software over the past month or so. If you have drawn money out of a bank, browsed and bought something from an internet store, bought a plane ticket or interacted with one of the many government departments that are now online, you have unknowingly used some IBM software.

IBM software might not be the sexy stuff you use on your mobile phone or laptop (though we do some of that as well) but it is actually part of the infrastructure of many of the world’s IT systems. It’s like the plumbing in your house: you don’t necessarily see it, but you surely would miss it if it wasn’t there.

So, by the end of this talk, I hope you’ll understand a bit more about what IBM does, and that I will have piqued your interest enough that you might consider IBM when you are looking for a job. One area I’d particularly like to explore is how IBM has a “mission” to build a smarter planet. So, let’s start with this video.

Now, I don’t know if, while you were watching that video, you were reminded of the film Minority Report directed by Steven Spielberg and based on the short story by Philip K Dick written in 1956?

The film’s central theme is the question of free will versus determinism: it examines whether free will can exist if the future is set and known in advance. Other themes include the role of preventive government in protecting its citizenry, and the role of the media in a future where electronic advances make its presence nearly boundless.

There can be little doubt that computers, and more specifically software, are now so intertwined with our lives that our planet could not exist in its present form without them. Here’s a nice quote from Grady Booch, Chief Scientist and IBM Fellow, which I really like:

Software is the invisible thread and hardware is the loom on which computing weaves its fabric, a fabric that we have now draped across all of life.

This, of course, is for better or worse, which is the fourth theme I’d like to cover with you today, because I believe it’s an incredibly important one that will probably affect you more than me as you go through your working life.

I’d like to start by talking about what we mean by a smarter planet and how IBM is going about building one. First of all, though, let me give you a potted history of IBM, just so you have a bit of background about how it has got to where it is today and why building a smarter planet is so important to it.

In 1911 the American entrepreneur Charles Flint, who had interests in a number of companies including ship building, munitions and weighing machines, bought out Herman Hollerith’s Tabulating Machine Company and merged several of his companies together to form the Computing-Tabulating-Recording Company, or C-T-R, headquartered in New York. By 1914 the company was struggling, so Flint hired Thomas Watson Sr (remember that name, it will turn up again a little later) to run it. Over the following decade, Watson forged the disparate pieces of C-T-R into a unified company with a strong culture. He focused resources on the tabulating machine business, foreseeing that information technology had an ever-expanding future, and in doing so effectively created the information industry. Watson also began expanding overseas—beyond the UK, Canada and Germany where its products were already sold—taking tiny C-T-R global. By 1924 he had renamed C-T-R with the more expansive name of International Business Machines, or IBM.

Fast forward to 2011, IBM’s 100th birthday and it is now a $100 billion turnover company with over 400,000 employees worldwide operating in over 170 countries. Today IBM UK has around 20,000 employees, bringing innovative solutions to a diverse client base to help solve some of their toughest business challenges.

In order to provide such innovation to its clients it invests a huge amount in research and development: $75 billion since the turn of the century. Notice that is ‘R’ and ‘D’, not just the ‘D’ that many companies really mean when they use the term. IBM has 12 research labs around the world, including a smarter cities lab in Dublin, Ireland.

Although IBM is currently the second largest software company in the world, software actually makes up less than half of IBM’s revenue. Computer hardware, strangely enough what many people still think IBM is all about, actually accounts for less than one sixth of its revenue. Services, a business that IBM didn’t even have when I joined the company in 1987, takes the second largest share!

As further proof of the way IBM seeks to drive innovation in the industry, here’s another interesting statistic. It took IBM 53 years to receive its first 5,000 US patents. It now regularly exceeds that number every year. The way it does that is through the innovation and creativity of its people.

Here are a few of the people you may have heard of and the innovations they introduced.

The above were all recipients of the Turing Award (along with three other IBM’ers). In addition, physicists Gerd K. Binnig and Heinrich Rohrer were awarded the Nobel Prize in Physics in 1986 for the scanning tunneling microscope (STM), which was invented in 1981. The invention permitted scientists to obtain previously unseen images of silicon, nickel, oxygen, carbon and other atoms. Shown here is ‘IBM’ spelled out in individual atoms using the STM.

This is just one of five Nobel prizes IBM has been awarded. IBM Research has grown from a small lab on the campus of a major university to the largest industrial research organisation in the world. A global body of 3,000 scientists now collaborates with academics in universities around the globe, at the boundaries of information technology.

Some of IBM’s achievements extend beyond its own boundaries. Here are two notable people who have built their own global enterprises based on ideas or innovations from within IBM.
After negotiations with Digital Research failed, IBM awarded a contract to Bill Gates’ fledgling Microsoft in November 1980 to provide the operating system for the upcoming IBM Personal Computer (IBM PC).

Larry Ellison founded Oracle in 1977 on the back of the pioneering work on relational databases done by Ted Codd of IBM.
So, that’s a little bit about IBM the company; what about the smarter planet it’s trying to build?

This is the famous picture of the earth rising above the moon’s horizon, taken in December 1968 by the Apollo 8 astronauts as they orbited the moon, seven months before the first moon landing. Imagine how frustrating that was, to have got so close but not to have actually set foot on the moon?

By the way, whilst talking about the moon and Apollo did you know that IBM was instrumental in getting a man to the moon? Not only in making the computers for the mission control engineers on the ground but also some of the on-board avionics hardware and software as well. But I digress, back to a smarter planet.

When considering what we mean by a smarter planet we talk about it in terms of the so called “three I’s”:

  • Instrumented: We have the ability to measure, sense and see the exact condition of everything. We now have computers and smart sensors pretty much everywhere. It’s estimated there are 800 quintillion transistors on the planet (around 100 billion for every person alive).
  • Interconnected: People, systems and objects can communicate and interact with each other in entirely new ways.
  • Intelligent: We can respond to changes quickly and accurately, and get better results by predicting and optimizing for future events.

So how does this work in practice? Here’s an example from the field of healthcare. Newborn babies, some born before 26 weeks, are tethered to a host of medical devices that continuously measure heart rate, respiration and other vital signs, generating minute-by-minute readings of their fragile condition. Data is coming out of those machines at a rate of a thousand readings per second, and yet nurses typically take a single reading every 30 or 60 minutes! Not only that, but the data is rarely stored for more than 24 hours, which rules out the kind of retrospective analysis that supports early detection of conditions like sepsis. In 2009 IBM instigated a first-of-a-kind (FOAK) system called ‘Artemis’ that is capable of processing the 1,256 readings a second it currently receives per patient, and has the potential to provide real-time analysis to help clinicians predict potential adverse changes in an infant’s condition more quickly.
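Artemis is built on IBM’s stream-computing technology; the details are well beyond a talk like this, but the core idea (continuously analysing every reading instead of spot-checking every half hour) can be sketched in a few lines of Python. This is purely illustrative: the window size, threshold and data are invented, and bear no relation to the real clinical algorithms.

```python
from collections import deque
from statistics import mean, stdev

def flag_anomalies(readings, window=60, threshold=3.0):
    """Flag any reading that falls more than `threshold` standard
    deviations away from the mean of the trailing `window` readings."""
    recent = deque(maxlen=window)
    flagged = []
    for t, value in enumerate(readings):
        if len(recent) == recent.maxlen:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                flagged.append((t, value))
        recent.append(value)
    return flagged

# A steady heart-rate stream with one sudden transient drop.
stream = [140 + (i % 3) for i in range(100)]  # ~140 bpm with small jitter
stream[80] = 90                               # the event a spot check could miss
print(flag_anomalies(stream))                 # -> [(80, 90)]
```

A nurse sampling every 30 minutes could easily miss the transient dip; a process that sees every reading cannot.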

Here’s another example of a great innovation from IBM which we are just beginning to exploit in new and powerful ways.

IBM’s computer, code-named “Watson” (remember him) leverages leading-edge Question-Answering technology, allowing the computer to process and understand natural language. It incorporates massively parallel analytical capabilities to emulate the human mind’s ability to understand the actual meaning behind words, distinguish between relevant and irrelevant content, and ultimately, demonstrate confidence to deliver precise final answers. In February of 2011, Watson made history by not only being the first computer to compete against humans on the US television quiz show, Jeopardy!, but by achieving a landslide win over prior champions Ken Jennings and Brad Rutter. The questions on this show are full of subtlety, puns and wordplay—the sorts of things that delight humans but choke computers. “What is The Black Death of a Salesman?” is the correct response to the Jeopardy! clue, “Colorful fourteenth century plague that became a hit play by Arthur Miller.”
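One small part of that description (generating many scored candidate answers and only committing when confidence is high enough, which is also how Watson decided whether to buzz in) can be shown with a toy sketch. The candidates, scores and threshold here are invented for illustration and have nothing to do with Watson’s actual pipeline.

```python
def best_answer(candidates, threshold=0.5):
    """Return the highest-confidence candidate, but only 'buzz in'
    if that confidence clears the threshold; otherwise stay silent."""
    answer, confidence = max(candidates.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return answer, confidence
    return None, confidence

# Hypothetical merged scores from many independent evidence scorers.
candidates = {
    "The Black Death of a Salesman": 0.83,
    "The Plague": 0.41,
    "Death of a Salesman": 0.22,
}
print(best_answer(candidates))  # -> ('The Black Death of a Salesman', 0.83)
```

The interesting engineering is, of course, in producing those confidence scores from natural-language evidence in the first place.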

So what is so clever about a computer winning a quiz show…?

We’re only just at the beginning of what we can do with exploiting all of the data that we are creating. A smarter planet is one that makes sense of all this data to improve all of our lives.

Part II of this talk is here.

What is Architecture? This is Architecture

What is architecture? Here’s a nice video that describes it in a unique way. Architecture from MAYAnMAYA on Vimeo.

Tom Graves says that for him the missing element of the architecture definition is that every structure:

 …implies a story, expresses a story, provides a stage for a story. Architecture is what people do to create structure. Building is how we create structure.  But story is why people do what they do to create structure. Story is that other missing key to architecture. Or, to put it another way: Architecture is the intersection of structure and story.

Like the video says architecture is about distilling out the essence of something, whether it be a building, a software system or, in this case, a cup, and sharing what is unique about it. The sharing is the story behind the architecture. How you tell that story, whether it be in documents, pictures or presentations, is key to the essence of the architecture as well.

What Have Architects Ever Done for Us?

I’ve been thinking for a while about blogging on the topic of what value architects bring to the table in an age of open source software, commoditized hardware and agile development. I’ve finally been spurred into action by re-discovering the famous Monty Python sketch What have the Romans ever done for us? (I often find that thinking of a name for a blog post helps me to formulate the content and structure what I want to say.) Here’s the video in case you haven’t seen it.

So, picture the scene…

You are in a meeting with the chief information officer (CIO) of a public or private sector enterprise who has been tasked with aligning IT with the new business strategy to “deliver real business value”. The current hot technologies, namely social media, mobile, big data/analytics and cloud, are all being mooted as the things the organisation needs to leapfrog the competition and deliver something new and innovative to its customers. The CIO, however, has been burnt before by an architecture team that seems to spend most of its time discussing new technology, drawing fine-looking pictures that adorn their cubicle walls and attending conferences sponsored by vendors. She struggles to see the value these people bring and asks in a frustrated tone, “What have architects ever done for us?” What’s your response? Here’s what I think architects should be doing to support the CIO and help her achieve the enterprise’s goals.

  1. Architects bring order from chaos. The world of IT continues to get ever more challenging. Each new architectural paradigm adds more layers of complexity onto an organisation’s already overstretched IT infrastructure. As more technologies get thrown into this mix, often to solve immediate and pressing business problems but without being part of any overall strategic vision, IT systems begin to sink into a more and more chaotic state. One of the roles of an architect is not only to attempt to prevent this happening in the first place (see number 2) but also to describe a future “to-be” state, together with a road map for how to get to this new world. Some will say that this form of enterprise-level architecting is fundamentally flawed; however, I would argue it still has great value, provided it is done at the right level of abstraction (not everything is enterprise level) and recognises that change will be continuous and true nirvana will never be achieved.
  2. Architects don’t jump on the latest trend and forget what went before. When a new technology comes along it’s sometimes easy to forget that it’s just a new technology. Whilst the impact on end users may be different, the way enterprises go about integrating that technology into their business, still needs to follow tried and tested methods. Remember, don’t throw out the baby with the bath water.
  3. Architects focus on business value rather than latest technology. Technologies come and go, some change the world, some don’t. Unless technology can provide some tangible benefits to the way a business operates it is unlikely to gain a foothold. Architects know that identifying the business value of technology and realising that value through robust solutions built on the technology is what is key. Technology for the sake of technology no longer works (and probably never did).
  4. Architects know how to apply technology to bring innovation. This is subtly different from number 3. This is about not just using technology to provide incremental improvements in the way a business operates, but using technology to provide disruptive innovation that causes a major shift in the way a business operates. Such disruptions often cause some businesses to disappear but at the same time can cause others to be created.
  5. Architects know the importance of “shipping”. According to Steve Jobs, “real artists ship”. Delivering something (anything) on time and within budget is one of the great challenges of software development. Time or money (or both) usually run out before anything is delivered. Good architects know the importance of working within the constraints of time and money and work with project managers to ensure shipping takes place on time and within budget.

So there you have it, my take on the value of architects and what you hopefully do for your organisation or clients. Now, if only we could do something about bringing world peace…

More on Architectural Granularity

I published this post on architectural granularity just over two years ago and have been made aware of a research paper published on this topic last year: Weber, M. and Wondrak, C. (2012), ‘Measuring the Influence of Project Characteristics on Optimal Software Project Granularity’, in Proceedings of the 20th European Conference on Information Systems (ECIS). The people responsible for the paper are working on a web-based service to make the tools from the project available to software architects. You can find a preview/demo here.

The Future of Software (Architecture)

If you hang out on the internet for long enough you begin to pick up memes about what the future (of virtually anything) might be like. Bearing in mind this cautionary quote from Jay Rosen:

Nothing is the future of anything. And so every one of your “No, X is not the future of Y” articles is as witless as the originating hype.

Here is a meme I am detecting on the future of software, based on a number of factors, each of which has two opposing forces, either one of which may win out. Individually, none of these factors is particularly new; together, however, they may well be part of a perfect storm that is about to mark a radical shift in our industry. First, here are the five factors.

Peer to peer rather than hierarchical.
Chris Ames over at 8bit recently posted this article on The Future of Software which really caught my attention. Once upon a time (i.e. prior to about 2006) software was essentially delivered as packages. You either took the whole enchilada or none of it at all. You bought, for example, the Microsoft Office suite and all its components, and they all played together in the way Microsoft demanded. There was a hierarchy within that suite that allowed all the parts to work together and provided a common look and feel, but it was essentially a monolithic package: you bought all the features even though you might only use 10% of them. As Ames points out, however, the future is loosely coupled, with specialty components (you can call them ‘apps’ if you wish) providing relatively simple functions (that is, doing one thing but doing it really, really well) through well-defined programming interfaces.

This is really peer-to-peer. Tasks are partitioned between the applications and no one application has any greater privilege than another. More significantly, different providers can develop and distribute these applications, so no single vendor dominates the market. This also allows newer and smaller specialist companies to build and develop such components/applications.
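A minimal sketch of that idea in Python: each ‘app’ does one thing, all share the same well-defined interface, and none is privileged over any other, so any of them could be swapped for a rival vendor’s component. The components themselves are invented examples.

```python
from typing import Callable

# The shared, well-defined interface: a function from text to text.
Component = Callable[[str], str]

def spell_check(text: str) -> str:
    """One job: fix a (single, hard-coded) typo."""
    return text.replace("teh", "the")

def shout(text: str) -> str:
    """One job: convert to upper case."""
    return text.upper()

def compose(*components: Component) -> Component:
    """Wire independent peer components into a pipeline."""
    def pipeline(text: str) -> str:
        for component in components:
            text = component(text)
        return text
    return pipeline

editor = compose(spell_check, shout)
print(editor("teh future is loosely coupled"))  # -> THE FUTURE IS LOOSELY COUPLED
```

Because nothing here knows about anything beyond the shared interface, reordering, replacing or adding components is trivial, which is exactly the property the packaged suite lacked.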

A craft rather than an engineering discipline.
Since I started in this industry back in the early 80s (and probably way before that) the argument has raged as to whether the ‘industry’ of software development is an engineering discipline or a craft (or even an art). When I was at university (late 70s) software engineering did not exist as a field of study in any significant way and computer science was in its infancy. Whilst the industry has gone through several reincarnations over the years, and multiple attempts at enforcing an engineering discipline, the somewhat chaotic nature of the internet, where (let’s face it) a lot of software development is done, makes it feel more like a craft than ever before. This is fed by the fact that essentially anyone with a computer and a good idea (and a lot of stamina) can, it seems, set up shop and make, if not a fortune, at least a fairly decent living.

Free rather than paid for.
Probably the greatest threat any of us feels at present (at least those of us who work in creating digitised information in one form or another) is that it seems someone, somewhere is prepared to do what you do, if not for free, then at least for a very small amount of money. Open source software is a good example. At the point of procurement (PoP) it is ‘free’ (and yes, I know total cost of ownership is another matter), something that many organisations are recognising and taking advantage of, including the UK government, whose stated strategy is to use open source code by default. At the same time many new companies, funded by venture capitalists hungry to find the next Facebook or Twitter, are pouring money (mainly dollars) into new ventures which, at least on the face of it, offer a plethora of new capabilities for nothing. Check out these guys if you want a summary of the services/capabilities out there and how to join them all together.

Distributed in the cloud rather than packaged as an application.
I wanted to try not to get into talking about technology here, especially the so-called SMAC (Social, Mobile, Analytics and Cloud) paradigm. Cloud, both as a technology and a concept, is too important to ignore for these purposes, however. In many ways software, at least software distribution, has come full circle with the advent of cloud. Once, all software was only available in a centrally managed data centre. The personal computer temporarily set it free, but cloud has now brought software back under some form of central control, albeit in a much more widely available form. Of course there are lots of questions around cloud computing still (Is it secure? What happens if the cloud provider goes out of business? What happens if my data gets into the wrong hands?). I think cloud is a technology that is here to stay and definitely a part of my meme.

Developed in a cooperative, agile way rather than a collaborative, process driven (waterfall) one.
Here’s a nice quote from the web anthropologist, futurist and author Stowe Boyd that perfectly captures this: 

“In the collaborative business, people affiliate with coworkers around shared business culture and an approved strategic plan to which they subordinate their personal aims. But in a cooperative business, people affiliate with coworkers around a shared business ethos, and each is pursuing their own personal aims to which they subordinate business strategy. So, cooperatives are first and foremost organized around cooperation as a set of principles that circumscribe the nature of loose connection, while collaboratives are organized around belonging to a collective, based on tight connection. Loose, laissez-faire rules like ‘First, do no harm’, ‘Do unto others’, and ‘Hear everyone’s opinion before binding commitments’ are the sort of rules (unsurprisingly) that define the ethos of cooperative work, and which come before the needs and ends of any specific project.”

Check out the Valve model as an example of a cooperative.

So, if you were thinking of starting (or re-starting) a career in software (engineering, development, architecture, etc) what does this meme mean to you? Here are a few thoughts:

  • Whilst we should not write off the large software product vendors and package suppliers, there is no doubt they are going to be in for a rough ride over the next few years whilst they adjust their business models to take into account the pressures from open source and the distribution mechanism brought on by the cloud. If you want to be a part of solving those conundrums then spending time working for such companies will be an “interesting” place to be.
  • If you like the idea of cooperation and the concept of a shared business ethos, rather than collaboration, then searching out, or better still starting, a company that adheres to such a model might be the way to go. It will be interesting to see how well the concept of the cooperative scales though.
  • It seems as if we are finally nearing the nirvana of true componentisation (that is, software as units of functionality with a well-defined interface). The promised simplicity that can be offered through APIs, as shown in the diagram above, is pretty much with us (remember the components shown in that diagram are all from different vendors). Individuals as well as small-medium enterprises (SMEs) that can take advantage of this paradigm are set to benefit from a new component gold rush.
  • On the craft/art versus engineering discipline of software, the need for systems that are highly resilient and available 99.9999% of the time has never been greater as these systems run more and more of our lives. Traditionally these have been systems developed by core teams following well-tried and tested engineering disciplines. However, as the internet has enabled teams to become more distributed, where such systems are developed becomes less of an issue and these two paradigms need not be mutually exclusive. Open source software is clearly a good example of applications that are developed locally but to high degrees of ‘workmanship’. The problem with such software tends not to be in initial development but in ongoing maintenance and upgrades, which is where larger companies can step in and provide that extra level of assurance.
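To give a feel for what a “99.9999%” availability target actually demands, here is a back-of-the-envelope sketch in Python (simple arithmetic only; the availability figures are illustrative, not from any particular vendor's service-level agreement):

```python
# Convert an availability percentage into the downtime per year it permits.
# Illustrative arithmetic only.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # roughly 31.56 million seconds

def allowed_downtime_seconds(availability_percent: float) -> float:
    """Seconds of downtime per year permitted at a given availability level."""
    return SECONDS_PER_YEAR * (1 - availability_percent / 100)

for target in (99.9, 99.99, 99.9999):
    print(f"{target}% availability allows "
          f"{allowed_downtime_seconds(target):,.1f} seconds of downtime per year")
```

Six nines works out at barely half a minute of downtime a year, which is why such systems have traditionally demanded rigorous engineering discipline.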

If these five factors really are part of a meme (that is a cultural unit for carrying ideas) then I expect it to evolve and mutate. Please feel free to be involved in that evolutionary process.

Yet More Architecting

Previously I have discussed the use of the word ‘architecting’ and whether it is a valid word for describing the thing that architects do.

One of the people who commented on that blog entry informed me that the IEEE have updated the architecture standard IEEE-1471, which describes the architecture of a software-intensive system, to ISO/IEC/IEEE 42010:2011, Systems and software engineering — Architecture description. They have also slightly updated the definition of the word architecting to:

The process of conceiving, defining, expressing, documenting, communicating, certifying proper implementation of, maintaining and improving an architecture throughout a system’s life cycle (i.e., “designing”).

Interesting that they have added that last bit in brackets: i.e., “designing”. I always fall back on the words used by Grady Booch to resolve that other ongoing discussion, about whether the word architecture is valid at all in describing what we do: “all architecture is design but not all design is architecture”.

Ten Things Users Don’t Care About

I recently came across a blog post called Things users don’t care about at the interface and product design blog bokardo. It struck me that this was the basis of a good list of things end users of the systems that we architect may also not care about, and might therefore help us focus on the things that matter in a system development project. Here then, is my list of ten things users don’t (or shouldn’t) care about:

  1. How long you spent on it. Of course, if you spent so long that you didn’t actually deliver anything, that is another problem. However, users still won’t care; it’s just that they’ll never know they are missing something (unless your competitor beat you to it).
  2. How hard it was to implement. You may be immensely proud of how your team overcame tremendous odds to solve that really tricky programming problem. However all users are concerned about is whether the thing actually works and makes their lives a little easier. Sometimes just good enough is all that is required.
  3. How clean your architecture is. As architects we strive for purity in our designs and love to follow good architectural principles. Of course these are important, because good architectural practice usually leads to more robust and resilient systems. The message here is not to go overboard on this, and not to strive for architectural purity at the expense of the ability to ship something.
  4. How extensible it is. Extensibility (and here we can add a number of other non-runtime qualities such as scalability, portability, testability etc.) is often something we sweat a lot over when designing a system. These are things that are important to people who need to maintain and run the system, but not to end users, who just want to use the system to get their job done and go home at a reasonable time! Although we might like to place great emphasis on the longevity of our systems (which these qualities often ensure), sometimes technology just marches on and makes these systems redundant before they ever get the chance to be upgraded. The message here is that although these qualities are important, they need to be put into the broader perspective of the likely lifetime of the systems we are building.
  5. How amazing the next version will be. Ah yes, there will always be another version that really makes life easier and does what was probably promised in the first place! The fact is there will be no “next version” if version 1.0 does not do enough to win over hearts and minds (which actually does not always have to be that much).
  6. What you think they should be interested in. As designers of systems we often give users what we think they would be interested in rather than what they actually want. Be careful here, you have to be very lucky or very prescient or like the late Steve Jobs to make this work.
  7. How important this is to you. Remember all those sleepless nights you spent worrying over that design problem that would not go away? Well, guess what, once the system is out there no one cares how important that was to you. See item 2.
  8. What development process you followed. The best development process is the one that ships your product in a reasonably timely fashion and within the budget that was set for the project. How agile it is or what documents do or don’t get written does not matter to the humble user.
  9. How much money was spent in development. Your boss, your company or your client cares very much about this, but the financial cost of a system is something that users don’t see and most of the time could not possibly comprehend. Spend your time wisely in focusing on what will make a difference to the user’s experience and let someone else sweat the financial stuff.
  10. The prima donna(s) who worked on the project. Most of us have worked with such people. The ones who, having worked on one or two successful projects, think they are ready to project-manage the next moon landing, or design the system that will solve world hunger, or can turn out code faster than Mark Zuckerberg on steroids. What’s important on a project is team effort, not individuals with overly-inflated egos. Make use of these folk when you can, but don’t let them overpower the others and destroy team morale.

Happy 2013 and Welcome to the Fifth Age!

I would assert that the modern age of commercial computing began roughly 50 years ago with the introduction of the IBM 1401, one of the world’s first fully transistorized computers, announced in October 1959. By the mid-1960s almost half of all computer systems in the world were 1401-type machines. During the subsequent 50 years we have gone through a number of different ages of computing, each corresponding to the major underlying architecture that was dominant during that period. The ages, with their (very) approximate time spans, are:

  • Age 1: The Mainframe Age (1960 – 1975)
  • Age 2: The Mini Computer Age (1975 – 1990)
  • Age 3: The Client-Server Age (1990 – 2000)
  • Age 4: The Internet Age (2000 – 2010)
  • Age 5: The Mobile Age (2010 – 20??)

Of course, the technologies from each age have never completely gone away; they are just not the predominant driving IT force any more (there are still estimated to be some 15,000 mainframe installations world-wide, so mainframe programmers are not about to see the end of their careers any time soon). Equally, there are other technologies bubbling under the surface, running alongside and actually overlapping these major waves. For example, networking has evolved from providing the ability to connect a “green screen” to a centralised mainframe, and then a mini, to the ability to connect thousands, then millions and now billions of devices. The client-server age and internet age were dependent on cheap and ubiquitous desktop personal computers, whilst the current mobile age is driven by offspring of the PC, now unshackled from the desktop, which run the same applications (and much, much more) on smaller and smaller devices.

These ages are also characterized by what we might term a decoupling and democratization of the technology. The mainframe age saw the huge and expensive beasts locked away in corporate headquarters and only accessible by qualified members of staff of those companies. Contrast this to the current mobile age where billions of people have devices in their pockets that are many times more powerful than the mainframe computers of the first age of computing and which allow orders of magnitude increases in connectivity and access to information.

Another defining characteristic of each of these ages is the major business uses the technology was put to. The mainframe age was predominantly about centralised systems running companies’ core business functions that were financially worthwhile to automate or manually complex to administer (payroll, core accounting functions etc). The mobile age is characterised by mobile enterprise application platforms (MEAPs) and apps which are cheap enough to be used just once and sometimes perform a single, or relatively small number of, functions.

Given that each of the ages of computing to date has run for 10 – 15 years and the current mobile age is only two years old what predictions are there for how this age might pan out and what should we, as architects, be focusing on and thinking about? As you might expect at this time of year there is no shortage of analyst reports providing all sorts of predictions for the coming year. This joint Appcelerator/IDC Q4 2012 Mobile Developer Report particularly caught my eye as it polled almost 3000 Appcelerator Titanium developers on their thoughts about what is hot in the mobile, social and cloud space. The reason it is important to look at what platforms developers are interested in is, of course, that they can make or break whether those platforms grow and survive over the long term. Microsoft Windows and Apple’s iPhone both took off because developers flocked to those platforms and developed applications for those in preference to competing platforms (anyone remember OS/2?).

As you might expect, most developers’ preferences are to develop for the iOS platforms (iPhone and iPad), closely followed by Android phones and tablets, with nearly a third also developing using HTML5 (i.e. cross-platform). Windows phones and tablets are showing some increased interest, but BlackBerry’s woes would seem to be increasing, with a slight drop-off in developer interest in those platforms.

Nearly all developers (88.4%) expected that they would be developing for two or more OSes during 2013. Now that consumers have an increasing number of viable platforms to choose from, the ability to build a mobile app that is available cross-platform is a must for a successful developer.

Understanding mobile platforms and how they integrate with the enterprise is one of the top skills that will be needed over the next few years as the mobile age really takes off. (Consequently it is also going to require employers to work more closely with universities to ensure those skills are obtained.)

In many ways the fifth age of computing has actually taken us back several years (pre-internet age) when developers had to support a multitude of operating systems and computer platforms. As a result many MEAP providers are investing in cross platform development tools, such as IBM’s Worklight which is also part of the IBM Mobile Foundation. This platform also adds intelligent end point management (that addresses the issues of security, complexity and BYOD policies) together with an integration framework that enables companies to rapidly connect their hybrid world of public clouds, private clouds, and on-premise applications.

For now then, at least until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in. Here’s to a complex 2013!

Do Smarter Cities Make their Citizens Smarter?

This is an update of a post I originally put on my blog The Versatilist Way. I’ve removed the reference to the now discredited book by Jonah Lehrer (though I believe the basic premise of what he was saying about “urban friction” remains true) and added some additional references to research by the clinical psychologist Professor Ian Robertson. My IBM colleague, Dr. Rick Robinson, blogs here as The Urban Technologist, where he writes about emergent technology and smarter cities. This particular post from Rick, called Digital Platforms for Smarter City Market-Making, discusses how encouraging organic growth of small to medium enterprises (SMEs) in cities not only helps with the economic revival of some of our run-down inner city areas, but also means those SMEs are less likely to up roots and move to another area when better tax or other incentives are on offer. As Rick says:

By building clusters of companies providing related products and services with strong input/output linkages, cities can create economies that are more deeply rooted in their locality.

Examples include Birmingham’s Jewellery Quarter, which has a cluster of designers, manufacturers and retailers who also work with Birmingham City University’s School of Jewellery and Horology. Linkage with local colleges and universities is another way of reinforcing the locality of SMEs. Of course, just because we classify an enterprise as being ‘small to medium’ or ‘local’ does not mean that, thanks to the internet, it cannot have a global reach. These days even small ‘mom and pop’ businesses can be both local and global.

Another example of generating organic growth is the so called Silicon Roundabout area of Shoreditch, Hoxton and Old Street in London which now counts some 3,200 firms and over 48,000 jobs. See here for a Demos report on this called A Tale of Tech City.

Clearly, generating growth in our cities as a way of improving both the economy and the general livelihoods of their citizens should be considered a good thing, especially if that growth can be in new business areas which help to replace our dying manufacturing industries and reduce our dependency on the somewhat ‘toxic’ financial services industry. However, it turns out that encouraging this kind of clustering of people also has a positive feedback effect, which means that groups of people together achieve more than just the sum of all the individuals.

In 2007 the British theoretical physicist Geoffrey West and colleagues published a paper called Growth, innovation, scaling, and the pace of life in cities. The paper described the results of an analysis of a huge amount of urban data from cities around the world. The data included everything from the number of coffee shops in urban areas, personal income and number of murders to the walking speed of pedestrians. West and his team analysed all of this data and discovered that the rhythm of cities could be described by a few simple equations – the equivalent of Newton’s laws of motion for cities, if you like. These laws can be used to predict the behaviour of our cities. One of the equations West and his team discovered concerned the measurement of socioeconomic variables such as the number of patents, per-capita income etc. It turns out that any such variable scales with population to an exponent of about 1.15. In other words, moving to a city of 1 million inhabitants results, on average, in 15% more patents, 15% more income etc than living in a city of five hundred thousand. This phenomenon is referred to as “superlinear scaling” – as cities get bigger, everything starts to accelerate. This applies to any city, anywhere in the world, from Manhattan to London to Hong Kong to Sydney.
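The superlinear scaling law can be sketched numerically with a few lines of Python. The 1.15 exponent comes from West’s paper; the constant of proportionality and the example populations below are assumptions chosen purely for illustration:

```python
# Sketch of superlinear scaling: a socioeconomic quantity Y grows with
# city population N as Y = c * N**beta, with beta ~ 1.15 (West et al., 2007).
# The constant c and the populations are illustrative assumptions.

BETA = 1.15

def scaled_output(population: int, c: float = 1.0) -> float:
    """Total output of a socioeconomic variable under the power law."""
    return c * population ** BETA

small, big = 500_000, 1_000_000
ratio_total = scaled_output(big) / scaled_output(small)   # effect on totals
ratio_per_capita = ratio_total / (big / small)            # effect per person

print(f"doubling the population multiplies total output by {ratio_total:.2f}")
print(f"and per-capita output by {ratio_per_capita:.2f}")
```

Note that with a fixed exponent of 1.15, doubling the population multiplies totals by 2^1.15 ≈ 2.22; the often-quoted “15%” figure comes directly from the excess exponent, β − 1 = 0.15, which translates to roughly an 11% per-capita gain per doubling.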

So what is it about cities that appears to make their citizens smarter the bigger they grow? More to the point what do we mean by a smarter city in this context?

IBM defines a smarter city as one that:

Makes optimal use of all the interconnected information available today to better understand and control its operations and optimize the use of limited resources.

Whilst it would seem to make sense that an optimised and better connected (city) infrastructure, ensuring information flows freely and efficiently, would make such cities work better and improve the use of limited resources, could it also enable the citizens themselves to be more creative? In other words, do smarter cities produce smarter citizens? Some research by the clinical psychologist Professor Ian Robertson indicates that not only might this be the case but, more intriguingly, that citizens who live in vibrant and culturally diverse cities might actually live longer.

In this blog post Professor Robertson suggests that humming metropolises like New York, London or Sydney, through what he refers to as the three E’s, provide their citizens with stimulation that affects the chemistry of the brain, making them smarter as well as reducing their chances of developing ageing diseases like Alzheimer’s. These three E’s are:

  • Excitation. The constant novelty that big cities provide, whether in the construction of the next architecturally significant building or a new theatre production or art gallery show, creates a stimulating environment which has been shown to develop better memory and even lead to the growth of new brain cells.
  • Expectation. When there is a mix of cultures and ages it seems that older people don’t think themselves old; instead they seem to discard the preconceived notions of what people of a certain age are supposed to do and act like people of a much younger age.
  • Empowerment. By definition people who stay or live in cities tend to be wealthier. Again research has shown that money, power and success change brain functions and make people mentally sharper, more motivated and bolder.

If this is correct, and the three E’s found in big cities really do make us smarter and help us live longer, then the challenge of this century must be to make our cities smarter, so they can sustain bigger populations living healthy and productive lives, which can in turn have a positive feedback effect on the cities themselves. Maybe there really is a reason to love our cities after all?