This is for Everyone

Twenty years ago today, on 30th April 1993, CERN published a brief statement that made World Wide Web technology available on a royalty-free basis and changed the world forever. Here’s the innocuous piece of paper that shows this and that truly allowed Tim Berners-Lee, at the fantastic London 2012 Olympics opening ceremony, to claim “this is for everyone”. Over the past twenty years the web has become embedded in all of our lives in ways which most of us could never have dreamed of, and has probably given many of us in the software industry quite a secure (and for some, lucrative) living during that time.

How fitting then that yesterday, almost 20 years to the day since CERN’s historic announcement, IBM announced a new appliance called IBM MessageSight, designed to help organizations manage and communicate with the billions of mobile devices and sensors found in systems such as automobiles, traffic management systems, smart buildings and household appliances: the so-called Internet of Things.

I’ve no idea what this announcement means in terms of capabilities, other than what is available in the press release; however, it is comforting to note that foundational to IBM MessageSight is its support of MQTT, which was recently proposed as an OASIS standard and provides a lightweight messaging transport for communication in machine-to-machine (M2M) and mobile environments. Today, more than ever, enterprises and governments are demanding compliance with open rather than proprietary standards, so it is good to see that platforms such as MessageSight will be adhering to them.
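
To make the flavour of such a lightweight publish/subscribe transport concrete, here is a minimal MQTT sketch using the open source Eclipse Paho client for Python (which is not mentioned in the announcement); the broker address, topic and payload are invented for illustration and say nothing about how MessageSight itself is configured.

    # A minimal publish/subscribe sketch using the Eclipse Paho Python client
    # (pip install paho-mqtt; the 1.x-style client API is assumed here).
    # Broker address, topic and payload are invented for illustration.
    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"        # hypothetical MQTT broker or appliance
    TOPIC = "vehicles/123/telemetry"     # hypothetical topic for one vehicle

    def on_message(client, userdata, msg):
        # Called for every message received on a subscribed topic
        print(f"{msg.topic}: {msg.payload.decode()}")

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER, 1883, keepalive=60)
    client.subscribe(TOPIC, qos=1)

    # A sensor (here, the same client for brevity) publishes a small reading
    client.publish(TOPIC, payload='{"speed_kph": 57, "fuel_pct": 62}', qos=1)

    client.loop_forever()   # process network traffic and dispatch callbacks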

A Step Too Far?

The trouble with technology, especially it seems computer technology, is that it keeps “improving”. I’ve written before about the ethics of the job that we as software architects do and whether or not we should simply accept what we do without asking questions, not least of which should be: is the technology I am building, or being asked to build, a step too far?

Three articles have caught my eye this week which have made me ponder this question again.

The first is from the technology watcher and author Nicholas Carr, who talks about the Glass Collective, an investment syndicate made up of three companies: Google Ventures, Andreessen Horowitz and Kleiner Perkins Caufield & Byers, whose collective aim is to provide seed funding to entrepreneurs in the Glass ecosystem to help jump-start their ideas.

For those not in the know about Glass it is, according to the Google blog, all about “getting technology out of the way” and has the aim of building technology that is “seamless, beautiful and empowering“. Glass’s first manifestation is to be Internet-connected glasses that take photos, record video and offer hands-free Internet access right in front of a user’s eyes.

Clearly the type of augmented reality that Glass opens up could have huge educational benefits (think of walking around a museum or art gallery and getting information on what you are looking at piped right to you as you look at different works of art) as well as very serious privacy implications. For another view on this read the excellent blog post from my IBM colleague Rick Robinson on privacy in digital cities.

In his blog post Carr refers to a quote from Marshall McLuhan, made a half century ago and now seeming quite prescient:

Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.

The next thing to catch my eye (or actually several thousand things) was around the whole sorry tale of the Boston bombings. This post in particular from the Wall Street Journal discusses the role of Boston’s so-called fusion center that “helps investigators scour for connections among potential suspects, by mining hundreds of law enforcement sources around the region, ranging from traffic violations, to jail records and criminal histories, along with public data like property records.”

Whilst I doubt anyone would question the validity of using data in this way to track down people who have committed atrocities such as we saw in Boston, it does highlight just how much data is now collected on us and about us, much of which we have no control over broadcasting to the world.

Finally, on a much lighter note, we learn that the contraceptive maker Durex has released their “long distance, sexy time fundawear“. I’ll let you watch the first live trial video of this at your leisure (warning: not entirely work safe) but let’s just say here that it adds a whole new dimension to stroking the screen on your smartphone. Whilst I guess this one has no immediate privacy issues (providing the participants don’t wear their Google Glass at the same time as playing in their fundawear, at least), it does raise some interesting questions about how much we will let technology impinge on the most intimate parts of our lives.

So where does this latest foray of mine into digital privacy take us and what conclusions, if any, can we draw? Back in 2006 IBM Fellow and Chief Scientist Jeff Jonas posted a comment on his blog called Responsible Innovation: Designing for Human Rights in which he asks two questions: what if we are creating technologies that fly in the face of the Universal Declaration of Human Rights, and what if systems are designed without the essential characteristics needed to support basic privacy and civil liberties principles?

Jeff argues that if technologies could play a role in any of the arrest, detention, exile, interference, attacks or deprivation mentioned in the Universal Declaration of Human Rights then they must support disclosure of the source upon which such invasions are predicated. He suggests that systems that could affect one’s privacy or civil liberties should have a number of design characteristics built in that allow for some level of auditability as well as ensuring the accuracy of the data they hold: characteristics such as every data point being associated with its data source, and every data point being associated with its author. Given this was written in 2006, when Facebook was only two years old and still largely confined to use in US universities, it is a hugely prescient and thoughtful piece of insight (which is why Jeff is an IBM Fellow, of course).
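
As a purely illustrative sketch of that last point (the record shape, field names and values below are my own invention, not anything Jonas prescribes), provenance can be carried alongside every stored fact:

    # Illustrative only: a record shape that keeps provenance (source and author)
    # attached to every data point so that any later use of it can be audited.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class DataPoint:
        value: str               # the asserted fact itself
        source: str              # the system or feed the fact came from
        author: str              # the person or process that asserted it
        recorded_at: datetime    # when it entered the system

    point = DataPoint(
        value="vehicle registered to J. Smith",   # invented example fact
        source="licensing_feed",                  # hypothetical source identifier
        author="batch_import_07",                 # hypothetical author identifier
        recorded_at=datetime.now(timezone.utc),
    )
    print(point.source, point.author)  # any decision based on it can cite its basis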

So, there’s an idea! New technologies, when they come along, should be examined to ensure they have built-in safeguards so that the rights granted to us all in the Universal Declaration of Human Rights are not infringed or taken away from us. How would this be done and, more importantly of course, what bodies or organisations would we empower to ensure such safeguards were both effective and enforceable? There are no easy or straightforward answers here but it is certainly a topic worth discussing, I believe.

Steal Like an Artist

David Bowie is having something of a resurgence this year. Not only has he released a critically acclaimed new album, The Next Day, there is also an exhibition of the artefacts from his long career at the Victoria & Albert museum in London. These include handwritten lyrics, original costumes, fashion, photography, film, music videos, set designs and Bowie’s own instruments.

David Bowie was a collector. Not only did he collect, he also stole. As he said in a Playboy interview back in 1976:

The only art I’ll ever study is stuff that I can steal from.

He even steals from himself; check out the cover of his new album to see what I mean.

Austin Kleon has written a whole book on this topic, Steal Like an Artist, in which he makes the case that nothing is original and that nine times out of ten when someone says that something is new, it’s just that they don’t know the original sources involved. Kleon goes on to say:

What a good artist understands is that nothing comes from nowhere. All creative work builds on what came before. Nothing is completely original.

So what on earth has this got to do with software architecture?

Eighteen years ago one of the all-time great IT books was published. Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides introduced the idea of patterns, originally a construct used by the building architect Christopher Alexander, to the IT world at large. As the authors say in the introduction to their book:

One thing expert designers know not to do is solve every problem from first principles. Rather, they reuse solutions that have worked for them in the past. When they find a good solution, they use it again and again. Such experience is part of what makes them experts.

So expert designers ‘steal’ work they have already used before. The idea of the Design Patterns book was to publish patterns that others had found to work for them so that they could be reused (or stolen). The patterns in Design Patterns were small design elements that could be used when building object-oriented software. Although they included code samples, they were not directly reusable without adaptation to, and coding in, a chosen programming language.
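
To make this concrete, here is a small sketch of one of the book’s patterns, Strategy, adapted into Python rather than the C++ and Smalltalk of the original; the pricing domain is my own invention, used only to show the shape of the pattern.

    # A minimal sketch of the GoF Strategy pattern: the pricing algorithm varies
    # independently of the Checkout code that uses it. The domain is invented.
    from abc import ABC, abstractmethod

    class PricingStrategy(ABC):
        @abstractmethod
        def price(self, base: float) -> float:
            ...

    class StandardPricing(PricingStrategy):
        def price(self, base: float) -> float:
            return base

    class SalePricing(PricingStrategy):
        def __init__(self, discount: float):
            self.discount = discount

        def price(self, base: float) -> float:
            return base * (1 - self.discount)

    class Checkout:
        """The 'context': reuses whichever strategy it is configured with."""
        def __init__(self, strategy: PricingStrategy):
            self.strategy = strategy

        def total(self, base: float) -> float:
            return self.strategy.price(base)

    print(Checkout(StandardPricing()).total(100.0))  # 100.0
    print(Checkout(SalePricing(0.2)).total(100.0))   # 80.0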

Fast forward eighteen years and the concept of patterns is alive and well but has reached a new level of abstraction and therefore reuse. Expert integrated systems like IBM’s PureApplication System use patterns to provide fast, high-quality deployments of sophisticated environments that enable enterprises to get new business applications up and running as quickly as possible. Whereas the design patterns from the book by Gamma et al were design elements that could be used to craft complete programs, the PureApplication System patterns are collections of virtual images that form a complete system. For example, the Business Process Management (BPM) pattern includes an HTTP server, a clustered pair of BPM servers, a cluster administration server, and a database server. When an administrator deploys this pattern, all the inter-connected parts are created and ready to run together. Time to deploy such systems is reduced from days, or in some cases even weeks, to just hours.
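
Purely as an illustration of that idea, and emphatically not the actual PureApplication System pattern format, a topology like the BPM example above could be described as data and then instantiated as one unit; everything below is an invented sketch.

    # Illustrative only: NOT the PureApplication System pattern format, just a
    # sketch of describing a whole topology as data and deploying it as one unit.
    bpm_pattern = {
        "name": "bpm_cluster",
        "nodes": [
            {"role": "http_server",     "image": "http_server_vm",  "count": 1},
            {"role": "bpm_server",      "image": "bpm_server_vm",   "count": 2},  # clustered pair
            {"role": "cluster_admin",   "image": "admin_server_vm", "count": 1},
            {"role": "database_server", "image": "db_server_vm",    "count": 1},
        ],
    }

    def deploy(pattern: dict) -> None:
        """Pretend deployer: in a real system each node would become a running VM."""
        for node in pattern["nodes"]:
            for i in range(node["count"]):
                print(f"provisioning {node['role']} #{i + 1} from {node['image']}")

    deploy(bpm_pattern)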

Some may say that the creation and proliferation of such patterns is another insidious step to the deskilling of our profession. If all it takes to deploy a complex BPM system is just a few mouse clicks then where does that leave those who once had to design such systems from scratch?

Going back to our art-stealing analogy, a good artist does not just steal the work of others and pass it off as their own (at least most of them don’t); rather, they use the ideas contained in that work and build on them to create something new and unique (or at least different). Rather than having to create new stuff from scratch they adopt the ideas that others have come up with, then adapt them to make their own creations. These creations can then be used by others and further adapted, and so the whole thing becomes a sort of virtuous circle of adopt and adapt.

A good architect, just like a good artist, should not fear patterns but should embrace them and know that they free him up to focus on creating something that is new and of real (business) value. Building on the good work that others have done before us is something we should all be encouraged to do more of. As Salvador Dalí said:

Those who do not want to imitate anything, produce nothing.

Happy 2013 and Welcome to the Fifth Age!

I would assert that the modern age of commercial computing began roughly 50 years ago with the introduction of the IBM 1401, announced in October 1959 as the world’s first fully transistorized computer. By the mid-1960s almost half of all computer systems in the world were 1401-type machines. During the subsequent 50 years we have gone through a number of different ages of computing, each corresponding to the major underlying architecture that was dominant during that period. The ages, with their (very) approximate time spans, are:

  • Age 1: The Mainframe Age (1960 – 1975)
  • Age 2: The Mini Computer Age (1975 – 1990)
  • Age 3: The Client-Server Age (1990 – 2000)
  • Age 4: The Internet Age (2000 – 2010)
  • Age 5: The Mobile Age (2010 – 20??)

Of course, the technologies from each age have never completely gone away; they are just no longer the predominant driving force in IT (there are still an estimated 15,000 mainframe installations world-wide, so mainframe programmers are not about to see the end of their careers any time soon). Equally, there are other technologies bubbling under the surface, running alongside and actually overlapping these major waves. For example, networking has evolved from providing the ability to connect a “green screen” to a centralised mainframe, and then to a mini, to the ability to connect thousands, then millions and now billions of devices. The client-server and internet ages were dependent on cheap and ubiquitous desktop personal computers, whilst the current mobile age is driven by offspring of the PC, now unshackled from the desktop, which run the same applications (and much, much more) on smaller and smaller devices.

These ages are also characterized by what we might term a decoupling and democratization of the technology. The mainframe age saw the huge and expensive beasts locked away in corporate headquarters and only accessible by qualified members of staff of those companies. Contrast this to the current mobile age where billions of people have devices in their pockets that are many times more powerful than the mainframe computers of the first age of computing and which allow orders of magnitude increases in connectivity and access to information.

Another defining characteristic of each of these ages is the major business uses that the technology was put to. The mainframe age was predominantly about centralised systems running companies’ core business functions that were financially worthwhile to automate or manually complex to administer (payroll, core accounting functions etc.). The mobile age is characterised by mobile enterprise application platforms (MEAPs) and apps which are cheap enough to be used just once and sometimes perform only a single function or relatively few functions.

Given that each of the ages of computing to date has run for 10 – 15 years and the current mobile age is only two years old what predictions are there for how this age might pan out and what should we, as architects, be focusing on and thinking about? As you might expect at this time of year there is no shortage of analyst reports providing all sorts of predictions for the coming year. This joint Appcelerator/IDC Q4 2012 Mobile Developer Report particularly caught my eye as it polled almost 3000 Appcelerator Titanium developers on their thoughts about what is hot in the mobile, social and cloud space. The reason it is important to look at what platforms developers are interested in is, of course, that they can make or break whether those platforms grow and survive over the long term. Microsoft Windows and Apple’s iPhone both took off because developers flocked to those platforms and developed applications for those in preference to competing platforms (anyone remember OS/2?).

As you might expect, most developers’ preference is to develop for the iOS platforms (iPhone and iPad), closely followed by Android phones and tablets, with nearly a third also developing using HTML5 (i.e. cross-platform). Windows phones and tablets are showing some increased interest but BlackBerry’s woes would seem to be increasing, with a slight drop-off in developer interest in those platforms.

Nearly all developers (88.4%) expected that they would be developing for two or more OSes during 2013. Now that consumers have an increasing number of viable platforms to choose from, the ability to build a mobile app that is available cross-platform is a must for a successful developer.

Understanding mobile platforms and how they integrate with the enterprise is one of the top skills that will be needed over the next few years as the mobile age really takes off. (Consequently it is also going to require employers to work more closely with universities to ensure those skills are obtained.)

In many ways the fifth age of computing has actually taken us back several years (pre-internet age) when developers had to support a multitude of operating systems and computer platforms. As a result many MEAP providers are investing in cross platform development tools, such as IBM’s Worklight which is also part of the IBM Mobile Foundation. This platform also adds intelligent end point management (that addresses the issues of security, complexity and BYOD policies) together with an integration framework that enables companies to rapidly connect their hybrid world of public clouds, private clouds, and on-premise applications.

For now then, at least until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in. Here’s to a complex 2013!

Disruptive Technologies, Smarter Cities and the New Oil

Last week I attended the Smart City and Government Open Data Hackathon in Birmingham, UK. The event was sponsored by IBM and my colleague Dr Rick Robinson, who writes extensively on Smarter Cities as The Urban Technologist, gave the keynote session to kick off the event. The idea of this particular hackathon was to explore ways in which various sources of open data, including the UK government’s own open data initiative, could be used in new and creative ways to improve the lives of citizens and make our cities smarter, as well as generally better places to live in. There were some great ideas discussed, including how to predict future jobs as well as identifying citizens who had not claimed benefits to which they were entitled (with those benefits then going back into the local economy through purchases of goods and services).

The phrase “data is the new oil” is by no means a new one. It was first used by Michael Palmer in 2006 in this article. Palmer says:

Data is just like crude. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.

Whilst this is a nice metaphor I think I actually prefer the slight adaptation proposed by David McCandless in his TED talk, The beauty of data visualization, where he coins the phrase “data is the new soil”. The reason being that data needs to be worked and manipulated, just like a good farmer looking after his land, to get the best out of it. In the case of the work done by McCandless this involves creatively visualizing data to show new understandings or interpretations and, as Hans Rosling says, to let the dataset change your mindset.

Certainly one way data is most definitely not like oil is in the way it is increasing at exponential rates of growth rather than rapidly diminishing. But it’s not only data. The new triumvirate of data, cloud and mobile is forging a whole new mega-trend in IT nicely captured in this equation proposed by Gabrielle Byrne at the start of this video:

e = mc(imc)²

Where:

  • e is any enterprise (or city, see later)
  • m is mobile
  • c is cloud
  • imc is in-memory computing, or stream computing: the instant analysis of masses of fast-changing data (sketched in the short example below)
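
Here is a toy sketch of that ‘imc’ ingredient: readings are analysed as they arrive, with only a small in-memory window retained, rather than being stored first and analysed later. The readings, window size and threshold are all invented.

    # Toy sketch of stream computing: each reading is analysed as it arrives,
    # keeping only a small in-memory window. Data and threshold are invented.
    from collections import deque

    WINDOW_SIZE = 5       # number of recent readings kept in memory
    ALERT_ABOVE = 80.0    # hypothetical alert threshold

    window = deque(maxlen=WINDOW_SIZE)

    def on_reading(value: float) -> None:
        """Called for each new reading; no historical store is consulted."""
        window.append(value)
        rolling_avg = sum(window) / len(window)
        if rolling_avg > ALERT_ABOVE:
            print(f"alert: rolling average {rolling_avg:.1f} exceeds {ALERT_ABOVE}")

    for reading in [72.0, 75.5, 81.2, 86.0, 90.3, 88.7]:  # simulated fast-moving feed
        on_reading(reading)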

This new trend is characterized by a number of incremental innovations that have taken place in IT over previous years in each of the three areas nicely captured in the figure below.

Source: CNET – Where IT is going: Cloud, mobile and data

In his blog post: The new architecture of smarter cities, Rick proposes that a Smarter City needs three essential ‘ingredients’ in order to be really characterized as ‘smart’. These are:

  • Smart cities are led from the top
  • Smart cities have a stakeholder forum
  • Smart cities invest in technology infrastructure

It is this last attribute that, when built on a suitable cloud-mobility-data platform, promises to fundamentally change not only enterprises but also cities and even whole nations. However, it’s not just any old platform that needs to be built. In this post I discussed the concept behind so-called disruptive technology platforms and the attributes they must have, namely:

  • A well-defined set of open interfaces.
  • A critical mass of both end users and service providers.
  • Both scalable and extremely robust.
  • An intrinsic value which cannot be obtained elsewhere.
  • Allow users to interact amongst themselves, maybe in ways that were not originally envisaged.
  • Service providers must be given the right level of contract that allows them to innovate, but without actually breaking the platform.

So what might a disruptive technology platform, for a whole city, look like and what innovations might it provide? As an example of such a platform, IBM has developed something it calls the Intelligent Operations Center, or IOC. The idea behind the IOC is to use information from a number of city agencies and departments to make smarter decisions based on rules that can be programmed into the platform. The idea, then, is that the IOC will be used to anticipate problems, to minimize the impact of disruptions to city services and operations, and to assist in the mobilization of resources across multiple agencies. The IOC allows aggregated data to be visualized in ways that the individual datasets cannot be, and new insights to be obtained from that data.
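
The announcement material does not say how such rules are expressed, but the general shape of the idea, evaluating events aggregated from several city feeds against programmed rules, can be sketched very simply; the agencies, events and rules below are invented for illustration and bear no relation to the real IOC.

    # Illustrative only: a toy 'operations centre' loop applying simple programmed
    # rules to events aggregated from several (invented) city feeds.
    events = [
        {"agency": "transport", "type": "congestion", "severity": 7, "location": "ring_road"},
        {"agency": "water",     "type": "main_burst", "severity": 9, "location": "high_st"},
        {"agency": "transport", "type": "congestion", "severity": 3, "location": "station_rd"},
    ]

    rules = [
        # (predicate over an event, recommended action)
        (lambda e: e["type"] == "main_burst",
         "dispatch repair crew and divert traffic around {location}"),
        (lambda e: e["type"] == "congestion" and e["severity"] >= 6,
         "retime traffic signals near {location}"),
    ]

    for event in events:
        for matches, action in rules:
            if matches(event):
                print(f"[{event['agency']}] {action.format(**event)}")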

Platforms like the IOC are only the start of what is possible in a truly smart city. They are just beginning to make use of mobile technology, data in the cloud and huge volumes of fast moving data that is analysed in real-time. Whether these platforms turn out to be really disruptive remains to be seen but if this is really the age of “new oil” then we only have the limitations of our imagination to restrict us in how we will use that data to give us valuable new insights into building smart cities.

Architect or Architecting?

A discussion has arisen on one of the IBM forums about whether the verb that describes what architects do (as in “to architect” or “architecting”) is valid English or not. The recommendation in the IBM word usage database has apparently always been that when you need a verb to describe what an architect does, you should use “design”, “plan” or “structure”. Needless to say this has generated quite a bit of comment (145 at the last count), including:

  • Police are policing, judges are judging, dancers are dancing, why then aren’t architects architecting?
  • Architects are not “architecting” because they design.
  •  I feel a need to defend the term ‘architecting’. Engineers do engineering, architects do architecting. We have the role of software or system architecture and the term describes what they do. There is a subtle but useful distinction between a software designer and a software architect that was identified about 30 years ago by the then IBMer Fred Brooks in his foundational text, The Mythical Man Month.
  • From a grammatical point of view use of “architecting” as a verb or gerund is as poor as using leverage as a verb… and as far as meaning is concerned, as poor as any platitude used when knowledge of precise content and detail is lacking.

As someone who has co-authored a book called The Process of Software Architecting I should probably declare more than a passing interest in this and feel that the verb ‘architecting’ or ‘to architect’ is perfectly valid. Whether it is strictly correct English or not I will leave to others far better qualified to pass judgment on. My defence of using architect as a verb is that there is a, sometimes subtle, difference between architecture and design (Grady Booch says “all architecture is design but not all design is architecture”) and although architects do perform elements of design, that is not all they do. I, for one, would not wish to see the two confused.

The definition of architecting we use in the book The Process of Software Architecting comes from the IEEE standard 1471-2000, which defines architecting as:

The activities of defining, documenting, maintaining, improving, and certifying proper implementation of an architecture.

As a related aside on whether adding ‘ing’ to a noun to turn it into a verb is correct English or not, it is interesting to see that the ‘verbing’ of nouns is picking up pace at the London Olympics, where we now seem to have ‘medaling’ and ‘platforming’ entering the English language.

Hassle Maps and Expert Integrated Systems

In his book Demand: Creating What People Love Before They Know They Want It the business thinker and management consultant Adrian Slywotzky defines the concept of a Hassle Map thus:

Hassle Map (HA-sul map) noun 1. a diagram of the characteristics of existing products, services and systems that cause people to waste time, energy, money 2. (from a customer’s perspective) a litany of the headaches, disappointments and frustrations one experiences 3. (from a demand creator’s perspective) an array of tantalising opportunities.

Documenting, either literally or mentally, the hassle map for a product, service, system or process is the first step on the way to improving it and to creating something that people will love and want. A key part of the hassle map is finding out what users of an existing product or service find most annoying and what stops them from buying it in great quantities. For Steve Jobs this was the inadequacies of existing mobile phones; for Reed Hastings, CEO of Netflix, it was the ‘hassle’ of having to walk to the video store to rent a movie (and being fined when you forgot to take it back on time); for Jeff Bezos, founder of Amazon, it was not just building an e-reader device (the Kindle) with a great interface but also one which had a massive catalogue of books that could be downloaded in ‘one-click’. The list goes on.

One way of drawing up a hassle map is to think of what the world would be like without the hassle: a sort of idealized view of life. A hassle map consists of a number of hassles and, for each, a view of what life would be like without it. I once worked with a client who was fond of using the phrase “imagine a world where…”. Well, the solution part of a hassle map is the world where that hassle no longer exists.
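
As a toy illustration of that framing (the entries are my own, not Slywotzky’s), a hassle map really needs nothing more elaborate than hassle / “imagine a world where…” pairs:

    # Invented entries, just to show the hassle -> "imagine a world where..."
    # pairing that makes up a hassle map.
    hassle_map = [
        {"hassle": "ordering hardware and software separately takes weeks",
         "imagine": "the system arrives as one package, ready to switch on"},
        {"hassle": "every environment is hand-built and slightly different",
         "imagine": "environments are stamped out from a proven pattern in minutes"},
    ]

    for entry in hassle_map:
        print(f"Hassle: {entry['hassle']}")
        print(f"Imagine a world where {entry['imagine']}\n")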

Expert integrated systems, as manifested by IBM’s PureFlex and PureApplication Systems, are an attempt at addressing the hassle maps currently felt by businesses when building IT systems. Here are 10 hassles that these systems are trying to overcome.

  • Hassle: IT is increasingly seen as a constraint on business innovation rather than an enabler. Solution: Expert integrated systems enable delivery of new capabilities faster, allowing IT resources to be moved from ‘running the business’ to ‘changing the business’.
  • Hassle: Software and hardware have to be ordered separately, taking days or weeks to arrive. Solution: The system arrives as a single integrated hardware and software package, ready to be turned on.
  • Hassle: Components arrive as a “bag of parts” requiring integration and optimization. Solution: Components are pre-installed, integrated and optimized.
  • Hassle: Specifying the deployment environment requires specialist skills and can be brittle and error prone. Solution: ‘Patterns of expertise’ capturing proven best practices for complex tasks, learned from decades of client and partner engagements, are lab tested and built into the system.
  • Hassle: Systems require time-consuming optimization by experts on site. Solution: Systems are pre-optimized by experts in the factory before shipment.
  • Hassle: Deployment takes weeks. Solution: Deployment takes minutes.
  • Hassle: Multiple management consoles, one for each middleware product. Solution: A single point of management across all middleware products.
  • Hassle: Lack of dynamic elasticity results in cumbersome re-allocation of resources. Solution: Repeatable self-service provisioning, integrated and elastic application and data runtimes, and application-aware workload management.
  • Hassle: It takes weeks or months for a development or test environment to be built, and non-standard configurations can cause errors and delay production deployments by weeks. Solution: Self-service development, test and production environments, provisioned, secured and managed in adherence to corporate policies through customizable pre-defined patterns.
  • Hassle: Upgrades involve days of downtime. Solution: Zero-downtime upgrades.

Of course, ‘hassles’ are really only high-level requirements stated in a way that business folk really care about: that is, in terms of what is causing them pain. These are the right sort of requirements, and the sort we IT folk must take most notice of if we are to build systems that solve ‘real-world’ business problems.

What Does IBM’s PureSystem Announcement Mean for Architects?

On April 11th IBM announced what it is referring to as a new category of systems: expert integrated systems. As befits a company like IBM when it makes an announcement such as this, a fair deluge of information has been made available, including this expert integrated systems blog as well as an expert integrated systems home at ibm.com.

IBM says expert integrated systems are different because of three things: built-in expertise, integration by design and a simplified experience. In other words they are more than just a static stack of software and hardware components – a server here, some database software there, serving a fixed application at the top. Instead, these systems have three unique attributes:

  • Built-in expertise. Expert integrated systems represent the collective knowledge of thousands of deployments, established best practices, innovative thinking, IT industry leadership, and the distilled expertise of solution providers, captured into the system in a deployable form, from the base system infrastructure through to the application.
  • Integrated by design. All the hardware and software components are integrated and tuned in the lab and packaged in the factory into a single ready-to-go system. All of the integration is done for you, by experts.
  • Simplified experience. Expert integrated systems are designed to make every part of the IT lifecycle easier, from the moment you start designing what you need to the time you purchase, set up, operate, maintain and upgrade the system. Expert integrated systems provide a single point of management as well as integrated monitoring and maintenance.

At launch IBM has announced two models, PureFlex System and PureApplication System. IBM PureFlex System provides a factory integrated and optimized system infrastructure with integrated systems management whilst IBM PureApplication System provides an integrated and optimized application aware platform which captures patterns of expertise as well as providing simplified management via a single management console.

For a good, detailed and independent description of the PureSystems announcement see Timothy Prickett Morgan’s article in The Register. Another interesting view, from James Governor of RedMonk, is that PureSystems are IBM’s “iPad moment”. Governor argues that just as the iPad has driven a fundamental break with the past (tablets rather than laptops or even desktops), IBM wants to do the same thing in the data center. Another similarity with the iPad is IBM’s push to have application partners running on the new boxes at launch; the PureSystems site includes a catalog of third-party apps customers can buy pre-installed.

What I’m interested in here is not so much what expert integrated systems are but what exactly the implications are for architects, specifically software architects. As Daniel Pink says in his book A Whole New Mind:

…any job that depends on routines – that can be reduced to a set of rules, or broken down into a set of repeatable steps – is at risk.

So are expert integrated systems, with built-in expertise and that are integrated by design, about to put the job of the software architect at risk?

In many ways the advent of the expert integrated system is really another step on the path of increasing levels of abstraction in computing that started when the first assembler languages did away with the need to write complex and error-prone machine language instructions in the 1950s. Since then the whole history of computing has really been about adding additional layers of abstraction on top of the raw processors of the computers themselves. Each layer has allowed the programmers of such systems to worry less about how to control the computer and more about the actual problems to be solved. As we move toward trying to solve increasingly complex business problems the focus has to be more on business than IT. Expert integrated systems therefore have the potential (and it’s early days yet) to let the software architect focus on understanding how application software components can be combined in new and interesting ways (the true purpose of a software architect in my view) to solve complex and wicked problems, rather than focusing too much on the complexities of which middleware components work with which, and how all of these work with different operating systems and computer platforms.

So, rather than being the end of the era of the software architect, I see expert integrated systems as the start of a new era, even an age of enlightenment, when we can focus on the really interesting problems rather than the tedious ones brought about by the technology we have inherited over the last six decades or so.

Why We Need STEM++ Graduates

The need for more STEM (that’s Science, Technology, Engineering and Maths) skills seems to be on the agenda more and more these days. There is a strong feeling that the so-called developed nations have depended too much on financial and other services to grow their economies and, as a result, have “lost” their ability to design, develop and manufacture goods, largely because we are not producing enough STEM graduates to do this.

Whilst I would see software as falling fairly and squarely into the STEM skillset (even if it is also used to underpin nearly all of the modern financial services industry), as this blog post by Jessica Benjamin from IBM points out, STEM skills alone won’t solve the really hard problems that are out there. With respect to the particular problems around big data Jessica succinctly says:

All the skills it takes to tell a good story, to compose a complete orchestra, are the skills it takes to put the pieces of this big data world together. If data is just data until its information, what’s a lot of information without the thought and skill of pulling all the chords together?

The need for right- as well as left-brained thinkers to solve the world’s really, really hard business problems is something that has been recognised for some time now by several prominent business leaders. Indeed the intersection of technology (left-brained) and design (right-brained) has certainly played a part in a lot of what technology companies like IBM and Apple have done, and has helped make them successful.

So we need not just STEM skills but STEM++ skills, where the addition of “righty” skills like arts, humanities and design helps us build not just a smarter world but one that is better to live in. For more on this check out my other (joint) blog, The Versatilist Way.

You’re Building Me a What?

This week I’ve been attending a cloud architecture workshop; not to architect a cloud for anyone in particular but to learn what the approach to architecting clouds should be. This being an IBM workshop there was, of course, lots of Tivoli this, WebSphere that and Power the other. Whilst the workshop was full of good advice I couldn’t help thinking of this cartoon from 2008:

Courtesy geekandpoke.typepad.com

Just replace the word ‘SOA’ with ‘cloud’ (just as ‘SOA’ could have been replaced by ‘client-server’ in the early nineties) and you get the idea. As software architects it is very easy to get seduced by technology, especially when it is new and your vendors, consultants and analysts are telling you this really is the future. However, if you cannot explain to your client why you’re building him a cloud and what business benefit it will bring him, then you are likely to fail just as much with this technology as people have with previous technology choices.