What Have We Learnt from Ten Years of the iPhone?

Ten years ago this week (on 9th January 2007) the late Steve Jobs, then at the height of his powers at Apple, introduced the iPhone to an unsuspecting world. The history of that little device (which has got both smaller and bigger in the intervening ten years) is writ large over the entire Internet so I’m not going to repeat it here. However, it’s worth watching the above video on YouTube, not just to remind yourself what a monumental moment in tech history this was, even though few of us realised it at the time, but also to see a masterclass in how to launch a new product.

Within two minutes of Jobs walking on stage he has the audience shouting and cheering as if he’s a rock star rather than a CEO. At around 16:25, when he unveils his new baby and shows for the first time how to scroll through a list on a screen (hard to believe that ten years ago no one knew this was possible), they are practically eating out of his hand and he still has over an hour to go!

This iPhone keynote, probably one of the most important in the whole of tech history, is a case study in how to deliver a great presentation. Indeed, Nancy Duarte, in her book Resonate, uses it as one of her case studies for how to “present visual stories that transform audiences”. In the book she analyses the whole event to show how Jobs uses all of the classic techniques of storytelling: establish what is and what could be, build suspense, keep your audience engaged, make them marvel and finally show them a new bliss.

The iPhone product launch, though hugely important, is not what this post is about. Rather, it’s about how, ten years later, the iPhone has kept pace with innovations in technology to not only remain relevant (and much copied) but also continue to influence (for better and worse) the way people interact, communicate and indeed live. A number of enabling ideas and technologies, some introduced at launch and some added since, have made this possible. What are they, what can we learn from the example set by Apple, and how can we improve on them?

Open systems generally beat closed systems

At its launch the iPhone came with a small set of native apps created by Apple, and the means of making them was not available to third-party developers. According to Jobs, it was an issue of security. “You don’t want your phone to be an open platform,” he said. “You don’t want it to not work because one of the apps you loaded that morning screwed it up. Cingular doesn’t want to see their West Coast network go down because of some app. This thing is more like an iPod than it is a computer in that sense.”

Jobs soon went back on that decision, which is one of the factors that has led to the overwhelming success of the device. There are now 2.2 million apps available for download in the App Store, with over 140 billion downloads made since 2007.

As has been shown time and time again, opening systems up and allowing access to third-party developers nearly always beats keeping systems closed and locked down.

Open systems need easy to use ecosystems

Claiming your system is open does not mean developers will flock to extend it unless doing so is both easy and potentially profitable. Further, the second of these is unlikely to happen unless the first enabler is put in place.

Today, with new systems being built around cognitive computing, the Internet of Things (IoT) and blockchain, companies both large and small are vying with each other to provide easy to use but secure ecosystems that allow these new technologies to flourish and grow, hopefully to the benefit of business and society as a whole. There will be casualties along the way, but this competition, and the recognition that systems need to be built right rather than simply being the right system for the time, is what matters.

Open systems must not mean insecure systems

One of the reasons Jobs gave for not initially making the iPhone an open platform was his concern over security and the potential for hackers to break into those systems and wreak havoc. These concerns have not gone away; they have become even more prominent. IoT and artificial intelligence, when embedded in everyday objects like cars and kitchen appliances, as well as in our logistics and defence systems, have the potential to cause their own unique and potentially disastrous type of destruction.

The cost of a single data breach alone is estimated at $3.8 to $4 million, and that’s without even considering the wider reputational loss companies face. Organisations need to monitor how security threats are evolving year to year and get well-informed insights about the impact they can have on their business and reputation.

Ethics matter too

With all the recent press coverage of how fake news may have affected the US election, and may yet impact the upcoming German and French elections, as well as the implications of driverless cars making life and death decisions for us, the ethics of cognitive computing is becoming an ever more serious topic for public discussion, and potentially for government intervention.

In October last year the White House released a report called Preparing for the Future of Artificial Intelligence. The report looked at the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy, and it made a number of recommendations for further action. These included:

  • Prioritising open training data and open data standards in AI.
  • Industry working with government to keep it updated on the general progress of AI, including the likelihood of milestones being reached.
  • The Federal government prioritising basic and long-term AI research.

Partly in answer to the White House report, this week a group of private investors, including LinkedIn co-founder Reid Hoffman and eBay founder Pierre Omidyar, launched a $27 million research fund called the Ethics and Governance of Artificial Intelligence Fund. The group’s purpose is to foster the development of artificial intelligence for social good by approaching technological developments with input from a diverse set of viewpoints, such as policymakers, faith leaders and economists.

I have discussed before how transformative technologies like the world wide web have impacted all of our lives, and not always for the good. I hope that initiatives like that of the US government (which will hopefully continue under the new administration) will enable a good and rational public discourse on how we allow these new systems to shape our lives for the next ten years and beyond.

Tech: The Missing Generation

I’ve recently been spending a fair bit of time in hospital. Not, thankfully, for myself but with my mother, who fell and broke her arm a few weeks back, which has resulted in lots of visits to our local Accident & Emergency (A&E) department as well as a short stay in hospital whilst they pinned her arm back in place.

An elderly gentleman walks past an NHS hospital sign in London. Photograph: Cate Gillon/Getty Images

Anyone who knows anything about the UK also knows how much we value our National Health Service (NHS). So much so that when it was our turn to run the Olympic Games back in 2012, Danny Boyle’s magnificent opening ceremony dedicated a whole segment to this wonderful institution, featuring doctors, nurses and patients dancing around beds to music from Mike Oldfield’s Tubular Bells.

Olympic Opening Ceremony NHS segment. Picture courtesy of the International Business Times

The NHS was created out of the ideal that good healthcare should be available to all, regardless of wealth. When it was launched by the then minister of health, Aneurin Bevan, on July 5 1948, it was based on three core principles:

  • that it meet the needs of everyone
  • that it be free at the point of delivery
  • that it be based on clinical need, not ability to pay

These three principles have guided the development of the NHS over more than 60 years, remain at its core and are embodied in its constitution.

NHS Constitution logo

All of this, of course, costs:

  • NHS net expenditure (resource plus capital, minus depreciation) has increased from £64.173 billion in 2003/04 to £113.300bn in 2014/15. Planned expenditure for 2015/16 is £116.574bn.
  • Health expenditure (medical services, health research, central and other health services) per capita in England has risen from £1,841 in 2009/10 to £1,994 in 2013/14.
  • The NHS net deficit for the 2014/15 financial year was £471 million (a £372m underspend by commissioners and an £843m deficit for trusts and foundation trusts).
  • Current expenditure per capita for the UK was $3,235 in 2013. This can be compared to $8,713 in the USA, $5,131 in the Netherlands, $4,819 in Germany, $4,553 in Denmark, $4,351 in Canada, $4,124 in France and $3,077 in Italy.

The NHS also happens to be the largest employer in the UK. In 2014 the NHS employed 150,273 doctors, 377,191 qualified nursing staff, 155,960 qualified scientific, therapeutic and technical staff and 37,078 managers.

So does it work?

From my recent experience I can honestly say yes. Whilst it may not be the most efficient service in the world, the doctors and nurses managed to fix my mother’s arm and hopefully set her on the road to recovery. There have been, and I’m sure there will be more, setbacks but given her age (she is 90) they have done an amazing job.

Whilst sitting in those A&E departments whiling away the hours (I did say they could be more efficient) I had plenty of time to observe and think. By its very nature the health service is hugely people-intensive. Whilst there is an amazing array of machines beeping and chirping away, most activities require people, and people cost money.

The UK’s health service, like that of nearly all Western countries, is under a huge amount of pressure:

  • The UK population is projected to increase from an estimated 63.7 million in mid-2012 to 67.13 million by 2020 and 71.04 million by 2030.
  • The UK population is expected to continue ageing, with the average age rising from 39.7 in 2012 to 42.8 by 2037.
  • The number of people aged 65 and over is projected to increase from 10.84m in 2012 to 17.79m by 2037. The number of over-85s is estimated to more than double from 1.44 million in 2012 to 3.64 million by 2037.
  • The number of people of State Pension Age (SPA) in the UK exceeded the number of children for the first time in 2007 and by 2012 the disparity had reached 0.5 million (though this is projected to reverse).
  • There are an estimated 3.2 million people with diabetes in the UK (2013). This is predicted to reach 4 million by 2025.
  • In England the proportion of men classified as obese increased from 13.2 per cent in 1993 to 26.0 per cent in 2013 (peaking at 26.2 per cent in 2010), and from 16.4 per cent to 23.8 per cent for women over the same period (peaking at 26.1 per cent in 2010).

The doctors and nurses who looked after my mum so well are going to come under increasing pressure as this ageing and less healthy population begins to suck ever more resources out of an already stretched system. So why, given the passion everyone has for the NHS, isn’t there more of a focus on getting technology to ease the burden on these overworked healthcare providers?

Part of the problem of course is that historically the tech industry hasn’t exactly covered itself in glory when it comes to delivering technology to the healthcare sector (the NHS National Programme for IT and the US HealthCare.gov system being two high-profile examples). Whilst some of this may be due to the blunders of government, much of it is down to a combination of factors on both sides: mis-communication between the providers and consumers of healthcare IT, and a failure to understand the real requirements that such complex systems tend to have.

In her essay How to Build the Next Unicorn in Healthcare, the entrepreneur Yasi Baiani sets out six tactical tips for how to build a unicorn* digital startup. In summary these are:

  1. Understand the current system.
  2. Know your customers.
  3. Have product hooks.
  4. Have a clear monetization strategy and understand your customers’ willingness-to-pay.
  5. Know the rules and regulations.
  6. Figure out what your unfair competitive advantage is.

Of course, these are strategies that apply to any industry trying to bring about innovation and disruption – they are not unique to healthcare. I would say that when it comes to the healthcare industry the reason there has been no Uber is that the tech industry is ignoring the generation most in need of benefiting from technology, namely the post-65 age group. This is the age group that struggles most with technology, either because they are more likely to be digitally disadvantaged or because they simply find it too difficult to get to grips with it.

As the former Yahoo chief technology officer Ashfaq Munshi, who has become interested in ageing tech, says:

“Venture capitalists are too busy investing in Uber and things that get virality. The reality is that selling to older people is harder, and if venture capitalists detect resistance, they don’t invest.”

Matters are not helped by the fact that most tech entrepreneurs are between the ages of 20 and 35, and their interests in life are far removed from the problems faced by the aged. As this article by Kevin Maney in the Independent points out:

“Entrepreneurs are told that the best way to start a company is to solve a problem they understand. It makes sense that those problems range from how to get booze delivered 24/7 to how to build a cloud-based enterprise human resources system – the tangible problems in the life and work of a 25- or 30-year-old.”

If it really is the case that entrepreneurs only look at problems they understand or that sit on their immediate event horizon, then clearly we need more entrepreneurs of my age group (let’s just say 45+). We are the people with elderly parents, like my mum, who are facing the very real problems of old age and poor health, and we will very soon be facing the same issues ourselves.

A recent report from the IBM Institute for Business Value makes the following observation:

“For healthcare in particular, the timing for a game changer couldn’t be better. The industry is coping with upheaval triggered by varied economic, societal and industry influences. Empowered consumers living in an increasingly digital world are demanding more from an industry that is facing growing regulation, soaring costs and a shortage of skilled resources.”

Rather than fearing the new generation of cognitive systems we need to be embracing them and ruthlessly exploiting them to provide solutions that will ease all of our journeys into an ever longer old age.

At SXSW, which is running this week in Austin, Texas, IBM is providing an exclusive look at its cognitive technology, Watson, and showcasing a number of inspiring as well as entertaining applications of that technology. In particular, on Tuesday 15th March there is a session called Ageing Populations & The Internet of Caring Things, where you can take a look at accessible technology and how it will create a positive impact on an ageing person’s quality of life.

Also at SXSW this year, President Obama gave a keynote interview in which he called for action in the tech world, especially for applications to improve government IT. The President urged the tech industry to solve some of the nation’s biggest problems by working in conjunction with the government. “It’s not enough to focus on the cool, next big thing,” Obama said. “It’s harnessing the cool, next big thing to help people in this country.”

President Barack Obama speaks during the 2016 SXSW Festival at the Long Center in Austin, Texas, March 11, 2016. Photo: Neilson Barnard/Getty Images for SXSW

It is my hope that, with the vision people such as Obama have given, the experience of getting old will be radically different 10 or 20 years from now, and that cognitive and IoT technology will make all of our lives not only longer but more pleasant.

* Unicorns are companies whose valuation has exceeded $1 billion.

From Turing to Watson (via Minsky)

This week (Monday 25th) I gave a lecture about IBM’s Watson technology platform to a group of first-year students at Warwick Business School. My plan was to write up the transcript of that lecture, with links for references and further study, as a blog post. The following day, when I opened up my computer to start writing the post, I saw that, by a sad coincidence, Marvin Minsky, the American cognitive scientist and co-founder of the Massachusetts Institute of Technology’s AI laboratory, had died only the day before my lecture. Here is that blog post, now updated with some references to Minsky and his pioneering work on machine intelligence.

Marvin Minsky in a lab at MIT in 1968 (c) MIT

First though, let’s start with Alan Turing, sometimes referred to as “the founder of computer science”, who led the team that developed an electromechanical machine to break the Nazis’ Enigma code, which was used to encrypt messages sent between units on the battlefield during World War 2. The work of Turing and his team was recently brought to life in the film The Imitation Game, starring Benedict Cumberbatch as Turing and Keira Knightley as Joan Clarke, the only female member of the code-breaking team.

Alan Turing

Sadly, instead of being hailed as a hero, Turing was persecuted for his homosexuality and committed suicide in 1954, having undergone a course of hormonal treatment to reduce his libido rather than serve a term in prison. It seems utterly barbaric and unforgivable that such an action could have been brought against someone who did so much to affect the outcome of WWII. It took nearly 60 years for his conviction to be overturned, when on 24 December 2013 Queen Elizabeth II signed a pardon for Turing, with immediate effect.

In 1949 Turing became Deputy Director of the Computing Laboratory at Manchester University, working on software for one of the earliest computers. During this time he worked in the emerging field of artificial intelligence and proposed an experiment which became known as the Turing test, having observed that “a computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”

The idea of the test was that a computer could be said to “think” if a human interrogator could not tell it apart, through conversation, from a human being.
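To make the mechanics of the test concrete, here is a minimal sketch of an imitation-game round in Python. It is purely illustrative and entirely my own construction: the human_reply and machine_reply functions are hypothetical stand-ins, and a serious attempt at the test would of course replace the canned machine answer with a real conversational system.

```python
import random

def human_reply(question: str) -> str:
    # Hypothetical stand-in for the human participant.
    return input(f"Human, please answer '{question}': ")

def machine_reply(question: str) -> str:
    # Hypothetical stand-in for the machine; a real attempt at the test
    # would generate this answer with a conversational model.
    return "That is an interesting question. What do you think?"

def imitation_game(questions: list[str]) -> None:
    """Run one round of the test: the interrogator sees answers from
    players 'A' and 'B' without knowing which is the machine, then guesses."""
    assignment = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:          # hide which label is the machine
        assignment = {"A": machine_reply, "B": human_reply}
    for question in questions:
        for label in ("A", "B"):
            print(f"{label}: {assignment[label](question)}")
    machine_label = "A" if assignment["A"] is machine_reply else "B"
    guess = input("Interrogator, which player is the machine, A or B? ")
    print("Correct!" if guess.strip().upper() == machine_label else "The machine fooled you.")

imitation_game(["How are you today?", "What is your favourite memory?"])
```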

Turing’s test was supposedly ‘passed’ in June 2014 when a computer program called Eugene fooled several of its interrogators into believing it was a 13-year-old boy. There has been much discussion since as to whether this was a valid run of the test, with many arguing that the so-called “supercomputer” was nothing but a chatbot, a script made to mimic human conversation. In other words, Eugene could in no way be considered intelligent. Certainly not in the sense that Professor Marvin Minsky would have defined intelligence, at any rate.

In the early 1970s Minsky, working with the computer scientist and educator Seymour Papert, began developing a theory of intelligence that Minsky later published in his book The Society of Mind, which combined both of their insights from the fields of child psychology and artificial intelligence.

Minsky and Papert believed that there was no real difference between humans and machines. Humans, they maintained, are actually machines of a kind whose brains are made up of many semiautonomous but unintelligent “agents.” Their theory revolutionized thinking about how the brain works and how people learn.

Despite the widespread availability of apparently intelligent machines running programs like Apple’s Siri, Minsky maintained that there had been “very little growth in artificial intelligence” in the past decade, saying that current work had been “mostly attempting to improve systems that aren’t very good and haven’t improved much in two decades”.

Minsky also thought that large technology companies should not get involved in the field of AI, saying: “we have to get rid of the big companies and go back to giving support to individuals who have new ideas because attempting to commercialise existing things hasn’t worked very well.”

Whilst much of the early work researching AI certainly came out of organisations like Minsky’s AI lab at MIT, it seems slightly disingenuous to believe that the commercialisation of AI, as being carried out by companies like Google, Facebook and IBM, is not going to generate new ideas. The drive for commercialisation (and profit), just like war in Turing’s time, is after all one of the ways, at least in the capitalist world, that innovation is created.

Which brings me nicely to Watson.

IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. It is named after Thomas J. Watson, the first CEO of IBM, who led the company from 1914 to 1956.

Thomas J. Watson

IBM Watson was originally built to compete on the US television quiz show Jeopardy! On 14th February 2011 IBM entered Watson into a special three-day version of the program, where the computer was pitted against two of the show’s all-time champions. Watson won by a significant margin. So what is the significance of a machine winning a game show, and why is this a “game changing” event in more than the literal sense of the term?

Today we’re in the midst of an information revolution. Not only is the volume of data and information we’re producing dramatically outpacing our ability to make use of it, but the sources and types of data that inform the work we do and the decisions we make are broader and more diverse than ever before. Although businesses are implementing more and more data-driven projects using advanced analytics tools, they’re still only reaching 12% of the data they have, leaving 88% of it to go to waste. That’s because this 88% of data is “invisible” to computers. It’s the type of data that is encoded in language and unstructured information: text in the form of books, emails, journals, blogs, articles and tweets, as well as images, sound and video. If we are to avoid such a “data waste” we need better ways to make use of that data and generate “new knowledge” from it. We need, in other words, to be able to discover new connections, patterns and insights in order to draw new conclusions and make decisions with more confidence and speed than ever before.

For several decades we’ve been digitizing the world, building networks to connect everything around us. Today those networks connect not just traditional structured data sources but also unstructured data from social networks and, increasingly, Internet of Things (IoT) data from sensors and other intelligent devices.

From Data to Knowledge

These additional sources of data mean that we’ve reached an inflection point: the sheer volume of information generated is so vast that we no longer have the ability to use it productively. The purpose of cognitive systems like IBM Watson is to process the vast amounts of information stored in both structured and unstructured formats and help turn it into useful knowledge.

There are three capabilities that differentiate cognitive systems from traditional programmed computing systems.

  • Understanding: Cognitive systems understand information the way humans do, whether it arrives as natural language or the written word, vocal or visual.
  • Reasoning: They can understand not only information but also the underlying ideas and concepts, and this reasoning ability becomes more advanced over time. It’s the difference between the reasoning strategies we used as children to solve mathematical problems and the strategies we developed when we got into advanced maths like geometry, algebra and calculus.
  • Learning: They never stop learning. As a technology, this means the system actually gets more valuable with time: it develops “expertise”. Think about what it means to be an expert – it’s not about executing a mathematical model. We don’t consider our doctors to be experts in their fields because they answer every question correctly; we expect them to be able to reason, to be transparent about their reasoning, and to expose the rationale for why they came to a conclusion.

The idea of cognitive systems like IBM Watson is not to pit man against machine but rather to have both reasoning together. Humans and machines have unique characteristics and we should not be looking for one to supplant the other but for them to complement each other. Working together with systems like IBM Watson, we can achieve the kinds of outcomes that would never have been possible otherwise.

IBM is making the capabilities of Watson available as a set of cognitive building blocks delivered as APIs on its cloud-based, open platform, Bluemix. This means you can build cognition into your digital applications, products and operations, using any one or a combination of the available APIs. Each API is capable of performing a different task and, in combination, they can be adapted to solve any number of business problems or create deeply engaging experiences.
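To give a feel for what consuming one of these building blocks looks like from application code, here is a minimal sketch in Python of posting some text to a text-to-speech style REST endpoint and saving the audio that comes back. The endpoint URL, credentials and request fields below are placeholders of my own invention, not the actual Watson API contract; the real endpoints, authentication and payloads are described in the service documentation.

```python
import requests

# Placeholder values: the real endpoint, credentials and payload format are
# defined by the Watson service documentation and your Bluemix service instance.
SERVICE_URL = "https://example-watson-endpoint/api/v1/synthesize"
USERNAME = "service-username"
PASSWORD = "service-password"

def synthesize_speech(text: str, out_path: str = "speech.wav") -> str:
    """Send plain text to a (hypothetical) text-to-speech endpoint over HTTP
    and save the returned audio to a file."""
    response = requests.post(
        SERVICE_URL,
        auth=(USERNAME, PASSWORD),        # basic auth, as many REST services use
        json={"text": text},              # request body: the text to synthesize
        headers={"Accept": "audio/wav"},  # ask for WAV audio back
        timeout=30,
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)
    return out_path

if __name__ == "__main__":
    print(synthesize_speech("Hello from a cognitive building block."))
```

The point is less the specific call than the shape of it: cognition arrives in an application as an ordinary web service request.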

So what Watson APIs are available? Currently there are around forty, which you can find here together with documentation and demos. Four examples of the Watson APIs you will find at this link are:

  • Dialog: use natural language to automatically respond to user questions.
  • Visual Recognition: analyse the contents of an image or video and classify it by category.
  • Text to Speech: synthesize speech audio from an input of plain text.
  • Personality Insights: understand someone’s personality from what they have written.
It’s never been easier to get started with AI by using these cognitive building blocks. I wonder what Turing would have made of this technology, and how soon someone will be able to piece together current and future cognitive building blocks to really pass his famous test.

Plus Two More

In my previous post on five architectures that changed the world I left out a couple that didn’t fit my self-imposed criteria. Here, therefore, are two more: the first is a bit too techie to be a part of everyone’s lives but is nonetheless hugely important, and the second has not changed the world yet but has pretty big potential to do so.

IBM System/360
Before the System/360 there was very little interchangeability between computers, even from the same manufacturer. Software had to be created for each type of computer, making applications very difficult to develop as well as maintain. The System/360 practically invented the concept of architecture as applied to computers: it had an architecture specification that made no assumptions about the implementation itself but instead described the interfaces and the expected behaviour of any implementation. The System/360 was the first family of computers designed to cover the complete range of applications, from small to large, both commercial and scientific. Its development cost $5 billion back in 1964 (around $34 billion in today’s money) and almost destroyed IBM.
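That separation of specification from implementation is easy to illustrate in code. The sketch below is my own toy example, nothing to do with the System/360 itself: a small ‘architecture’ expressed as an abstract interface, with two interchangeable implementations that both honour it.

```python
from abc import ABC, abstractmethod
import os

class Storage(ABC):
    """The 'architecture': an interface that says nothing about how the
    behaviour is implemented, only what callers can rely on."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStorage(Storage):
    """A small, cheap implementation of the specification."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}
    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value
    def get(self, key: str) -> bytes:
        return self._data[key]

class FileStorage(Storage):
    """A bigger, slower implementation; callers cannot tell the difference."""
    def __init__(self, directory: str) -> None:
        self._dir = directory
        os.makedirs(directory, exist_ok=True)
    def put(self, key: str, value: bytes) -> None:
        with open(os.path.join(self._dir, key), "wb") as f:
            f.write(value)
    def get(self, key: str) -> bytes:
        with open(os.path.join(self._dir, key), "rb") as f:
            return f.read()

def application(store: Storage) -> bytes:
    # Code written against the interface runs unchanged on either implementation,
    # which is the essence of the architecture-versus-implementation split.
    store.put("greeting", b"hello")
    return store.get("greeting")

print(application(InMemoryStorage()))
print(application(FileStorage("/tmp/storage-demo")))
```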

Watson
Unless you are American you have probably never heard of the TV game show Jeopardy!, or at least you hadn’t until the start of 2011. Now we know that it is a show that “uses puns, subtlety and wordplay” that humans enjoy but which computers get tied up in knots over. This, it turns out, was the challenge that David Ferrucci, the IBM scientist who led the four-year quest to build Watson, had set himself: to compete live against humans on the TV show.

IBM has “form” in building computers to play games! Its previous challenger, Deep Blue, won a six-game match by two wins to one with three draws against world chess champion Garry Kasparov in 1997. Chess, it turns out, is a breeze to play compared to Jeopardy! Here’s why.
Chess…

  • Is a finite, mathematically well-defined search space.
  • Has a large but limited number of moves and states.
  • Makes everything explicit, with unambiguous mathematical rules that computers love (see the sketch after the next list).

Games like Jeopardy!, however, play on the subtleties of human language, which is…

  • Ambiguous, contextual and implicit.
  • Grounded only in human cognition.
  • Able to express the same meaning in a seemingly infinite number of ways.
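The chess bullets above are, in effect, a description of why brute-force game-tree search works at all. As a toy illustration (my own sketch, nothing like Deep Blue’s actual engine), here is a generic minimax search applied to a tiny Nim-like game: it only works because the game hands the computer an explicit, finite set of moves and an unambiguous rule for who has won.

```python
from typing import Callable, Iterable, TypeVar

State = TypeVar("State")

def minimax(state: State,
            moves: Callable[[State], Iterable[State]],
            maximizing: bool = True) -> int:
    """Exhaustively search the game tree of a 'last player to move wins' game.
    Returns +1 if the player to move can force a win, -1 otherwise."""
    successors = list(moves(state))
    if not successors:
        # The player to move has no legal move and has therefore lost.
        return -1 if maximizing else 1
    results = [minimax(s, moves, not maximizing) for s in successors]
    return max(results) if maximizing else min(results)

# Toy game: a pile of counters, players alternately remove one or two,
# and whoever takes the last counter wins.
def nim_moves(pile: int) -> list[int]:
    return [pile - take for take in (1, 2) if pile - take >= 0]

print(minimax(7, nim_moves))  # +1: with 7 counters the first player can force a win
print(minimax(6, nim_moves))  # -1: with 6 counters the first player always loses
```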

According to IBM, Watson is “built on IBM’s DeepQA technology for hypothesis generation, massive evidence gathering, analysis, and scoring.” Phew! The point of Watson, however, is not its ability to play a game show but its potential to “weave its fabric” into the messiness of our human lives, where data is not kept in nice, ordered relational databases but is unstructured and seemingly unrelated, yet can sometimes yield new and undiscovered meaning. One obvious application is in medical diagnosis, but it could also be used in a vast array of other situations, from help desks through to sorting out which benefits you are entitled to. So, not world-changing yet, but definitely watch this space.
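To give a feel for the shape of that hypothesis-generation, evidence-gathering and scoring loop, here is a deliberately crude toy sketch in Python. It is only an illustration of the general idea, not IBM’s DeepQA implementation: the three-sentence ‘corpus’, the candidate generator and the single word-overlap scorer are all invented for the example, whereas a real system searches vast document collections and combines a great many scorers.

```python
from collections import Counter
import re

# Tiny invented "corpus" of unstructured text, standing in for the vast
# document collections a real question-answering system would search.
CORPUS = [
    "Ottawa is the capital city of Canada.",
    "Canberra was purpose-built to be the capital of Australia.",
    "Sydney is the largest city in Australia but not its capital.",
]

def tokens(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def generate_hypotheses() -> set[str]:
    # Hypothesis generation: naively propose every capitalised word
    # in the corpus as a candidate answer.
    return {w for passage in CORPUS for w in re.findall(r"\b[A-Z][a-z]+", passage)}

def gather_evidence(candidate: str) -> list[str]:
    # Evidence gathering: pull back every passage that mentions the candidate.
    return [p for p in CORPUS if candidate.lower() in p.lower()]

def score(candidate: str, clue: str) -> int:
    # Scoring: a crude heuristic -- how much vocabulary the supporting
    # evidence shares with the clue.
    clue_words = Counter(tokens(clue))
    evidence_words = Counter(w for p in gather_evidence(candidate) for w in tokens(p))
    return sum((clue_words & evidence_words).values())

def answer(clue: str) -> str:
    # Candidates that appear in the clue itself are filtered out, since the
    # answer to a clue is rarely contained within it.
    clue_words = set(tokens(clue))
    candidates = [c for c in generate_hypotheses() if c.lower() not in clue_words]
    ranked = sorted(candidates, key=lambda c: score(c, clue), reverse=True)
    return ranked[0] if ranked else "no answer"

print(answer("This purpose-built city is the capital of Australia."))
```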

The Innovation Conundrum and Why Architecture Matters

A number of items in the financial and business news last week set me thinking about why architecture matters to innovation. Both IBM and Apple announced their second-quarter results. IBM’s revenue for Q2 2011 was $26.7B, up 12% on the same quarter last year, and Apple’s revenue for the same quarter was $24.67B, an incredible 83% jump on the same quarter last year. As I’m sure everyone now knows, IBM is 100 years old this year whereas Apple is a mere 35 years old. It looks like both Apple and IBM will become $100B companies this year if all goes to plan (IBM having missed joining the $100B club by a mere $0.1B in 2010). Coincidentally, a Forbes article also caught my eye. Forbes listed the top 100 innovative companies. Top of the list was salesforce.com, Apple was number 5 and IBM was, er, not in the top 100! So what’s going on here? How can a company that pretty much invented the mainframe and the personal computer, helped put a man on the moon, invented the scanning tunnelling microscope, scratched the letters IBM onto a nickel crystal one atom at a time and, most recently, took artificial intelligence a giant leap forward with Watson not be classed as innovative?

Perhaps the clue is in what the measure of innovation is. The Forbes article measures innovation by an “innovation premium” which it defines as:

A measure of how much investors have bid up the stock price of a company above the value of its existing business based on expectations of future innovative results (new products, services and markets).

So it would appear that, going by this definition of innovation, investors don’t expect IBM to bring any innovative products or services to market, whereas the world will no doubt be inundated with all sorts of shiny iThingys over the course of the next year or so. But is that really all there is to being innovative? I would venture not.

The final article that caught my eye was about Apple’s cash reserves. Depending on which source you read this is around $60B and, as anyone who has any cash to invest knows, sitting on it is not the best way of getting good returns! Companies generally have a few options for what to do when they amass so much cash: pay out higher dividends to shareholders, buy back their own shares, invest more in R&D, or go on a buying spree and acquire companies that fill holes in their portfolio. Whilst this last option is a good way of quickly entering markets a company may not be active in, it tends to backfire on the innovation premium, as mergers and acquisitions (M&A) are not, at least initially, seen as bringing anything new to market. M&A has been IBM’s approach over the last decade or so. As well as the big software brands like Lotus, Rational and Tivoli, IBM has more recently bought lots of smaller software companies such as Cast Iron Systems, SPSS and Netezza.

A potential problem with this approach is that people don’t want to buy a “bag of bits” and have to assemble their own solutions Lego-style. What they want are business solutions that address the very real and complex (wicked, even) problems they face today. This is where the software architect comes into his or her own. The role of the software architect is to “take existing components and assemble them in interesting and important ways”. To that I would add innovative ways as well. Companies no longer want the same old solutions (ERP systems, contact management systems etc.) but new and innovative systems that solve their business problems. This is why we have one of the more interesting jobs out there today!

Watson, Turing and Clarke

So what do these three have in common?

  • Thomas J. Watson Sr, the first CEO of IBM (which is 100 years old this year). Currently has a computer named after him.
  • Alan Turing, mathematician and computer scientist (100 years old next year). Has a famous test named after him.
  • Arthur C. Clarke, scientist and writer (who would have been 100 years old in 2017). Has a set of laws named after him (and is also the creator of the fictional HAL computer in 2001: A Space Odyssey).

Unless you have moved into a hut deep in the Amazon rainforest you cannot have missed the publicity over IBM’s ‘Watson’ computer having competed in, and won, the American TV quiz show Jeopardy! I have to confess that until last week I’d not heard of Jeopardy!, possibly because a) I’m not a fan of quizzes, b) I’m not American and c) I don’t watch that much television. For those as ignorant as me on these matters, the unique thing about Jeopardy! is that contestants are presented with clues in the form of answers, and must phrase their responses in the form of a question.

This, it turns out, is what makes this particular quiz such a hard nut for a computer to crack. The clues in the ‘question’ rely on subtle meanings, puns and riddles, something humans excel at and computers do not. Unlike IBM’s previous game challenger, Deep Blue, which defeated chess world champion Garry Kasparov, it is not sufficient to rely on raw computing ‘brute force’; this time the computer has to interpret meaning and the nuances of human language. So has Watson achieved, met or passed the Turing test (which is basically a measure of whether a computer can demonstrate intelligence)?

The answer is almost certainly ‘no’. Turing’s test is a measure of a machine’s ability to exhibit human intelligence. The test, as originally proposed by Turing, was that a questioner should ask a series of questions of both a human being and a machine and see whether he can tell which is which from the answers they give. The idea being that if the two were indistinguishable then the machine and the human must both appear to be as intelligent as each other.

As far as I know Turing never stipulated any constraint on the range or type of questions that could be asked, which leads us to the nub of the problem. Watson is supremely good at answering Jeopardy-type questions, just as Deep Blue was good at playing chess. However, neither could do what the other does (at least not as well); each has been programmed for its given task. Given that Watson is actually a cluster of POWER7 servers, any suitably general-purpose computer that could win at Jeopardy, play chess and exhibit the full range of human emotions and frailties needed to fool a questioner would presumably occupy the area of several football pitches and consume the power of a small city.

That, however, misses the point completely. The ability of a computer to almost flawlessly answer a range of questions, phrased in a particular way, on a range of different subject areas, and to do so blindingly fast, has enormous potential in medicine, law and other disciplines where questions based on a huge foundation of knowledge built up over decades need to be answered quickly (for example in accident and emergency departments, where quick diagnoses may literally be a matter of life and death). This indeed is one of IBM’s Smarter Planet goals.

Which brings us to Clarke’s third law, which states that “any sufficiently advanced technology is indistinguishable from magic”. This is surely something that is attributable to Watson. The other creation of Clarke’s, of course, is HAL, the computer aboard the spaceship Discovery One on a trip to Saturn, which becomes overwhelmed by guilt at having to keep secret the true nature of the spaceship’s mission and starts killing members of the crew. The point of Clarke’s story (or one of them) is that the downside of a computer that is indistinguishable from a human being is that it may also end up mimicking human frailties and weaknesses. Maybe it’s a good job Watson hasn’t passed Turing’s test then?