Eponymous Laws and the Invasion of Technology

Unless you’ve had your head buried in a devilish software project that has consumed your every waking hour over the last month or so, you cannot help but have noticed that technology has been getting a lot of bad press lately. Here are some recent news stories that make one wonder whether our technology may be running away from us.

Is this just the internet reaching a level of maturity that past technologies, from the humble telephone to the VCR and the now ubiquitous games console, have been through, or is there something really sinister going on here? What is the implication of all this for the software architect: should we care, or do we just stick our heads in the sand and keep on building the systems that enable all of the above, and more, to happen?

Here are three eponymous laws* which I think could have been used to predict much of this:

  • Metcalfe’s law (circa 1980): “The value of a system grows as approximately the square of the number of users of the system.” A variation on this is Sarnoff’s law: “The value of a broadcast network is proportional to the number of viewers.”
  • Though I’ve never seen this described as an eponymous law, my feeling is that it should be. It’s a quote from Marshall McLuhan (from his book Understanding Media: The Extensions of Man, published in 1964): “We become what we behold. We shape our tools and then our tools shape us.”
  • Clarke’s third law (from 1962): “Any sufficiently advanced technology is indistinguishable from magic.” This is from Arthur C. Clarke’s book Profiles of the Future.

Whilst Metcalfe’s law talks of the value of a system growing as the number of users increases, I suspect the same law applies to the disadvantage or detriment of such systems. As more people use a system, the more of them there will be to seek out ways of misusing that system. If only 0.1% of the 2.4 billion people who use the internet use it for illicit purposes, that still makes a whopping 2.4 million, a number set to grow just as the number of online users grows.
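To make the arithmetic concrete, here is a small back-of-the-envelope sketch in Python. It uses only the figures quoted above, plus Metcalfe’s square-of-the-users approximation and Sarnoff’s linear one; it is an illustration of the scaling argument, not a model of real network value.

    # Back-of-the-envelope illustration of Metcalfe's and Sarnoff's laws,
    # using the figures quoted in the text above.

    def metcalfe_value(users: int) -> int:
        # Metcalfe: value grows roughly as the square of the number of users.
        return users ** 2

    def sarnoff_value(viewers: int) -> int:
        # Sarnoff: value of a broadcast network grows linearly with viewers.
        return viewers

    internet_users = 2_400_000_000   # roughly 2.4 billion internet users
    illicit_fraction = 0.001         # the hypothetical 0.1%

    print(f"Illicit users at 0.1%: {int(internet_users * illicit_fraction):,}")  # 2,400,000

    # Doubling the user base quadruples the Metcalfe 'value' (and, by the same
    # argument, the scope for misuse), whereas Sarnoff's value merely doubles.
    print(metcalfe_value(2 * internet_users) / metcalfe_value(internet_users))   # 4.0
    print(sarnoff_value(2 * internet_users) / sarnoff_value(internet_users))     # 2.0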

As to Marshall McLuhan’s law, isn’t the stage we are at with the internet just that? The web is (possibly) beginning to shape us in terms of the way we think and behave. Should we be worried? Possibly. It’s probably too early to tell, and there is a lack of hard scientific evidence either way. I suspect this is going to be ripe ground for PhD theses for some years to come. In the meantime there are several more popular theses from the likes of Clay Shirky, Nicholas Carr, Aleks Krotoski and Baroness Susan Greenfield describing the positive and negative aspects of our online addictions.

And so to Arthur C. Clarke. I’ve always loved both his non-fiction and science fiction writing, and this is possibly one of his most incisive prophecies. It feels to me that technology has probably reached the stage where most of the population really do perceive it as “magic”. And therein lies the problem. Once we stop understanding how something works we start to believe in it almost unquestioningly. How many of us give a second thought when we climb aboard an aeroplane or train, or give ourselves up to doctors and nurses treating us with drugs unimagined even a few years ago?

In his essay PRISM is the dark side of design thinking, Sam Jacob asks what America’s PRISM surveillance program tells us about design thinking and concludes:

Design thinking annexes the perceived power of design and folds it into the development of systems rather than things. It’s a design ideology that is now pervasive, seeping into the design of government and legislation (for example, the UK Government’s Nudge Unit which works on behavioral design) and the interfaces of democracy (see the Design of the Year award-winning .gov.uk). If these are examples of ways in which design can help develop an open-access, digital democracy, Prism is its inverted image. The black mirror of democratic design, the dark side of design thinking.

Back in 1942 the science fiction author Isaac Asimov proposed the three laws of robotics as an inbuilt safety feature of what was then thought likely to become the dominant technology of the latter part of the 20th century, namely intelligent robots. Robots, at least in the form Asimov predicted, have not yet come to pass; however, in the internet we have probably built a technology even more powerful and with more far-reaching implications. Maybe, as at least one person has suggested, we should be considering the equivalent of Asimov’s three laws for the internet? Maybe it’s time that we as software architects, the main group of people who are building these systems, began thinking about some inbuilt safety mechanisms for the systems we are creating?

*An eponym is a person or thing, whether real or fictional, after which a particular place, tribe, era, discovery, or other item is named. So-called eponymous laws are succinct observations or predictions named after a person (either by that person themselves or by someone else ascribing the law to them).

Ethics and Architecture

If you’ve not seen the BBC2 documentary All Watched Over By Machines of Loving Grace, catch it now on the BBC iPlayer while you can (it doesn’t work outside the UK, unfortunately). You can see a preview of the series (another two programmes to go) on Adam Curtis’ (the film maker) web site here. The basic premise of the first programme is as follows.

Back in the 1950s a small group of people took up the ideas of the novelist Ayn Rand, whose philosophy of Objectivism advocated reason as the only means of acquiring knowledge and rejected all forms of faith and religion. They saw themselves as a prototype for a future society where everyone could follow their own selfish desires. One of the Rand ‘disciples’ was Alan Greenspan.

Cut to the 1990s, where several Silicon Valley entrepreneurs, also followers of Rand’s philosophy, believed that the new computer networks would allow the creation of a society where everyone could follow their own desires without there being any anarchy. Alan Greenspan, now Chairman of the Federal Reserve, also became convinced that the computers were creating a new kind of stable capitalism and persuaded President Bill Clinton of a radical approach to cutting the United States’ huge deficit. He proposed that Clinton cut government spending and reduce interest rates, letting the markets control the fate of the economy, the country and ultimately the world.

Whilst this approach appeared to work in the short term, it set off a chain of events which, according to Curtis’ hypothesis, led to 9/11, the Asian financial crash of 1997/98, the current economic crisis and the rise of China as a superpower that will soon surpass the United States. What happened was that the “blind faith” we put in the machines that were meant to serve us led us to a “dream world” where we trusted the machines to manage the markets for us, but in fact they were operating in ways we could not understand, resulting in outcomes we could never predict.

So what the heck has this got to do with architecture? Back in the mid-80s when I worked in Silicon Valley I remember reading an article in the San Jose Mercury News about a programmer who had left his job because he didn’t like the uses that the software he’d been working on was being put to (something of a military nature, I suspect). Quite a noble act, you might think (though given where he worked I suspect the guy didn’t have too much trouble finding another job pretty quickly). I wonder how many of us really think about the uses to which the software systems we are working on are being put?

Clearly, if you are working on the control software for a guided missile it’s pretty clear cut what the application is going to be used for. But what if you are creating some piece of generic middleware? Yes, it could be put to good use in hospital information systems or food-aid distribution systems, but the same software could be used for the ERP system of a tobacco company or to control surveillance systems that “watch over us with loving grace”.

Any piece of software can be used for both good and evil, and the developers of that software can hardly have it on their conscience to worry about what that end use will be. Just as nuclear power leads to both good (nuclear reactors, okay, okay, I know that’s debatable given what’s just happened in Japan) and bad (bombs), it is the application of a particular technology that decides whether something is good or bad.

However, here’s the rub. As architects, aren’t we the ones who are meant to be deciding how software components are put together to solve problems, for better and for worse? Is it not within our remit, therefore, to control those ‘end uses’ and to walk away from those projects that will result in systems being built for bad rather than good purposes? We all have our own moral compass, and it is up to us as individuals to decide which way we point it. From my point of view I would hope that I never got involved in systems that in any way lead to an infringement of a person’s basic human rights, but how do I decide or know this? I doubt the people who built the systems that are the subject of the Adam Curtis films ever dreamed they would be used in ways which have almost led to the economic collapse of our society.

I guess it is incumbent on all of us to research and investigate as much as we can the systems we find ourselves working on, and to decide for ourselves whether we think we are creating machines that watch over us with “loving grace” or machines that are likely to have more sinister intents. As ever, Arthur C. Clarke predicted this several decades ago, and if you have not read his short story Dial F for Frankenstein now might be a good time to do so.

Watson, Turing and Clarke

So what do these three have in common?

  • Thomas J. Watson Sr., CEO and founder of IBM (which is 100 years old this year). Currently has a computer named after him.
  • Alan Turing, mathematician and computer scientist (100 years old next year). Has a famous test named after him.
  • Arthur C. Clarke, scientist and writer (100 years old in 2017). Has a set of laws named after him (and is also the creator of the fictional HAL computer in 2001: A Space Odyssey).

Unless you have moved into a hut deep in the Amazon rainforest, you cannot have missed the publicity over IBM’s ‘Watson’ computer having competed in, and won, the American TV quiz show Jeopardy. I have to confess that until last week I’d not heard of Jeopardy, possibly because a) I’m not a fan of quizzes, b) I’m not American and c) I don’t watch that much television. For those as ignorant as me on these matters, the unique thing about Jeopardy is that contestants are presented with clues in the form of answers, and must phrase their responses in the form of a question.

This, it turns out, is what makes this particular quiz such a hard nut for a computer to crack. The clues in the ‘question’ rely on subtle meanings, puns and riddles; something humans excel at and computers do not. Unlike IBM’s previous game challenger Deep Blue, which defeated chess world champion Garry Kasparov, it’s not sufficient to rely on raw computing ‘brute force’; this time the computer has to interpret meaning and the nuances of human language. So has Watson achieved, met or passed the Turing test (which is basically a measure of whether a computer can demonstrate intelligence)?

The answer is almost certainly ‘no’. Turing’s test is a measure of a machine’s ability to exhibit human intelligence. The test, as originally proposed by Turing, was that a questioner should ask a series of questions of both a human being and a machine and see whether they can tell which is which from the answers given. The idea being that if the two were indistinguishable then the machine and the human must both appear to be as intelligent as each other.
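As a rough illustration of the shape of the test (not of anything Turing himself specified), here is a minimal sketch in Python. The respondent functions are purely hypothetical stand-ins; the point is simply that if the machine’s answers are indistinguishable from the human’s, the questioner can do no better than guess.

    import random

    # A toy sketch of the imitation game: a questioner puts the same question to a
    # human and a machine (presented in random order) and tries to spot the machine.

    def human_respondent(question: str) -> str:
        return "I'd have to think about that."

    def machine_respondent(question: str) -> str:
        # A perfect mimic, purely for the sake of argument.
        return "I'd have to think about that."

    def naive_judge(question: str, answers: list[str]) -> int:
        # With indistinguishable answers the judge can only guess (index 0 or 1).
        return random.randrange(2)

    def run_test(questions, judge) -> float:
        correct = 0
        for question in questions:
            respondents = [("human", human_respondent), ("machine", machine_respondent)]
            random.shuffle(respondents)                     # hide who is who
            answers = [fn(question) for _, fn in respondents]
            guess = judge(question, answers)
            if respondents[guess][0] == "machine":
                correct += 1
        return correct / len(questions)

    accuracy = run_test(["What is a pun?"] * 1000, naive_judge)
    print(f"Questioner spotted the machine {accuracy:.0%} of the time")  # ~50%, i.e. chance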

As far as I know Turing never stipulated any constraint on the range or type of questions that could be asked, which leads us to the nub of the problem. Watson is supremely good at answering Jeopardy-type questions, just as Deep Blue was good at playing chess. However, neither could do what the other does (at least not as well); each has been programmed for its given task. Given that Watson is actually a cluster of POWER7 servers, any suitably general-purpose computer that could win at Jeopardy, play chess and exhibit the full range of human emotions and frailties needed to fool a questioner would presumably occupy the area of several football pitches and consume the power of a small city.

That, however, misses the point completely. The ability of a computer to answer, almost flawlessly and blindingly fast, a range of questions phrased in a particular way and spanning many different subject areas has enormous potential in fields such as medicine and law, and in other disciplines where questions based on a huge foundation of knowledge built up over decades need to be answered quickly (for example in accident and emergency, where quick diagnoses may literally be a matter of life and death). This indeed is one of IBM’s Smarter Planet goals.

Which brings us to Clarke’s third law, which states that “any sufficiently advanced technology is indistinguishable from magic”. This is surely something that is attributable to Watson. The other creation of Clarke’s, of course, is HAL, the computer aboard the spaceship Discovery One on a trip to Saturn, which becomes overwhelmed by guilt at having to keep secret the true nature of the spaceship’s mission and starts killing members of the crew. The point of Clarke’s story (or one of them) is that the downside to a computer that is indistinguishable from a human being is that the computer may also end up mimicking human frailties and weaknesses. Maybe it’s a good job Watson hasn’t passed Turing’s test then?

New Dog, Old Tricks

I can’t believe this, but today I have observed no fewer than three people using the latest wonder-gadget from Apple (the iPad) to play solitaire, Tetris and some other game which seemed to involve nothing more than poking the screen at moving shapes! Having just bought my own iPad, and being convinced it conforms to Arthur C. Clarke’s third law (any sufficiently advanced technology is indistinguishable from magic), I am aghast that such a technological wonder is being used for such mind-numbing activities; just dust off your ZX Spectrums, guys!

Architecture for a New Decade

Predicting the future, even 12 months ahead, is a notoriously tricky pastime. Arthur C. Clarke, the English scientist and science fiction writer who sadly died in March 2008, said:

If we have learned one thing from the history of invention and discovery, it is that, in the long run – and often in the short one – the most daring prophecies seem laughably conservative.

As we move into a new year and a new decade the world, in my memory at least, has never seemed a more uncertain place. The doom and gloom merchants predict an even more troubled economy, and the ongoing threats from global warming, increasing pressure on dwindling natural resources and yet more wars do not make for a happy start to 2010.

Whilst solving these truly wicked problems is slightly beyond me, I am left to wonder what this new year brings for us architects. According to Gartner, the top 10 strategic technologies for 2010 include a number of things which we as an architect community need to be getting our heads around. Of the list of ten, there are three technologies in particular that interest me:

  • Cloud Computing
  • Advanced Analytics
  • Social Computing

Whilst it is easy to get consumed by the technology that these new architectural “styles” bring to the table, I think the key things we as architects need to do are:

  1. Gain sufficient understanding of these architectural styles to be able to articulate their benefits (and of course their risks) to clients.
  2. Understand what the real differences are between these technologies and the ones that went before, so we can build solutions that take advantage of those differences rather than more of the “same-old-architecture” in a slightly different guise.
  3. Figure out how we sell these benefits to the really important stakeholders (the RIS’s).

I reckon that in 2010 being able to identify the RIS’s and convince them of the business benefits of going with solutions based on technology X is going to be the absolute number one priority. Most businesses in 2010 are going to be struggling to survive, not thinking about IT spend. However, survival requires businesses to be agile and also able to swallow less fortunate companies as efficiently and quickly as possible. Thankfully I think the really good architects, the ones who can do this and span the business-IT gap, will still be around this time next year. I’m not sure about the rest, though.