Eponymous Laws and the Invasion of Technology

Unless you’ve had your head buried in a devilish software project that has consumed your every waking hour over the last month or so, you cannot help but have noticed that technology has been getting a lot of bad press lately. Here are some recent news stories that make one wonder whether our technology may be running away from us.

Is this just the internet reaching a level of maturity that past technologies, from the humble telephone through the VCR to the now ubiquitous games console, have all been through, or is there something really sinister going on here? What are the implications of all this for the software architect? Should we care, or do we just stick our heads in the sand and keep on building the systems that enable all of the above, and more, to happen?

Here are three eponymous laws* which I think could have been used to predict much of this:

  • Metcalfe’s law (circa 1980): “The value of a system grows as approximately the square of the number of users of the system.” A variation on this is Sarnoff’s law: “The value of a broadcast network is proportional to the number of viewers.”
  • Though I’ve never seen this described as an eponymous law, my feeling is it should be. It’s a quote from Marshall McLuhan (from his book Understanding Media: The Extensions of Man, published in 1964): “We become what we behold. We shape our tools and then our tools shape us.”
  • Clarke’s third law (from 1962): “Any sufficiently advanced technology is indistinguishable from magic.” This is from Arthur C. Clarke’s book Profiles of the Future.

Whilst Metcalfe’s law talks of the value of a system growing as the number of users increases, I suspect the same law applies to the disadvantage or detriment of such systems. The more people use a system, the more of them there will be seeking out ways of misusing it. If only 0.1% of the 2.4 billion people who use the internet use it for illicit purposes, that still makes a whopping 2.4 million, a number set to grow just as the number of online users grows.
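As a back-of-the-envelope sketch of that arithmetic (the figures are those quoted above; the function names are mine and purely illustrative):

```python
def metcalfe_value(users: int) -> int:
    """Metcalfe's law: value grows as roughly the square of the user count."""
    return users ** 2


def illicit_users(total_users: int, illicit_fraction: float) -> int:
    """Estimate of users who misuse a system, given the fraction who do."""
    return int(total_users * illicit_fraction)


internet_users = 2_400_000_000               # the ~2.4 billion quoted above
print(illicit_users(internet_users, 0.001))  # 0.1% -> 2400000
```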

As to Marshall McLuhan’s law, isn’t the stage we are at with the internet just that? The web is (possibly) beginning to shape us in terms of the way we think and behave. Should we be worried? Possibly. It’s probably too early to tell, and there is a lack of hard scientific evidence either way. I suspect this is going to be ripe ground for PhD theses for some years to come. In the meantime there are several more popular theses from the likes of Clay Shirky, Nicholas Carr, Aleks Krotoski and Baroness Susan Greenfield, who describe the positive and negative aspects of our online addictions.

And so to Arthur C. Clarke. I’ve always loved both his non-fiction and science fiction writing, and this is possibly one of his most incisive prophecies. It feels to me that technology has probably reached the stage where most of the population really do perceive it as “magic”. And therein lies the problem. Once we stop understanding how something works we start to believe in it almost unquestioningly. How many of us give a second thought when we climb aboard an aeroplane or train, or give ourselves up to doctors and nurses treating us with drugs unimagined even a few years ago?

In his essay PRISM is the dark side of design thinking, Sam Jacob asks what America’s PRISM surveillance program tells us about design thinking and concludes:

Design thinking annexes the perceived power of design and folds it into the development of systems rather than things. It’s a design ideology that is now pervasive, seeping into the design of government and legislation (for example, the UK Government’s Nudge Unit which works on behavioral design) and the interfaces of democracy (see the Design of the Year award-winning .gov.uk). If these are examples of ways in which design can help develop an open-access, digital democracy, Prism is its inverted image. The black mirror of democratic design, the dark side of design thinking.

Back in 1942 the science fiction author Isaac Asimov proposed the three laws of robotics as an inbuilt safety feature of what was then thought likely to become the dominant technology of the latter part of the 20th century, namely intelligent robots. Robots, at least in the form Asimov predicted, have not yet come to pass; however, in the internet, we have probably built a technology even more powerful and with more far-reaching implications. Maybe, as at least one person has suggested, we should be considering the equivalent of Asimov’s three laws for the internet? Maybe it’s time that we as software architects, the main group of people building these systems, began thinking about some inbuilt safety mechanisms for the systems we are creating?

*An eponym is a person or thing, whether real or fictional, after which a particular place, tribe, era, discovery, or other item is named. So-called eponymous laws are succinct observations or predictions named after a person (either by the person themselves or by someone else ascribing the law to them).

A Step Too Far?

The trouble with technology, especially, it seems, computer technology, is that it keeps “improving”. I’ve written before about the ethics of the job that we as software architects do and whether or not we should always accept what we are asked to do without asking questions, not least of which should be: is the technology I am building, or being asked to build, a step too far?

Three articles caught my eye this week that made me ponder this question again.

The first is from the technology watcher and author Nicholas Carr, who talks about the Glass Collective, an investment syndicate made up of three companies, Google Ventures, Andreessen Horowitz and Kleiner Perkins Caufield & Byers, whose collective aim is to provide seed funding to entrepreneurs in the Glass ecosystem to help jump-start their ideas. For those not in the know about Glass it is, according to the Google blog, all about “getting technology out of the way” and has the aim of building technology that is “seamless, beautiful and empowering”. Glass’s first manifestation is to be Internet-connected glasses that take photos, record video and offer hands-free Internet access right in front of the user’s eyes.

Clearly the type of augmented reality that Glass opens up could have huge educational benefits (think of walking around a museum or art gallery and having information piped right to you as you look at different works of art), as well as very serious privacy implications. For another view on this, read the excellent blog post from my IBM colleague Rick Robinson on privacy in digital cities.

In his blog post Carr refers to a quote from Marshall McLuhan, made half a century ago and now seeming quite prescient:

Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.

The next thing to catch my eye (or actually several thousand things) was the whole sorry tale of the Boston bombings. This post in particular from the Wall Street Journal discusses the role of Boston’s so-called fusion center that “helps investigators scour for connections among potential suspects, by mining hundreds of law enforcement sources around the region, ranging from traffic violations, to jail records and criminal histories, along with public data like property records.”

Whilst I doubt anyone would question the validity of using data in this way to track down people who have committed atrocities such as we saw in Boston, it does highlight just how much data is now collected on us and about us, much of which we have no say in broadcasting to the world.

Finally, on a much lighter note, we learn that the contraceptive maker Durex has released its “long distance, sexy time fundawear“. I’ll let you watch the first live trial video of this at your leisure (warning: not entirely work-safe) but let’s just say here that it adds a whole new dimension to stroking the screen of your smartphone. Whilst this one has no immediate privacy issues (providing the participants don’t wear their Google Glass at the same time as playing in their fundawear, at least), it does raise some interesting questions about how much we will let technology impinge on the most intimate parts of our lives.

So where does this latest foray of mine into digital privacy take us and what conclusions, if any, can we draw? Back in 2006 IBM Fellow and Chief Scientist Jeff Jonas posted a piece on his blog called Responsible Innovation: Designing for Human Rights, in which he asks two questions: what if we are creating technologies that fly in the face of the Universal Declaration of Human Rights, and what if systems are designed without the essential characteristics needed to support basic privacy and civil liberties principles?

Jeff argues that if technologies could play a role in any of the arrest, detention, exile, interference, attacks or deprivation mentioned in the Universal Declaration of Human Rights then they must support disclosure of the source upon which such invasions are predicated. He suggests that systems that could affect one’s privacy or civil liberties should have a number of design characteristics built in that allow for some level of auditability as well as ensuring the accuracy of the data they hold: characteristics such as every data point being associated with its data source, and every data point being associated with its author. Given this was written in 2006, when Facebook was only two years old and still largely confined to use in US universities, this is a hugely prescient and thoughtful piece of insight (which is why Jeff is an IBM Fellow of course).
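As a minimal sketch of what one of those characteristics might look like in code (the record layout and field names here are my own illustration, not Jonas’s design):

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class DataPoint:
    """A data point that carries its own provenance, so any decision based
    on it can later be audited back to where it came from and who made it."""
    value: str             # the data itself
    source: str            # the system or feed it arrived from
    author: str            # the person or process that created it
    recorded_at: datetime  # when it entered the system


# Example: a record whose origin could be disclosed if it were ever used
# to justify one of the invasions the Universal Declaration lists.
point = DataPoint(
    value="licence plate XYZ123 seen at junction 4",
    source="traffic-camera-feed-17",
    author="anpr-processor",
    recorded_at=datetime(2013, 4, 20, 9, 30),
)
```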

So, there’s an idea! New technologies, when they come along, should be examined to ensure they have built-in safeguards so that the rights granted to us all in the Universal Declaration of Human Rights are not infringed or taken away from us. How would this be done and, more importantly of course, what bodies or organisations would we empower to ensure such safeguards were both effective and enforceable? There are no easy or straightforward answers here, but it is certainly a topic for some discussion I believe.

The Moral Architect

I started my career in the telecommunications division of the General Electric Company (GEC) as a software engineer designing digital signalling systems for Private Branch Exchanges based on the Digital Private Network Signalling System (DPNSS). As part of that role I represented GEC on the working party that defined the DPNSS standard, which was owned by British Telecom. I remember at one of the meetings the head of the working party, whose name I unfortunately forget, posed the question: what would have happened if regimes such as those of Nazi Germany or the Stalinist Soviet Union had had access to the powerful (sic) technology we were developing? When I look back at that time (the early ’80s) such “powerful technology” looks positively antiquated; we were actually talking about little more than the ability to know who was calling whom using calling line identification! However, that question was an important one to ask and is one we should be asking more than ever today.

One of the roles of the architect is to ask the questions that others tend to either forget about or purposely don’t ask because the answer is “too hard”. Questions like:

  • So you expect 10,000 people to use your website but what happens if it really takes off and the number of users is 10 or 100 times that?
  • So you’re giving your workforce mobile devices that can be used to access your sales systems, what happens when one of your employees leaves their tablet on a plane/train/taxi?
  • So we are buying database software from a new vendor who will help us migrate from our old systems but what in-house skills do we have to manage and operate this new software?
  • And so on.

In many ways these are the easy questions. For a slightly harder one, consider this question posed by Nicholas Carr in this blog post.

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?
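Stating the dilemma as code makes the difficulty obvious. Here is a deliberately naive sketch of the decision as pure cost minimisation; every weight and name in it is an invented assumption, and deciding what those numbers ought to be is precisely the unsolved ethical problem:

```python
# A deliberately naive sketch of the bridge dilemma as cost minimisation.
# Every weight below is an invented assumption; choosing these numbers IS
# the ethical problem, not an implementation detail.

def choose_action(occupants: int, pedestrians: int) -> str:
    swerve_cost = occupants * 1.0      # risk to the car's occupants
    continue_cost = pedestrians * 1.0  # risk to the children in the road
    return "swerve" if swerve_cost < continue_cost else "continue"


print(choose_action(occupants=1, pedestrians=3))  # "swerve", but on whose authority?
```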

Pity the poor architect who has to design for that particular use case (and probably several hundred others not yet thought of)! Whilst this might seem to be some way off, the future, as they say, is actually a lot closer than you think. As Carr points out, the US Department of Defense has just issued guidelines designed to:

Minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.

Guidelines which presumably software architects and designers, amongst others, need to get their heads around.

For anyone who has even the remotest knowledge of the genre of science fiction this is probably going to sound familiar. As far back as 1942 the author Isaac Asimov formulated his famous three laws of robotics, which current and future software architects may well be minded to adopt as an important set of architectural principles. These three laws, as stated in Asimov’s 1942 short story Runaround, are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As stated here these laws are beautifully concise and unambiguous; the devil, of course, is in the implementation. Asimov himself went on to make quite a career of writing stories that tussled with some of the ambiguities that could arise from conflicts between these laws.
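As a thought experiment, here is a minimal sketch of the three laws as strictly ordered architectural constraints; the three predicates are placeholders I have invented, and implementing them (what counts as “harm”? whose orders count?) is exactly where that ambiguity lives:

```python
# A thought-experiment sketch: the three laws as strictly ordered
# constraints over a set of candidate actions. All three predicates are
# invented placeholders; implementing them is the genuinely hard part.

def harms_human(action: str) -> bool:
    return False  # placeholder: quantifying "harm" is the unsolved problem

def disobeys_order(action: str) -> bool:
    return False  # placeholder: whose orders, and how are conflicts detected?

def endangers_self(action: str) -> bool:
    return False  # placeholder

def select_action(candidates: list[str]) -> str | None:
    # Each law only filters among actions already permitted by the laws
    # above it, so a lower law can never override a higher one.
    safe = [a for a in candidates if not harms_human(a)]                  # First Law
    obedient = [a for a in safe if not disobeys_order(a)] or safe         # Second Law
    prudent = [a for a in obedient if not endangers_self(a)] or obedient  # Third Law
    return prudent[0] if prudent else None
```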

So back to the point of this blog. As our systems become ever more complex and intrude on more and more of our everyday lives, are ethical or moral requirements such as these going to be another set of things that software architects need to deal with? I would say absolutely yes. More than ever we need to understand the impact on humanity not just of the systems we are building but also of the systems (and tools) we are using every day. As Douglas Rushkoff says in his book Program or Be Programmed:

If you don’t know what the software you’re using is for, then you’re not using it but being used by it.

In a recent blog post Seth Godin poses a number of questions about what freedom in a digital world really means. Many of these are difficult moral questions with no easy answer, and yet the systems we are building now, today, are implicitly or explicitly embedding assumptions around some of these questions whether we like it or not. One could argue that we should always question whether a particular system should be built at all (just because we can do something does not necessarily mean we should), but often by the time you realise you should be asking such questions it’s already too late. Many of the systems we have today were not built as such, but rather grew or emerged. Facebook may have started out as a means of connecting college friends but is now a huge interconnected world of relationships and likes and dislikes and photographs and timelines and goodness knows what else that can be ‘mined’ for all sorts of purposes not originally envisaged.

One of the questions architects and technologists alike must surely be asking is: how much mining (of personal data) is it right to do? Technology exists to track our digital presence wherever we go, but how much should we be making use of that data, and to what end? The story of how the US retailer Target found out a teenage girl was pregnant before her father did has been doing the rounds for a while now. Apart from the huge embarrassment to the girl and her family this story probably had a fairly harmless outcome, but what if that girl had lived in a part of the world where such behaviour was treated with less sympathy?

It is of course up to each of us to decide what sort of systems we are or are not prepared to work on in order to earn a living. Each of us must make a moral and ethical judgement based on our own values and beliefs. We should also take care in judging others who create systems we do not agree with or think are “wrong”. What is important, however, is to always question the motives and the reasons behind those systems, to be very clear why you are doing what you are doing, and to be able to sleep easy having made your decision.

Social Networking and All That Jazz

I was recently asked what impact I thought Web 2.0 and social networking has had, or is about to have, on our profession. Here is my take:

  • The current generation of students going through secondary school and university (who will be hitting the employment market over the next few years) have spent most of their formative years using Web 2.0. For these people instant messaging, having huge groups of “friends” and organising events online is as second nature as sending emails and using computers to write documents is to us. How will this change the way we do our jobs and the way software and services companies do business?
    • Instant and informal networks (via Twitter, Facebook etc.) will spring up, share information and disappear again. This will allow vendors and customers to work together in new ways and more quickly than ever before.
    • Devices like advanced smart phones and tablets which can be carried anywhere and are always connected will speed up even more how quickly information gets disseminated and used.
    • Whilst the current generation berates the upcoming one for the time wasted sending pointless messages to friends and creating blog entries hardly anyone reads, they are at least doing something different and liberating: creating, as opposed to simply consuming, content. So what if 99.99% of that content is rubbish? 0.01%, or even 0.001%, of a population of several billion is still a lot of potentially good and innovative thoughts and ideas. The challenge, of course, is finding the good stuff.
  • Email as an effective communication aid is coming to its natural end. The new generation who have grown up on blogs, Twitter and Facebook will laugh at the amount of time we spend sweating over mountains of email. New tools will be needed that provide effective ways of quickly and accurately searching the content published via Web 2.0 to find the good stuff (and to detect potential good stuff early).
  • More 20th century content distributors (newspapers, TV companies, book and magazine publishers) will go the way of the music industry if they cannot find a new business model to earn money. This is both an opportunity (we can help them create the new opportunities) and a threat (loss of a large customer base if they go under) to IT professionals and service companies.
  • The upcoming generation will not have loyalties to their employers but only to the network they happen to be a part of at the time. This is the natural progression from the outsourcing of labour, the destruction of company pension schemes and everyone being treated as freelancers. Whilst this has been hard for the people who have gone through that shift, the new workers in their late teens and early 20s will know nothing else and will forge new relationships and ways of working using the new tools at their disposal. Employee turnover and the rate at which people change jobs will increase tenfold according to some pundits (Google ‘Shift Happens’ for some examples).
  • Formal classroom-type teaching is essentially dead. New devices with small cameras will allow virtual classrooms to spring up anywhere, and the speed with which information changes will mean material is out of date by the time a formal course is prepared. This, coupled with further education institutions having to keep raising fees to support increasing numbers of students, will lead to a collapse in the traditional ways of delivering learning.
  • The real value of networks comes from sharing information between as diverse a group of people as possible. Given that companies will be relying less on permanent employees and more on freelancers, these networks will increasingly use the internet. This provides some interesting challenges around security of information and managing intellectual capital. The domain of enterprise architecture has therefore just increased exponentially, as the enterprise has become the internet. How will companies manage and govern a network most of which they have little or no control over?
  • The new models for distributing software and services (e.g. application stores, cloud providers), as well as existing ones such as open source, will mark the end of the traditional packaged software and product vendors. Apple overtook Microsoft earlier this year in size as measured by market capitalisation and is now second only to Exxon. Much of this growth was, I suspect, driven by the innovative ways Apple has devised to create and distribute software (i.e. third parties, sometimes individuals, create it and Apple distributes it through the App Store).

For two good opposing views on what the internet is doing to our brains read the latest books by Clay Shirky and Nicholas Carr.