A Step Too Far?

The trouble with technology, especially it seems computer technology, is that it keeps “improving”. I’ve written before about the ethics of the job that we as software architects do and whether we should always accept what we are asked to build without asking questions, not least of which should be: is what I am building, or being asked to build, a technology step too far?

Three articles have caught my eye this week which have made me ponder this question again.

The first is from the technology watcher and author Nicholas Carr, who talks about the Glass Collective, an investment syndicate made up of three companies: Google Ventures, Andreessen Horowitz and Kleiner Perkins Caufield & Byers, whose collective aim is to provide seed funding to entrepreneurs in the Glass ecosystem to help jump-start their ideas. For those not in the know about Glass, it is, according to the Google blog, all about “getting technology out of the way” and has the aim of building technology that is “seamless, beautiful and empowering”. Glass’s first manifestation is to be Internet-connected glasses that take photos, record video and offer hands-free Internet access right in front of a user’s eyes.

Clearly the type of augmented reality that Glass opens up could have huge educational benefits (think of walking around a museum or art gallery and getting information on what you are looking at piped right to you as you look at different works of art) as well as very serious privacy implications. For another view on this read the excellent blog post from my IBM colleague Rick Robinson on privacy in digital cities.

In his blog post Carr refers to a quote from Marshall McLuhan, made half a century ago and now seeming quite prescient:

Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.

The next thing to catch my eye (or actually several thousand things) was the whole sorry tale of the Boston bombings. This post in particular from the Wall Street Journal discusses the role of Boston’s so-called fusion center that “helps investigators scour for connections among potential suspects, by mining hundreds of law enforcement sources around the region, ranging from traffic violations, to jail records and criminal histories, along with public data like property records.”

Whilst I doubt anyone would question the validity of using data in this way to track down people who have performed atrocities such as we saw in Boston, it does highlight just how much data is now collected on us and about us, much of which we have no control over and which is, in effect, broadcast to the world.

Finally, on a much lighter note, we learn that the contraceptive maker Durex has released their “long distance, sexy time fundawear”. I’ll let you watch the first live trial video of this at your leisure (warning: not entirely work safe) but let’s just say here that it adds a whole new dimension to stroking the screen on your smartphone. Whilst I guess this one has no immediate privacy issues (providing the participants don’t wear their Google Glass at the same time as playing in their fundawear, at least), it does raise some interesting questions about how much we will let technology impinge on the most intimate parts of our lives.

So where does this latest foray of mine into digital privacy take us and what conclusions, if any, can we draw? Back in 2006 IBM Fellow and Chief Scientist Jeff Jonas posted an entry on his blog called Responsible Innovation: Designing for Human Rights, in which he asks two questions: what if we are creating technologies that fly in the face of the Universal Declaration of Human Rights, and what if systems are designed without the essential characteristics needed to support basic privacy and civil liberties principles?

Jeff argues that if technologies could play a role in any of the arrest, detention, exile, interference, attacks or deprivation mentioned in the Universal Declaration of Human Rights then they must support disclosure of the source upon which such invasions are predicated. He suggests that systems that could affect one’s privacy or civil liberties should have a number of design characteristics built in that allow for some level of auditability as well as ensuring the accuracy of the data they hold: characteristics such as every data point being associated with its data source and with its author, and so on. Given this was written in 2006, when Facebook was only two years old and still largely confined to use in US universities, it is a hugely prescient and thoughtful piece of insight (which is why Jeff is an IBM Fellow, of course).
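To make that idea a little more concrete, here is a minimal sketch (my own illustration, not anything taken from Jeff’s post) of what a data point that carries its own provenance might look like, so that any decision based on it can be audited back to a source and an author:

    # Illustrative only: every data point carries its source and author so that
    # any decision based on it can be traced back to where it came from.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class DataPoint:
        value: str        # the assertion itself, e.g. "address: 10 High Street"
        source: str       # the system of record the value came from
        author: str       # the person or process that asserted it
        recorded_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    def audit_trail(points):
        """Return a human-readable provenance line for each data point."""
        return [f"{p.value!r} from {p.source} "
                f"(author: {p.author}, recorded {p.recorded_at:%Y-%m-%d})"
                for p in points]

    record = [
        DataPoint("vehicle registration XYZ 123",
                  source="traffic-violations", author="bulk-import"),
        DataPoint("address: 10 High Street",
                  source="property-records", author="public-registry"),
    ]
    for line in audit_trail(record):
        print(line)

The point is not the code itself but the principle: provenance is a first-class part of the data model rather than something bolted on afterwards.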

So, there’s an idea! New technologies, when they come along, should be examined to ensure they have built-in safeguards that mean such rights as are granted to us all in the Universal Declaration of Human Rights are not infringed or taken away from us. How would this be done and, more importantly of course, what bodies or organisations would we empower to ensure such safeguards were both effective and enforceable? There are no easy or straightforward answers here, but it is certainly a topic for some discussion, I believe.

The Moral Architect

I started my career in the telecommunications division of the General Electric Company (GEC) as a software engineer designing digital signalling systems for Private Branch Exchanges based on the Digital Private Network Signalling System (DPNSS). As part of that role I represented GEC on the working party that defined the DPNSS standard, which was owned by British Telecom. I remember at one of the meetings the head of the working party, whose name I unfortunately forget, posed the question: what would have happened if regimes such as those of Nazi Germany or the Stalinist Soviet Union had had access to the powerful (sic) technology we were developing? When I look back at that time (the early 80s) such “powerful technology” looks positively antiquated – we were actually talking about little more than the ability to know who was calling whom using calling line identification! However, that question was an important one to ask then and is one we should be asking more than ever today.

One of the roles of the architect is to ask the questions that others tend to either forget about or purposely don’t ask because the answer is “too hard”. Questions like:

  • So you expect 10,000 people to use your website, but what happens if it really takes off and the number of users is 10 or 100 times that?
  • So you’re giving your workforce mobile devices that can be used to access your sales systems, what happens when one of your employees leaves their tablet on a plane/train/taxi?
  • So we are buying database software from a new vendor who will help us migrate from our old systems but what in-house skills do we have to manage and operate this new software?
  • Etc.

In many ways these are the easy questions. For a slightly harder one, consider this question posed by Nicholas Carr in this blog post:

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

Pity the poor architect who has to design for that particular use case (and probably several hundred others not yet thought of)! Whilst this might seem to be some way off, the future, as they say, is actually a lot closer than you think. As Carr points out, the US Department of Defense has just issued guidelines designed to:

Minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.

Guidelines which presumably software architects and designers, amongst others, need to get their heads around.

For anyone who has even the remotest knowledge of the genre of science fiction this is probably going to sound familiar. As far back as 1942 the author Isaac Asimov formulated his famous Three Laws of Robotics, which current and future software architects may well be minded to adopt as an important set of architectural principles. These three laws, as stated in Asimov’s short story Runaround, are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As stated here these laws are beautifully concise and seemingly unambiguous; the devil, of course, will be in the implementation. Asimov himself went on to make quite a career of writing stories that tussled with some of the ambiguities that could arise from the conflicts between these laws.
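Purely as an illustration (my own sketch, not anything Asimov specified and certainly not a real safety framework), here is how such ordered principles might be expressed as an architectural guard, with a higher-priority law able to veto anything a lower-priority one would allow:

    # A toy sketch only: Asimov's laws as an ordered series of checks, where an
    # earlier (higher-priority) law can veto anything the later laws would allow.
    from dataclasses import dataclass

    @dataclass
    class ProposedAction:
        description: str
        harms_human: bool = False        # would carrying it out injure a human?
        ordered_by_human: bool = False   # was it explicitly ordered by a human?
        endangers_robot: bool = False    # would it destroy or damage the robot?

    def evaluate(action: ProposedAction) -> str:
        # First Law: never harm a human; nothing below can override this.
        if action.harms_human:
            return f"Refused ({action.description}): violates the First Law"
        # Second Law: obey human orders (already satisfied once no human is harmed).
        if action.ordered_by_human:
            return f"Obeying order: {action.description}"
        # Third Law: protect its own existence, absent a conflicting order.
        if action.endangers_robot:
            return f"Refused ({action.description}): violates the Third Law"
        return f"Permitted: {action.description}"

    print(evaluate(ProposedAction("swerve into the pedestrians", harms_human=True)))
    print(evaluate(ProposedAction("enter the burning building to fetch the documents",
                                  ordered_by_human=True, endangers_robot=True)))

Even in a toy like this the hard part is obvious: deciding, in a fraction of a second and with imperfect information, what actually counts as “harm” is exactly the kind of judgement Carr’s bridge scenario forces on the algorithm.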

So, back to the point of this blog. As our systems become ever more complex and infringe on more and more of our everyday lives, are ethical or moral requirements such as these going to be another set of things that software architects need to deal with? I would say absolutely yes. More than ever we need to understand not just the impact on humanity of those systems we are building but also of those systems (and tools) we are using every day. As Douglas Rushkoff says in his book Program or Be Programmed:

If you don’t know what the software you’re using is for, then you’re not using it but being used by it.

In a recent blog post Seth Godin poses a number of questions about what freedom in a digital world really means. Many of these are difficult moral questions with no easy answer, and yet the systems we are building now, today, are implicitly or explicitly embedding assumptions about some of these questions whether we like it or not. One could argue that we should always question whether a particular system should be built at all (just because we can do something does not necessarily mean we should), but often by the time you realise you should be asking such questions it’s already too late. Many of the systems we have today were not built as such, but rather grew or emerged. Facebook may have started out as a means of connecting college friends but is now a huge interconnected world of relationships and likes and dislikes and photographs and timelines and goodness knows what else that can be ‘mined’ for all sorts of purposes not originally envisaged.

One of the questions architects and technologists alike must surely be asking is: how much mining of personal data is it right to do? The technology exists to track our digital presence wherever we go, but how much should we be making use of that data, and to what end? The story of how the US retailer Target found out a teenage girl was pregnant before her father did has been doing the rounds for a while now. Apart from the huge embarrassment to the girl and her family this story probably had a fairly harmless outcome; however, what if that girl had lived in a part of the world where such behaviour was treated with less sympathy?

It is of course up to each of us to decide what sort of systems we are or are not prepared to work on in order to earn a living. Each of us must make a moral and ethical judgment based on our own values and beliefs. We should also take care in judging others who create systems we do not agree with or think are “wrong”. What is important, however, is to always question the motives and the reasons behind those systems, to be very clear why you are doing what you are doing, and to be able to sleep easy having made your decision.