A Step Too Far?

The trouble with technology, especially it seems computer technology, is that it keeps “improving”. I’ve written before about the ethics of the job that we as software architects do and whether or not we should always accept what we do without asking questions, not least of which should be: is what I am building, or being asked to build, a technology step too far?

Three articles have caught my eye this week which have made me ponder this question again.

The first is from the technology watcher and author Nicholas Carr, who talks about the Glass Collective, an investment syndicate made up of three companies: Google Ventures, Andreessen Horowitz and Kleiner Perkins Caufield & Byers, whose collective aim is to provide seed funding to entrepreneurs in the Glass ecosystem to help jump-start their ideas. For those not in the know about Glass, it is, according to the Google blog, all about “getting technology out of the way” and has the aim of building technology that is “seamless, beautiful and empowering”. Glass’s first manifestation is to be Internet-connected glasses that take photos, record video and offer hands-free Internet access right in front of a user’s eyes.

Clearly the type of augmented reality that Glass opens up could have huge educational benefits (think of walking around a museum or art gallery and getting information on what you are looking at piped right to you as you look at different works of art) as well as very serious privacy implications. For another view on this read the excellent blog post from my IBM colleague Rick Robinson on privacy in digital cities.

In his blog post Carr refers to a quote from Marshall McLuhan, made a half century ago and now seeming quite prescient:

Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.

The next thing to catch my eye (or actually several thousand things) was the whole sorry tale of the Boston bombings. This post in particular from the Wall Street Journal discusses the role of Boston’s so-called fusion center that “helps investigators scour for connections among potential suspects, by mining hundreds of law enforcement sources around the region, ranging from traffic violations, to jail records and criminal histories, along with public data like property records.”

Whilst I doubt anyone would question the validity of using data in this way to track down people who have carried out atrocities such as we saw in Boston, it does highlight just how much data is now collected on us and about us, much of which we have no control over and which is, in effect, broadcast to the world.

Finally, on a much lighter note, we learn that the contraceptive maker Durex has released its “long distance, sexy time fundawear”. I’ll let you watch the first live trial video of this at your leisure (warning: not entirely work safe) but let’s just say here that it adds a whole new dimension to stroking the screen on your smartphone. Whilst I guess this one has no immediate privacy issues (providing the participants don’t wear their Google Glass at the same time as playing in their fundawear, at least), it does raise some interesting questions about how much we will let technology impinge on the most intimate parts of our lives.

So where does this latest foray of mine into digital privacy take us and what conclusions, if any, can we draw? Back in 2006 IBM Fellow and Chief Scientist Jeff Jonas published a post on his blog called Responsible Innovation: Designing for Human Rights in which he asks two questions: what if we are creating technologies that fly in the face of the Universal Declaration of Human Rights, and what if systems are designed without the essential characteristics needed to support basic privacy and civil liberties principles?

Jeff argues that if technologies could play a role in any of the arrest, detention, exile, interference, attacks or deprivation mentioned in the Universal Declaration of Human Rights then they must support disclosure of the source upon which such invasions are predicated. He suggests that systems which could affect one’s privacy or civil liberties should have a number of design characteristics built in that allow for some level of auditability as well as ensuring the accuracy of the data they hold: characteristics such as every data point being associated with its data source, every data point being associated with its author, and so on. Given this was written in 2006, when Facebook was only two years old and still largely confined to use in US universities, this is a hugely prescient and thoughtful piece of insight (which is why Jeff is an IBM Fellow, of course).
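
Jonas doesn’t prescribe an implementation, but to make the idea concrete here is a minimal sketch of my own (all names hypothetical) in which a system simply refuses to accept any data point that arrives without its source and author attached, so that any later decision based on it can be audited back to where the assertion came from:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DataPoint:
    """A single assertion about a person, carrying its own provenance."""
    subject: str       # who the data is about
    attribute: str     # e.g. "address", "travel_record", "criminal_history"
    value: str
    source: str        # Jonas: every data point is associated with its data source
    author: str        # Jonas: every data point is associated with its author
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def ingest(point: DataPoint, store: list[DataPoint]) -> None:
    """Reject any data point whose provenance is missing."""
    if not point.source or not point.author:
        raise ValueError("data point rejected: source and author are mandatory")
    store.append(point)
```

The point is not the code itself but the design decision it embodies: provenance is mandatory at the point of capture, not something bolted on later when an auditor comes calling.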

So, there’s an idea! New technologies, when they come along, should be examined to ensure they have built-in safeguards so that the rights granted to us all in the Universal Declaration of Human Rights are not infringed or taken away from us. How would this be done and, more importantly of course, what bodies or organisations would we empower to ensure such safeguards were both effective and enforceable? No easy or straightforward answers here, but certainly a topic for some discussion, I believe.

The Moral Architect

I started my career in the telecommunications division of the General Electric Company (GEC) as a software engineer designing digital signalling systems for Private Branch Exchanges based on the Digital Private Network Signalling System (DPNSS). As part of that role I represented GEC on the working party that defined the DPNSS standard, which was owned by British Telecom. I remember at one of the meetings the head of the working party, whose name I unfortunately forget, posed the question: what would have happened if regimes such as those of Nazi Germany or the Stalinist Soviet Union had had access to the powerful (sic) technology we were developing? When I look back at that time (the early 80s) such “powerful technology” looks positively antiquated – we were actually talking about little more than the ability to know who was calling whom using calling line identification! However, that question was an important one to ask and is one we should be asking more than ever today.

One of the roles of the architect is to ask the questions that others tend to either forget about or purposely don’t ask because the answer is “too hard”. Questions like:

  • So you expect 10,000 people to use your website but what happens if it really takes off and the number of users is 10 or 100 times that?
  • So you’re giving your workforce mobile devices that can be used to access your sales systems, what happens when one of your employees leaves their tablet on a plane/train/taxi?
  • So we are buying database software from a new vendor who will help us migrate from our old systems but what in-house skills do we have to manage and operate this new software?
  • Etc

In many ways these are the easy questions; for a slightly harder one, consider this question posed by Nicholas Carr in his blog post.

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

Pity the poor architect who has to design for that particular use case (and probably several hundred others not yet thought of)! Whilst this might seem to be some way off, the future, as they say, is actually a lot closer than you think. As Carr points out, the US Department of Defense has just issued guidelines designed to:

Minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.

Guidelines which presumably software architects and designers, amongst others, need to get their heads around.

For anyone who has even the remotest knowledge of the genre of science fiction this is probably going to sound familiar. As far back as 1942 the author Isaac Asimov formulated his famous three laws of robotics which current and future software architects may well be minded to adopt as an important set of architectural principles. These three laws, as stated in Asimov’s 1942 short story Runaround, are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As stated here these laws are beautifully concise and unambiguous; the devil, of course, will be in the implementation. Asimov himself went on to make quite a career of writing stories that tussled with some of the ambiguities that could arise from the conflicts between these laws.
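
Just to show how quickly the devil appears, here is a deliberately naive sketch (my own invention, not Asimov’s, with made-up field names) that encodes the three laws as a strict precedence check over a proposed action. The ordering is trivial to write down; deciding what counts as “harm”, and how a system could ever know, is where the real architectural problem lives:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool            # First Law: would this action injure a human?
    inaction_harms_human: bool   # First Law: would *not* acting allow a human to come to harm?
    ordered_by_human: bool       # Second Law: was this action ordered by a human?
    endangers_robot: bool        # Third Law: does this action put the robot itself at risk?

def permissible(action: Action) -> bool:
    """Apply the three laws in strict priority order to decide whether an action may proceed."""
    if action.harms_human:
        return False                       # First Law always wins
    if action.inaction_harms_human:
        return True                        # must act to prevent harm, overriding the laws below
    if action.ordered_by_human:
        return True                        # Second Law: obey, the First Law being satisfied
    return not action.endangers_robot     # Third Law: self-preservation comes last
```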

So back to the point of this blog. As our systems become ever more complex and impinge on more and more of our everyday lives, are ethical or moral requirements such as these going to be another set of things that software architects need to deal with? I would say absolutely yes. More than ever we need to understand not just the impact on humanity of those systems we are building but also of those systems (and tools) we are using every day. As Douglas Rushkoff says in his book Program or Be Programmed:

If you don’t know what the software you’re using is for, then you’re not using it but being used by it.

In a recent blog post Seth Godin poses a number of questions about what freedom in a digital world really means. Many of these are difficult moral questions with no easy answer, and yet the systems we are building today are implicitly or explicitly embedding assumptions about some of these questions whether we like it or not. One could argue that we should always question whether a particular system should be built at all (just because we can do something does not necessarily mean we should), but often by the time you realise you should be asking such questions it’s already too late. Many of the systems we have today were not built as such, but rather grew or emerged. Facebook may have started out as a means of connecting college friends, but it is now a huge interconnected world of relationships, likes and dislikes, photographs, timelines and goodness knows what else that can be ‘mined’ for all sorts of purposes not originally envisaged.

One of the questions architects and technologists alike must surely be asking is how much mining (of personal data) it is right to do. Technology exists to track our digital presence wherever we go, but how much should we be making use of that data, and to what end? The story of how the US retailer Target found out a teenage girl was pregnant before her father did has been doing the rounds for a while now. Apart from the huge embarrassment to the girl and her family this story probably had a fairly harmless outcome; however, what if that girl had lived in a part of the world where such behaviour was treated with less sympathy?

It is of course up to each of us to decide what sort of systems we are or are not prepared to work on in order to earn a living. Each of us must make a moral and ethical judgment based on our own values and beliefs. We should also take care in judging others who create systems we do not agree with or think are “wrong”. What is important, however, is to always question the motives and reasons behind those systems, to be very clear why you are doing what you are doing, and to be able to sleep easy having made your decision.

It’s the NFRs, Stupid

An apocryphal (to me at least) tale from Forbes provides a timely reminder that, even in this enlightened age of clouds that give you infrastructure (and more) in minutes and analytical tools that business folk can use to quickly slice and dice data in all manner of ways, fundamentals, like non-functional requirements (NFRs), don’t (or shouldn’t) go out of fashion.

According to Forbes the US retailer Target figured out that a teenager was pregnant before her parents did. Target analysed the buying behaviour of customers and identified 25 products (e.g. cocoa-butter lotion, a purse large enough to double as a diaper bag, and zinc and magnesium supplements) that allowed them to assign each shopper a “pregnancy prediction” score. The retailer also reckoned it could estimate the due date of a shopper to within a small window and so could send coupons timed to very specific stages of a pregnancy. In the case of this particular shopper, Target sent a letter, containing coupons, to a high-school pupil, whose father opened it and was aghast that the retailer should send coupons for baby clothes and cribs to a teenager. The disgruntled father visited his local Target store, accusing them of encouraging his daughter to get pregnant. The manager of the store apologised and called the father again a few days later to repeat his apology. However, this time the father was somewhat abashed and said he had spoken to his daughter only to find out that she was in fact pregnant and was due in August. This time he apologised to the manager.
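
Forbes doesn’t describe Target’s actual model, but the underlying idea is easy to caricature: a weighted score over basket contents that triggers a marketing action once it crosses a threshold. Here is a toy sketch (the products, weights and threshold are entirely invented for illustration) of how trivially such a “pregnancy prediction” score can be computed, which is precisely why the privacy question matters:

```python
# Toy illustration only: products, weights and threshold are invented, not Target's real model.
PREGNANCY_SIGNALS = {
    "unscented cocoa-butter lotion": 0.30,
    "oversized purse": 0.15,
    "zinc supplement": 0.20,
    "magnesium supplement": 0.20,
    "prenatal vitamins": 0.60,
}

def pregnancy_prediction_score(basket: list[str]) -> float:
    """Sum the weights of any 'signal' products in a shopper's basket, capped at 1.0."""
    return min(sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in basket), 1.0)

# A shopper buying lotion plus supplements crosses the (invented) threshold for maternity coupons.
basket = ["unscented cocoa-butter lotion", "zinc supplement", "magnesium supplement"]
if pregnancy_prediction_score(basket) > 0.5:
    print("send maternity coupons")
```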

So, what’s the lesson here for architects? Here’s my zen take:

  1. Don’t assume that simply because technology seems to be more magical and advanced you can ignore the fundamentals, in this case a person’s basic entitlement to privacy.
  2. With cloud and advanced analytics, IT is (apparently) passing control back to the business, as it has done in a cyclical fashion over the last 50 – 60 years (i.e. mainframe -> mini -> PC -> client-server -> browser -> cloud). Whoever “owns” the gateway to the system should not forget to keep the interests of the end user at heart. Ignore their wants and needs at your peril!
  3. Legislation, and the layman’s understanding of what technology can do, will always lag behind advances in the technology itself. Part of an architect’s role is to explain not only the benefits of a new technology but also the potential downsides to anyone who may be impacted by that technology. In the connected world that we now live in, that can be a very large audience indeed.

Part of being an architect is to talk to everyone to explain not only your craft but also your work. Use every opportunity to do this and reject no one who might want to understand a technology. As Philippe Kruchten says in his brilliant interpretation of Lao-Tsu’s Tao Te Ching for the use of software architects:

The architect is available to everyone and rejects no one.
She is ready to use all situations and does not waste anything.
This is called embodying the light.

Make sure you repeatedly “embody the light”.

What Now for Internet Piracy?

So SOPA is to be kicked into the long grass, which means it is at least postponed if not killed altogether. For those who have not been following the Stop Online Piracy Act debate, this is the bill proposed by a U.S. Republican Representative to expand the ability of U.S. law enforcement to fight online trafficking in copyrighted intellectual property (IP) and counterfeit goods. Supporters of SOPA said it would protect IP as well as the jobs and livelihoods of the people (and organisations) involved in creating books, films, music, photographs, etc. Opponents reckoned the legislation threatened free speech and innovation and would enable law enforcement officers to block access to entire internet domains, as well as violating the First Amendment. Inevitably much of the digerati came out in flat opposition to SOPA and staged an internet blackout on 18th January, when many sites “went dark” and Wikipedia was unavailable altogether. Critics of SOPA cited the fact that the bill was supported by the music and movie industries as an indication that it was just another way for these industry dinosaurs to protect their monopoly over content distribution. So, a last-minute victory for the new digital industry over the old analogue one?

And yet…

Check out this TED talk by digital commentator Clay Shirky called Why SOPA is a bad idea. Shirky, in his usual compelling way, puts a good case for why SOPA is bad (the talk was published before the recent announcement that the bill was being postponed), but the real interest for me in this talk was in the comments about it. Many people say that, yes, SOPA may be a bad bill, but there is nonetheless a real problem with content being given away that should otherwise be paid for, and that content creators (whether they be software developers, writers or photographers) are simply losing their livelihoods because people are stealing their work. Sure, there are copyright laws that are meant to prevent this sort of thing happening, but who can really chase down the websites and peer-to-peer networks that “share” content they have not created or paid for? SOPA may have been a bad bill, and may really have been about protecting the interests of large corporations who just want to carry on doing what they have always done without having to adapt or innovate. However, without some sort of regulation that protects the interests of individuals or small start-ups wishing to earn a living from their art, killing SOPA has not moved us forward in any way and certainly has not protected their interests. Unfortunately, some sort of internet regulation is inevitable.

For a historical perspective on why this is likely to be so, see the TED talk by the Liberal Democrat Paddy Ashdown called The global power shift. Ashdown argues that “where power goes governance must follow” and that there is plenty of historical evidence showing what happens when this is not the case (the recent/current financial meltdown to name but one example).

So SOPA may be dead, but something needs to replace it, and if we are to get the right kind of governance we must all participate in the debate, otherwise the powerful special-interest groups will get their own way. Clay Shirky argued that if SOPA failed to be passed it would be replaced by something else. Now, then, is our chance to ensure that whatever that is, it is right for content creators as well as distributors.

Ethics and Architecture

If you’ve not seen the BBC2 documentary All Watched Over By Machines of Loving Grace, catch it now on the BBC iPlayer while you can (it doesn’t work outside the UK unfortunately). You can see a preview of the series (another two episodes to go) on the website of Adam Curtis (the film-maker) here. The basic premise of the first programme is as follows.

Back in the 1950s a small group of people took up the ideas of the novelist Ayn Rand, whose philosophy of Objectivism advocated reason as the only means of acquiring knowledge and rejected all forms of faith and religion. They saw themselves as a prototype for a future society where everyone could follow their own selfish desires. One of the Rand ‘disciples’ was Alan Greenspan. Cut to the 1990s, where several Silicon Valley entrepreneurs, also followers of Rand’s philosophy, believed that the new computer networks would allow the creation of a society where everyone could follow their own desires without there being any anarchy. Alan Greenspan, by now Chairman of the Federal Reserve, also became convinced that the computers were creating a new kind of stable capitalism and convinced President Bill Clinton of a radical approach to cutting the United States’ huge deficit. He proposed that Clinton cut government spending and reduce interest rates, letting the markets control the fate of the economy, the country and ultimately the world. Whilst this approach appeared to work in the short term, it set off a chain of events which, according to Curtis’ hypothesis, led to 9/11, the Asian financial crash of 1997/98, the current economic crisis and the rise of China as a superpower that will soon surpass the United States. The “blind faith” we put in the machines that were meant to serve us led us to a “dream world” in which we trusted the machines to manage the markets for us, but in fact they were operating in ways we could not understand, producing outcomes we could never predict.

So what the heck has this got to do with architecture? Back in the mid-80s when I worked in Silicon Valley I remember reading an article in the San Jose Mercury News about a programmer who had left his job because he didn’t like the applications the software he’d been working on was being put to (something of a military nature, I suspect). Quite a noble act you might think (though given where he worked I suspect the guy didn’t have too much trouble finding another job pretty quickly). I wonder how many of us really think about the uses the software systems we are working on are being put to.

Clearly if you are working on the control software for a guided missile it’s pretty clear cut what the application is going to be used for. However, what if you are creating some piece of generic middleware? Yes, it could be put to good use in hospital information systems or food-aid distribution systems; however, the same software could be used for the ERP system of a tobacco company or in controlling surveillance systems that “watch over us with loving grace”.

Any piece of software can be used for both good and evil, and the developers of that software can hardly have it on their conscience to worry about what that end use will be. Just as nuclear power leads to both good (nuclear reactors; okay, okay, I know that’s debatable given what’s just happened in Japan) and bad (bombs), it is the application of a particular technology that decides whether something is good or bad. However, here’s the rub. As architects, aren’t we the ones who are meant to be deciding how software components are put together to solve problems, for better and for worse? Is it not therefore within our remit to control those ‘end uses’ and to walk away from those projects that will result in systems being built for bad rather than good purposes?

We all have our own moral compass and it is up to us as individuals to decide which way we point it. From my point of view I would hope that I never get involved in systems that in any way lead to an infringement of a person’s basic human rights, but how do I decide or know this? I doubt the people who built the systems that are the subject of the Adam Curtis films ever dreamed they would be used in ways which have almost led to the economic collapse of our society. I guess it is incumbent on all of us to research and investigate as much as we can the systems we find ourselves working on and to decide for ourselves whether we think we are creating machines that watch over us with “loving grace” or machines with more sinister intents. As ever, Arthur C. Clarke predicted this several decades ago, and if you have not read his short story Dial F for Frankenstein, now might be a good time to do so.