On Ethics and Algorithms

Photo by Franck V. on Unsplash

An article on the front page of the Observer, Revealed: how drugs giants can access your health records, caught my eye this week. In summary the article highlights that the Department of Health and Social Care (DHSC) has been selling the medical data of NHS patients to international drugs companies and has “misled” the public that the information contained in the records would be “anonymous”.

The data in question is collated from GP surgeries and hospitals and, according to “senior NHS figures”, can “routinely be linked back to individual patients’ medical records via their GP surgeries.” Apparently there is “clear evidence” that companies have identified individuals whose medical histories are of “particular interest.” The DHSC has replied by saying it only sells information after “thorough measures” have been taken to ensure patient anonymity.

As with many articles like this it is frustrating when some of the more technical aspects are not fully explained. Whilst I understand the importance of keeping the general readership on board, and not frightening them too much with the intricacies of statistics or cryptography, it would be nice to know a bit more about how these records are being made anonymous.

There is a hint of this in the Observer report when it states that the CPRD (the Clinical Practice Research Datalink) said the data made available for research was “anonymous” but, following the Observer’s story, changed the wording to say that the data from GPs and hospitals had been “anonymised”. This is a crucial difference. One of the more common methods of ‘anonymisation’ is to obscure or redact some bits of information. So, for example, a record could have patient names removed and ages and postcodes “coarsened”, that is only the first part of a postcode is included (e.g. SW1A rather than SW1A 2AA) and ages are placed in a range rather than using someone’s actual age (e.g. 60–70 rather than 63).
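
To make this concrete, here is a minimal sketch in Python of the kind of coarsening described above (the field names and data are invented for illustration, not the actual CPRD schema): the name is dropped, the postcode is truncated to its outward code and the exact age is replaced by a ten-year band.

```python
def coarsen_record(record):
    """Return a 'coarsened' copy of a patient record: name removed,
    postcode reduced to its outward code, exact age replaced by a band."""
    age = record["age"]
    band = f"{(age // 10) * 10}-{(age // 10) * 10 + 9}"      # e.g. 63 -> "60-69"
    return {
        "postcode_district": record["postcode"].split()[0],  # "SW1A 2AA" -> "SW1A"
        "age_band": band,
        "diagnosis": record["diagnosis"],
    }

record = {"name": "Jane Doe", "postcode": "SW1A 2AA", "age": 63, "diagnosis": "asthma"}
print(coarsen_record(record))
# {'postcode_district': 'SW1A', 'age_band': '60-69', 'diagnosis': 'asthma'}
```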

The problem with anonymising data records is that they are prone to what is referred to as data re-identification or de-anonymisation: the practice of matching anonymous data with publicly available information in order to discover the individual to whom the data belongs. One of the more famous examples of this is the Netflix Prize, a competition Netflix launched in 2006 offering $1m to anyone who could improve its recommendation system by 10%. A planned follow-up competition was abandoned in 2010 in response to a lawsuit and Federal Trade Commission privacy concerns. Although the dataset Netflix released so that entrants could test their algorithms had supposedly been anonymised (user names were replaced with a meaningless ID, and no gender or zip code information was included), a PhD student at the University of Texas was able to work out the real names of people in the supplied dataset by cross-referencing it with Internet Movie Database (IMDb) ratings, which people post publicly using their real names.
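
The re-identification step itself needs nothing more sophisticated than a join on whatever fields the two datasets share. The toy sketch below (invented field names and data, nothing like the real Netflix or IMDb schemas) shows the idea: match the “anonymised” records against a public dataset on the quasi-identifiers they have in common and see which pairings keep recurring.

```python
from collections import Counter

# Toy linkage attack: invented data, not the real Netflix/IMDb schemas.
anonymised = [
    {"user_id": "u1", "film": "Heat", "rating": 5, "date": "2005-03-01"},
    {"user_id": "u1", "film": "Se7en", "rating": 4, "date": "2005-03-02"},
    {"user_id": "u2", "film": "Heat", "rating": 2, "date": "2005-04-11"},
]
public_reviews = [
    {"name": "Alice Smith", "film": "Heat", "rating": 5, "date": "2005-03-01"},
    {"name": "Alice Smith", "film": "Se7en", "rating": 4, "date": "2005-03-02"},
]

# Count how many (film, rating, date) triples each anonymous ID shares with each real name.
matches = Counter()
for a in anonymised:
    for p in public_reviews:
        if (a["film"], a["rating"], a["date"]) == (p["film"], p["rating"], p["date"]):
            matches[(a["user_id"], p["name"])] += 1

print(matches.most_common())  # [(('u1', 'Alice Smith'), 2)] -- two shared ratings link u1 to Alice
```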

Herein lies the problem with the anonymisation of datasets. As Michael Kearns and Aaron Roth highlight in their recent book The Ethical Algorithm, when an organisation releases anonymised data it can try to make an intelligent guess as to which bits of the dataset to anonymise, but it is difficult (probably impossible) to anticipate what other data sources either already exist or could be made available in the future that could be used to correlate records. This is the reason the computer scientist Cynthia Dwork has said “anonymised data isn’t” – meaning either it isn’t really anonymous or so much of the dataset has had to be removed that it is no longer data (at least in any useful way).

So what to do? Is it actually possible to release anonymised datasets out into the wild with any degree of confidence that they can never be de-anonymised? Thankfully something called differential privacy, invented by the aforementioned Cynthia Dwork and colleagues, allows us to do just that. Differential privacy is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in that dataset.
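
For the more mathematically inclined, the formal definition (due to Dwork and her colleagues) says that a randomised algorithm M is ε-differentially private if, for any two datasets D and D′ differing in a single individual’s record, and for any set of possible outputs S:

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S]

In other words, whether or not your record is included changes the probability of any given output by at most a factor of e^ε, so very little can be inferred about any one individual.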

To understand how differential privacy works consider this example*. Suppose we want to conduct a poll of all people in London to find out how many have driven after taking non-prescription drugs. One way of doing this is to randomly sample a suitable number of Londoners, asking them if they have ever driven whilst under the influence of drugs. The data collected could be entered into a spreadsheet and various statistics (e.g. number of men, number of women, maybe ages) derived. The problem is that, whilst collecting this information, lots of compromising personal details are gathered which, if the data were stolen, could be used against the people who answered honestly.

In order to avoid this problem consider the following alternative. Instead of asking people the question directly, first ask them to flip a coin but not to tell us how it landed. If the coin comes up heads they tell us (honestly) whether they have driven under the influence. If it comes up tails, however, they give us a random answer: they flip the coin again and tell us “yes” if it comes up heads or “no” if it comes up tails. This polling protocol is a simple randomised algorithm and a basic form of differential privacy. So how does this work?

If your answer is no, the randomised response answers no two out of three times. It answers no only one out of three times if your answer is yes. Diagram courtesy Michael Kearns and Aaron Roth, The Ethical Algorithm 2020

When we ask people if they have driven under the influence using this protocol, half the time (i.e. when the coin lands heads up) the protocol tells them to tell the truth. When the protocol tells them to respond with a random answer (i.e. when the coin lands tails up), half of that time they just happen to randomly give us the right answer. So they give us the right answer 1/2 + ((1/2) x (1/2)), or three-quarters, of the time. The remaining one quarter of the time they tell us a lie, and there is no way of telling true answers from lies. Surely, though, this injection of randomisation completely masks the true results and the data is now highly error prone? Actually, it turns out, this is not the case.

Because we know how this randomisation is introduced we can reverse engineer the answers we get to remove the errors and arrive at an approximation of the right answer. Here’s how. Suppose one-third of people in London have actually driven under the influence of drugs. Of that one-third whose truthful answer is “yes”, three-quarters will answer “yes” under the protocol, contributing 1/3 x 3/4 = 1/4. Of the two-thirds whose truthful answer is “no”, one-quarter will report “yes”, contributing 2/3 x 1/4 = 1/6. So we expect 1/4 + 1/6 = 5/12 of responses to be “yes”. Working backwards, if the observed “yes” rate is p then the true rate is 2 x (p - 1/4), and 2 x (5/12 - 1/4) = 1/3, which recovers the real figure.
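
Here is a small simulation of the whole scheme, a sketch in Python rather than anything production-ready, showing both the coin-flip protocol and the reverse-engineering step (if p is the observed proportion of “yes” responses, the estimated true rate is 2 x (p - 1/4)):

```python
import random

def randomised_response(truth: bool) -> bool:
    """One respondent following the coin-flip protocol described above."""
    if random.random() < 0.5:        # first flip: heads -> answer honestly
        return truth
    return random.random() < 0.5     # tails -> second flip decides the answer

def estimate_true_rate(responses):
    """Invert the protocol: observed 'yes' rate p = 1/4 + t/2, so t = 2(p - 1/4)."""
    p = sum(responses) / len(responses)
    return 2 * (p - 0.25)

random.seed(1)
true_rate = 1 / 3                                     # suppose a third really have driven on drugs
population = [random.random() < true_rate for _ in range(100_000)]
responses = [randomised_response(t) for t in population]

print(f"observed 'yes' rate: {sum(responses) / len(responses):.3f}")   # ~5/12 = 0.417
print(f"estimated true rate: {estimate_true_rate(responses):.3f}")     # ~0.333
```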

So what is the point of doing the survey like this? Simply put, it allows the true answer to be hidden behind the protocol. If the data were leaked and an individual in it was identified as being suspected of driving under the influence, they could always argue they only said “yes” because of the way the coins fell – the randomisation gives everyone plausible deniability.

In the real world a number of organisations, including the US Census Bureau, Apple, Google and Privitar Lens, use differential privacy to limit the disclosure of private information about individuals whose information is in public databases.

It would be nice to think that the NHS data that is supposedly being used by US drug companies was protected by some form of differential privacy. If it were, and if this could be explained to the public in a reasonable and rational way, then surely we would all benefit both in the knowledge that our data is safe and is maybe even being put to good use in protecting and improving our health. After all, wasn’t this meant to be the true benefit of living in a connected society where information is shared for the betterment of all our lives?

*Based on an example from Kearns and Roth in The Ethical Algorithm.

Cummings needs data scientists, economists and physicists (oh, and weirdos)

Dominic Cummings – Image Copyright Business Insider Australia

To answer my (rhetorical) question in this post I think it’s been pretty much confirmed since the election that Dominic Cummings is, in equal measures, the most influential, disruptive, powerful and dangerous man in British politics right now. He has certainly set the cat amongst the pigeons with this blog post, in which he has effectively by-passed the civil service recruitment process by advertising for people to join his ever-growing team of SPADs (special advisors). Cummings is looking for data scientists, project managers, policy experts and assorted weirdos to join his team. (Interestingly, today we hear that the self-proclaimed psychic Uri Geller has applied for the job, believing he qualifies because of the super-talented weirdo aspect of the job spec.)

Cummings is famed for his wide-ranging reading tastes and the job spec also cites a number of scientific papers potential applicants “will be considering”. The papers mentioned are broadly in the areas of complex systems and the use of maths and statistics in forecasting, which gives an inkling of the kind of problems Cummings sees as needing to be ‘fixed’ in the civil service as well as in government at large (including the assertion that “Brexit requires many large changes in policy and in the structure of decision-making”).

Like many of his posts, this particular one tends to ramble and contradict itself. In one paragraph he says you “do not need a PhD” but in the very next one that you “must have exceptional academic qualifications from one of the world’s best universities with a PhD or MSc in maths or physics.”

Cummings also returns to one of his favourite topics: the failure of projects – mega-projects in particular – and presumably those that governments tend to initiate and not complete on time or to budget (or at all). He’s an admirer of some of the huge project successes of yesteryear such as the Manhattan Project (1940s), ICBMs (1950s) and Apollo (1960s) but reckons that since then the Pentagon has “systematically de-programmed itself from more effective approaches to less effective approaches from the mid-1960s, in the name of ‘efficiency’.” Certainly the UK government is no stranger to some spectacular project failures of its own, both past and present (HS2 and Crossrail being two contemporary examples of, if not outright failures, certainly massive cost overruns).

However, as John Naughton points out here, “these inspirational projects have some interesting things in common: no ‘politics’, no bureaucratic processes and no legal niceties. Which is exactly how Cummings likes things to be.” Let’s face it, both Crossrail and HS2 would be a doddle if only you could do away with all those pesky planning proposals and environmental impact assessments and just move people out of the way quickly – sort of how they do things in China, maybe?

Cummings believes that now is the time to bring together the right set of people with a sufficient amount of cognitive diversity to work in Downing Street with him and other SPADs and start to address some of the wicked problems of government. One ‘lucky’ person will be his personal assistant, a role which he says will “involve a mix of very interesting work and lots of uninteresting trivia that makes my life easier which you won’t enjoy.” He goes on to say that in this role you “will not have weekday date nights, you will sacrifice many weekends — frankly it will be hard having a boy/girlfriend at all. It will be exhausting but interesting and if you cut it you will be involved in things at the age of ~21 that most people never see.” That’s quite some sales pitch for a job!

What this so-called job posting is really about, though, is another of Cummings’ abiding obsessions (which he often discusses in his blog): that the government in general, and the civil service in particular (which he groups together as “SW1”), is basically not fit for purpose because it is scientifically and technologically illiterate as well as being staffed largely with Oxbridge humanities graduates. The posting is also a thinly veiled attempt at pushing the now somewhat outdated “move fast and break things” mantra of Silicon Valley. An approach that does not always play out well in government (Universal Credit, anyone?). I well remember my time working at the DWP (yes, as a consultant) where one of the civil servants with whom I was working said that the only problem with disruption in government IT was that it was likely to lead to riots on the streets if benefit payments were not paid on time. Sadly, Universal Credit has shown us that it’s not so much street riots that are caused as a demonstrable increase in demand for food banks: on average, 12 months after roll-out, food banks see a 52% increase in demand, compared to 13% in areas with Universal Credit for 3 months or less.

Cummings, of course, would say that the problem is not so much that disruption per se causes problems but rather that the ineffective, stupid and incapable civil servants who plan and deploy such projects are at fault, hence the need to hire the right ‘assorted weirdos’ who will bring new insights that fusty old civil servants cannot see. Whilst he may well be right that SW1 is lacking in deep technical experts as well as great project managers and ‘unusual’ economists, he needs to realise that government transformation cannot succeed unless it is built on a sound strategy and good underlying architecture. Ideas are just thoughts floating in space until they can be transformed into actions that result in change – change which takes into account that the ‘products’ governments deal with are people, not software and hardware widgets.

This problem is far better articulated by Hannah Fry when she says that although maths has, and will continue to have, the capability to transform the world those who apply equations to human behaviour fall into two groups: “those who think numbers and data ultimately hold the answer to everything, and those who have the humility to realise they don’t.”

Possibly the last words should be left to Barack Obama who cautioned Silicon Valley’s leaders thus:

“The final thing I’ll say is that government will never run the way Silicon Valley runs because, by definition, democracy is messy. This is a big, diverse country with a lot of interests and a lot of disparate points of view. And part of government’s job, by the way, is dealing with problems that nobody else wants to deal with.

So sometimes I talk to CEOs, they come in and they start telling me about leadership, and here’s how we do things. And I say, well, if all I was doing was making a widget or producing an app, and I didn’t have to worry about whether poor people could afford the widget, or I didn’t have to worry about whether the app had some unintended consequences — setting aside my Syria and Yemen portfolio — then I think those suggestions are terrific. That’s not, by the way, to say that there aren’t huge efficiencies and improvements that have to be made.

But the reason I say this is sometimes we get, I think, in the scientific community, the tech community, the entrepreneurial community, the sense of we just have to blow up the system, or create this parallel society and culture because government is inherently wrecked. No, it’s not inherently wrecked; it’s just government has to care for, for example, veterans who come home. That’s not on your balance sheet, that’s on our collective balance sheet, because we have a sacred duty to take care of those veterans. And that’s hard and it’s messy, and we’re building up legacy systems that we can’t just blow up.”

Now I think that’s a man who shows true humility, something our current leaders (and their SPADs) could do with a little more of.

 

The story so far…

 

Photo by Joshua Sortino on Unsplash

It’s hard to believe that this year is the 30th anniversary of Tim Berners-Lee’s great invention, the World-Wide Web, and that much of the technology that enabled his creation is still less than 60 years old. Here’s a brief history of the Internet and the Web, and how we got to where we are today, in ten significant events.

 

#1: 1963 – Ted Nelson begins developing a model for creating and using linked content he calls hypertext and hypermedia. Hypertext is born.

#2: 1969 – The first message is sent over the ARPANET from computer science Professor Leonard Kleinrock’s laboratory at University of California, Los Angeles to the second network node at Stanford Research Institute. The Internet is born.

#3: 1969 – Charles Goldfarb, leading a small team at IBM, develops the first markup language, called Generalized Markup Language, or GML. Markup languages are born.

#4: 1989 – Tim Berners-Lee whilst working at CERN publishes his paper, Information Management: A Proposal. The World Wide Web (WWW) is born.

#5: 1993 – Mosaic, a graphical browser aiming to bring multimedia content to non-technical users (images and text on the same page), is invented by Marc Andreessen. The web browser is born.

#6: 1995 – Jeff Bezos launches Amazon “earth’s biggest bookstore” from a garage in Seattle. E-commerce is born.

#7: 1998 – The Google company is officially launched by Larry Page and Sergey Brin to market Google Search. Web search is born.

#8: 2003 – Facebook (then called FaceMash but changed to The Facebook a year later) is founded by Mark Zuckerberg with his college roommate and fellow Harvard University student Eduardo Saverin. Social media is born.

#9: 2007 – Steve Jobs launches the iPhone at MacWorld Expo in San Francisco. Mobile computing is born.

#10: 2018 – Tim Berners-Lee instigates act II of the web when he announces a new initiative called Solid, to reclaim the Web from corporations and return it to its democratic roots. The web is reborn?

I know there have been countless events that have enabled the development of our modern Information Age and you will no doubt think others should be included in preference to some of my suggestions. Also, I suspect that many people will not have heard of my last choice (unless you are a fairly hardcore computer type). The reason I have added it is that I think/hope it will start to address what is becoming one of the existential threats of our age, namely how we survive in a world awash with data (our data) that is being mined and used without us knowing, much less understanding, the impact of such usage. Rather than living in an open society in which ideas and data are freely exchanged and used to everyone’s benefit, we instead find ourselves in an age of surveillance capitalism which, according to this source, is defined as being:

…the manifestation of George Orwell’s prophesied Memory Hole combined with the constant surveillance, storage and analysis of our thoughts and actions, with such minute precision, and artificial intelligence algorithmic analysis, that our future thoughts and actions can be predicted, and manipulated, for the concentration of power and wealth of the very few.

In her book The Age of Surveillance Capitalism, Shoshana Zuboff provides a sweeping (and worrying) overview and history of the techniques that the large tech companies are using to spy on us in ways that even George Orwell would have found alarming. Not least because we have voluntarily given up all of this data about ourselves in exchange for what are sometimes the flimsiest of benefits. As Zuboff says:

Thanks to surveillance capitalism the resources for effective life that we seek in the digital realm now come encumbered with a new breed of menace. Under this new regime, the precise moment at which our needs are met is also the precise moment at which our lives are plundered for behavioural data, and all for the sake of others’ gain.

Tim Berners-Lee invented the World-Wide Web then gave it away so that all might benefit. Sadly some have benefited more than others, not just financially but also by knowing more about us than most of us would ever want or wish. I hope for all our sakes that the work Berners-Lee and his small group of supporters are doing makes enough progress to reverse the worst excesses of surveillance capitalism before it is too late.

What Are Digital Skills?

Photo by Sabri Tuzcu on Unsplash

There have been many, many reports, both globally and within the UK, bemoaning the lack of digital skills in today’s workforce. The term digital skills is somewhat amorphous however and can mean different things to different people.

To more technical types it can mean the ability to write code, develop new computer hardware or have deep insights into how networks are set up and configured. To less digitally savvy people it may just mean the ability to operate digital technology such as tablets and mobile phones, find information on the world wide web or even just fill out forms on web sites (e.g. to apply for a bank account).

A recent report from the CBI, Delivering Skills for the New Economy, which comes up with a number of concrete steps on how the UK might address a shortage of digital skills, suggests the following way of categorising these skills – useful if we are to find ways in which to address their scarcity.

  • Basic digital skills: Businesses define basic digital skills in similar terms. For most businesses this means computer literacy such as familiarity with Microsoft Office; handling digital information and content; core skills such as communication and problem-solving; and understanding how digital technologies work. This understanding of digital technologies includes understanding how data can be used to glean new insights, how social media provides value for a business or how an algorithm or piece of digitally-enabled machinery works.
  • Advanced digital skills: Businesses also broadly agree on the definitions of advanced digital skills. For most businesses, these include software engineering and development (77%), data analytics (77%), IT support and system maintenance (81%) and digital marketing and sales (72%). Businesses have highlighted their increasing need for specific advanced digital skills, including programming, visualisation, machine learning, data analytics, app development, 3D printing expertise, cloud awareness and cybersecurity.

It is important that a good grounding in the basic (core) skills is given to as many people as possible. The so-called digital natives or “Gen Zs” (at least in first world countries) have grown up knowing nothing else but the world wide web, touch screen technology and pervasive social media. Older generations, less so. All need this information if they are to operate effectively in the “New Economy” (or know enough to actively disengage from it if they choose to do so).

The basic skills will also allow people to make a more critical assessment of which advanced digital skills to pursue when making choices about jobs, to decide which social media companies they should or should not be using, and to understand how artificial intelligence might affect their career prospects.

I would argue that a basic level of advanced digital knowledge is also a requirement so that everyone can play a more active role in this modern economy and understand the implications of technology.

What is Blockchain Good For?

Blockchain?
Photo by Hitesh Choudhary on Unsplash

The genius of Tim Berners-Lee when he invented the World Wide Web back in 1989 was that he brought together three arcane technologies (hypertext, markup languages and internet communication protocols) in a way no one had thought of before and literally transformed the world.  Could blockchain do the same thing?  Satoshi Nakamoto, in the paper that introduced the world to Bitcoin almost 20 years later (published in 2008, with the network going live in 2009), also used three existing ideas (distributed databases, public key or asymmetric cryptography and proof-of-work) to show how a peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.
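
Of those three ideas, proof-of-work is usually the one that needs explaining.  Stripped of everything Bitcoin-specific, it boils down to “find a value whose hash has enough leading zeros”: expensive to find, trivial for anyone else to verify.  A toy sketch in Python (illustrative only, with a difficulty nowhere near real mining) might look like this:

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce such that sha256(block_data + nonce) starts with `difficulty` hex zeros.
    Hard to find, but trivial for anyone else to verify -- the essence of proof-of-work."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

block = "block 1: Alice pays Bob 5 coins"
nonce = proof_of_work(block)
print(nonce, hashlib.sha256(f"{block}{nonce}".encode()).hexdigest())  # hash starts with "0000"
```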

Depending on your point of view, and personal agenda, blockchain, the technology that underpins Bitcoin, either promises to do no less than solve the world’s refugee crisis and transform the management of health supply chains or is the most over-hyped, terrifying and foolish technology since Google Glass or MySpace.  Like any new technology we need to be very careful to separate the hype from the reality.

The documentary The Blockchain and Us, made by Manuel Stagars in 2017, interviews software developers, cryptologists, researchers, entrepreneurs, consultants, VCs, authors, politicians, and futurists from around the world and poses a number of questions such as:  How can the blockchain benefit the economies of nations?  How will it change society?  What does it mean for each of us?  The intent of the film is not to explain the technology but to give views on it and encourage a conversation about its potential wider implications.

Since I have begun to focus my architecture efforts on blockchain I often get asked the question that is the title of this blog post.  According to Gartner, blockchain has gone through the ‘peak of inflated expectations’ and is now sliding down into the ‘trough of disillusionment’.  The answer to the question, as is the case for most new technologies, is that it is “good for” some things but not everything.

As a technologist it pains me to say this but in the business world technology itself is not usually the answer to anything on its own.  As the Venture Capitalist Jimmy Song said at Consensus earlier this year*, “When you have a technology in search of a use, you end up with the crap that we see out there in the enterprise today.” Harsh words indeed but probably true.

Instead, what is needed is the business and organisational change that drives new business models in which technology is, if required, slotted in at the right time and place.  Business people talk about the return on investment of tech and the fact that technology often gobbles up staff time and money, without giving enough back.  Blockchain runs the risk of gobbling up too much time and money if the right considerations are not given to its use and applicability to business.

If we are to ensure blockchain has a valid business use and gets embedded into the technology stack that businesses use then we need to ensure the right questions get asked when considering its use.  As a start in doing this you could do worse than consider this set of questions from the US Department for Homeland Security.

Figure 1: blockchain decision flowchart (US Department of Homeland Security)

Many blockchain projects are still at the proof of technology stage although there are some notable exceptions.  The IBM Food Trust is a collaborative network of growers, processors, wholesalers, distributors, manufacturers, retailers and others enhancing visibility and accountability in each step of the food supply chain whilst the recently announced TradeLens aims to apply blockchain to the world’s global supply chain.  Both of these solutions are built on top of the open source Hyperledger Fabric blockchain platform which is one of the projects under the umbrella of the Linux Foundation.

What these and other successful blockchain systems are showing is that another question should be tagged onto the flowchart above (probably as the first question).  This would be something like: “Are you willing to be part of a collaborative business network and to share information on a need-to-know basis?”  The thing about permissioned networks like Hyperledger Fabric is that people don’t need to trust everyone on the network but they do need to agree who will be a part of it.  Successful blockchain business networks are proving to be the ones whose participants understand this and are willing to collaborate.

* As discovered in the Medium.com post Firms need business model change, not blockchain.

Software is Eating the World and Some Tech Companies are Eating Us

Today (12th March, 2018) is the World Wide Web’s 29th birthday.  Sir Tim Berners-Lee (the “inventor of the world-wide web”), in an interview with the Financial Times and in this Web Foundation post, has used this anniversary to raise awareness of how the web behemoths Facebook, Google and Twitter are “promoting misinformation and ‘questionable’ political advertising while exploiting people’s personal data”.  Whilst I hugely admire Tim Berners-Lee’s universe-denting invention, it has to be said he himself is not entirely without fault in the way he bequeathed us his invention.  In his defence, hindsight is a wonderful thing of course; no one could have possibly predicted at the time just how the web would take off and transform our lives, both for better and for worse.

If, as Marc Andreessen famously said in 2011, software is eating the world, then many of those powerful tech companies are consuming us (or at least our data, and I’m increasingly unsure there is any difference between us and the data we choose to represent ourselves by).

Here are five recent examples of some of the negative ways software is eating up our world.

Over the past 40+ years the computer software industry has undergone some fairly major changes.  Individually these were significant (to those of us in the industry at least) but if we look at these changes with the benefit of hindsight we can see how they have combined to bring us to where we are today.  A world of cheap, ubiquitous computing that has unleashed seismic shocks of disruption which are overthrowing not just whole industries but our lives and the way our industrialised society functions.  Here are some highlights for the 40 years between 1976 and 2016.

Figure: waves of change in the software industry, 1976–2016

And yet all of this is just the beginning.  This year we will be seeing technologies like serverless computing, blockchain, cognitive and quantum computing become more and more embedded in our lives in ways we are only just beginning to understand.  Doubtless the fallout from some of the issues I highlight above will continue to make themselves felt and no doubt new technologies currently bubbling under the radar will start to make themselves known.

I have written before about how I believe that we, as software architects, have a responsibility, not only to explain the benefits (and there are many) of what we do but also to highlight the potential negative impacts of software’s voracious appetite to eat up our world.

This is my 201st post on Software Architecture Zen (2016/17 were barren years in terms of updates).  This year I plan to spend more time examining some of the issues raised in this post and look at ways we can become more aware of them and hopefully not become so seduced by those sirenic entrepreneurs.

What Have we Learnt from Ten Years of the iPhone?

Ten years ago this week (on 9th January 2007) the late Steve Jobs, then at the height of his powers at Apple, introduced the iPhone to an unsuspecting world. The history of that little device (which has got both smaller and bigger in the intervening ten years) is writ large over the entire Internet so I’m not going to repeat it here. However it’s worth looking at the above video on YouTube, not just to remind yourself what a monumental and historical moment in tech history this was, even though few of us realised it at the time, but also to see a masterpiece in how to launch a new product.

Within two minutes of Jobs walking on stage he has the audience shouting and cheering as if he’s a rock star rather than a CEO. At around 16:25, when he has unveiled his new baby and shows for the first time how to scroll through a list on a screen (hard to believe that ten years ago no one knew this was possible), they are practically eating out of his hand and he still has over an hour to go!

This iPhone keynote, probably one of the most important in the whole of tech history, is a case study in how to deliver a great presentation. Indeed, Nancy Duarte in her book Resonate has this as one of her case studies for how to “present visual stories that transform audiences”. In the book she analyses the whole event to show how Jobs uses all of the classic techniques of storytelling: establishing what is and what could be, building suspense, keeping the audience engaged, making them marvel and finally showing them a new bliss.

The iPhone product launch, though hugely important, is not what this post is about. Rather, it’s about how ten years later the iPhone has kept pace with innovations in technology to not only remain relevant (and much copied) but also to continue to influence (for better and worse) the way people interact, communicate and indeed live. There are a number of enabling ideas and technologies, both introduced at launch as well as since, that have allowed this to happen. What are they, what can we learn from the example set by Apple, and how can we improve on them?

Open systems generally beat closed systems

At its launch Apple had created a small set of native apps the making of which was not available to third-party developers. According to Jobs, it was an issue of security. “You don’t want your phone to be an open platform,” he said. “You don’t want it to not work because one of the apps you loaded that morning screwed it up. Cingular doesn’t want to see their West Coast network go down because of some app. This thing is more like an iPod than it is a computer in that sense.”

Jobs soon went back on that decision which is one of the factors that has led to the overwhelming success of the device. There are now 2.2 million apps available for download in the App Store with over 140 billion downloads made since 2007.

As has been shown time and time again, opening systems up and allowing access to third-party developers nearly always beats keeping systems closed and locked down.

Open systems need easy to use ecosystems

Claiming your system is open does not mean developers will flock to it to extend your system unless it is both easy and potentially profitable to do so. Further, the second of these is unlikely to happen unless the first enabler is put in place.

Today, with new systems being built around cognitive computing, the Internet of Things (IoT) and blockchain, companies both large and small are vying with each other to provide easy to use but secure ecosystems that allow these new technologies to flourish and grow, hopefully to the benefit of business and society as a whole. There will be casualties on the way, but this competition, and the recognition that systems need to be built right rather than just building the right system at the time, is what matters.

Open systems must not mean insecure systems

One of the reasons Jobs gave for not initially making the iPhone an open platform was his concern over security and the potential for hackers to break into those systems and wreak havoc. These concerns have not gone away but have become even more prominent. IoT and artificial intelligence, when embedded in everyday objects like cars and kitchen appliances as well as in our logistics and defence systems, have the potential to cause their own unique and potentially disastrous type of destruction.

The average cost of a data breach alone is estimated at $3.8 to $4 million, and that’s without even considering the wider reputational loss companies face. Organisations need to monitor how security threats are evolving year to year and get well-informed insights about the impact they can have on their business and reputation.

Ethics matter too

With all the recent press coverage of how fake news may have affected the US election and may impact the upcoming German and French elections, as well as the implications of driverless cars making life and death decisions for us, the ethics of cognitive computing is becoming a more and more serious topic for public discussion as well as potential government intervention.

In October last year the White House released a report called Preparing for the Future of Artificial Intelligence. The report looked at the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy, and made a number of recommendations on further actions. These included:

  • Prioritising open training data and open data standards in AI.
  • Industry should work with government to keep government updated on the general progress of AI in industry, including the likelihood of milestones being reached.
  • The Federal government should prioritize basic and long-term AI research.

As part of the answer to addressing the Whitehouse report this week a group of private investors, including LinkedIn co-founder Reid Hoffman and eBay founder Pierre Omidyar, launched a $27 million research fund, called the Ethics and Governance of Artificial Intelligence Fund. The group’s purpose is to foster the development of artificial intelligence for social good by approaching technological developments with input from a diverse set of viewpoints, such as policymakers, faith leaders, and economists.

I have discussed before how transformative technologies like the world wide web have impacted all of our lives, and not always for the good. I hope that initiatives like that of the US government (which will hopefully continue under the new leadership) will enable a good and rational public discourse on how we allow these new systems to shape our lives for the next ten years and beyond.