Why it’s different this time

Image Created Using Adobe Photoshop and Firefly

John Templeton, the American-born British stock investor, once said: “The four most expensive words in the English language are, ‘This time it’s different.’”

Templeton was referring to people and institutions who had invested in the next ‘big thing’ believing that this time it was different, the bubble could not possibly burst and their investments were sure to be safe. But then, for whatever reason, the bubble did burst and fortunes were lost.

Take as an example the tech boom of the 1990s. Investors poured money into previously unimagined technologies that seemed as though they could never fail. Then it all collapsed and many fortunes were lost as the Nasdaq dropped 75 percent from its peak.

It seems to be an immutable law of economics that busts will follow booms as sure as night follows day. The trick then is to predict the boom and exit your investment at the right time – not too soon and not too late, to paraphrase Goldilocks.

Most recently the phrase “this time it’s different” is being applied to the wave of AI technology which has been hitting our shores, especially since the widespread release of large language model technologies which current AI tools like OpenAI’s ChatGPT, Google’s PaLM, and Meta’s LLaMA use as their underpinning.

Which brings me to the book The Coming Wave by Mustafa Suleyman.

Suleyman was the co-founder of DeepMind (now owned by Google) and is currently CEO of Inflection, an AI 'studio' that, according to its company blurb, is "creating a personal AI for everyone".

The Coming Wave not only provides an overview of the capabilities of current AI systems but also contains a warning, which Suleyman refers to as the containment problem. If our future is to depend on AI technology (which it increasingly looks like it will, given that, according to Suleyman, LLMs are the "fastest diffusing consumer models we have ever seen"), how do you make it a force for good rather than evil, whereby a bunch of 'bad actors' could imperil our very existence? In other words, how do you monitor, control and limit (or even prevent) this technology?

Suleyman's central premise in this book is that the coming technological wave of AI is different from any that have gone before for five reasons, which make containment very difficult (if not impossible). In summary, these are:

  • Reason #1: Asymmetry – the potential imbalances or disparities caused by artificial intelligence systems being able to transfer extreme power from state to individual actors.
  • Reason #2: Exponentiality – the phenomenon where the capabilities of AI systems, such as processing power, data storage, or problem-solving ability, increase at an accelerating pace over time. This rapid growth is often driven by breakthroughs in algorithms, hardware, and the availability of large datasets.
  • Reason #3: Generality – the ability of an artificial intelligence system to apply its knowledge, skills, or capabilities across a wide range of tasks or domains.
  • Reason #4: Autonomy – the ability of an artificial intelligence system or agent to operate and make decisions independently, without direct human intervention.
  • Reason #5: Technological Hegemony – the malignant concentrations of power that inhibit innovation in the public interest, distort our information systems, and threaten our national security.

Suleyman's book goes into each of these attributes in detail and I do not intend to repeat any of that here (buy the book or watch his explainer video). Suffice it to say, however, that collectively these attributes mean this technology is about to deliver nothing less than a radical proliferation of power which, if unchecked, could lead to one of two possible (and equally undesirable) outcomes:

  1. A surveillance state (which China is currently building and exporting).
  2. An eventual catastrophe born of runaway development.

Other technologies have had one or maybe two of these capabilities but I don't believe any have had all five, certainly not at the level AI has. Electricity, for example, was a general-purpose technology with multiple applications, but even now individuals cannot (easily) build their own generators and there is certainly no autonomy in power generation. The internet comes closest to having all five attributes but it is not currently autonomous (though AI itself threatens to change that).

To be fair, Suleyman does not just present us with what, by any measure, is a truly wicked problem; he also offers a ten-point plan for how we might begin to address the containment problem and at least dilute the effects the coming wave might have. These stretch from built-in safety measures to prevent AI from acting autonomously in an uncontrolled fashion, through regulation by governments, right up to cultivating a culture around this technology that treats it with caution from the outset rather than adopting the 'move fast and break things' philosophy of Mark Zuckerberg. Again, get the book to find out more about what these measures might involve.

My more immediate concerns are not based solely on the five features described in The Coming Wave but on a sixth feature I have observed which I believe is equally important and increasingly overlooked in our rush to embrace AI. This is:

  • Reason #6: Techno-paralysis – the state of being overwhelmed or paralysed by the rapid pace of technological change.

As with the five features of Suleyman's coming wave, I see two equally undesirable outcomes of techno-paralysis:

  1. People become so overwhelmed and fearful, because of their lack of understanding of these technological changes, that they choose to withdraw from their use entirely. Maybe not just "dropping out" in an attempt to return to what they see as a better world, one where they had more control, but violently protesting and attacking the people and organisations they see as being responsible for this "progress". I'm talking the Tolpuddle Martyrs here, but on a scale that can be achieved using the organisational capabilities of our hyper-connected world.
  2. Rather than fighting against techno-paralysis we become irretrievably sucked into the systems that are creating and propagating these new technologies and, to coin a phrase, "drink the Kool-Aid". The former Greek finance minister and maverick economist Yanis Varoufakis refers to these systems, and the companies behind them, as the technofeudalists. We have become subservient to these tech overlords (i.e. Amazon, Alphabet, Apple, Meta and Microsoft) by handing over our data to their cloud spaces. By spending all of our time scrolling and browsing digital media we are acting as 'cloud-serfs' – working as unpaid producers of data to disproportionately benefit these digital overlords.

There is a reason why the big-five tech overlords are spending hundreds of billions of dollars between them on AI research, LLM training and acquisitions. For each of them this is the next beachhead that must be conquered and occupied, and the spoils will be huge for those who get there first – not just in terms of potential revenue but also in terms of new cloud-serfs captured. We run the risk of AI being the new tool of choice in weaponising the cloud to capture ever larger portions of our time in servitude to these companies, who produce evermore ingenious ways of controlling our thoughts, actions and minds.

So how might we deal with this potentially undesirable outcome of the coming wave of AI? Surely it has to be through education? Not just of our children but of everyone who has a vested interest in a future where we control our AI and not the other way round.

Last November the UK government's Department for Education (DfE) released the results of a Call for Evidence on the use of GenAI in education. The report highlighted the following benefits:

  • Freeing up teacher time (e.g. on administrative tasks) to focus on better student interaction.
  • Improving teaching and education materials to aid creativity by suggesting new ideas and approaches to teaching.
  • Helping with assessment and marking.
  • Adaptive teaching by analysing students' performance and pace, and tailoring educational materials accordingly.
  • Better accessibility and inclusion, e.g. for SEND students, where teaching materials could be more easily and quickly differentiated for their specific needs.

whilst also highlighting some potential risks including:

  • An over-reliance on AI tools (by students and staff) which would compromise their knowledge and skill development by encouraging them to passively consume information.
  • Tendency of GenAI tools to produce inaccurate, biased and harmful outputs.
  • Potential for plagiarism and damage to academic integrity.
  • Danger that AI will be used to replace or undermine teachers.
  • Exacerbation of digital divides and problems of teaching AI literacy in such a fast changing field.

I believe that to address these concerns effectively, legislators should consider implementing the following seven point plan:

  1. Regulatory Framework: Establish a regulatory framework that outlines the ethical and responsible use of AI in education. This framework should address issues such as data privacy, algorithm transparency, and accountability for AI systems deployed in educational settings.
  2. Teacher Training and Support: Provide professional development opportunities and resources for educators to effectively integrate AI tools into their teaching practices. Emphasize the importance of maintaining a balance between AI-assisted instruction and traditional teaching methods to ensure active student engagement and critical thinking.
  3. Quality Assurance: Implement mechanisms for evaluating the accuracy, bias, and reliability of AI-generated content and assessments. Encourage the use of diverse datasets and algorithms to mitigate the risk of producing biased or harmful outputs.
  4. Promotion of AI Literacy: Integrate AI literacy education into the curriculum to equip students with the knowledge and skills needed to understand, evaluate, and interact with AI technologies responsibly. Foster a culture of critical thinking and digital citizenship to empower students to navigate the complexities of the digital world.
  5. Collaboration with Industry and Research: Foster collaboration between policymakers, educators, researchers, and industry stakeholders to promote innovation and address emerging challenges in AI education. Support initiatives that facilitate knowledge sharing, research partnerships, and technology development to advance the field of AI in education.
  6. Inclusive Access: Ensure equitable access to AI technologies and resources for all students, regardless of their gender, socioeconomic background or learning abilities. Invest in infrastructure and initiatives to bridge the digital divide and provide support for students with special educational needs and disabilities (SEND) to benefit from AI-enabled educational tools.
  7. Continuous Monitoring and Evaluation: Regularly monitor and evaluate the implementation of AI in education to identify potential risks, challenges, and opportunities for improvement. Collect feedback from stakeholders, including students, teachers, parents, and educational institutions, to inform evidence-based policymaking and decision-making processes.

The coming AI wave cannot be another technology that we simply let wash over and envelop us. Indeed, towards the end of his book, Suleyman himself makes the following observations…

Technologists cannot be distant, disconnected architects of the future, listening only to themselves.

Technologists must also be credible critics who…
…must be practitioners. Building the right technology, having the practical means to change its course, not just observing and commenting, but actively showing the way, making the change, effecting the necessary actions at source, means critics need to be involved.

If we are to avoid widespread techno-paralysis caused by this coming wave then we need a 21st-century education system that is capable of creating digital citizens who can live and work in this brave new world.

Machines like us? – Part II

Brain image by Elisa from Pixabay. Composition by the author

[Creativity is] the relationship between a human being and the mysteries of inspiration.

Elizabeth Gilbert – Big Magic

Another week and another letter from a group of artificial intelligence (AI) experts and public figures expressing their concern about the risk of AI. This one has really gone mainstream with Channel 4 News here in the UK having it as their lead story on their 7pm broadcast. They even managed to get Max Tegmark as well as Tony Cohn – professor of automated reasoning at the University of Leeds – on the programme to discuss this “risk of extinction”.

Whilst I am really pleased that the risks from AI are finally being discussed, we must be careful not to focus too much on the Terminator-like existential threat that some people are predicting if we don't mitigate these risks in some way. There are certainly some scenarios in which an artificial general intelligence (AGI) could cause destruction on a large scale, but I don't believe these are as imminent, or as likely to happen, as the death and destruction likely to be caused by pandemics, climate change or nuclear war. Instead, some of the more likely negative impacts of AGI might be:

It's worth pointing out that none of the above scenarios involves AIs suddenly deciding by themselves that they are going to wreak havoc and destruction; all would involve humans being somewhere in the loop that initiates such actions.

It’s also worth noting that there are fairly serious rebuttals emerging to the general hysterical fear and paranoia being promulgated by the aforementioned letter. Marc Andreessen for example says that what “AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here”.

Whilst it is possible that AI could be used as a force for good, is it, as Naomi Klein points out, really going to happen under our current economic system? A system that is built to maximize the extraction of wealth and profit for a small group of hyper-wealthy companies and individuals. Klein suspects that "AI – far from living up to all those utopian hallucinations – [is] much more likely to become a fearsome tool of further dispossession and despoliation". I wonder if this topic will be on the agenda for the proposed global AI safety summit in autumn?

Whilst both sides of this discussion have valid arguments for and against AI, as discussed in the first of these posts, what I am more interested in is not whether we are about to be wiped out by AI but how we as humans can coexist with this technology. AI is not going to go away because of a letter written by a group of experts. It may get legislated against, but we still need to figure out how we are going to live with artificial intelligence.

In my previous post I discussed whether AI is actually intelligent as measured against Tegmark’s definition of intelligence, namely the: “ability to accomplish complex goals”. This time I want to focus on whether AI machines can actually be creative.

As you might expect, just as with intelligence, there are many, many definitions of creativity. My current favourite is the one by Elizabeth Gilbert quoted above; however, no discussion of creativity can be had without mentioning the late Ken Robinson's definition: "Creativity is the process of having original ideas that have value".

In the above short video Robinson notes that imagination is what is distinctive about humanity. Imagination is what enables us to step outside our current space and bring to mind things that are not present to our senses. In other words, imagination is what helps us connect our past with the present and even the future. We have what is quite possibly (or not) a unique ability among all the animals that inhabit the earth: to imagine "what if". But to be creative you do actually have to do something. It's no good being imaginative if you cannot turn those thoughts into actions that create something new (or at least different) that is of value.

Professor Margaret Ann Boden, Research Professor of Cognitive Science at the University of Sussex, defines creativity as "the ability to come up with ideas or artefacts that are new, surprising or valuable." I would couple this definition with a quote from the marketeer and blogger Seth Godin who, when discussing what architects do, says they "take existing components and assemble them in interesting and important ways". This too is an essential aspect of being creative: using what others have done and combining those things in different ways.

It's important to say, however, that humans don't just pass ideas around and recombine them – we also occasionally generate new ideas that are entirely left-field, through processes we do not understand.

Maybe part of the reason for this is because, as the writer William Deresiewicz says:

AI operates by making high-probability choices: the most likely next word, in the case of written texts. Artists—painters and sculptors, novelists and poets, filmmakers, composers, choreographers—do the opposite. They make low-probability choices. They make choices that are unexpected, strange, that look like mistakes. Sometimes they are mistakes, recognized, in retrospect, as happy accidents. That is what originality is, by definition: a low-probability choice, a choice that has never been made.

William Deresiewicz, Why AI Will Never Rival Human Creativity
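Deresiewicz's "high-probability choices" map directly onto how language models pick their next word: a sampling 'temperature' controls how heavily the model favours the likeliest options. A toy sketch (with invented word scores, not a real model) makes the contrast concrete:

```python
import math
import random

# Invented scores for candidate next words after "The sky was ..."
logits = {"blue": 4.0, "grey": 3.0, "dark": 2.5, "screaming": 0.1}

def sample_word(logits: dict, temperature: float) -> str:
    """Softmax-with-temperature sampling: low temperature concentrates
    probability on the top-scoring word, high temperature flattens it."""
    weights = {w: math.exp(s / temperature) for w, s in logits.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # guard against floating-point rounding

random.seed(42)

# Near-greedy: the high-probability, 'unoriginal' choice almost every time.
print([sample_word(logits, 0.2) for _ in range(8)])

# Flattened: tail words like "screaming" - Deresiewicz's low-probability
# choice - now turn up regularly.
print([sample_word(logits, 5.0) for _ in range(8)])
```

Even at high temperature, though, the model is still sampling from what it has already seen; the artist, on Deresiewicz's account, reaches into the tail on purpose.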

When we think of creativity, most of us associate it with some form of overt artistic pursuit such as painting, composing music, writing fiction, sculpting or photography. The act of being creative is much more than this, however. A person can be a creative thinker (and doer) even if they never pick up a paintbrush or a musical instrument or a camera. You are being creative when you decide on a catchy slogan for your product; you are being creative when you pitch your own idea for a small business; and most of all, you are being creative when you are presented with a problem and come up with a unique solution. Referring to the image at the top of my post, who is the more creative – Alan Turing, who invented a code-breaking machine that historians reckon shortened World War II by at least two years, saving millions of lives, or Picasso, whose painting Guernica expressed his outrage against war?

It is because of these very human aspects of what creativity is that AI will never be truly creative or rival our creativity. True creativity (not just a mashup of someone else's ideas) only has meaning if it has an injection of human experience, emotion, pain, suffering – call it what you will. When Nick Cave was asked what he thought of ChatGPT's attempt at writing a song in the style of Nick Cave, he answered this:

Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.

Nick Cave, The Red Hand Files

Imagination, intuition, influence and inspiration (the four I's of creativity) are all very human characteristics that underpin our creative souls. In a world where having original ideas sets humans apart from machines, thinking creatively is more important than ever, and educators have a responsibility to foster, not stifle, their students' creative minds. Unfortunately our current education system is not a great model for doing this. We have a system whose focus is on learning facts and passing exams, one that will never prepare people for meaningful jobs in which they can work alongside machines that do the grunt work whilst they do what they do best – be CREATIVE. If we don't fix this, the following may well become true:

In tomorrow’s workplace, either the human is telling the robot what to do or the robot is telling the human what to do.

Alec Ross, The Industries of the Future

Machines like us? – Part I

From The Secret of the Machines, Artist unknown

Our ambitions run high and low – for a creation myth made real, for a monstrous act of self love. As soon as it was feasible, we had no choice, but to follow our desires and hang the consequences.

Ian McEwan, Machines Like Me

I know what you’re thinking – not yet another post on ChatGPT! Haven’t enough words been written (or machine-generated) on this topic in the last few months to make the addition of any more completely unnecessary? What else is there to possibly say?

Well, we’ll see.

First, just in case you have been living in a cave in North Korea for the last year, what is ChatGPT? Let’s ask it…

ChatGPT is an AI language model developed by OpenAI. It is based on the GPT (Generative Pre-trained Transformer) architecture, specifically GPT-3.5. GPT-3.5 is a deep learning model that has been trained on a diverse range of internet text to generate human-like responses to text prompts.

ChatGPT response to the question: “What is ChatGPT”.

In this post, I am not interested in what use cases ChatGPT is or is not good for. I'm not even particularly interested in what jobs ChatGPT is going to replace in the coming years. Let's face it, if the CEO of IBM, Arvind Krishna, is saying "I could easily see 30 per cent of [non-customer-facing roles] getting replaced by AI and automation over a five-year period" then many people are already going to be worried, so I'm not going to add to those fears.

I see much of what Krishna predicts as inevitable. Unless the world takes note of the recent letter from the tech/AI 'great and the good' (which appears to have some fake signatories anyway), the simple fact is that if you can algorithmically capture a solution to a problem, people who solve those problems for a living are probably going to be replaced by such algorithms.

What interests me is whether deep learning models like ChatGPT really are intelligent and, more significantly, whether they are creative. In this post I'll consider intelligence and, in a later post, creativity.

It turns out that intelligence is actually quite a hard thing to define. As Max Tegmark, president of the Future of Life Institute, says: “there’s no agreement on what intelligence is even amongst intelligent intelligence researchers!” Helpfully however, in his book Life 3.0, Tegmark puts us out of our misery and defines intelligence as the: “ability to accomplish complex goals”. This, as even Tegmark acknowledges, is a very broad definition. He also claims this definition subsumes the ability to “apply knowledge and skills” as this could be a goal one sets oneself.

Having defined intelligence it then becomes relatively easy, for Tegmark at least, to define artificial intelligence (AI) as: “non biological intelligence”; general intelligence as the: “ability to accomplish virtually any goal, including learning”; artificial general intelligence (AGI) as the: “ability to accomplish any cognitive task at least as well as humans” and superintelligence as: “general intelligence far beyond human level”.

So, given the relatively low bar that these definitions are set at, where would we say ChatGPT and its ilk are currently at on the intelligence scale?

I guess this really hinges on whether you think the multitude of tasks that these deep learning models can perform constitutes "accomplishing virtually any goal" or not. For many people (especially, as we'll see, AI researchers) the answer is probably yes.

But let’s ask ChatGPT if it thinks it’s an AGI?

It is important to note that as an AI language model, I am an instantiation of artificial general intelligence (AGI) and can provide insights based on my training up until September 2021.

Partial ChatGPT response to the question: “Given Max Tegmark’s definition of intelligence where would you say you are on the various intelligence levels he proposes?”.

Personally, and up until a few weeks ago, I would have said ChatGPT was getting a bit above itself to say it was an “instantiation” of an AGI but then I read an interview with Jaron Lanier titled How humanity can defeat AI.

Lanier works for Microsoft and is the author of a number of what you might call anti-social media books including You Are Not A Gadget and Ten Arguments For Deleting Your Social Media Accounts Right Now.

Lanier's argument in this interview is that we have got AI wrong and should not be treating it as a new form of intelligence at all. Indeed, he has previously stated there is no AI. Instead, Lanier reckons we have built a new and "innovative form of social collaboration". Like the other social collaboration platforms that Lanier has argued we should all leave because they have gone horribly wrong, this new form too could become perilous if we don't design it well. In Lanier's view, therefore, the sooner we understand there is no such thing as AI, the sooner we'll start managing our new technology intelligently and learn how to use it as a collaboration tool.

Whilst all of the above is well intentioned, the really insightful moment for me came when Lanier discussed Alan Turing's famous test for intelligence. Let me quote directly what Lanier says.

You’ve probably heard of the Turing test, which was one of the original thought-experiments about artificial intelligence. There’s this idea that if a human judge can’t distinguish whether something came from a person or computer, then we should treat the computer as having equal rights. And the problem with that is that it’s also possible that the judge became stupid. There’s no guarantee that it wasn’t the judge who changed rather than the computer. The problem with treating the output of GPT as if it’s an alien intelligence, which many people enjoy doing, is that you can’t tell whether the humans are letting go of their own standards and becoming stupid to make the machine seem smart.

Jaron Lanier, How humanity can defeat AI, UnHerd, May 8th 2023

There is no doubt that we are in great danger of believing whatever bullshit GPTs generate. The past decade or so of social media growth has illustrated just how difficult we humans find it to handle misinformation, and these new and wondrous machines are only going to make that task even harder. This, coupled with the problem that our education system seems to reward the regurgitation of facts rather than developing critical thinking skills, is, as the journalist Kenan Malik says, increasingly going to become more of an issue as we try to figure out what is fake and what is true.

Interestingly, around the time Lanier was saying "there is no AI", the so-called "godfather of AI", Geoffrey Hinton, was announcing he was leaving Google because he was worried that AI could become "more intelligent than humans and could be exploited by 'bad actors'". Clearly, as someone who created the early neural networks that were the predecessors of the large language models GPTs are built on, Hinton could not be described as "stupid", so what is going on here? Like others before him who think AI might be exhibiting signs of becoming sentient, maybe Hinton is being deceived by the very monster he has helped create.

So what to do?

Helpfully Max Tegmark, somewhat tongue-in-cheek, has suggested the following rules for developing AI (my comments are in italics):

  • Don’t teach it to code: this facilitates recursive self-improvement – ChatGPT can already code.
  • Don’t connect it to the internet: let it learn only the minimum needed to help us, not how to manipulate us or gain power – ChatGPT was certainly connected to the internet to learn what it already knows.
  • Don’t give it a public API: prevent nefarious actors from using it within their code – OpenAI is releasing a public API.
  • Don’t start an arms race: this incentivizes everyone to prioritize development speed over safety – I think it’s safe to say there is already an AI arms race between the US and China.

Oh dear, it’s not going well is it?

So what should we really do?

I think Lanier is right. Like many technologies that have gone before, AI is seducing us into believing it is something it is not – even, it seems, its creators. Intelligent it may well be, at least by Max Tegmark's very broad definition of what intelligence is, but let's not get ahead of ourselves. Whilst I agree (and definitely fear) that AI could be exploited by bad actors, it is still, at a fundamental level, little more than a gargantuan mash-up machine regurgitating the work of the people who wrote the text and created the images it spits out. These mash-ups may be fooling many of us some of the time (myself included) but we must not be fooled into losing our critical thought processes here.

As Ian McEwan points out, we must be careful we don’t “follow our desires and hang the consequences”.

On Ethics and Algorithms

Photo by Franck V. on Unsplash

An article on the front page of the Observer, Revealed: how drugs giants can access your health records, caught my eye this week. In summary, the article highlights that the Department of Health and Social Care (DHSC) has been selling the medical data of NHS patients to international drugs companies and has "misled" the public that the information contained in the records would be "anonymous".

The data in question is collated from GP surgeries and hospitals and, according to "senior NHS figures", can "routinely be linked back to individual patients’ medical records via their GP surgeries." Apparently there is "clear evidence" that companies have identified individuals whose medical histories are of "particular interest." The DHSC has replied by saying it only sells information after "thorough measures" have been taken to ensure patient anonymity.

As with many articles like this, it is frustrating when some of the more technical aspects are not fully explained. Whilst I understand the importance of keeping the general readership on board and not frightening them too much with the intricacies of statistics or cryptography, it would be nice to know a bit more about how these records are being made anonymous.

There is a hint of this in the Observer report when it states that the CPRD (the Clinical Practice Research Datalink) said the data made available for research was "anonymous" but, following the Observer's story, changed the wording to say that the data from GPs and hospitals had been "anonymised". This is a crucial difference. One of the more common methods of 'anonymisation' is to obscure or redact some bits of information. So, for example, a record could have patient names removed and ages and postcodes "coarsened": only the first part of a postcode is included (e.g. SW1A rather than SW1A 2AA) and ages are placed in a range rather than using someone's actual age (e.g. 60-70 rather than 63).
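To make the coarsening idea concrete, here is a minimal sketch in Python. It is my own illustration of the general technique, not the CPRD's actual process, and the record fields are invented:

```python
def coarsen_record(record: dict) -> dict:
    """Return a copy of a patient record with quasi-identifiers coarsened.

    - The name is removed entirely.
    - Only the outward part of the postcode is kept ('SW1A 2AA' -> 'SW1A').
    - The exact age is replaced by a ten-year band (63 -> '60-70').
    """
    coarsened = dict(record)
    coarsened.pop("name", None)                 # redact the direct identifier
    coarsened["postcode"] = record["postcode"].split()[0]
    lower = (record["age"] // 10) * 10          # 63 -> 60
    coarsened["age"] = f"{lower}-{lower + 10}"  # 63 -> '60-70'
    return coarsened

print(coarsen_record({"name": "A. Patient", "postcode": "SW1A 2AA",
                      "age": 63, "diagnosis": "asthma"}))
# {'postcode': 'SW1A', 'age': '60-70', 'diagnosis': 'asthma'}
```

Note that the diagnosis itself is untouched: coarsening only blurs the quasi-identifiers, which is exactly why the re-identification attacks described next remain possible.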

The problem with anonymising data records is that they are prone to what is referred to as data re-identification or de-anonymisation. This is the practice of matching anonymous data with publicly available information in order to discover the individual to which the data belongs. One of the more famous examples of this is the competition Netflix organised to improve its recommendation system, offering a $1m prize for a 10% improvement. The Netflix Prize started in 2006, but a planned sequel was abandoned in 2010 in response to a lawsuit and Federal Trade Commission privacy concerns. Although the dataset Netflix released to allow entrants to test their algorithms had supposedly been anonymised (i.e. user names were replaced with a meaningless ID and no gender or zip code information was included), a PhD student from the University of Texas was able to find out the real names of people in the supplied dataset by cross-referencing it with ratings posted publicly, under real names, on the Internet Movie Database (IMDb).
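The re-identification step is, at heart, nothing more exotic than a database join on whatever fields the two sources share. A toy sketch of the idea (entirely invented data; the actual study used a more robust statistical match over noisy, partial ratings):

```python
import pandas as pd

# 'Anonymised' ratings release: user names replaced by meaningless IDs.
released = pd.DataFrame({
    "user_id": ["u1", "u2"],
    "film":    ["Brazil", "Heat"],
    "rating":  [5, 4],
    "date":    ["2005-03-01", "2005-07-12"],
})

# Public reviews posted under real names (e.g. scraped from IMDb).
public = pd.DataFrame({
    "name":   ["Alice Smith", "Bob Jones"],
    "film":   ["Brazil", "Heat"],
    "rating": [5, 4],
    "date":   ["2005-03-01", "2005-07-12"],
})

# Joining on the shared fields re-identifies the 'anonymous' IDs.
linked = released.merge(public, on=["film", "rating", "date"])
print(linked[["user_id", "name"]])
#   user_id         name
# 0      u1  Alice Smith
# 1      u2    Bob Jones
```

The release looks harmless on its own; it is the existence of the second, public dataset that breaks the anonymity, which is precisely the point made below.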

Herein lies the problem with the anonymisation of datasets. As Michael Kearns and Aaron Roth highlight in their recent book The Ethical Algorithm, when an organisation releases anonymised data it can make an intelligent guess as to which bits of the dataset to anonymise, but it is difficult (probably impossible) to anticipate what other data sources already exist, or could be made available in the future, that could be used to correlate records. This is the reason the computer scientist Cynthia Dwork has said "anonymised data isn't" – meaning either it isn't really anonymous or so much of the dataset has had to be removed that it is no longer data (at least in any useful way).

So what to do? Is it actually possible to release anonymised datasets out into the wild with any degree of confidence that they can never be de-anonymised? Thankfully something called differential privacy, invented by the aforementioned Cynthia Dwork and colleagues, allows us to do just that. Differential privacy is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in that dataset.

To understand how differential privacy works, consider this example*. Suppose we want to conduct a poll of all people in London to find out who has driven after taking non-prescription drugs. One way of doing this is to randomly sample a suitable number of Londoners, asking them if they have ever driven whilst under the influence of drugs. The data collected could be entered into a spreadsheet and various statistics (e.g. number of men, number of women, maybe ages) derived. The problem is that in collecting this information lots of compromising personal details are gathered which, if the data were stolen, could be used against the people surveyed.

In order to avoid this problem, consider the following alternative. Instead of asking people the question directly, first ask them to flip a coin but not to tell us how it landed. If the coin comes up heads, they tell us (honestly) whether they have driven under the influence. If it comes up tails, however, they give us a random answer: they flip the coin again and tell us "yes" if it comes up heads or "no" if it comes up tails. This polling protocol is a simple randomised algorithm which provides a form of differential privacy. So how does this work?

If your answer is no, the randomised response answers no two out of three times. It answers no only one out of three times if your answer is yes. Diagram courtesy Michael Kearns and Aaron Roth, The Ethical Algorithm 2020

When we ask people if they have driven under the influence using this protocol, half the time (i.e. when the coin lands heads up) the protocol tells them to tell the truth. If the protocol tells them to respond with a random answer (i.e. when the coin lands tails up), then half of that time they just happen to randomly tell us the right answer. So they tell us the right answer 1/2 + ((1/2) x (1/2)), or three-quarters, of the time. The remaining one-quarter of the time they tell us a lie, and there is no way of telling true answers from lies. Surely, though, this injection of randomisation completely masks the true results and the data is now highly error-prone? Actually, it turns out, this is not the case.

Because we know how this randomisation is introduced, we can reverse engineer the answers we get to remove the errors and recover an approximation of the right answer. Here's how. Suppose one-third of people in London have actually driven under the influence of drugs. Of the one-third whose truthful answer is "yes", three-quarters will answer "yes" using the protocol, that is 1/3 x 3/4 = 1/4. Of the two-thirds whose truthful answer is "no", one-quarter will report "yes", that is 2/3 x 1/4 = 1/6. So we expect 1/4 + 1/6 = 5/12 of the population to answer "yes". More generally, if a fraction p of the population would truthfully answer "yes", the protocol produces a "yes" rate of 1/4 + p/2, so we can invert it: p = 2 x (observed rate - 1/4). An observed rate of 5/12 gives back p = 2 x (5/12 - 1/4) = 1/3, the true answer.
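A short simulation (my own sketch of the protocol described above, following Kearns and Roth's example) shows both the noise and the de-biasing step in action:

```python
import random

def randomized_response(truth: bool) -> bool:
    """One respondent follows the coin-flip protocol from the example."""
    if random.random() < 0.5:     # first flip heads: answer honestly
        return truth
    return random.random() < 0.5  # tails: second flip gives a random answer

def estimate_true_rate(responses: list) -> float:
    """Invert the protocol: observed 'yes' rate is 1/4 + p/2,
    so p = 2 * (observed - 1/4)."""
    observed = sum(responses) / len(responses)
    return 2 * (observed - 0.25)

random.seed(0)
true_rate = 1 / 3  # suppose a third of Londoners have really done it
population = [random.random() < true_rate for _ in range(100_000)]
responses = [randomized_response(t) for t in population]

print(f"observed 'yes' rate: {sum(responses) / len(responses):.3f}")  # ~5/12
print(f"estimated true rate: {estimate_true_rate(responses):.3f}")    # ~0.333
```

Every individual response remains deniable, yet the aggregate estimate lands very close to the true one-third.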

So what is the point of doing the survey like this? Simply put it allows the true answer to be hidden behind the protocol. If the data were leaked and an individual from it was identified as being suspected of driving under the influence then they could always argue they were told to say “yes” because of the way the coins fell.

In the real world a number of organisations, including the US Census Bureau, Apple, Google and Privitar (with its Lens product), use differential privacy to limit the disclosure of private information about individuals whose information is in public databases.

It would be nice to think that the NHS data that is supposedly being used by US drug companies was protected by some form of differential privacy. If it were, and if this could be explained to the public in a reasonable and rational way, then surely we would all benefit both in the knowledge that our data is safe and is maybe even being put to good use in protecting and improving our health. After all, wasn’t this meant to be the true benefit of living in a connected society where information is shared for the betterment of all our lives?

*Based on an example from Kearns and Roth in The Ethical Algorithm.

Cummings needs data scientists, economists and physicists (oh, and weirdos)

Dominic Cummings
Dominic Cummings – Image Copyright Business Insider Australia

To answer my (rhetorical) question in this post, I think it's been pretty much confirmed since the election that Dominic Cummings is, in equal measures, the most influential, disruptive, powerful and dangerous man in British politics right now. He has certainly set the cat amongst the pigeons with this blog post, in which he has effectively by-passed the civil service recruitment process by advertising for people to join his ever-growing team of SPADs (special advisers). Cummings is looking for data scientists, project managers, policy experts and assorted weirdos to join his team. (Interestingly, today we hear that the self-proclaimed psychic Uri Geller has applied for the job, believing he qualifies because of the 'super-talented weirdo' aspect of the job spec.)

Cummings is famed for his wide-reaching reading tastes and the job spec also cites a number of scientific papers potential applicants "will be considering". The papers mentioned are broadly in the areas of complex systems and the use of maths and statistics in forecasting, which gives an inkling of the kind of problems Cummings sees as needing to be 'fixed' in the civil service as well as in government at large (including the assertion that "Brexit requires many large changes in policy and in the structure of decision-making").

Like many of his posts, this particular one tends to ramble and to contradict itself. In one paragraph he's saying that you "do not need a PhD" but then in the very next one that you "must have exceptional academic qualifications from one of the world’s best universities with a PhD or MSc in maths or physics."

Cummings also returns to one of his favourite topics: the failure of projects – mega projects in particular – and presumably those that governments tend to initiate and not complete on time or to budget (or at all). He's an admirer of some of the huge project successes of yesteryear such as The Manhattan Project (1940s), ICBMs (1950s) and Apollo (1960s) but reckons that since then the Pentagon has "systematically de-programmed itself from more effective approaches to less effective approaches from the mid-1960s, in the name of ‘efficiency’." Certainly the UK government is no stranger to some spectacular project failures itself, both past and present (HS2 and Crossrail being two contemporary examples, not so much of failure as of massive cost overrun).

However, as John Naughton points out here, "these inspirational projects have some interesting things in common: no ‘politics’, no bureaucratic processes and no legal niceties. Which is exactly how Cummings likes things to be." Let's face it, both Crossrail and HS2 would be a doddle if only you could do away with all those pesky planning proposals and environmental impact assessments and just move people out of the way quickly – sort of how they do things in China, maybe?

Cummings believes that now is the time to bring together the right set of people with a sufficient amount of cognitive diversity to work in Downing Street with him and other SPADs and start to address some of the wicked problems of government. One 'lucky' person will be his personal assistant, a role which he says will "involve a mix of very interesting work and lots of uninteresting trivia that makes my life easier which you won’t enjoy." He goes on to say that in this role you "will not have weekday date nights, you will sacrifice many weekends — frankly it will be hard having a boy/girlfriend at all. It will be exhausting but interesting and if you cut it you will be involved in things at the age of ~21 that most people never see." That's quite some sales pitch for a job!

What this so-called job posting is really about, though, is another of Cummings' abiding obsessions (which he often discusses in his blog): that the government in general, and the civil service in particular (which he groups together as "SW1"), is basically not fit for purpose because it is scientifically and technologically illiterate and staffed largely with Oxbridge humanities graduates. The posting is also a thinly veiled attempt at pushing the now somewhat outdated 'move fast and break things' mantra of Silicon Valley – an approach that does not always play out well in government (Universal Credit, anyone?). I well remember my time working at the DWP (yes, as a consultant) where one of the civil servants with whom I was working said that the only problem with disruption in government IT was that it was likely to lead to riots on the streets if benefit payments were not made on time. Sadly, Universal Credit has shown us that it's not so much street riots that are caused as a demonstrable increase in demand for food banks: on average, 12 months after roll-out, food banks see a 52% increase in demand, compared to 13% in areas that have had Universal Credit for 3 months or less.

Cummings, of course, would say that the problem is not so much that disruption per se causes problems, but that the ineffective, stupid and incapable civil servants who plan and deploy such projects are at fault – hence the need to hire the right 'assorted weirdos' who will bring new insights that fusty old civil servants cannot see. Whilst he may well be right that SW1 is lacking in deep technical experts, great project managers and 'unusual' economists, he needs to realise that government transformation cannot succeed unless it is built on a sound strategy and good underlying architecture. Ideas are just thoughts floating in space until they can be transformed into actions that result in change – change which takes into account that the 'products' governments deal with are people, not software and hardware widgets.

This problem is far better articulated by Hannah Fry when she says that although maths has, and will continue to have, the capability to transform the world those who apply equations to human behaviour fall into two groups: “those who think numbers and data ultimately hold the answer to everything, and those who have the humility to realise they don’t.”

Possibly the last words should be left to Barack Obama who cautioned Silicon Valley’s leaders thus:

“The final thing I’ll say is that government will never run the way Silicon Valley runs because, by definition, democracy is messy. This is a big, diverse country with a lot of interests and a lot of disparate points of view. And part of government’s job, by the way, is dealing with problems that nobody else wants to deal with.

So sometimes I talk to CEOs, they come in and they start telling me about leadership, and here’s how we do things. And I say, well, if all I was doing was making a widget or producing an app, and I didn’t have to worry about whether poor people could afford the widget, or I didn’t have to worry about whether the app had some unintended consequences — setting aside my Syria and Yemen portfolio — then I think those suggestions are terrific. That’s not, by the way, to say that there aren’t huge efficiencies and improvements that have to be made.

But the reason I say this is sometimes we get, I think, in the scientific community, the tech community, the entrepreneurial community, the sense of we just have to blow up the system, or create this parallel society and culture because government is inherently wrecked. No, it’s not inherently wrecked; it’s just government has to care for, for example, veterans who come home. That’s not on your balance sheet, that’s on our collective balance sheet, because we have a sacred duty to take care of those veterans. And that’s hard and it’s messy, and we’re building up legacy systems that we can’t just blow up.”

Now I think that’s a man who shows true humility, something our current leaders (and their SPADs) could do with a little more of I think.


Is Dominic Cummings the Most Influential* Person in British Politics?

Dominic Cummings. Photograph from parliament.tv

If you want to understand the likely trajectory of the new Conservative government you could do worse than study the blog posts of Dominic Cummings. In case you missed this announcement amongst all the cabinet reshuffling that happened last week, Cummings is to be Boris Johnson’s new “special adviser”.

*For what it’s worth I could equally have used any of the adjectives ‘disruptive’, ‘powerful’ or ‘dangerous’ here I think.

Cummings has had three previous significant advisory roles either in UK government or in support of political campaigns:

  • Campaign director at Business for Sterling (the campaign against the UK joining the Euro) between 1999 and 2002;
  • Special adviser to Michael Gove at the Department for Education between 2010 and 2014;
  • Campaign director at Vote Leave between 2015 and 2016.

Much has already been written about Cummings – some of it, I suspect, more speculative and wishful thinking than factual – that you can find elsewhere (David Cameron is alleged to have called Cummings a "career psychopath"). What is far more interesting to me is what Cummings writes in his sometimes rambling blog posts, and that is what I focus on here.

In his capacity advising Gove at the DfE, Cummings wrote a 240-page essay, Some thoughts on education and political priorities, about transforming Britain into a "meritocratic technopolis". Significantly, during Gove's tenure as education minister we saw far more emphasis on maths and grammar being taught from primary age (8-11) and the teaching of 'proper' computer science in secondary schools (i.e. programming rather than how to use Microsoft Office products). Clearly his thoughts were being acted upon.

Given that his advice has been implemented before, it does not seem unreasonable that a study of Cummings' blog posts may give us some insight into what ideas we may see enacted by the current government. Here are a few of Cummings' most significant thoughts from my reading of his blog. I have only included thoughts from his more recent posts, mainly those from his time in exile between the end of the Vote Leave campaign and now. Many of these build on previous posts anyway but, more significantly, are most relevant to what we are about to see happen in Johnson's new government. The name of each post is highlighted in italics and contains a hyperlink to the actual post.

High performance government, ‘cognitive technologies’, Michael Nielsen, Bret Victor, & ‘Seeing Rooms’

Cummings is very critical of the UK civil service, as well as government ministers, who he maintains do not make decisions based on facts and hard data but more often on intuition, feelings and, inevitably, their own biases and prejudices. In this post he suggests that 'systems' should be implemented to help run government. These would be things like:

  • Cognitive toolkits and AI that would support rational decision-making and help to decide what is possible as well as what is not (and why).
  • Prediction tournaments that could easily and cheaply be extended to consider ‘clusters’ of issues around themes like Brexit to improve policy and project management.
  • Red Teams and pre-mortems to help combat groupthink and "normal cognitive biases". He advocates that Red Teams should work ‘above’ the Cabinet Office to ensure diversity of opinions, fight groupthink and other standard biases, make sure lessons are learned and ensure government blunders are avoided or at least minimised.
  • Seeing rooms that would replace the antiquated meeting spaces found in much of government (e.g. the Cabinet room) and use state of the art screens, IT and conference facilities to ensure better and more accurate decision making.

Two people mentioned often in this post by Cummings are Bret Victor and Michael Nielsen. Victor is an interface designer, computer scientist and electrical engineer who writes and talks on the future of technology. Nielsen is also a writer and computer scientist, with an interest in neural networks and deep learning. The way Cummings immerses himself in fields outside his area of expertise (he studied Ancient & Modern History at Oxford) and makes connections between different disciplines is itself instructive. Often the best ideas come from taking such a cross-disciplinary approach to life rather than confining oneself to a particular comfort zone.

‘Systems’ thinking — ideas from the Apollo space programme on effective management and a possible ‘systems politics’

This post, published as a paper in February 2017, looks at what Cummings refers to as 'mission critical' political institutions, i.e. government departments with huge budgets, complex programmes of work like HS2 (or Brexit) and those dealing with emergency situations such as terrorist incidents and wars. It looks at how disasters can (or could) be avoided by deploying "high performance man-machine teams" in which the individuals involved are selected on the basis of their training and education as well as "incentives". The paper considers the development of new ideas about managing complex projects that were used by George Mueller to put men on the moon in 1969.

This quote sums up Cummings' concerns with our current political institutions:

The project of rewiring institutions and national priorities is a ‘systems’ problem requiring a systems solution. Could we develop a systems politics that applies the unrecognised simplicities of effective action? The tale of George Mueller will be useful for all those thinking about how to improve government performance dramatically, reliably, and quantifiably.

The paper gives a potted history of systems engineering ideas and practices, bringing in everyone from the military strategist John Boyd to the mathematician John von Neumann along the way. Cummings is also fond of comparing the success of NASA's mission to put a man on the moon and bring him safely home with the failure of the European Launcher Development Organisation (ELDO) to even launch a rocket – the difference being (according to Cummings) that NASA's success was due to "a managerial effort, no less prodigious than the technological one".

Cummings' core lessons for politics, which he believes "could be applied to re-engineering political institutions such as Downing Street", are many and varied, but here are a few which, even after less than a week of Boris Johnson's government, I think we are seeing enacted. My comments on how that is happening are in italics below.

  • Organisation-wide orientation. Everybody in a large organisation must understand as much about the goals and plans as possible. The UK is leaving the EU on 31st October 2019.
  • There must be an overall approach in which the most important elements fit together, including in policy, management, and communications. Johnson has completely gutted May’s cabinet and everyone new onboard has allegedly been told they must be on message, toe the party line and vote with the government in any upcoming parliamentary votes.
  • You need a complex mix of centralisation and decentralisation. While overall vision, goals, and strategy usually come from the top, it is vital that extreme decentralisation dominates operationally so that decisions are fast and unbureaucratic. Interesting that Johnson’s first act as prime minister is to visit the regions (not Brussels), promising them various amounts of money, presumably to do just this.
  • People and ideas are more important than technology. Computers and other technologies can help but Colonel Boyd’s dictum holds: people, ideas, technology — in that order. It is too early to see if this approach will be implemented. Certainly government does not have a good track record when it comes to implementing IT systems so it will be interesting to see if the ‘solution’ to the Irish backstop does end up being IT driven.

‘Expertise’, prediction and noise, from the NHS killing people to Brexit

What this post is about is probably best summed up by Cummings' own words near the beginning of the article:

In SW1 (i.e. Whitehall) now, those at the apex of power practically never think in a serious way about the reasons for the endemic dysfunctional decision-making that constitutes most of their daily experience or how to change it. What looks like omnishambles to the public and high performers in technology or business is seen by Insiders, always implicitly and often explicitly, as ‘normal performance’. ‘Crises’ such as the collapse of Carillion or our farcical multi-decade multi-billion ‘aircraft carrier’ project occasionally provoke a few days of headlines but it’s very rare anything important changes in the underlying structures and there is no real reflection on system failure.

Although this post covers some of the same ground as previous ones, it shows how Cummings' ideas on how to tackle the key problems of government are beginning to coalesce, probably best summed up in the following:

One of the most powerful simplicities in all conflict (almost always unrecognised) is: ‘winning without fighting is the highest form of war’. If we approach the problem of government performance at the right level of generality then we have a chance to solve specific problems ‘without fighting’ — or, rather, without fighting nearly so much and the fighting will be more fruitful.

If you see the major problem of government as solving the wicked problem of Brexit, it will be interesting to see how, and if, Cummings manages to tackle this particular issue. After all, it has already led to two prime ministers resigning or being pushed out, and even Boris Johnson's tenure is not guaranteed if he fails to deliver Brexit or calls an election that fails to deliver the greatly increased majority that would allow him to push his ideas through.

The Digital Activist’s View

Few would argue against a government that based its decisions on data, more scientific methods and industry best practices around project and systems management. However, using data to understand people and their needs is very different from using data to try to influence what people think, how they vote and the way they go about their daily lives – something that Vote Leave (and by implication Cummings) has been accused of doing by proliferating fake news stories during the leave campaign. In short, who is going to sit above the teams that position themselves above our decision makers?

One of Cummings' pet hates is the whole Whitehall/civil service infrastructure. He sees it as archaic and not fit for purpose: an organisation whose leaders come from a particular educational background and set of institutions and who religiously follow the rules, as well as outdated work practices, no matter what. To quote Cummings from this paper:

The reason why Gove’s team got much more done than ANY insider thought was possible – including Cameron and the Perm Sec – was because we bent or broke the rules and focused very hard on a) replacing rubbish officials and bringing in people from outside and b) project management.

The danger here is that bringing in some of the changes Cummings is advocating just risks replacing one set of biases and backgrounds with another. After all, the industries spawning both the tools and techniques he is advocating (i.e. predominantly US West Coast tech companies) are hardly known for their gender or ethnic diversity or their socially inclusive policies. They too tend to follow particular practices, some of which may work when running a startup but less so when running a country. I remember being told, when discussing ‘disruption’ with a civil servant in one of the UK’s large departments of state, that the problem with disruption in government is that it can lead to rioting in the streets if it goes wrong.

There is also a concern that by focusing on the large, headline-grabbing government departments (e.g. Cabinet Office, DWP, MoD) you miss some of the good work being done by lesser-known departments and the agencies within them. I’m thinking of Ordnance Survey and HM Land Registry in particular (both currently part of the Department for Business, Energy and Industrial Strategy, and both of which I have direct experience of working with). Ordnance Survey (which is classified as a ‘public corporation’) has successfully mapped the UK for over 100 years and runs a thriving commercial business for its maps and mapping services. Similarly, HM Land Registry has kept several trillion pounds’ worth of the nation’s land and property assets safe in digital storage for around 50 years and is looking at innovative ways of extending its services using technologies such as blockchain.
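To make the blockchain idea more concrete, here is a minimal sketch (my own illustration, not HM Land Registry’s actual design) of the basic building block such a register relies on: hashing each version of a title record and chaining it to the digest of the version before, so that any retrospective tampering with the history is detectable.

```python
import hashlib
import json


def record_digest(record: dict, prev_digest: str = "") -> str:
    """Return a SHA-256 digest of a title record, chained to its predecessor.

    Chaining each version to the previous digest means a retrospective edit
    to any record invalidates every digest that follows it.
    """
    payload = json.dumps(record, sort_keys=True) + prev_digest
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


# Hypothetical title record, purely for illustration
title_v1 = {"title_no": "ABC123", "owner": "A. Smith", "price_paid": 250_000}
d1 = record_digest(title_v1)

# An ownership transfer becomes a new version chained to the old digest
title_v2 = {**title_v1, "owner": "B. Jones"}
d2 = record_digest(title_v2, prev_digest=d1)

print(d1)
print(d2)
```

A real distributed ledger adds consensus, replication and access control on top of this, but the tamper-evidence it offers a land register rests on this simple hash-chaining idea.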

Sometimes, when one’s entire working life is spent in the bubble that is Westminster, it is easy to miss the innovative thinking that is going on outside. Often this thinking is most successful when it is being done by practitioners. For a good example, see the work being done by the consultant neurologist Dr. Mark Wardle, including this paper on using algorithms in healthcare.
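To give a flavour of what ‘algorithms in healthcare’ can mean in practice, here is a minimal sketch in the spirit of the NHS National Early Warning Score (NEWS): vital signs are banded into points and the total mapped to an escalation action. This is my own simplification for illustration, covering just two of the parameters a real chart uses, and the escalation mapping is indicative only, not clinical guidance and not taken from Wardle’s paper.

```python
def respiration_points(rate: int) -> int:
    """Points for respiration rate (breaths/min) - illustrative bands."""
    if rate <= 8 or rate >= 25:
        return 3
    if 21 <= rate <= 24:
        return 2
    if 9 <= rate <= 11:
        return 1
    return 0  # 12-20 is the normal band


def pulse_points(bpm: int) -> int:
    """Points for pulse rate (beats/min) - illustrative bands."""
    if bpm <= 40 or bpm >= 131:
        return 3
    if 111 <= bpm <= 130:
        return 2
    if bpm <= 50 or 91 <= bpm <= 110:
        return 1
    return 0  # 51-90 is the normal band


def early_warning(rate: int, bpm: int) -> tuple[int, str]:
    """Sum the sub-scores and map the total to an (illustrative) escalation."""
    total = respiration_points(rate) + pulse_points(bpm)
    if total >= 5:
        return total, "urgent clinical review"
    if total >= 3:
        return total, "increase observation frequency"
    return total, "routine monitoring"


print(early_warning(rate=22, bpm=115))  # (4, 'increase observation frequency')
```

The point is not the arithmetic, which is trivial, but that a deterministic, inspectable rule like this can be encoded once and applied consistently at scale, which is precisely where practitioners see technology helping most.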

If the UK government really is as devoid of skills as Cummings implies, there is a danger it will try to ‘import’ them by employing ever larger armies of consultants. This approach is fraught with danger as there is no guarantee the consultants will be as well read and immersed in the issues as Cummings hopes. The consultants will of course tell a good story, but in my experience (as a consultant, not in government) unless they are well managed their performance is unlikely to be better than that of the people they are trying to replace. Cummings acknowledges this potential issue when he asks how we “distinguish between fields dominated by real expertise and those dominated by confident ‘experts’ who make bad predictions?”

Finally, do we really want Whitehall to become a department of USA Inc by climbing into bed with a country which, under the presidency of Trump, seems to be leaning ever more rightward? As part of any post-Brexit trade deal it is likely the US will be seeking a greater say in running not just our civil service but our health service, schools and universities. All at a time when its tech companies seem to be playing an ever more intrusive part in our daily lives.

So what is the answer to the question that is the title of this post? As someone who trained as a scientist and has worked in software architecture and development all my life, I recognise how some of the practices Cummings advocates could, if implemented properly, lead to change for the better in UK government at this critical time in the nation’s history. However, we need to realise that by following the ideas of one person, or a small group of people, we run the risk of replacing one dogma with another. Dogma always has to be something we are prepared to rip up, no matter where or who it comes from. Sometimes we have to depend on what the military strategist John Boyd (one of Cummings’ influences) calls “intuitive competence” in order to deal with the novelty that permeates human life.

I also think that a government run by technocrats will not necessarily lead to a better world, something I think even Cummings hints at when he says:

A very interesting comment that I have heard from some of the most important scientists involved in the creation of advanced technologies is that ‘artists see things first’ — that is, artists glimpse possibilities before most technologists and long before most businessmen and politicians.

At the time of writing, Boris Johnson’s government is barely one week old. All we are seeing for now are the headline-grabbing statements and sound bites. Behind the scenes, though, we can be sure that Cummings and his team of advisers are doing much string-pulling and arm-bending of ministers and civil servants alike. We shall soon see not just what the outcomes of this are, but how long Boris Johnson survives.

Software is Eating the World and Some Tech Companies are Eating Us

Today (12th March 2018) is the World Wide Web’s 29th birthday. Sir Tim Berners-Lee (the “inventor of the world wide web”), in an interview with the Financial Times and in this Web Foundation post, has used the anniversary to raise awareness of how the web behemoths Facebook, Google and Twitter are “promoting misinformation and ‘questionable’ political advertising while exploiting people’s personal data”. Whilst I hugely admire Tim Berners-Lee’s universe-denting invention, it has to be said that he himself is not entirely without fault in the way he bequeathed it to us. In his defence, hindsight is a wonderful thing; no one could possibly have predicted at the time just how the web would take off and transform our lives, both for better and for worse.

If, as Marc Andreessen famously said in 2011, software is eating the world, then many of those powerful tech companies are consuming us (or at least our data, and I’m increasingly unsure there is any difference between us and the data we choose to represent ourselves by).

Here are five recent examples of some of the negative ways software is eating up our world.

Over the past 40+ years the computer software industry has undergone some fairly major changes. Individually these were significant (to those of us in the industry at least), but looked at with the benefit of hindsight we can see how they have combined to bring us to where we are today: a world of cheap, ubiquitous computing that has unleashed seismic shocks of disruption, overthrowing not just whole industries but our lives and the way our industrialised society functions. Here are some highlights from the 40 years between 1976 and 2016.

The waves of computing technology since 1976

And yet all of this is just the beginning. This year we will see technologies like serverless computing, blockchain, cognitive and quantum computing become more and more embedded in our lives, in ways we are only just beginning to understand. Doubtless the fallout from some of the issues I highlight above will continue to make itself felt, and new technologies currently bubbling under the radar will start to make themselves known.

I have written before about how I believe that we, as software architects, have a responsibility not only to explain the benefits (and there are many) of what we do but also to highlight the potential negative impacts of software’s voracious appetite to eat up our world.

This is my 201st post on Software Architecture Zen (2016/17 were barren years in terms of updates).  This year I plan to spend more time examining some of the issues raised in this post and look at ways we can become more aware of them and hopefully not become so seduced by those sirenic entrepreneurs.

What Have we Learnt from Ten Years of the iPhone?

Ten years ago this week (on 9th January 2007) the late Steve Jobs, then at the height of his powers at Apple, introduced the iPhone to an unsuspecting world. The history of that little device (which has got both smaller and bigger in the intervening ten years) is writ large over the entire Internet so I’m not going to repeat it here. However, it’s worth looking up the launch video on YouTube, not just to remind yourself what a monumental moment in tech history this was, even though few of us realised it at the time, but also to see a masterpiece in how to launch a new product.

Within two minutes of Jobs walking on stage he has the audience shouting and cheering as if he’s a rock star rather than a CEO. At around 16:25, when he’s unveiled his new baby and shows for the first time how to scroll through a list on a screen (hard to believe that ten years ago no one knew this was possible), they are practically eating out of his hand, and he still has over an hour to go!

This iPhone keynote, probably one of the most important in the whole of tech history, is a case study in how to deliver a great presentation. Indeed, Nancy Duarte, in her book Resonate, uses it as one of her case studies for how to “present visual stories that transform audiences”. In the book she analyses the whole event to show how Jobs uses all the classic techniques of storytelling: establishing what is and what could be, building suspense, keeping the audience engaged, making them marvel and finally showing them a new bliss.

The iPhone product launch, though hugely important, is not what this post is about, however. Rather, it’s about how, ten years later, the iPhone has kept pace with innovations in technology not only to remain relevant (and much copied) but also to continue to influence (for better and worse) the way people interact, communicate and indeed live. A number of enabling ideas and technologies, some introduced at launch and some since, have made this possible. What are they, what can we learn from the example set by Apple, and how can we improve on them?

Open systems generally beat closed systems

At its launch the iPhone came with a small set of native apps created by Apple; the ability to build apps was not available to third-party developers. According to Jobs, it was an issue of security. “You don’t want your phone to be an open platform,” he said. “You don’t want it to not work because one of the apps you loaded that morning screwed it up. Cingular doesn’t want to see their West Coast network go down because of some app. This thing is more like an iPod than it is a computer in that sense.”

Jobs soon went back on that decision, which is one of the factors that has led to the overwhelming success of the device. There are now 2.2 million apps available for download in the App Store, with over 140 billion downloads made since 2007.

As has been shown time and time again, opening systems up and allowing access to third-party developers nearly always beats keeping them closed and locked down.

Open systems need easy to use ecosystems

Claiming your system is open does not mean developers will flock to extend it; that will only happen if doing so is both easy and potentially profitable, and the second is unlikely to happen unless the first enabler is in place.

Today, with new systems being built around cognitive computing, the Internet of Things (IoT) and blockchain, companies both large and small are vying with each other to provide easy-to-use but secure ecosystems that allow these new technologies to flourish and grow, hopefully to the benefit of business and society as a whole. There will be casualties along the way, but this competition, and the recognition that systems need to be built right, not just be the right systems for the moment, is what matters.

Open systems must not mean insecure systems

One of the reasons Jobs gave for not initially making the iPhone an open platform was his concern over security: that hackers would break into those systems and wreak havoc. These concerns have not gone away; they have become even more prominent. IoT and artificial intelligence, when embedded in everyday objects like cars and kitchen appliances, as well as in our logistics and defence systems, have the potential to cause their own unique and potentially disastrous type of destruction.

The average cost of a single data breach is estimated at $3.8 to $4 million, and that’s without even considering the wider reputational loss companies face. Organisations need to monitor how security threats are evolving year on year and get well-informed insights about the impact they can have on their business and reputation.

Ethics matter too

With all the recent press coverage of how fake news may have affected the US election and may impact the upcoming German and French elections, as well as the implications of driverless cars making life-and-death decisions for us, the ethics of cognitive computing is becoming an ever more serious topic for public discussion, as well as for potential government intervention.

In October last year the White House released a report called Preparing for the Future of Artificial Intelligence. The report looked at the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy, and made a number of recommendations for further action. These included:

  • Prioritising open training data and open data standards in AI.
  • Industry working with government to keep it updated on the general progress of AI, including the likelihood of milestones being reached.
  • The Federal government prioritising basic and long-term AI research.

Partly in answer to the White House report, this week a group of private investors, including LinkedIn co-founder Reid Hoffman and eBay founder Pierre Omidyar, launched a $27 million research fund called the Ethics and Governance of Artificial Intelligence Fund. The group’s purpose is to foster the development of artificial intelligence for social good by approaching technological developments with input from a diverse set of viewpoints, such as policymakers, faith leaders and economists.

I have discussed before how transformative technologies like the world wide web have impacted all of our lives, and not always for the good. I hope that initiatives like that of the US government (which will hopefully continue under the new leadership) will enable a good and rational public discourse on how we allow these new systems to shape our lives for the next ten years and beyond.

Tech: The Missing Generation

I’ve recently been spending a fair bit of time in hospital. Not, thankfully, for myself, but with my mother, who fell and broke her arm a few weeks back. This has resulted in lots of visits to our local Accident & Emergency (A&E) department, as well as a short stay in hospital whilst they pinned her arm back in place.

An elderly gentleman walks past an NHS hospital sign in London. Photograph: Cate Gillon/Getty Images

Anyone who knows anything about the UK also knows how much we value our National Health Service (NHS). So much so that when it was our turn to run the Olympic Games back in 2012 Danny Boyle’s magnificent opening ceremony dedicated a whole segment to this wonderful institution featuring doctors, nurses and patients dancing around beds to music from Mike Oldfield’s Tubular Bells.

Olympic Opening Ceremony NHS segment. Picture courtesy of the International Business Times

The NHS was created out of the ideal that good healthcare should be available to all, regardless of wealth. When it was launched by the then minister of health, Aneurin Bevan, on July 5 1948, it was based on three core principles:

  • that it meet the needs of everyone
  • that it be free at the point of delivery
  • that it be based on clinical need, not ability to pay

These three principles have guided the development of the NHS over more than 60 years, remain at its core and are embodied in its constitution.

The NHS Constitution logo

All of this, of course, costs:

  • NHS net expenditure (resource plus capital, minus depreciation) has increased from £64.173bn in 2003/04 to £113.300bn in 2014/15. Planned expenditure for 2015/16 is £116.574bn.
  • Health expenditure (medical services, health research, central and other health services) per capita in England has risen from £1,841 in 2009/10 to £1,994 in 2013/14.
  • The NHS net deficit for the 2014/15 financial year was £471 million (a £372m underspend by commissioners and an £843m deficit for trusts and foundation trusts).
  • Current expenditure per capita for the UK was $3,235 in 2013. This can be compared to $8,713 in the USA, $5,131 in the Netherlands, $4,819 in Germany, $4,553 in Denmark, $4,351 in Canada, $4,124 in France and $3,077 in Italy.

The NHS also happens to be the largest employer in the UK. In 2014 the NHS employed 150,273 doctors, 377,191 qualified nursing staff, 155,960 qualified scientific, therapeutic and technical staff and 37,078 managers.

So does it work?

From my recent experience I can honestly say yes. Whilst it may not be the most efficient service in the world, the doctors and nurses managed to fix my mother’s arm and hopefully set her on the road to recovery. There have been setbacks, and I’m sure there will be more, but given her age (she is 90) they have done an amazing job.

Whilst sitting in those A&E departments whiling away the hours (I did say they could be more efficient) I had plenty of time to observe and think. By its very nature the health service is hugely people-intensive. Whilst there is an amazing array of machines beeping and chirping away, most activities require people, and people cost money.

The UK’s health service, like that of nearly all Western countries, is under a huge amount of pressure:

  • The UK population is projected to increase from an estimated 63.7 million in mid-2012 to 67.13 million by 2020 and 71.04 million by 2030.
  • The UK population is expected to continue ageing, with the average age rising from 39.7 in 2012 to 42.8 by 2037.
  • The number of people aged 65 and over is projected to increase from 10.84m in 2012 to 17.79m by 2037. The number of over-85s is estimated to more than double from 1.44 million in 2012 to 3.64 million by 2037.
  • The number of people of State Pension Age (SPA) in the UK exceeded the number of children for the first time in 2007 and by 2012 the disparity had reached 0.5 million (though this is projected to reverse by).
  • There are an estimated 3.2 million people with diabetes in the UK (2013). This is predicted to reach 4 million by 2025.
  • In England the proportion of men classified as obese increased from 13.2 per cent in 1993 to 26.0 per cent in 2013 (peak of 26.2 in 2010), and from 16.4 per cent to 23.8 per cent for women over the same timescale (peak of 26.1 in 2010).

The doctors and nurses that looked after my mum so well are going to come under increasing pressure as this ageing and less healthy population begins to suck ever more resources out of an already stretched system. So why, given the passion everyone has for the NHS, isn’t there more of a focus on getting technology to ease the burden on these overworked healthcare providers?

Part of the problem of course is that historically the tech industry hasn’t exactly covered itself in glory when it comes to delivering technology to the healthcare sector (I’m thinking of the NHS National Programme for IT and the US HealthCare.gov system as two high-profile examples). Whilst some of this may be due to the blunders of government, much of it is down to a combination of factors: providers and consumers of healthcare IT mis-communicating and failing to understand the real requirements that such complex systems tend to have.

In her essay How to build the Next Unicorn in Healthcare the entrepreneur Yasi Baiani sets out six tactical tips for how to build a unicorn* digital startup. In summary these are:

  1. Understand the current system.
  2. Know your customers.
  3. Have product hooks.
  4. Have a clear monetization strategy and understand your customers’ willingness-to-pay.
  5. Know the rules and regulations.
  6. Figure out what your unfair competitive advantage is.

Of course, these are strategies that apply to any industry when trying to bring about innovation and disruption; they are not unique to healthcare. I would say that when it comes to the healthcare industry, the reason there has been no Uber is that the tech industry is ignoring the generation most in need of benefiting from technology, namely the post-65 age group. This is the age group that struggles most with technology, either because they are more likely to be digitally disadvantaged or because they simply find it too difficult to get to grips with.

As the former Yahoo chief technology officer Ashfaq Munshi, who has become interested in ageing tech, says:

“Venture capitalists are too busy investing in Uber and things that get virality. The reality is that selling to older people is harder, and if venture capitalists detect resistance, they don’t invest.”

Matters are not helped by the fact that most tech entrepreneurs are between the ages of 20 and 35 and have interests in life far removed from the problems faced by the aged. As this article by Kevin Maney in the Independent points out:

“Entrepreneurs are told that the best way to start a company is to solve a problem they understand. It makes sense that those problems range from how to get booze delivered 24/7 to how to build a cloud-based enterprise human resources system – the tangible problems in the life and work of a 25- or 30-year-old.”

If it really is the case that entrepreneurs only look at problems they understand, or problems on their immediate event horizon, then clearly we need more entrepreneurs of my age group (let’s just say 45+). We are the people with elderly parents, like my mum, who are facing the very real problems of old age and poor health, and we will very soon be facing the same issues ourselves.

A recent Institute for Business Value report from IBM makes the following observation:

“For healthcare in particular, the timing for a game changer couldn’t be better. The industry is coping with upheaval triggered by varied economic, societal and industry influences. Empowered consumers living in an increasingly digital world are demanding more from an industry that is facing growing regulation, soaring costs and a shortage of skilled resources.”

Rather than fearing the new generation of cognitive systems, we need to be embracing them and ruthlessly exploiting them to provide solutions that will ease all of our journeys into an ever longer old age.

At SXSW, which is running this week in Austin, Texas, IBM is providing an exclusive look at its cognitive technology, Watson, and showcasing a number of inspiring as well as entertaining applications of it. In particular, on Tuesday 15th March there is a session called Ageing Populations & The Internet of Caring Things, where you can take a look at accessible technology and how it will create a positive impact on an ageing person’s quality of life.

Also at SXSW this year President Obama gave a keynote interview where he called for action in the tech world, especially for applications to improve government IT. The President urged the tech industry to solve some of the nation’s biggest problems by working in conjunction with the government. “It’s not enough to focus on the cool, next big thing,” Obama said. “It’s harnessing the cool, next big thing to help people in this country.”

President Barack Obama speaks during the 2016 SXSW Festival at Long Center in Austin, Texas, March 11, 2016. Photo: Neilson Barnard/Getty Images for SXSW

It is my hope that, with the vision people such as Obama have set out, the experience of getting old will be radically different 10 or 20 years from now, and that cognitive and IoT technology will make all of our lives not only longer but more pleasant.

* A unicorn is a company whose valuation has exceeded $1 billion.