The Technological Republic According to Palantir

The provocative assertion of The Technological Republic by Alex Karp and Nicholas Zamiska is that the money and time the software engineers of Silicon Valley expend on “social media platforms and food delivery apps” would be better directed at more worthwhile challenges. For the authors these include addressing violent crime, reforming education, funding medical research and strengthening national defence. It is no coincidence, I’m sure, that at least three of these are areas where Palantir Technologies Inc., the company co-founded by Karp, makes much of its money.

Karp and Zamiska believe that the founders and CEOs of the current crop of hugely successful, and fabulously rich, tech companies see these challenges as being “too intractable, too thorny, and too politically fraught” to address in any real way. Hence their focus on consumer-friendly apps rather than on some of the world’s truly wicked problems. The book is therefore a rallying cry and a wake-up call for the tech entrepreneurs of Silicon Valley to address these thorny problems if Western democracy, as we know it, is to survive and retain its technological hegemony.

It’s worth noting at this point that Palantir is a software company that specialises in advanced data analytics and artificial intelligence. It was founded in 2003 by Peter Thiel, Stephen Cohen and Joe Lonsdale, as well as Alex Karp. Palantir’s customers include the United States Department of Defense, the CIA, the DHS, the NSA, the FBI and, here in the UK, the NHS. It is obviously in Palantir’s best interest to ensure that governments continue to spend money on the kind of products it makes. In April 2023 the company launched its Artificial Intelligence Platform (AIP), which integrates large language models into privately operated networks. The company demonstrated its use in a military setting, where an operator could direct operations and receive responses via an AI.

As Palantir moves into AI it is obvious they are going to need engineers who not only have the technical knowledge to build such systems but also don’t mind working on products used in the defence industry. As the pair state early on in the book, if such engineering talent is not forthcoming then “medical breakthroughs, education reform, and military advances would have to wait” because the required technical talent is being directed at building “video-sharing apps and social media platforms, advertising algorithms and online shopping websites”.

Whilst I do agree that an enormous amount of talent is being misdirected into efforts to wring every last dollar out of improving algorithms for selling “stuff” to consumers, where I feel the authors’ treatise becomes hopelessly sidetracked is in its explanation of the reasons for this. Some of the arguments the authors put forward as to why we have arrived at this sorry state of affairs include:

  • The abandonment of belief or conviction in “broader political projects” such as the Manhattan Project to build the first atomic bomb or the Apollo space programme to put a man on the moon.
  • The failure of earlier government-funded projects which created much of the technology we use and take for granted today (e.g. the internet, personal computing) to capitalise on this technology and direct its use to more worthwhile efforts. Quoting the authors again: “When emerging technologies that give rise to wealth do not advance the broader public interest, trouble often follows. Put differently, the decadence of a culture or civilization, and indeed its ruling class, will be forgiven only if that culture is capable of delivering economic growth and security for the public”.
  • The failure of universities to teach properly the ‘values’ of Western Civilisation and to articulate the “collective sense of identity that was capable of serving as a foundation for a broader sense of cohesion and shared purpose”.
  • Even Steve Jobs gets a reprimand for building technology (the Mac personal computer and the iPhone) that was “intimate and personal” and that would “liberate the individual from reliance on a corporate or governmental superstructure”. Interestingly, Elon Musk does earn some brownie points for founding Tesla and SpaceX, which have “stepped forward to fill glaring innovation gaps where national governments have stepped back”. No mention is made that SpaceX has received nearly $20.7 billion in government contracts, research grants, and other forms of public assistance, with about $14.6 billion of that coming from contracts with NASA.
  • At one point in the book the authors relate a scene from George Orwell’s 1984 in which Winston Smith, wandering through a wooded area, imagines that even here Big Brother may be listening to every word through microphones concealed in the trees. They use this to argue that we may be near such levels of surveillance ourselves, but that it is not the “contraptions built by Silicon Valley” that are to blame; rather it is “we” who are to blame for “failing to encourage and enable the radical act of belief in something beyond, and external to, the self”. An interesting observation from the co-founder of the company that develops some of the “contraptions” that enable surveillance capitalism.

I could go on, but you get the general idea.

The first two parts of the book set about explaining why and how, in the authors’ view, we have lost our way in tackling the big challenges of our age – preferring instead to do things like reimagining online shopping or building photo-sharing and food delivery apps.

Karp and Zamiska believe that the generation of engineers coming out of the prestigious universities of the West and East coasts of America in the late 1990s and early 2000s were not only able to benefit from the relatively new technology of the internet and world wide web but were also able to take advantage of the seemingly unlimited funds being offered by the venture capitalists who had made their fortunes from “Web 1.0”. This happened to coincide with a growing lack of trust in national governments, as well as frustration at delays in adopting more progressive reforms and at governments’ “grand experiments and military misadventures on a world stage”. It was easier for founders looking for something to “disrupt” to focus on solving their own problems, such as how to hail a taxi or get a book delivered more quickly. They were not “building software systems for defence and intelligence agencies, and they were certainly not building bombs”.

Part III of the book concentrates on what the authors refer to as “The Engineering Mindset”. The proposition here is that the success of Silicon Valley lies in how it has not just hired the best and brightest engineers but has given them the “freedom and space to create”. Though, in the authors’ view, they are not creating the right things.

To illustrate this Karp and Zamiska reference a number of biological and psychological studies and experiments from the 1950s and 1960s. These experiments were done in what the authors refer to as the “golden age of psychology”, that is, before such experiments were monitored for their sometimes cavalier approach to ethics. The results of these studies showed, alarmingly, how people – often the majority – are prone to groupthink and a hive mindset. They tend to follow what others do, or do what they are told, for fear of standing out or looking foolish. The assertion is then made that this “instinct towards obedience” can be “lethal” when trying to create a truly disruptive or entrepreneurial organisation. Presumably Palantir goes out of its way to hire disobedient disruptors when recruiting.

One of the experiments cited is the now infamous, and much quoted in books like this, so-called “obedience experiment” devised by the psychology professor Stanley Milgram. This is the one in which a group of people were tested on their willingness to inflict harm on innocent strangers by supposedly giving them electric shocks when they failed to memorise words accurately. The person who was meant to be doing the memorising was an actor who would yell and shout as the voltage was supposedly increased and implore his “teachers” to stop. The startling outcome of this experiment was that two-thirds of the people giving the electric shocks were ‘happy’ to carry on doing so even though they believed they might be harming the learner. The findings from this experiment have been used to explain why, amongst other things, concentration camp guards during the Second World War were willing to carry out their atrocious acts under the instructions of their commanding officers.

Karp and Zamiska use this experiment to argue that such an instinct toward obedience can be anathema to creativity. In their view it is only those people who resist any tendency towards conformity and groupthink who are likely to be the outliers who come up with truly novel ideas and approaches. For the authors the most effective software companies are more akin to artist colonies “filled with temperamental and talented souls”.

Having taken this detour around biology and psychology to show how conformity tends to be the majority trait, the authors return to their main point: even if you can identify and hire the creative nonconformists, the challenge is in how to direct that creativity “toward the nation’s shared goals”. These, they assert, can only be identified if “we take the risk of defining who we are or aspire to be”.

So what should “we be” and what should we “aspire to”? This is what the authors attempt to address in the final part of the book and is where they start to give away some of their own political, religious and business beliefs.

They reference Lee Kuan Yew (for a second time) at this point. Lee was the first prime minister of Singapore, charged with convincing a sceptical public that the newly formed island nation (having split from Malaysia in 1965) could be a viable entity. Lee’s approach was to manufacture a national identity by involving his government in several aspects of its citizens’ private lives. These included requiring that all Chinese students learn Mandarin (as well as English) at school instead of the multiple different dialects they learnt at home. The authors credit this and other attempts at forging a national identity with Singapore’s exponential growth from a GDP per capita of $428 in 1960 to $84,734 in 2023. Rather confusingly, in terms of adhering to the authors’ other argument around the dangers of conformity, Singapore is renowned for its conformist citizens and its somewhat draconian legal system – so much so that the author William Gibson characterised Singapore as “Disneyland with the death penalty”.
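As an aside, it is worth pinning down what that “exponential growth” actually amounts to. Below is a minimal back-of-the-envelope sketch in Python, using only the two GDP per capita figures quoted above, to get the implied compound annual growth rate:

```python
# Implied compound annual growth rate (CAGR) for Singapore's GDP per
# capita between 1960 ($428) and 2023 ($84,734) -- the figures quoted above.
start, end, years = 428, 84_734, 2023 - 1960

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR over {years} years: {cagr:.1%}")  # roughly 8.8% a year
```

Sustained over six decades, a growth rate of that order is what turns a near 200-fold increase into something that looks unremarkable year on year.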

Karp and Zamiska then go on to state their belief that it is a failure of the “contemporary left” in the West that it “deprives itself of the opportunity to talk about national identity” and that in both America and Europe the left has “neutered itself” and prevented its “advocates from having a forceful and forthright conversation about national identity”. At this point in the book the authors call on none other than J.R.R. Tolkien, he of The Lord of the Rings fame. For those not familiar with Tolkien’s three-volume tome, it is the epic story of good versus evil in which a group of plucky little Hobbits overcome the evil Sauron’s threats to bring death and destruction to Middle Earth. The reason the book is mentioned is that the authors see it as an example of how good storytelling around a shared narrative, even if it’s mythological (or religious), can bring people together. As Rowan Williams, the former Archbishop of Canterbury, says in an essay on Tolkien’s books, “Tolkien’s dogged concern about the terrible dangers of our desire for final solutions and unchallengeable security is even more necessary”. It’s worth noting at this point that Palantir was named after the “seeing stone” in Tolkien’s legendarium.

This, then, would seem to be the authors’ ultimate solution to the building of a technological republic. It’s no good hoping that the software engineers of Silicon Valley will suddenly see the light and begin directing their talents at solving the world’s wicked problems and, more to the point today, the West’s security problems. Instead we need to build (or rebuild) a new order of “collective experience, of shared purpose and identity [and] of civic rituals that are capable of binding us together”.

The Technological Republic runs to just under 300 pages (or 218 if you take out the references and index). Unfortunately, like many books of this type, I cannot help but feel the whole argument could have been made more concisely and more persuasively had it been written as an opinion piece in The Atlantic or The New Yorker rather than as a long-form book. The main point – that we in the West are directing our (software) technologies at building the wrong things – is made over and over again with, to be frank, some fairly bizarre and academic references and too many non sequiturs. For example, maybe those who did not conform to what was expected of them in the obedience experiment were not more creative but simply had a more pronounced ethical or moral compass. Talk of ethics is distinctly missing from this book.

I also believe there is a hidden agenda at play here too. In the UK, at least, Palantir has been receiving some bad press, especially over its dealings with the NHS. Last year the British Medical Journal called on (the recently disbanded) NHS England to cancel its contract with Palantir, citing concerns about the cost of the contract and whether it offers value for money, questions about public trust in Palantir and the procurement process, and the company’s public support of the Israeli Occupation Forces (IOF) in their assault on Gaza.

More widely, there is a concern about how smart-city solutions currently rely on centralised, proprietary architectures that concentrate data and control in the hands of a few powerful tech companies like Cisco, IBM, Microsoft, and Palantir. Whereas once we may have thought it inconceivable that data managed by an American corporation could be misused by a Western government, what we are seeing happen in America itself raises that very real fear.

Of even more concern is the apparent cosying up of Keir Starmer’s government to Palantir. According to Andrew Marr in this New Statesman article, following his meeting with Donald Trump in the Oval Office at the end of February, Starmer’s next visit was to Alex Karp at Palantir Technologies’ HQ in Washington. Whilst there he saw “military kit” and confirmed that Britain wouldn’t over-regulate AI, so it could pursue new economic opportunities “with advanced technology at its core”. Unfortunately, it would appear much of the real economic advantage (and data) may be flowing to Trump’s America rather than staying in the UK!

To be clear, I do agree with the authors’ proposition that much engineering talent is being wasted on the development of social media apps and the ever more ingenious ways that platform providers are finding to part us from our hard-earned money. Where I diverge from the authors is in their rationale for why this is the case. I suspect it has more to do with the fabulous salaries and ‘cool’ working environments that these companies offer than with anything more sinister. As someone who has worked for both a Silicon Valley startup and on many government sites here in the UK, I know where I would prefer to work given the choice (setting aside ethical concerns).

Of course, working for Palantir you probably get to have your cake and eat it. I’m sure Palantir offers some nice working conditions for the quarter of its global workforce who operate from the UK whilst working on those profitable NHS contracts (£330m over seven years, according to Marr).

From a UK perspective the biggest issue of all is why we cannot build companies like Palantir that can be the data processing companies of choice not just for the NHS but for other government departments as well. I know at least part of the answer to this. We have a well-publicised skills gap in the UK, where there are not enough good software engineers to staff such companies, as well as a lack of investment capital to fund them. This has to be the real challenge for our government if we are ever to wean ourselves off companies like Palantir and develop some home-grown talent who consider it worthwhile to work on ‘software for good’ projects rather than developing the next photo-sharing app (or the next great piece of surveillance software).

Monday 3rd March, 2025

Today has been a significant day for me. It’s the first day I have not been in gainful employment or education since, well, the age of five really. I joined GEC Telecommunications in Coventry as a Software Engineer in August of 1979 and have been in employment ever since – all that time working in the tech sector (mainly doing interesting things with software).

At the age of sixty-six and three-quarters, I know now should be the time I unpack my laptop bag for the last time. I should settle down to make the best of my remaining four thousand weeks – weeks unhindered by conference calls, early morning train travel, and the incessant noise of emails and messaging services. That may sound good to some, but to me it would be the sounding of my death knell.

As Robert Kennedy said:

Like it or not, we live in interesting times. They are times of danger and uncertainty; but they are also the most creative of any time in the history of mankind. And everyone here will ultimately be judged – will ultimately judge himself – on the effort he has contributed to building a new world society and the extent to which his ideals and goals have shaped that effort.

Robert F. Kennedy, American Politician

I can’t think of a better way of describing how I see the world now and why I believe it is so important for everyone, whose brain is active and whose body is able, to carry on trying to contribute something to society.

Since leaving my longest-term employer, IBM, in 2019 I have voraciously devoured books. Mainly nonfiction, and mainly books on technology and how it impacts the world, but also a smattering of biographies, books on philosophy and economics and, not forgetting, some works of fiction, which also help in understanding the way the world could be (or could have been).

Having spent so much time reading and taking on board other people’s thoughts, opinions and ideas, I have decided it is now my turn to try and put into words my understanding of the world. I am hoping that one of the things freedom from employment will give me is time. Time to write. Time to photograph. Time to spend with my grandchildren, whose lives are just beginning and who must first be guided through, and then navigate themselves through, tumultuous (but interesting) times.

This is hopefully the start of my new writing ‘career’ and the next chapter in my life.

Is a Trump victory also a victory for the technofeudalists?

“This guy’s got my back” – Composite image created by author*

“Under feudalism, the power of the ruling class grew out of owning land that the majority could not own, but were bonded to. Under capitalism, power stemmed from owning capital that the majority did not own, but had to work with to make a living. Under technofeudalism, a new ruling class draws power from owning cloud capital whose tentacles entangle everyone.” [1]

Trump’s victory in the 2024 US election on 5th November is sending shock waves around the globe. Although the world’s leaders are sending him nice congratulatory messages [2] one suspects they, and their advisors, are simultaneously wringing their hands wondering how they are going to counteract some of Trump’s more extreme rants and actions in the coming years.

Immediate concerns, at least amongst European (including UK) leaders, are going to focus on Trump’s potential trade war with China, what he’ll do in Ukraine and the Middle East, what his policy will be towards NATO and, probably most significantly, what the Republicans’ stance on climate change will be. If Project 2025 does turn out to be a “wish list” for the party [3] then the threat to slash federal money for research and investment in renewable energy, and calls for the incoming president to “stop the war on oil and natural gas”, may be about to be made real.

Whilst some commentators tell us that “Trump has killed the neoliberal order” [4], this is overly simplistic. I can understand why Americans do not want to preserve an economic system that does not reward them, but I’m sure they will not be happy when tariffs on imported Chinese goods put up the cost of their cars, household electronics, clothes and their kids’ toys.

Remember also that one of the cornerstones of the neoliberal political philosophy is, through privatisation and austerity, to reduce the state’s influence on the economy. This is something that is very much at the heart of Project 2025 – namely, to “de-weaponize the federal government and dismantle the deep state”. Many Americans already suffer greatly from not having access to good quality, free at the point of access, healthcare. Although Trump stated he would not cut Social Security and Medicare, his tax proposals would bring forward the date the Social Security Trust Fund runs out of money from 2034 to 2031 and deepen the cuts to benefits, to about one third of their current levels [5].

What Project 2025 does seek to do is to privatise parts of the Medicare and Medicaid programs, which will affect poorer recipients more directly, and to impose work requirements for Medicaid. That would result in loss of coverage for many of the most desperate patients [6].

Whilst it seems that neoliberalism might not be completely dead under Trump, there is a far greater threat that his next presidency may bring. As this post’s opening quote from Yanis Varoufakis says, the technofeudalists are the new ruling class who draw their power from owning cloud capital whose tentacles entangle everyone. Well, guess what: those tentacled feudalists have just taken on a whole new set of superpowers by entangling themselves with the soon-to-be United States government under the presidency of Donald J. Trump!

It was no coincidence that within hours of Kamala Harris conceding defeat to Trump, Tesla shares went up by over $35, Amazon by $5, Microsoft by $10, and Bitcoin by over $5,000, to a whopping $75,643 per coin. Whether voters realised it or not, a vote for Trump was a vote for the tech moguls, whose billionaire-in-chief is Elon Musk, owner of Tesla, SpaceX and, most significantly, the X social media platform.

When Musk bought what was then called Twitter for $44 billion in 2022 [7], most people thought he was mad and that he had overpaid, with one analyst believing Twitter was really ‘only’ worth $30 billion [8]. It was also not clear why Musk wanted to diversify into social media – surely building electric cars and space rockets was enough?

But clearly not, and now we know why.

Musk is the de facto uber-technofeudalist. He has made X his own personal fiefdom where, because he now owns it, he can literally do what he wants. Since buying the platform Musk has:

  • Complied with 808 of the 971 government demands to do things like remove controversial posts, as well as demands that X produce private data to identify anonymous accounts [9].
  • Prevented users from posting links to a newsletter containing a hacked document that’s alleged to be the Trump campaign’s research into vice presidential candidate JD Vance and suspended the journalist who wrote the newsletter [10].
  • Sued organisations that are attempting to fight disinformation, thereby presenting a threat to the First Amendment [11].
  • Allowed both paid “Premium” subscriber accounts and thousands of unpaid accounts that support pro-Nazi content to stay on X, violating the platform’s own rules [12].

I could go on, but you get the picture.

To paraphrase The Guardian journalist George Monbiot, Trump’s victory and his promise to give Musk a top government job (heading up the government efficiency commission) may well allow him to escape the regulators by effectively making him “his own regulator”. Bear in mind as well that Musk controls key strategic and military assets, such as SpaceX satellite launchers and the Starlink internet system. It is not hard to see how his control over such assets could “grow to the point at which governments feel obliged to do as he demands”. [13]

Whilst on the subject of media platforms owned by technofeudalists, we must not forget that august mainstream media publication The Washington Post, owned by the world’s third richest man (following close on the heels of Musk), Jeff Bezos. Leading up to the US election the Post allegedly spiked an editorial endorsing Kamala Harris. This led to staff resignations, anger from its own (and other) journalists, and some 200,000 readers supposedly cancelling their subscriptions.

The Post’s publisher and chief executive, William Lewis, having a sudden epiphany, stated: “We are returning to our roots of not endorsing presidential candidates” – endorsements the paper had been making since 1976. Whilst this may be a justifiable and honourable position to take, the fact that this decision was made days before an election in which one of the candidates had said he would take revenge on news organisations that anger him smacked a bit of arse-licking, I would suggest. Whilst there was no suggestion that the non-endorsement was influenced by Jeff Bezos, it surely cannot be a coincidence that Amazon, Bezos’s space exploration company Blue Origin, and Amazon Web Services (AWS) frequently bid for government contracts. As Alison Phillips says [14], “The non-endorsement may not have been Bezos’s decision, but good editors know instinctively what their masters want”.

So what does all this mean for us mere mortals who use the platforms owned by these ever more powerful, and rich, technofeudalists? Over to Varoufakis, who explains that the users who service these cloud fiefdoms fall into one of two types [15]:

  1. The cloud proles are the workers driven to their physical limits by algorithms that control their every working hour and who provide the physical services that the cloud platforms require. These are the Uber drivers, the Amazon warehouse and delivery workers, the Deliveroo cyclists who buzz around our cities delivering burgers and pizzas, and the myriad content checkers whose job it is to weed out some of the appalling violent, pornographic, racist and misogynistic content from social media platforms in the vain hope it will not be seen by decent God-fearing folk (which it often still is).
  2. The cloud serfs are the people (and that’s several billion people around the world) who provide the content (the raw tracking data, the stories, the videos, the images), largely for free, that enables these platforms to be more than just the server farms, networks and software from which they are built.

The genius of the technofeudalists, whether through foresight or opportunism, has been to intertwine themselves so tightly with government that they are no longer relatively benign providers of services (data centres, satellites, CCTV cameras and so on) but an integral and fundamental part of the whole machinery of government. This is both in the control they exert during the elections in which our leaders are voted in and in the day-to-day running of government, where they provide much of the machinery that allows surveillance capitalism to take place. Varoufakis again:

“Under technofeudalism, we no longer own our minds. Every proletarian is turning into a cloud prole during working hours and into a cloud serf the rest of the time. Every self-employed striver mutates into a cloud vassal, while every self-employed struggler becomes a cloud serf. While privatisation and private equity asset-strip all physical wealth around us, cloud capital goes about the business of asset-stripping our brains” [16].

This is what Alex Gourevitch refers to as anti-democratic power. The technofeudalists rule us without ruling through politics. Instead they rule us through their economic power. What they decide to invest in, whether it be machine learning, blockchain, space travel, virtual reality glasses or self-driving cars, decides what our entertainment, our social interactions and our cultural future will be like [17].

With the buddying up of the technofeudalists to autocracies, like the one the new Trump government promises to be, these gatekeepers are gaining even more power, influence and control over our lives – the kind of control that was once the stuff of dystopian fiction by an English author called George Orwell. It’s beginning to look like Orwell was right all along; he was just 40 years too early in his prediction [18]. The ‘2024’ version of ‘1984’ is one in which a few billionaires living on the West coast of America are the protagonists. Their goal is not to provide better lives for citizens but to line their own pockets whilst enjoying the fruits of the enormous power bestowed upon them by autocratic leaders like Trump.

As Orwell warned, dictatorships don’t just arise from brutality and suppression. They arise from control of information and the platforms that control that information [19]. For him it was doublethink: famine is plenty, war is peace. For us it’s fake news and alternative facts.

How ordinary citizens can react to this, and what can be done about it, seem to be impossibly hard questions to answer right now. The renowned critic of cryptocurrencies and blockchain-based projects Molly White, in her newsletter Citation Needed, makes an attempt to at least partly address these questions [20], for example:

  • Consider reducing your reliance on centralized social networks controlled by billionaires, and instead establishing a web presence you control (see the sketch after this list).
  • Find and support trusted sources of news and information. If you rely heavily on mainstream news outlets owned by billionaires who aided Trump in his victory, consider diversifying your media diet.
  • Use end-to-end-encrypted messaging apps for your communications and consider using a VPN to help protect your privacy online.
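On that first point, it is striking how little is actually needed for “a web presence you control”. Here is a minimal, self-contained sketch using only the Python standard library; the post content, file names and example.org domain are hypothetical placeholders, and this is just one way of doing it, not a recommendation of any particular stack:

```python
# A minimal "web presence you control": one static HTML page plus an RSS
# feed, built with nothing but the Python standard library.
# All content, file names and the example.org domain are placeholders.
from email.utils import formatdate
from pathlib import Path

posts = [("Hello, open web", "My first self-hosted post.")]

# A single static page listing the posts.
Path("index.html").write_text(
    "<html><body>"
    + "".join(f"<h1>{title}</h1><p>{body}</p>" for title, body in posts)
    + "</body></html>"
)

# An RSS 2.0 feed so readers can subscribe without any platform in between.
items = "".join(
    f"<item><title>{title}</title><description>{body}</description>"
    f"<pubDate>{formatdate()}</pubDate></item>"
    for title, body in posts
)
Path("feed.xml").write_text(
    '<?xml version="1.0"?><rss version="2.0"><channel>'
    "<title>My blog</title><link>https://example.org/</link>"
    "<description>Posts I own</description>" + items + "</channel></rss>"
)
# Serve locally with: python -m http.server
```

Any static host (or a spare machine) can serve those two files, and no algorithm sits between you and your readers.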

However, if we are to try and “wind the clock”, we are going to have to do far, far more. What and how is something I plan to explore in future posts.

*I’m trying to avoid using AI generated imagery in these posts preferring to create composites like this one in the style of Cold War Steve.


Notes

  1. Technofeudalism – What Killed Capitalism, Yanis Varoufakis, The Bodley Head, 2023, p. 215.
  2. Netanyahu and Starmer lead congratulations to Trump, Gianluca Avagnina, https://www.bbc.co.uk/news/articles/cly2z812zxvo
  3. Project 2025: The right-wing wish list for another Trump presidency, Mike Wendling, https://www.bbc.co.uk/news/articles/c977njnvq2do
  4. Trump has killed the neoliberal order, Richard Murphy, https://www.taxresearch.org.uk/Blog/2024/11/06/trump-has-killed-the-neoliberal-order/
  5. Fact check: Does Trump intend to cut Social Security and Medicare?, Christine Sellers, https://checkyourfact.com/2024/10/24/fact-check-trump-cut-social-security-medicare/
  6. It’s Find Out Time, Jay Kuo, https://substack.com/home/post/p-151381890
  7. Elon Musk takes control of Twitter in $44bn deal, James Clayton & Peter Hoskins, https://www.bbc.co.uk/news/technology-63402338
  8. Elon Musk’s X is worth nearly 80% less than when he bought it, Fidelity estimates, Matt Egan, https://edition.cnn.com/2024/10/02/business/elon-musk-twitter-x-fidelity/index.html
  9. Twitter is complying with more government demands under Elon Musk, Russell Brandom, https://restofworld.org/2023/elon-musk-twitter-government-orders/
  10. X blocks links to hacked JD Vance dossier, Elizabeth Lopatto, https://www.theverge.com/2024/9/26/24255298/elon-musk-x-blocks-jd-vance-dossier
  11. Elon Musk’s Supreme Court Endgame in Defamation Lawsuit, Rebecca Buckwalter-Poza, https://slate.com/news-and-politics/2024/03/elon-musk-media-matters-supreme-court.html
  12. Verified pro-Nazi X accounts flourish under Elon Musk, David Ingram, https://www.nbcnews.com/tech/social-media/x-twitter-elon-musk-nazi-extremist-white-nationalist-accounts-rcna145020
  13. Can democracy survive now the world’s richest man has it in his sights?, George Monbiot, https://www.theguardian.com/commentisfree/2024/nov/02/elon-musk-donald-trump-us-presidential-elections
  14. The cowardice of the Washington Post, Alison Phillips, https://www.newstatesman.com/comment/2024/10/cowardice-washington-post-kamala-harris
  15. Technofeudalism – What Killed Capitalism, Yanis Varoufakis, The Bodley Head, 2023, pp. 80–85.
  16. Ibid, p. 213.
  17. The Machiavellis of the market: Entrepreneurs against democracy, Alex Gourevitch, https://lpeproject.org/blog/the-machiavellis-of-the-market-entrepreneurs-against-democracy/
  18. The ‘foolproof’ election forecaster who predicted Trump would lose – what went wrong?, David Smith, https://www.theguardian.com/us-news/2024/nov/16/trump-election-forecast-allan-lichtman
  19. Welcome to dystopia – George Orwell experts on Donald Trump, Jean Seaton, Tim Crook and DJ Taylor, https://www.theguardian.com/commentisfree/2017/jan/25/george-orwell-donald-trump-kellyanne-conway-1984
  20. Wind the clock, Molly White, https://www.citationneeded.news/wind-the-clock/

Is an AI ‘Parky’ the first step in big tech’s takeover of the entertainment industry?

Composite Image Created using OpenAI, DALL-E and Adobe Photoshop

Sir Michael Parkinson, who died in 2023 [1], was a much-loved UK chat show host who worked at the BBC between 1971 and 1982, again between 1998 and 2004, and finally for a further three years at ITV until 2007. During that time “Parky” interviewed the great and the good (and sometimes the not so good [2]) from film, television, music, sport, science and industry. I remember Saturday nights during his first stint at the BBC not feeling complete unless we had tuned into Parkinson to see which celebrities he was interviewing that night. I was sad to hear of his passing last year, but also grateful to have been around to see many of his interviews and appreciate his gentle but probing interview style.

Last week, however, we learnt that just because you are dead, it does not mean you cannot carry on doing your job. Mike Parkinson, son of Sir Michael, has given permission to Deep Fusion Films to create an exact replica of his late father’s voice so he can virtually host a new eight-part “unscripted series” [3] called Virtually Parkinson. The virtual Parky will be able to interview new guests based on analyses of data obtained from the real Parkinson’s back catalogue [4].

Deep Fusion Films was founded in 2023 and makes a big play about its ethical credentials. On its website [5] it says it aims to “establish comprehensive policies that promote the legal and ethical integration of AI in production”. Backing this up, their virtual Parky will be created with the full support and involvement of Sir Michael’s family and estate. 

So far, so ethical, right and proper, however…

Only last year, concerns over the use (and potential misuse) of AI in the film industry led to a strike by actors and writers. Succession actor Brian Cox made the statement that using AI to replicate an actor’s image and use it forever is “identity theft” and should be considered “a human rights issue” [6].

Hollywood stars like Scarlett Johansson, Tom Hanks, Tom Cruise and Keanu Reeves have already become the subject of unauthorised deepfakes, and in June of this year the Internet Watch Foundation (IWF) warned that AI-generated videos of child sexual abuse could indicate a ‘stark vision of the future’ [7].

Clearly, where Deep Fusion Films is right now – producing ethically sourced and approved imitations of celebrities’ voices – and where AI-generated porn is threatening to take us are poles apart, but…

Technology always creeps into our lives like this. A small seemingly insignificant event which we find amusing and mildly distracting entertains us for a while but then suddenly, it has become the way of all things and has fallen into the hands of ‘bad actors’. At this point, there is often no going back.

Witness how Facebook started out as a seemingly innocuous site called Facemash, created by a second-year Harvard University student called Mark Zuckerberg, that compared two student photos side by side to determine who was “hot” and who was “not”. Actually this was always a questionable use case in my opinion, but I guess an indication of what went down as acceptable behaviour in Ivy League universities of the early 2000s!

Today Meta (which now owns Facebook) is the seventh largest company in the world by market capitalisation, worth, at the time of writing, $1.497T [8]. Zuckerberg’s vision for Meta, outlined in a letter to shareholders this August, is that it will become a virtual reality platform that merges the physical and digital worlds, forever transforming how we interact, work, and socialise [9]. Inevitably a major part of this vision is that artificial intelligence (or even, if Zuckerberg gets his way, artificial general intelligence) will be there to “enhance user experiences”.

Facebook, and now Meta, is surely the canonical example of how a small and seemingly insignificant company from the US east coast has grown in a mere 20 years to become a largely unregulated west coast tech behemoth with over three billion active monthly users [10].

If Facebook were just used for sharing pictures of cats and dogs that would be one thing but, during its short history, it has been found guilty of spreading fake news, changing voting behaviour in key elections around the world, affecting people’s mental health, and spreading violent and misogynistic (and deepfake) videos.

It seems like we never learn. Governments and legal systems around the world never react fast enough to the pace of technological change and are always playing catch-up, having to mop up the tech companies’ misdemeanours after they have occurred rather than regulating them in the first place. Financial penalties are one thing, but these pale into insignificance alongside the gargantuan profits such companies make and, anyway, no amount of fines can undo the negative effects they and their leaders have on people’s lives.

So how does the rise of the tech behemoths like Facebook, Google and X presage what might happen in the creative industries and their use of technology, especially AI?

I don’t know what proportion of a Hollywood movie’s costs goes on actors’ salaries. It is obviously not the only cost, or even the largest; however, with actors like Tom Cruise, Keanu Reeves and Will Smith able to command salaries in excess of $100M for a single film [11], salaries are clearly not insignificant. It must be very tempting for movie producers to think: why not invest a bit more in special effects and just create a whole new actor from scratch? After all, that’s precisely what Walt Disney did with Mickey Mouse, who never got paid a dime.

How long is it before we cross a red line and a movie’s special effects go the whole way, using CGI to create the characters in a completely AI-scripted and AI-generated film? Huge upfront costs (for now, although these will drop) but no ongoing costs of having to pay actors for re-runs or streaming rights, etc.

I don’t know how long it might take or whether we will ever get there. Maybe the technology will never be good enough (unlikely) or maybe we will wake up to what we are doing and create some sort of legal/ethical framework that prevents such things occurring (equally unlikely I fear).

We are beginning to rub up against some pretty fundamental questions not just about how we should be using AI, especially in the creative industries, but also what it actually means to be human if we let our machines overwhelm us to the extent that our creative selves are usurped by the very things that creativity has built.

This is a hugely important question which I hope to explore in future posts. 

Notes

  1. Sir Michael Parkinson obituary, https://www.theguardian.com/media/2023/aug/17/sir-michael-parkinson-obituary
  2. Michael Parkinson speaks out on Savile scandal, https://www.itv.com/news/calendar/2012-12-01/michael-parkinson-speaks-out-on-savile-scandal
  3. AI-replicated Michael Parkinson to host ‘completely unscripted’ celebrity podcast, https://news.sky.com/story/ai-replicated-michael-parkinson-to-host-completely-unscripted-celebrity-podcast-13243556
  4. Michael Parkinson is back, with an AI voice that can fool even his own family, https://www.theguardian.com/media/2024/oct/26/michael-parkinson-virtually-ai-replica-chatshow
  5. Deep Fusion Films is a dynamic production company at the forefront of television and film, https://www.deepfusionfilms.com/about
  6. Succession star Brian Cox on the use of AI to replicate actors: ‘It’s a human rights issue’, https://news.sky.com/story/succession-star-brian-cox-on-the-use-of-ai-to-replicate-actors-its-a-human-rights-issue-12999168
  7. AI-generated videos of child sexual abuse a ‘stark vision of the future’, https://www.iwf.org.uk/news-media/news/ai-generated-videos-of-child-sexual-abuse-a-stark-vision-of-the-future/
  8. Largest Companies by Marketcap, https://companiesmarketcap.com
  9. Mark Zuckerberg’s Letter: Meta’s Vision Unveiled, https://medium.com/@ahmedofficial588/mark-zuckerbergs-letter-meta-s-vision-unveiled-2b48a57a2743
  10. Facebook User & Growth Statistics, https://backlinko.com/facebook-users
  11. 20 Highest Paid Actors For a Single Film, https://thecinemaholic.com/highest-paid-actors-for-a-single-film/

Why the blogosphere still matters

John Naughton, in his weekly Observer column, raises an interesting point in a recent opinion piece. For many, the internet – or more specifically the ‘web’ and what it has morphed into – has become little more than a proliferation of walled gardens owned by the likes of Google, X, Facebook and Substack, for whom “free speech is something that is algorithmically curated while the speakers are intensively surveilled and their data is mined for advertising purposes”.

He reminds us that before all of these horticultural abominations arose on the back of what we now call Web 2.0, there was a truly safe, open space known as the blogosphere. It could genuinely be a modern realisation of what Jürgen Habermas called the “public sphere” because it was open to all, everything was discussable, and social rank didn’t determine who was allowed to speak.

Naughton’s column has made me want to revisit this blog, as I realise the importance of the freedom to own my own opinions and ideas and not have them filtered and surveilled by the very tech overlords whom I despise and who continue to be the antithesis of what an open and democratic web should be about.

Now is an even more important time for those of us with any kind of tech background and knowledge to be rising up against those people (the so-called tech bros) who want to create the world according to their own particular vision and who have managed to monopolise the very platforms that most people use to try and articulate their thoughts and ideas.

Treat this as the start of a revival of Software Architecture Zen where I will attempt to help cut through the tech-hype we are being bombarded with and deliver a more rational and realistic view on where technology may be taking us.

To paraphrase a well-known (tech) TV commercial – let’s try and see how 2024 does not need to be like 1984.

Why it’s different this time

Image Created Using Adobe Photoshop and Firefly

John Templeton, the American-born British stock investor, once said: “The four most expensive words in the English language are, ‘This time it’s different.’”

Templeton was referring to people and institutions who had invested in the next ‘big thing’ believing that this time it was different, the bubble could not possibly burst and their investments were sure to be safe. But then, for whatever reason, the bubble did burst and fortunes were lost.

Take as an example the dot-com boom of the late 1990s. Previously unimagined technologies that showed no sign of failing meant investors poured their money into the boom. Then it all collapsed, and many fortunes were lost as the Nasdaq dropped 75 per cent.

It seems to be an immutable law of economics that busts will follow booms as sure as night follows day. The trick, then, is to ride the boom and exit your investment at the right time – not too soon and not too late, to paraphrase Goldilocks.

Most recently the phrase “this time it’s different” is being applied to the wave of AI technology which has been hitting our shores, especially since the widespread release of large language model technologies which current AI tools like OpenAI’s ChatGPT, Google’s PaLM, and Meta’s LLaMA use as their underpinning.

Which brings me to the book The Coming Wave by Mustafa Suleyman.

Suleyman was the co-founder of DeepMind (now owned by Google) and is currently CEO of Inflection, an AI ‘studio’ that, according to its company blurb, is “creating a personal AI for everyone”.

The Coming Wave provides us with an overview not just of the capabilities of current AI systems but also contains a warning, which Suleyman refers to as the containment problem. If our future is to depend on AI technology (which it increasingly looks like it will, given that, according to Suleyman, LLMs are the “fastest diffusing consumer models we have ever seen”), how do you make it a force for good rather than evil, whereby a bunch of ‘bad actors’ could imperil our very existence? In other words, how do you monitor, control and limit (or even prevent) this technology?

Suleyman’s central premise in this book is that the coming technological wave of AI is different from any that have gone before for five reasons, which together make containment very difficult (if not impossible). In summary, these are:

  • Reason #1: Asymmetry – the potential imbalances or disparities caused by artificial intelligence systems being able to transfer extreme power from state to individual actors.
  • Reason #2: Exponentiality – the phenomenon where the capabilities of AI systems, such as processing power, data storage, or problem-solving ability, increase at an accelerating pace over time. This rapid growth is often driven by breakthroughs in algorithms, hardware, and the availability of large datasets (a toy illustration of this compounding follows the list below).
  • Reason #3: Generality – the ability of an artificial intelligence system to apply its knowledge, skills, or capabilities across a wide range of tasks or domains.
  • Reason #4: Autonomy – the ability of an artificial intelligence system or agent to operate and make decisions independently, without direct human intervention.
  • Reason #5: Technological Hegemony – the malignant concentrations of power that inhibit innovation in the public interest, distort our information systems, and threaten our national security.
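To make the exponentiality point concrete, here is a toy sketch in Python. The two-year doubling period is a hypothetical assumption chosen purely for illustration, not a figure from the book; the point is simply how quickly steady doubling compounds:

```python
# Toy illustration of exponential growth in a capability.
# The doubling period of 2 years is a hypothetical assumption, chosen only
# to show how quickly repeated doubling compounds.
doubling_period_years = 2
capability = 1.0  # arbitrary starting units

for year in range(0, 21, doubling_period_years):
    print(f"year {year:2d}: capability x{capability:,.0f}")
    capability *= 2  # one doubling per period
```

Run it and a modest-sounding doubling every two years turns into a thousandfold increase inside twenty years – one reason why linear intuitions about regulation struggle to keep up.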

Suleyman’s book goes into each of these attributes in detail and I do not intend to repeat any of that here (buy the book or watch his explainer video). Suffice it to say, however, that collectively these attributes mean that this technology is about to deliver nothing less than a radical proliferation of power which, if unchecked, could lead to one of two possible (and equally undesirable) outcomes:

  1. A surveillance state (which China is currently building and exporting).
  2. An eventual catastrophe born of runaway development.

Other technologies have had one or maybe two of these capabilities, but I don’t believe any have had all five, certainly not at the level AI has. For example, electricity was a general-purpose technology with multiple applications, but even now individuals cannot (easily) build their own generators, and there is certainly no autonomy in power generation. The internet comes closest to having all five attributes, but it is not currently autonomous (though AI itself threatens to change that).

To be fair, Suleyman does not just present us with what, by any measure, is a truly wicked problem; he also offers a ten-point plan for how we might begin to address the containment problem and at least dilute the effects the coming wave might have. These measures stretch from building in safety features to prevent AI from acting autonomously in an uncontrolled fashion, through regulation by governments, right up to cultivating a culture around this technology that treats it with caution from the outset rather than adopting the “move fast and break things” philosophy of Mark Zuckerberg. Again, get the book to find out more about what these measures might involve.

My more immediate concerns are not based solely on the five features described in The Coming Wave but on a sixth feature I have observed, which I believe is equally important and increasingly overlooked in our rush to embrace AI. This is:

  • Reason #6: Techno-paralysis – the state of being overwhelmed or paralysed by the rapid pace of technological change.

As with the five features of Suleyman’s coming wave, I see two equally undesirable outcomes of techno-paralysis:

  1. People become so overwhelmed and fearful, because of their lack of understanding of these technological changes, that they choose to withdraw from their use entirely. Maybe not just by “dropping out” in an attempt to return to what they see as a better world, one where they had more control, but by violently protesting against and attacking the people and the organisations they see as being responsible for this “progress”. I’m talking about the Luddites here, but on a scale that can be achieved using the organisational capabilities of our hyper-connected world.
  2. Rather than fighting against techno-paralysis we become irretrievably sucked into the systems that are creating and propagating these new technologies and, to coin a phrase, “drink the Kool-Aid”. The former Greek finance minister and maverick economist Yanis Varoufakis refers to these systems, and the companies behind them, as the technofeudalists. We have become subservient to these tech overlords (i.e. Amazon, Alphabet, Apple, Meta and Microsoft) by handing over our data to their cloud spaces. By spending all of our time scrolling and browsing digital media we are acting as ‘cloud-serfs’ – working as unpaid producers of data to disproportionately benefit these digital overlords.

There is a reason why the big-five tech overlords are spending hundreds of billions of dollars between them on AI research, LLM training and acquisitions. For each of them this is the next beachhead that must be conquered and occupied, the spoils of which will be huge for those who get there first – not just in terms of potential revenue but also in terms of new cloud-serfs captured. We run the risk of AI being the new tool of choice in weaponising the cloud to capture larger portions of our time in servitude to these companies, who produce ever more ingenious ways of controlling our thoughts, actions and minds.

So how might we deal with this potentially undesirable outcome of the coming wave of AI? Surely it has to be through education? Not just of our children but of everyone who has a vested interest in a future where we control our AI and not the other way round.

Last November the UK government’s Department for Education (DfE) released the results of a Call for Evidence on the use of GenAI in education. The report highlighted the following benefits:

  • Freeing up teacher time (e.g. on administrative tasks) to focus on better student interaction.
  • Improving teaching and education materials to aid creativity by suggesting new ideas and approaches to teaching.
  • Helping with assessment and marking.
  • Adaptive teaching by analysing students’ performance and pace, and to tailor educational materials accordingly.
  • Better accessibility and inclusion, e.g. for SEND students, teaching materials could be more easily and quickly differentiated for their specific needs.

whilst also highlighting some potential risks including:

  • An over-reliance on AI tools (by students and staff) which would compromise their knowledge and skill development by encouraging them to passively consume information.
  • Tendency of GenAI tools to produce inaccurate, biased and harmful outputs.
  • Potential for plagiarism and damage to academic integrity.
  • Danger that AI will be used for the replacement or undermining of teachers.
  • Exacerbation of digital divides and problems of teaching AI literacy in such a fast changing field.

I believe that to address these concerns effectively, legislators should consider implementing the following seven point plan:

  1. Regulatory Framework: Establish a regulatory framework that outlines the ethical and responsible use of AI in education. This framework should address issues such as data privacy, algorithm transparency, and accountability for AI systems deployed in educational settings.
  2. Teacher Training and Support: Provide professional development opportunities and resources for educators to effectively integrate AI tools into their teaching practices. Emphasize the importance of maintaining a balance between AI-assisted instruction and traditional teaching methods to ensure active student engagement and critical thinking.
  3. Quality Assurance: Implement mechanisms for evaluating the accuracy, bias, and reliability of AI-generated content and assessments. Encourage the use of diverse datasets and algorithms to mitigate the risk of producing biased or harmful outputs.
  4. Promotion of AI Literacy: Integrate AI literacy education into the curriculum to equip students with the knowledge and skills needed to understand, evaluate, and interact with AI technologies responsibly. Foster a culture of critical thinking and digital citizenship to empower students to navigate the complexities of the digital world.
  5. Collaboration with Industry and Research: Foster collaboration between policymakers, educators, researchers, and industry stakeholders to promote innovation and address emerging challenges in AI education. Support initiatives that facilitate knowledge sharing, research partnerships, and technology development to advance the field of AI in education.
  6. Inclusive Access: Ensure equitable access to AI technologies and resources for all students, regardless of their gender, socioeconomic background or learning abilities. Invest in infrastructure and initiatives to bridge the digital divide and provide support for students with special educational needs and disabilities (SEND) to benefit from AI-enabled educational tools.
  7. Continuous Monitoring and Evaluation: Regularly monitor and evaluate the implementation of AI in education to identify potential risks, challenges, and opportunities for improvement. Collect feedback from stakeholders, including students, teachers, parents, and educational institutions, to inform evidence-based policymaking and decision-making processes.

The coming AI wave cannot be another technology that we let wash over and envelop us. Indeed, towards the end of his book, Suleyman himself makes the following observations…

Technologists cannot be distant, disconnected architects of the future, listening only to themselves.

Technologists must also be credible critics who “…must be practitioners. Building the right technology, having the practical means to change its course, not just observing and commenting, but actively showing the way, making the change, effecting the necessary actions at source, means critics need to be involved.”

If we are to avoid widespread techno-paralysis caused by this coming wave then we need a 21st-century education system that is capable of creating digital citizens who can live and work in this brave new world.

Forty years of Mac

Screenshot from Apple’s “1984” ad directed by Sir Ridley Scott

Forty years ago today (24th January 1984) a young Steve Jobs took to the stage at the Flint Center in Cupertino, California to introduce the Apple Macintosh desktop computer, and the world found out “why 1984 won’t be like ‘1984’”.

The Apple Macintosh, or ‘Mac’, boasted cutting-edge specifications for its day. It had an impressive 9-inch monochrome display with a resolution of 512 x 342 pixels, a 3.5-inch floppy disk drive, and 128 KB of RAM. The 32-bit Motorola 68000 microprocessor powered this compact yet powerful machine, setting new standards for graphical user interfaces and ease of use.

The original Apple Macintosh

The Mac had been gestating in Steve Jobs’ restless and creative mind for at least five years but did not begin its difficult journey to market until 1981, when Jobs recruited a team of talented individuals, including visionaries like Jef Raskin, Andy Hertzfeld, and Bill Atkinson. The collaboration of these creative minds produced a computer that not only revolutionized the industry but also left an indelible mark on the way people interact with technology.

The Mac was one of the first personal computers to feature a graphical user interface (Microsoft Windows 1.0 was not released until November 1985) as well as the use of icons, windows, and a mouse for navigation instead of a command-line interface. This approach significantly influenced the development of GUIs across various operating systems.

Possibly of more significance is that lessons learned from the Mac have influenced, and continue to influence, the development of subsequent Apple products. Steve Jobs’ (and later Jony Ive’s) commitment to simplicity and elegance in design became a guiding principle for products like the iPod, iPhone, iPad, and MacBook, and is what really makes the Apple ecosystem cohere (as well as allowing Apple to charge the prices it does).

One of the pivotal moments in the Mac’s launch was the now famous “1984” ad, which had its one and only public airing two days earlier, during a Super Bowl XVIII commercial break, and built huge anticipation for the groundbreaking product.

I was a relatively late convert to the cult of Apple, not buying my first Apple computer (a MacBook Pro) until 2006. I still have that machine and periodically start it up for old times’ sake. It still works perfectly, albeit very slowly and running a now very old copy of macOS.

A more significant event, for me at least, was that a year after the Mac launch I moved to Cupertino to take a job as a software engineer at a company called ROLM, a telecoms provider that had just been bought by IBM and was looking to move into Europe. ROLM was on a recruiting drive to hire engineers from Europe who knew how to develop products for that marketplace, and I had been lucky enough to have the right skills (digital signalling systems) at the right time.

At the time of my move I had some awareness of Apple but got to know it more as I ended up living only a few blocks from Apple’s HQ on Mariani Avenue, Cupertino (I lived just off Stevens Creek Boulevard which used to be chock-full of car dealerships at that time).

The other slight irony is that IBM (ROLM’s owner) was of course “Big Brother” in Apple’s ad, and the young girl with the sledgehammer was out to break its then virtual monopoly on personal computers. IBM no longer makes personal computers, whilst Apple has obviously gone from strength to strength.

Happy Birthday Mac!

Enchanting Minds and Machines – Ada Lovelace, Mary Shelley and the Birth of Computing and Artificial Intelligence

Today (10th October 2023) is Ada Lovelace Day. In this blog post I discuss why Ada Lovelace (and indeed Mary Shelley, who was indirectly connected to Ada) is as relevant today as ever.

Villa Diodati, Switzerland

In the summer of 1816 [1], five young people holidaying at the Villa Diodati near Lake Geneva in Switzerland found their vacation rudely interrupted by a torrential downpour which trapped them indoors. Faced with the monotony of confinement, one member of the group proposed an ingenious idea to break the boredom: each of them should write a supernatural tale to captivate the others.

Among these five individuals were some notable figures of their time: Lord Byron, the celebrated English poet, and his friend and fellow poet Percy Shelley. Alongside them were Shelley’s wife, Mary; her stepsister Claire Clairmont, who happened to be Byron’s mistress; and Byron’s physician, Dr. Polidori.

Lord Byron, burdened by the legal disputes surrounding his separation and the financial arrangements for his newborn daughter, Ada, found it impossible to fully engage in the challenge (despite having suggested it). However, both Dr. Polidori and Mary Shelley embraced the task with fervour, creating stories that not only survived the holiday but continue to thrive today. Polidori’s tale would later appear as The Vampyre: A Tale, serving as the precursor to many of the modern vampire movies and TV programmes we know today. Mary Shelley’s story, which had come to her in a haunting nightmare during the stay, gave birth to the core concept of Frankenstein, published in 1818 as Frankenstein: or, The Modern Prometheus. As Jeanette Winterson asserts in her book 12 Bytes [2], Frankenstein is not just a story about “the world’s most famous monster; it’s a message in a bottle.” We’ll see later why this message resounds even more today.

First, though, we must shift our focus to another side of Lord Byron’s tumultuous life: his separation settlement with his wife, Annabella Milbanke. In this settlement, Byron expressed his desire to shield his daughter from the allure of poetry, an inclination that suited Annabella perfectly, as one poet in the family was more than sufficient for her. Instead, young Ada received a mathematics tutor, whose duty extended beyond teaching mathematics to eradicating any poetic inclinations Ada might have inherited. Could this be an early instance of the enforced segregation between the arts and STEM disciplines, I wonder?

Ada excelled in mathematics, and her exceptional abilities, combined with her family connections, earned her an invitation, at the age of 17, to a London soirée hosted by Charles Babbage, the Lucasian Professor of Mathematics at Cambridge. In Babbage’s drawing room, Ada encountered a model of his “Difference Engine”, a contraption that so enraptured her that she spent the evening engrossed in conversation with Babbage about its intricacies. Babbage, in turn, was elated to have found someone who shared his enthusiasm for his machine and generously shared his plans with Ada. He later extended an invitation for her to collaborate with him on its successor, known as the “Analytical Engine”.

A Model of Charles Babbage’s Analytical Engine

This visionary contraption boasted the radical notion of programmability, utilising punched cards like those employed in weaving machines of that era. In 1842, Ada Lovelace (as she had become by then) was tasked with translating a French transcript of one of Babbage’s lectures into English. However, Ada went above and beyond mere translation, infusing the document with her own groundbreaking ideas about Babbage’s computing machine. These contributions proved to be more extensive and profound than the original transcript itself, solidifying Ada Lovelace’s place in history as a pioneer in the realm of computer science and mathematics.

In one of these notes she wrote an ‘algorithm’ for the Analytical Engine to compute Bernoulli numbers: the first published algorithm (AKA computer program) ever! Although Babbage’s engine was too far ahead of its time to be built with the technology of the day, Ada is still credited as being the world’s first computer programmer.
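Ada’s Note G program was, of course, laid out as a table of operations for the Analytical Engine rather than in anything we would recognise today as a programming language, but the mathematics she tabulated translates into a few lines of modern code. The following is my own sketch of the underlying recurrence in Python, not a transcription of her actual program:

```python
from fractions import Fraction
from math import comb  # requires Python 3.8+

def bernoulli(n):
    """Return the Bernoulli numbers B_0..B_n as exact fractions, using
    the standard recurrence: sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        total = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-total / (m + 1))  # solve the recurrence for B_m
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```

But there is another twist to this story that brings us closer to the present day.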

Fast forward to the University of Manchester, 1950. Alan Turing, the now feted but ultimately doomed mathematician who led the team that cracked intercepted, coded messages sent by the German navy in WWII, has just published a paper called Computing Machinery and Intelligence [3]. This was one of the first papers ever written on artificial intelligence (AI) and it opens with the bold premise: “I propose to consider the question, ‘Can machines think?’”.

Alan Turing

Turing did indeed believe computers would one day (he thought in about 50 years’ time, in the year 2000) be able to think, and devised his famous “Turing Test” as a way of verifying his proposition. In his paper Turing also felt the need to “refute” arguments he thought might be made against his bold claim, including one made by none other than Ada Lovelace over one hundred years earlier. In the same notes where she wrote the world’s first computer algorithm, Lovelace also said:

“It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis, but it has no power of anticipating any analytical relations or truths.”

Although Lovelace was optimistic about the power of the Analytical Engine, should it ever be built, creative thought was not something she believed it would ever be capable of.

Turing disputed Lovelace’s view on the grounds that she could have had no idea of the enormous speed and storage capacity of modern (remember, this was 1950) computers, which make them a match for the human brain and thus, like the brain, capable of processing their stored information to arrive at sometimes “surprising” conclusions. To quote Turing directly from his paper:

“It is a line of argument we must consider closed, but it is perhaps worth remarking that the appreciation of something as surprising requires as much of a ‘creative mental act’ whether the surprising event originates from a man, a book, a machine or anything else.”

Which brings us bang up to date with the current arguments raging about whether systems like ChatGPT, DALL-E or Midjourney are creative, or even sentient in some way. Has Turing’s prophecy finally been fulfilled, or was Ada Lovelace right all along: computers can never be truly creative, because creativity requires not just a reconfiguration of what someone else has made but original thought based on actual human experience?

One undeniable truth prevails in this narrative: Ada was good at working with what she didn’t have. Not only was Babbage unable to build his machine, meaning Lovelace never had one to experiment with, she also lacked male privilege and a formal education (a scarce commodity for women then), a stark reminder of the limitations imposed on her gender during that time.

Have things moved on today for women and young girls? A glimpse into the typical composition of a computer science classroom, be it at the secondary or tertiary level, prompts the question: have we truly evolved beyond the constraints of the past? And if not, why does this gender imbalance persist?

Over the past five or more years many studies and reports have been published on the problem of too few women entering STEM careers, and we seem to be gradually homing in on not just what the core issues are but also how to address them. What seems to be lacking is the will, or the funding (or both), to make it happen.

So, what to do? First, some facts:

  1. Girls lose interest in STEM as they get older. A report from Microsoft back in 2018 found that confidence in coding wanes as girls get older, highlighting the need to connect STEM subjects to real-world people and problems by tapping into girls’ desire to be creative [4].
  2. Girls and young women do not associate STEM jobs with being creative. Most girls and young women describe themselves as creative and want to pursue a career that helps the world. They do not associate STEM jobs with doing either of these things [4].
  3. Female students rarely consider a career in technology as their first choice. Only 27% of female students say they would consider a career in technology, compared to 61% of males, and only 3% say it is their first choice [5].
  4. Most students (male and female) can’t name a famous female working in technology. A lack of female role models is also reinforcing the perception that a technology career isn’t for them. Only 22% of students can name a famous female working in technology. Whereas two thirds can name a famous man [5].
  5. Female pupils feel STEM subjects, though highly paid, are not ‘for them’. Female Key Stage 4 pupils perceived that studying STEM subjects was potentially a more lucrative choice in terms of employment. However, when compared to male pupils, they enjoyed other subjects (e.g., arts and English) more [6].

The solutions to these issues are now well understood:

  1. Increasing the number of STEM mentors and role models – including parents – to help build young girls’ confidence that they can succeed in STEM. Girls who are encouraged by their parents are twice as likely to stay in STEM, and in some areas like computer science, dads can have a greater influence on their daughters than mums yet are less likely than mothers to talk to their daughters about STEM.
  2. Creating inclusive classrooms and workplaces that value female opinions. It’s important to celebrate the stories of women who are in STEM right now, today.
  3. Providing teachers with more engaging and relatable STEM curriculum, such as 3D and hands-on projects, the kinds of activities that have proven to help keep girls’ interest in STEM over the long haul.
  4. Multiple interventions, starting early and carrying on throughout school, are important ways of ensuring girls stay connected to STEM subjects. Interventions are ideally done by external people working in STEM who can repeatedly reinforce key messages about the benefits of working in this area. These people should also be able to explain the importance of creativity and how working in STEM can change the world for the better [7].
  5. Schoolchildren (all genders) should be taught to understand how thinking works, from neuroscience to cultural conditioning; how to observe and interrogate their thought processes; and how and why they might become vulnerable to disinformation and exploitation. Self-awareness could turn out to be the most important topic of all [8].

Before we finish, let’s return to that “message in a bottle” that Mary Shelley sent out to the world over two hundred years ago. As Jeanette Winterson points out:

“Mary Shelley may be closer to the world that is to become than either Ada Lovelace or Alan Turing. A new kind of life form may not need to be human-like at all and that’s something that is achingly, heartbreakingly, clear in ‘Frankenstein’. The monster was originally designed to be like us. He isn’t and can’t be. Is that the message we need to hear?” [2].

If we are to heed Shelley’s message from the past, the rapidly evolving nature of AI means we need people from as diverse a set of backgrounds as possible. These should include people who can bring constructive criticism to the way technology is developed and who have a deeper understanding of what people really need, rather than what they think they want, from their tech. Women must become essential players in this: not just in developing the technology, but also in guiding and critiquing its adoption and use. As Mustafa Suleyman (co-founder of DeepMind) says in his book The Coming Wave [10]:

Credible critics must be practitioners. Building the right technology, having the practical means to change its course, not just observing and commenting, but actively showing the way, making the change, effecting the necessary actions at source, means critics need to be involved.

As we move away from the mathematical nature of computing and programming towards one driven by so-called descriptive programming [9], it is going to be important that we include people who are not technical but who are creative, empathetic to people’s needs, and perhaps even understand the limits we should place on technology. The four Cs (creativity, critical thinking, collaboration and communication) are skills we all need to adopt, and ones at which women in particular seem to excel.
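To make the contrast concrete, here is a minimal sketch, in Python, of the difference between conventional and descriptive programming. The call_llm function is entirely hypothetical, a stand-in for whichever large language model API you happen to use; the point is that the second “program” is a natural-language description of the desired outcome rather than a step-by-step procedure:

```python
# Conventional programming: spell out *how* to do the task, step by step.
def crude_summary(text: str, max_words: int = 50) -> str:
    words = text.split()
    return " ".join(words[:max_words])

def call_llm(prompt: str) -> str:
    # Hypothetical stub: a real implementation would call an LLM API here.
    raise NotImplementedError

# Descriptive programming: describe *what* you want and delegate the how.
def descriptive_summary(text: str) -> str:
    prompt = (
        "Summarise the following passage in two plain-English sentences "
        "that a secondary-school student could understand:\n\n" + text
    )
    return call_llm(prompt)
```

Notice that the skills the second version calls on are linguistic and empathetic (who is the summary for, and what register should it use?) rather than purely mathematical.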

On this, Ada Lovelace Day 2023, we should not just celebrate Ada’s achievements all those years ago but also recognize how she fought back against the prejudices and severe restrictions on education that women like her faced. Ada pushed ahead regardless and became a true pioneer and founder of a whole industry that did not really get going until over a hundred years after her pioneering work. Ada, the world’s first computer programmer, should be the role model par excellence that all girls and young women look to for inspiration, not just today but for years to come.

References

  1. Mary Shelley, Frankenstein and the Villa Diodati, https://www.bl.uk/romantics-and-victorians/articles/mary-shelley-frankenstein-and-the-villa-diodati
  2. 12 Bytes – How artificial intelligence will change the way we live and love, Jeanette Winterson, Vintage, 2022.
  3. Computing Machinery and Intelligence, A. M. Turing, Mind, Vol. 59, No. 236. (October 1950), https://www.cs.mcgill.ca/~dprecup/courses/AI/Materials/turing1950.pdf
  4. Why do girls lose interest in STEM? New research has some answers — and what we can do about it, Microsoft, 13th March 2018, https://news.microsoft.com/features/why-do-girls-lose-interest-in-stem-new-research-has-some-answers-and-what-we-can-do-about-it/
  5. Women in Tech – Time to close the gender gap, PwC, https://www.pwc.co.uk/who-we-are/her-tech-talent/time-to-close-the-gender-gap.html
  6. Attitudes towards STEM subjects by gender at KS4, Department for Education, February 2019, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/913311/Attitudes_towards_STEM_subjects_by_gender_at_KS4.pdf
  7. Applying Behavioural Insights to increase female students’ uptake of STEM subjects at A Level, Department for Education, November 2020, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/938848/Applying_Behavioural_Insights_to_increase_female_students__uptake_of_STEM_subjects_at_A_Level.pdf
  8. How we can teach children so they survive AI – and cope with whatever comes next, George Monbiot, The Guardian, 8th July 2023, https://www.theguardian.com/commentisfree/2023/jul/08/teach-children-survive-ai
  9. Prompt Engineering, Microsoft, 23rd May 2023, https://learn.microsoft.com/en-us/semantic-kernel/prompt-engineering/
  10. The Coming Wave, Mustafa Suleyman, The Bodley Head, 2023.

Machines like us? – Part II

Brain image by Elisa from Pixabay. Composition by the author

[Creativity is] the relationship between a human being and the mysteries of inspiration.

Elizabeth Gilbert – Big Magic

Another week and another letter from a group of artificial intelligence (AI) experts and public figures expressing their concern about the risk of AI. This one has really gone mainstream with Channel 4 News here in the UK having it as their lead story on their 7pm broadcast. They even managed to get Max Tegmark as well as Tony Cohn – professor of automated reasoning at the University of Leeds – on the programme to discuss this “risk of extinction”.

Whilst I am really pleased that the risks from AI are finally being discussed, we must be careful not to focus too much on the Terminator-like existential threat that some people are predicting if we don’t mitigate the risks in some way. There are certainly some scenarios which could lead to an artificial general intelligence (AGI) causing destruction on a large scale, but I don’t believe these are imminent, or as likely to happen as the death and destruction that could be caused by pandemics, climate change or nuclear war. Instead, some of the more likely negative impacts of AGI might be:

It’s worth pointing out that none of the above scenarios involves AIs suddenly deciding for themselves that they are going to wreak havoc and destruction; all would involve humans being somewhere in the loop that initiates such actions.

It’s also worth noting that there are fairly serious rebuttals emerging to the general hysterical fear and paranoia being promulgated by the aforementioned letter. Marc Andreessen, for example, says that what “AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here”.

Whilst it is possible that AI could be used as a force for good, is it, as Naomi Klein points out, really going to happen under our current economic system? A system that is built to maximize the extraction of wealth and profit for a small group of hyper-wealthy companies and individuals. Is “AI – far from living up to all those utopian hallucinations – much more likely to become a fearsome tool of further dispossession and despoliation”? I wonder if this topic will be on the agenda for the proposed global AI ‘safety measure’ summit in the autumn.

Whilst both sides of this discussion have valid arguments for and against AI, as discussed in the first of these posts, what I am more interested in is not whether we are about to be wiped out by AI but how we as humans can coexist with this technology. AI is not going to go away because of a letter written by a group of experts. It may get legislated against, but we still need to figure out how we are going to live with artificial intelligence.

In my previous post I discussed whether AI is actually intelligent as measured against Tegmark’s definition of intelligence, namely the: “ability to accomplish complex goals”. This time I want to focus on whether AI machines can actually be creative.

As you might expect, just as with intelligence, there are many, many definitions of creativity. My current favourite is the one by Elizabeth Gilbert quoted above; however, no discussion of creativity can be had without mentioning the late Sir Ken Robinson’s definition: “Creativity is the process of having original ideas that have value”.

In the above short video, Robinson notes that imagination is what is distinctive about humanity. Imagination is what enables us to step outside our current space and bring to mind things that are not present to our senses. In other words, imagination is what helps us connect our past with the present and even the future. We have, quite possibly (or possibly not) uniquely among all the animals that inhabit the earth, the ability to imagine “what if”. But to be creative you actually have to do something. It’s no good being imaginative if you cannot turn those thoughts into actions that create something new (or at least different) that is of value.

Professor Margaret Ann Boden, Research Professor of Cognitive Science at the University of Sussex, defines creativity as “the ability to come up with ideas or artefacts that are new, surprising or valuable”. I would couple this definition with a quote from the marketeer and blogger Seth Godin who, when discussing what architects do, says they “take existing components and assemble them in interesting and important ways”. This too is an essential aspect of being creative: using what others have done and combining those things in different ways.

It’s important to say, however, that humans don’t just pass ideas around and recombine them – we also occasionally generate new ideas that are entirely left-field, through processes we do not understand.

Maybe part of the reason for this is because, as the writer William Deresiewicz says:

AI operates by making high-probability choices: the most likely next word, in the case of written texts. Artists—painters and sculptors, novelists and poets, filmmakers, composers, choreographers—do the opposite. They make low-probability choices. They make choices that are unexpected, strange, that look like mistakes. Sometimes they are mistakes, recognized, in retrospect, as happy accidents. That is what originality is, by definition: a low-probability choice, a choice that has never been made.

William Deresiewicz, Why AI Will Never Rival Human Creativity
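As a toy illustration of Deresiewicz’s point (the vocabulary and probabilities below are invented for the example), compare a model that always takes the highest-probability next word with one that is allowed to make lower-probability, more “surprising” choices:

```python
import random

# Invented next-word distribution for the prompt "The sun rose over the ..."
next_word = {"horizon": 0.55, "city": 0.25, "mountains": 0.15, "abattoir": 0.05}

def greedy(dist):
    """The high-probability choice: always pick the likeliest word."""
    return max(dist, key=dist.get)

def sample(dist, temperature=1.0):
    """Temperature > 1 flattens the distribution, making low-probability
    ('surprising') words more likely; this mirrors how sampling
    temperature works in real language models."""
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(list(dist), weights=weights)[0]

print(greedy(next_word))                   # always 'horizon'
print(sample(next_word, temperature=2.0))  # occasionally 'abattoir'
```

Even the sampled word, though, is still drawn from a distribution over what already exists; Deresiewicz’s argument is that the artist’s low-probability choice is something else entirely.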

When we think of creativity, most of us associate it with some form of overt artistic pursuit such as painting, composing music, writing fiction, sculpting or photography. The act of being creative is much more than this, however. A person can be a creative thinker (and doer) even if they never pick up a paintbrush, a musical instrument or a camera. You are being creative when you decide on a catchy slogan for your product; you are being creative when you pitch your own idea for a small business; and most of all, you are being creative when you are presented with a problem and come up with a unique solution. Referring to the image at the top of this post, who is the more creative: Alan Turing, who invented a code-breaking machine that historians reckon shortened World War II by at least two years, saving millions of lives, or Picasso, whose painting Guernica expressed his outrage against war?

It is for these very human reasons about what creativity is that AI will never be truly creative or rival our creativity. True creativity (not just a mashup of someone else’s ideas) only has meaning if it has an injection of human experience, emotion, pain, suffering, call it what you will. When Nick Cave was asked what he thought of ChatGPT’s attempt at writing a song in the style of Nick Cave, he answered this:

Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.

Nick Cave, The Red Hand Files

Imagination, intuition, influence and inspiration (the four I’s of creativity) are all very human characteristics that underpin our creative souls. In a world where having original ideas sets humans apart from machines, thinking creatively is more important than ever, and educators have a responsibility to foster, not stifle, their students’ creative minds. Unfortunately our current education system is not a great model for doing this. We have a system whose focus is on learning facts and passing exams, one that will never prepare people for meaningful jobs in which machines do the grunt work whilst humans do what they do best: be CREATIVE. If we don’t change this, the following may well become true:

In tomorrow’s workplace, either the human is telling the robot what to do or the robot is telling the human what to do.

Alec Ross, The Industries of the Future

Machines like us? – Part I

From The Secret of the Machines, Artist unknown

Our ambitions run high and low – for a creation myth made real, for a monstrous act of self love. As soon as it was feasible, we had no choice, but to follow our desires and hang the consequences.

Ian McEwan, Machines Like Me

I know what you’re thinking – not yet another post on ChatGPT! Haven’t enough words been written (or machine-generated) on this topic in the last few months to make the addition of any more completely unnecessary? What else is there to possibly say?

Well, we’ll see.

First, just in case you have been living in a cave in North Korea for the last year, what is ChatGPT? Let’s ask it…

ChatGPT is an AI language model developed by OpenAI. It is based on the GPT (Generative Pre-trained Transformer) architecture, specifically GPT-3.5. GPT-3.5 is a deep learning model that has been trained on a diverse range of internet text to generate human-like responses to text prompts.

ChatGPT response to the question: “What is ChatGPT?”.

In this post, I am not interested in what use cases ChatGPT is or is not good for. I’m not even particularly interested in what jobs ChatGPT is going to replace in the coming years. Let’s face it, if the CEO of IBM, Arvind Krishna, is saying “I could easily see 30 per cent of [non-customer-facing roles] getting replaced by AI and automation over a five-year period” then many people are already going to be worried, so I’m not going to add to those fears.

I see much of what Krishna predicts as inevitable. Unless the world takes note of the recent letter from the tech/AI ‘great and the good’ (which appears to have some fake signatories anyway) then the simple fact is that if you can algorithmically capture a solution to a problem, people who solve those problems for a living are probably going to get replaced by such algorithms.

What interests me is whether deep learning models like ChatGPT really are intelligent and, more significantly, whether they are creative. In this post I’ll consider intelligence, and in a later post, creativity.

It turns out that intelligence is actually quite a hard thing to define. As Max Tegmark, president of the Future of Life Institute, says: “there’s no agreement on what intelligence is even amongst intelligent intelligence researchers!” Helpfully however, in his book Life 3.0, Tegmark puts us out of our misery and defines intelligence as the: “ability to accomplish complex goals”. This, as even Tegmark acknowledges, is a very broad definition. He also claims this definition subsumes the ability to “apply knowledge and skills” as this could be a goal one sets oneself.

Having defined intelligence it then becomes relatively easy, for Tegmark at least, to define artificial intelligence (AI) as: “non biological intelligence”; general intelligence as the: “ability to accomplish virtually any goal, including learning”; artificial general intelligence (AGI) as the: “ability to accomplish any cognitive task at least as well as humans” and superintelligence as: “general intelligence far beyond human level”.

So, given the relatively low bar that these definitions are set at, where would we say ChatGPT and its ilk are currently at on the intelligence scale?

I guess this really hinges on whether you think the multitude of tasks that these deep learning models can perform constitutes “accomplishing virtually any goal” or not. For many people (especially, as we’ll see, AI researchers) the answer is probably yes.

But let’s ask ChatGPT whether it thinks it’s an AGI…

It is important to note that as an AI language model, I am an instantiation of artificial general intelligence (AGI) and can provide insights based on my training up until September 2021.

Partial ChatGPT response to the question: “Given Max Tegmark’s definition of intelligence where would you say you are on the various intelligence levels he proposes?”.

Personally, up until a few weeks ago, I would have said ChatGPT was getting a bit above itself in claiming to be an “instantiation” of an AGI, but then I read an interview with Jaron Lanier titled How humanity can defeat AI.

Lanier works for Microsoft and is the author of a number of what you might call anti-social media books including You Are Not A Gadget and Ten Arguments For Deleting Your Social Media Accounts Right Now.

Lanier’s argument in this interview is that we have got AI wrong and should not be treating it as a new form of intelligence at all. Indeed, he has previously stated there is no AI. Instead, Lanier reckons we have built a new and “innovative form of social collaboration”. Like the other social collaboration platforms that Lanier has argued we should all leave because they have gone horribly wrong, this new form too could become perilous in nature if we don’t design it well. In Lanier’s view, therefore, the sooner we understand there is no such thing as AI, the sooner we’ll start managing our new technology intelligently and learn how to use it as a collaboration tool.

Whilst all of the above is well intentioned, the really insightful moment for me came when Lanier discussed Alan Turing’s famous test for intelligence. Let me quote directly what Lanier says:

You’ve probably heard of the Turing test, which was one of the original thought-experiments about artificial intelligence. There’s this idea that if a human judge can’t distinguish whether something came from a person or computer, then we should treat the computer as having equal rights. And the problem with that is that it’s also possible that the judge became stupid. There’s no guarantee that it wasn’t the judge who changed rather than the computer. The problem with treating the output of GPT as if it’s an alien intelligence, which many people enjoy doing, is that you can’t tell whether the humans are letting go of their own standards and becoming stupid to make the machine seem smart.

Jaron Lanier, How humanity can defeat AI, UnHerd, May 8th 2023

There is no doubt that we are in great danger of believing whatever bullshit GPTs generate. The past decade or so of social media growth has illustrated just how difficult we humans find it to handle misinformation, and these new and wondrous machines are only going to make that task even harder. This, coupled with the problem that our education system seems to reward the regurgitation of facts rather than the development of critical thinking skills, is, as the journalist Kenan Malik says, increasingly going to become more of an issue as we try to figure out what is fake and what is true.

Interestingly, around the time Lanier was saying “there is no AI”, the so-called “godfather of AI”, Geoffrey Hinton, was announcing he was leaving Google because he was worried that AI could become “more intelligent than humans and could be exploited by ‘bad actors’”. Clearly, as someone who created the early neural networks that were the predecessors of the large language models GPTs are built on, Hinton could not be described as “stupid”, so what is going on here? Like others before him who think AI might be exhibiting signs of becoming sentient, maybe Hinton is being deceived by the very monster he has helped create.

So what to do?

Helpfully Max Tegmark, somewhat tongue-in-cheek, has suggested the following rules for developing AI (my comments are in italics):

  • Don’t teach it to code: this facilitates recursive self-improvement – ChatGPT can already code.
  • Don’t connect it to the internet: let it learn only the minimum needed to help us, not how to manipulate us or gain power – ChatGPT was certainly trained on data drawn from the internet to learn what it already knows.
  • Don’t give it a public API: prevent nefarious actors from using it within their code – OpenAI is releasing a public API.
  • Don’t start an arms race: this incentivizes everyone to prioritize development speed over safety – I think it’s safe to say there is already an AI arms race between the US and China.

Oh dear, it’s not going well is it?

So what should we really do?

I think Lanier is right. Like many technologies that have gone before, AI is seducing us into believing it is something it is not – even, it seems, to its creators. Intelligent it may well be, at least by Max Tegmark’s very broad definition of what intelligence is, but let’s not get ahead of ourselves. Whilst I agree (and definitely fear) that AI could be exploited by bad actors, it is still, at a fundamental level, little more than a gargantuan mash-up machine regurgitating the work of the people who wrote the text and created the images it spits out. These mash-ups may be fooling many of us some of the time (myself included), but we must not be fooled into losing our critical thought processes here.

As Ian McEwan points out, we must be careful we don’t “follow our desires and hang the consequences”.