Karen Hao’s Empire of AI is without doubt a significant contribution to the ever-growing collection of books digging into the workings of the AI industry. It is not just an insider’s exposé of the shenanigans leading up to the firing (and rapid rehiring) of Sam Altman, OpenAI’s co-founder and CEO, but also a deeply researched and incredibly well written insight into the current state of play of the AI industry as a whole. And it’s not pretty.
The book opens with the events that took place over the long weekend of Friday 17th November to Tuesday 21st November 2023. On Friday 17th, Altman was invited to a video call set up by the OpenAI board, where he was unceremoniously told he was being fired. By the following Tuesday, however, due to overwhelming pressure from OpenAI employees and, more importantly, its investors, especially Microsoft, Altman was back at the helm with a new board of directors. Empire of AI examines how Altman and his fellow conspirators came to create and dominate the techno-industrial complex referred to generically as ‘AI’ and how, if things carry on as they are, we risk destroying jobs, the environment and most, if not all, forms of human endeavour and behaviour.
Empire of AI is divided into four parts. Part I covers how Sam (Altman) met Elon (Musk), the latter a “childhood hero” of the former, and decided to build an AI company that would compete with Google head-on and beat it in the race to build AGI (artificial general intelligence). Musk feared that Google’s takeover of the British AI research company DeepMind would lead it to develop AGI first and thereafter “murder all competing AI researchers”. Musk was adamant that “the future of AI should not be controlled by Larry [Page]”. To Musk, Altman was a like-minded entrepreneur who wanted AGI to be for the good of all humanity, not something that would allow Google to become even richer by destroying all competitors in its wake. OpenAI was therefore formed with good intentions: to create the first AGI that could be “used for individual empowerment” and which would have safety as “a first-class requirement”.
OpenAI was launched as a nonprofit in December 2015. Altman’s co-founders included Greg Brockman (an engineer and fellow entrepreneur) and Ilya Sutskever (an AI researcher poached from Google). The company had a $1B commitment from, amongst others, Elon Musk, Peter Thiel (Musk’s fellow co-founder of PayPal) and Reid Hoffman (another of the so-called “PayPal mafia” and co-founder of LinkedIn). Having started a company with the ultimate goal of developing AGI, OpenAI needed to do three things quickly: figure out exactly what it was going to build to achieve AGI, hire the right talent to do so, and secure enough funding to make the first two possible. By 2019 these problems seemed to have been solved. The company would focus on building its AI technology around a large language model (LLM) it called GPT-2. To secure the funding needed to pay for the enormous amounts of compute LLMs require, it restructured from a nonprofit into a ‘capped-profit’ company, opening the floodgates to investors like Microsoft, which hoped to make huge profits if OpenAI were the first to achieve AGI. On this basis Microsoft announced in July 2019 that it would invest $1B in the company.
In Part II the book looks at some of the looming problems OpenAI and other companies began to face as they tried to scale their LLMs. From an ethics point of view, many academics, as well as people in the industry itself, began to question the wisdom of building AI/AGI in an unregulated way. Comparisons were drawn with the development of the atom bomb during World War II and the work done by Oppenheimer and his team on the Manhattan Project. Where was the governance and regulation that, developed alongside nuclear weapons, has (despite a few close shaves) prevented nuclear Armageddon? Companies were building ethics teams to try to develop such governance models, but there was often an uneasy relationship between leadership teams focused on profit and those arguing for an ethical approach to development. The need for ethical leadership is nowhere more apparent than in one of the activities few of us think about when using LLMs like ChatGPT: how these models ‘learn’. It turns out they are trained by people. But these are not the well paid engineers who live and work in San Francisco; they are workers in third-world countries like Kenya and Venezuela where labour practices are unregulated and often exploitative. As part of her research for the book, Hao travels to several of these places and interviews some of the people who work long hours annotating the data they are sent by the tech giants (usually via third-party companies), describing what they see. This work is not only boring and poorly paid (often just a few pennies per task) but in some cases hugely distressing, as workers can be presented with text and images showing some of the worst forms of child sexual abuse, extreme violence, hate speech and self-harm. It is easy to forget, overlook or simply not understand that for LLMs like ChatGPT to present content acceptable to our sensibilities, someone, somewhere has had to filter out some of the worst forms of human degradation.
As LLMs are scaled there is a need to build more and more, ever larger data centres to house the processors that crunch massive amounts of data, not just during training but also in operation. Many of these large data centres are also being constructed in third-world countries where it is relatively easy to get planning permission and access to natural resources, like water for cooling, but often to the detriment of local people. In Part III, Hao discusses these aspects in detail. As new and improved versions of ChatGPT and its image-generation counterpart DALL-E were released, and OpenAI grew ever closer to Microsoft, which provided the cloud infrastructure hosting them, the need for ‘hyperscale’ data centres became ever greater. The four largest hyperscalers – Google, Microsoft, Amazon and Meta – are now building so-called ‘megacampuses’, vast buildings containing racks of GPUs; each campus will soon require 1,000 to 2,000 megawatts of power, the energy requirement of up to three and a half San Franciscos. Such power-hungry megacampuses mean that these companies can no longer meet their carbon-emission targets (Google’s carbon emissions have soared by 51% since 2019 as it has invested in more and more artificial intelligence).
As Altman’s fame and influence grew, his personal life inevitably began to attract more attention. In September 2023 a feature writer at New York Magazine, Elizabeth Weil, published a profile of Altman which, for the first time in mainstream media, discussed his estrangement from his sister, Annie, and how financial, physical and mental health problems had led her to turn to sex work. The profile set Annie’s financial struggles side by side with Altman’s lifestyle of expensive homes and luxury cars. Hao draws a parallel between the way OpenAI (and other AI companies) ignore the anger of data workers fighting for fair working conditions and the way Altman seems able to ignore his sister’s cries for help. Altman’s personal and professional lives, it seemed, were beginning to conspire against his so far meteoric success. In the final part of the book we see how a particular aspect of his personality led to the events of that fateful weekend in November 2023.
From the outside, much of what ensued at OpenAI after ChatGPT had propelled the company to a valuation in excess of $100B could be seen as the problems facing any company that has grown so quickly. As the spotlight on Altman became ever more intense, however, his behaviour began to deteriorate. Often exhausted, he was cracking under the pressure of mounting competition and the punishing travel schedule he had set himself to promote OpenAI. According to Hao this pressure was causing Altman to exhibit destructive behaviour: “He was doing what he’d always done, agreeing with everyone to their face, and now, with increasing frequency, badmouthing them behind their backs. It was creating greater confusion and conflict across the company than ever before, with team leads mimicking his bad form and pitting their reports against each other”.
This, together with concerns about Altman pushing his developers to deliver new iterations of ChatGPT without sufficient testing, finally drove the board, on Saturday 11th November 2023, to its momentous decision: “they would remove Altman and install Murati as interim CEO”. Mira Murati was OpenAI’s CTO but in that role had found herself “frequently cleaning up his [Altman’s] messes”.
And so the book returns to where it started, with the events of 17th–21st November. As we know, Altman survived what is now referred to internally as “The Blip”, but pressure on him continues to mount from several directions: multiple lawsuits (including from OpenAI co-founder Elon Musk), investigations from regulators after the board’s review found that Altman was “not consistently candid in his communications”, and increased competition, even from Microsoft, which has decided to diversify its AI portfolio rather than put all of its AI eggs in OpenAI’s basket.
As followers of OpenAI will know, Altman and his team have gone on to deliver regular updates to ChatGPT, as well as to the API that developers can use to access its functionality. The most capable model at the time of writing this review (o3-pro) is ‘multi-modal’: it can search the web, analyse files, reason about visual inputs, use Python, personalise responses using memory, and a whole lot more. Its competitors too are releasing ever more powerful models, though none (yet) claims to have achieved the holy grail of AGI. Empire of AI captures a relatively small slice in time of the race to AGI, and no doubt many more books will be written charting the twists and turns of that race.
Empire of AI is a BIG book (nearly 500 pages with notes and index) and is the result of over 300 interviews plus a “trove of correspondence and documents” gathered by Karen Hao since she began covering OpenAI in 2019. As with many such books, you may wonder whether the editing could have been a little sharper; fewer stories and incidents might have made its points more succinctly (and in fewer pages). Ultimately, however, this is an important document that describes well the personalities involved in building OpenAI and in the design and delivery of its products, not to mention the founders’ absolute and total belief in those products. As with Careless People by Sarah Wynn-Williams – which captures the power, greed and madness at Facebook during its early years – you do not come away from Empire of AI with much sense of trust in, or admiration for, the men (for they are nearly all men) who start and run these companies. One can only hope that the steady drip of publications critiquing the tech industry in general, and the AI companies in particular, will ultimately lead to some form of change that limits and constrains both the power of the people who run these companies and the technology itself.
I for one am not holding my breath though.