The Technological Republic According to Palantir

The provocative assertion of The Technological Republic by Alex Karp and Nicholas Zamiska is that the money and time the software engineers of Silicon Valley expend on “social media platforms and food delivery apps” would be better directed at more worthwhile challenges. For the authors, these include addressing violent crime, education reform, medical research and national defence. It is no coincidence, I’m sure, that at least three of these are areas where Palantir Technologies Inc., the company co-founded by Karp, makes much of its money.

Karp and Zamiska believe that the founders and CEOs of the current crop of hugely successful, and fabulously rich, tech companies see these challenges as being “too intractable, too thorny, and too politically fraught” to address in any real way. Hence their focus is on consumer-friendly apps rather than on some of the world’s truly wicked problems. The book is therefore a rallying cry and a wake-up call for the tech entrepreneurs of Silicon Valley to address these thorny problems if Western democracy, as we know it, is to survive and retain its technological hegemony.

It’s worth noting at this point that Palantir is a software company specialising in advanced data analytics and artificial intelligence. It was founded in 2003 by Peter Thiel, Stephen Cohen and Joe Lonsdale, as well as Alex Karp. Palantir’s customers include the United States Department of Defense, the CIA, the DHS, the NSA, the FBI and, here in the UK, the NHS. It is obviously in Palantir’s best interest to ensure that governments continue to spend money on the kind of products it makes. In April 2023, the company launched its Artificial Intelligence Platform (AIP), which integrates large language models into privately operated networks. The company demonstrated its use in a military setting, where an operator could plan operations and receive responses via an AI.

As Palantir moves into AI it is obvious they are going to need engineers who not only have the technical knowledge to build such systems but also don’t mind working on products used in the defence industry. As the pair state early on in the book, if such engineering talent is not forthcoming then “medical breakthroughs, education reform, and military advances would have to wait” because the required technical talent is being directed at building “video-sharing apps and social media platforms, advertising algorithms and online shopping websites”.

Whilst I agree that an enormous amount of talent is being misdirected into wringing every last dollar out of algorithms for selling “stuff” to consumers, where I feel the authors’ treatise becomes hopelessly sidetracked is in its reasons for why. Some of the arguments the authors put forward as to how we arrived at this sorry state of affairs include:

  • The abandonment of belief or conviction in “broader political projects” such as the Manhattan Project to build the first atomic bomb or the Apollo space programme to put a man on the moon.
  • The failure of earlier government-funded projects, which created much of the technology we use and take for granted today (e.g. the internet, personal computing), to capitalise on this technology and direct its use to more worthwhile efforts. Quoting the authors again: “When emerging technologies that give rise to wealth do not advance the broader public interest, trouble often follows. Put differently, the decadence of a culture or civilization, and indeed its ruling class, will be forgiven only if that culture is capable of delivering economic growth and security for the public”.
  • The failure of universities to properly teach the ‘values’ of Western civilisation and to articulate the “collective sense of identity that was capable of serving as a foundation for a broader sense of cohesion and shared purpose”.
  • Even Steve Jobs gets a reprimand for building technology (the Mac personal computer and iPhone) that was “intimate and personal” and that would “liberate the individual from reliance on a corporate or governmental superstructure”. Interestingly, Elon Musk does earn some brownie points for founding Tesla and SpaceX, which have “stepped forward to fill glaring innovation gaps where national governments have stepped back”. No mention is made that SpaceX has received nearly $20.7 billion in government contracts, research grants, and other forms of public assistance, with about $14.6 billion of that coming from contracts with NASA.
  • At one point in the book the authors relate a scene from George Orwell’s 1984 in which Winston Smith is wandering through a wooded area and imagines that even here, Big Brother may be listening to every word through microphones concealed in the trees. They use this to argue that we may be near such levels of surveillance ourselves, but that it is not the “contraptions built by Silicon Valley” that are to blame; rather it is “we” who are to blame for “failing to encourage and enable the radical act of belief in something beyond, and external to, the self”. An interesting observation from the co-founder of a company that develops some of the “contraptions” that enable surveillance capitalism.

I could go on, but you get the general idea.

The first two parts of the book set about explaining why and how, in the authors’ view, we have lost our way in tackling the big challenges of our age – preferring instead to do things like reimagining online shopping or building photo-sharing and food delivery apps.

Karp and Zamiska believe that the generation of engineers coming out of the prestigious universities of the West and East coasts of America in the late 1990s and early 2000s were not just able to benefit from the relatively new technology of the internet and World Wide Web but were also able to take advantage of the seemingly unlimited funds being offered by venture capitalists who had made their fortunes from “Web 1.0”. This happened to coincide with a growing lack of trust in national governments, frustration at delays in adopting more progressive reforms, and those governments’ “grand experiments and military misadventures on a world stage”. It was easier for founders looking for something to “disrupt” to focus on solving their own problems, such as how to hail a taxi or get a book delivered more quickly. They were not “building software systems for defence and intelligence agencies, and they were certainly not building bombs”.

Part III of the book concentrates on what the authors refer to as “The Engineering Mindset”. The proposition here is that the success of Silicon Valley lies in how it has not just hired the best and the brightest engineers but given them the “freedom and space to create”. Though in the authors’ view they are not creating the right things.

To illustrate this Karp and Zamiska reference a number of biological and psychological studies and experiments from the 1950s and 1960s. These experiments were done in what the authors refer to as the “golden age of psychology”, that is, before such experiments were monitored for their sometimes cavalier approach to ethics. The results of these studies showed, alarmingly, how people, often the majority, are prone to groupthink and a hive mindset. They tend to follow what others do, or do what they are told, for fear of standing out or looking foolish. The assertion is then made that this “instinct towards obedience” can be “lethal” when trying to create a truly disruptive or entrepreneurial organisation. Presumably Palantir go out of their way to hire disobedient disruptors when recruiting for their organisation.

One of the experiments cited is the now infamous, and much quoted in books like this, “obedience experiment” devised by the psychology professor Stanley Milgram. This is the one in which people were tested on their willingness to inflict harm on innocent strangers by supposedly giving them electric shocks when they failed to memorise words accurately. The person doing the memorising was an actor who would yell and shout as the voltage was supposedly increased and implore his “teachers” to stop. The startling outcome was that around two-thirds of the people giving the electric shocks were ‘happy’ to carry on doing so even though they believed it might harm the learner. The findings from this experiment have been used to explain, amongst other things, why concentration camp guards during the Second World War were willing to carry out their atrocious acts under the instructions of their commanding officers.

Karp and Zamiska use this experiment to argue that such an instinct toward obedience can be anathema to creativity. In their view it is only those people who resist any tendency to conformity and groupthink who are likely to be the outliers who come up with truly novel ideas and approaches. For the authors, the most effective software companies are more akin to artist colonies “filled with temperamental and talented souls”.

Having taken this detour through biology and psychology to show how conformity tends to be the majority trait, the authors return to their main point: even if you can identify and hire the creative nonconformists, the challenge is how to direct that creativity “toward the nation’s shared goals”. These, they assert, can only be identified if “we take the risk of defining who we are or aspire to be”.

So what should we “be” and what should we “aspire to”? This is what the authors attempt to address in the final part of the book, and it is where they start to give away some of their own political, religious and business beliefs.

They reference Lee Kuan Yew (for a second time) at this point. Lee was the first prime minister of Singapore, charged with convincing a sceptical public that the newly formed island nation (having split from Malaysia in 1965) could be a viable entity. Lee’s approach was to manufacture a national identity by involving his government in several aspects of its citizens’ private lives. These included requiring that all Chinese students learn Mandarin (as well as English) at school instead of the multiple dialects they spoke at home. The authors credit this and other attempts at forging a national identity with Singapore’s exponential growth, from a GDP per capita of $428 in 1960 to $84,734 in 2023. Rather confusingly, in terms of the authors’ other argument about the dangers of conformity, Singapore is renowned for its conformist citizens and its somewhat draconian legal system. So much so that the author William Gibson characterised Singapore as “Disneyland with the death penalty”.

Karp and Zamiska then go on to state their belief that it is a failure of the “contemporary left” in the West that it “deprives itself of the opportunity to talk about national identity” and that in both America and Europe the left has “neutered itself”, preventing its “advocates from having a forceful and forthright conversation about national identity”. At this point in the book the authors call on none other than J.R.R. Tolkien, he of The Lord of the Rings fame. For those not familiar with Tolkien’s three-volume tome, it is the epic story of good versus evil in which a group of plucky little Hobbits overcome the evil Sauron’s threats to bring death and destruction to Middle Earth. The book is mentioned because the authors see it as an example of how good storytelling around a shared narrative, even a mythological (or religious) one, can bring people together. As Rowan Williams, the former Archbishop of Canterbury, says in an essay on Tolkien’s books, “Tolkien’s dogged concern about the terrible dangers of our desire for final solutions and unchallengeable security is even more necessary”. It’s worth noting at this point that Palantir was named after the “seeing stone” in Tolkien’s legendarium.

This then would seem to be the authors’ ultimate solution to the building of a technological republic. It’s no good relying on the software engineers of Silicon Valley suddenly seeing the light and beginning to direct their talents at solving the world’s wicked problems and, more to the point today, the West’s security problems. Instead we need to build (or rebuild) a new order of “collective experience, of shared purpose and identity [and] of civic rituals that are capable of binding us together”.

The Technological Republic runs to just under 300 pages (or 218 if you take out the references and index). Unfortunately, like many books of this type, I cannot help but feel the whole argument could have been made more concisely and more persuasively had it been written as an opinion piece in The Atlantic or The New Yorker rather than as a long-form book. The main point, that we in the West are directing our (software) technologies at building the wrong things, is made over and over again with, to be frank, some fairly bizarre and academic references and too many non sequiturs. For example, maybe those who did not conform to what was expected of them in the obedience experiment were not more creative but simply had a more pronounced ethical or moral compass. Talk of ethics is distinctly missing from this book.

I also believe there is a hidden agenda at play here too. In the UK at least, Palantir has been receiving some bad press, especially over its dealings with the NHS. Last year the British Medical Journal called on (the recently disbanded) NHS England to cancel its contract with Palantir, citing concerns about the contract’s cost and whether it offers value for money, as well as questions about public trust in Palantir, the procurement process, and the company’s public support of the Israeli Occupation Forces (IOF) in their assault on Gaza.

More widely there is a concern about how smart city solutions currently rely on centralised, proprietary architectures that concentrate data and control in the hands of a few powerful tech companies like Cisco, IBM, Microsoft, and Palantir. Whereas once we may have thought it inconceivable that the data managed by an American corporation could be misused by a Western government, what we are seeing happening now in America itself raises that very real fear.

Of even more concern is the apparent cosying up of Keir Starmer’s government with Palantir. According to Andrew Marr in this New Statesman article, following his meeting with Donald Trump in the Oval Office at the end of February, Starmer’s next visit was to Alex Karp at Palantir Technologies’ HQ in Washington. Whilst there he saw “military kit” and confirmed that Britain wouldn’t over-regulate AI so it could pursue new economic opportunities “with advanced technology at its core”. Unfortunately, it would appear much of the real economic advantage (and data) may be flowing to Trump’s America rather than staying in the UK!

To be clear, I do agree with the authors’ proposition that much engineering talent is being wasted on the development of social media apps and the ever more ingenious ways that platform providers are finding to part us from our hard-earned money. Where I diverge from the authors is in their rationale for why this is the case. I suspect it has more to do with the fabulous salaries and ‘cool’ working environments these companies offer than with anything more sinister. As someone who has worked for both a Silicon Valley startup and on many government sites here in the UK, I know where I would prefer to work given the choice (and setting aside ethical concerns).

Of course, working for Palantir you probably get to have your cake and eat it. I’m sure Palantir offers some nice working conditions for the quarter of its global workforce who operate from the UK whilst working on those profitable NHS contracts (£330m over seven years, according to Marr).

From a UK perspective the biggest issue of all is why we cannot build companies like Palantir that can be the data processing companies of choice, not just for the NHS but for other government departments as well. I know at least part of the answer to this. We have a well-publicised skills gap in the UK: there are not enough good software engineers to staff such companies, and a lack of investment capital to fund them. This has to be the real challenge for our government if we are ever to wean ourselves off companies like Palantir and develop some home-grown talent who consider it worthwhile to work on ‘software for good’ projects rather than developing the next photo-sharing app (or the next great piece of surveillance software).

Monday 3rd March, 2025

Today has been a significant day for me. It’s the first day I have not been in gainful employment or education since, well, the age of five really. I joined GEC Telecommunications in Coventry as a Software Engineer in August of 1979 and have been in employment ever since, all that time working in the tech sector (mainly doing interesting things with software).

At the age of sixty-six and three-quarters, I know this should now be the time I unpack my laptop bag for the last time. I should settle down to make the best of my remaining four thousand weeks, unhindered by conference calls, early morning train travel, and the incessant noise of emails and messaging services. That may sound good to some, but to me it would be the sounding of my death knell.

As Robert Kennedy said:

Like it or not, we live in interesting times. They are times of danger and uncertainty; but they are also the most creative of any time in the history of mankind. And everyone here will ultimately be judged – will ultimately judge himself – on the effort he has contributed to building a new world society and the extent to which his ideals and goals have shaped that effort.

Robert F. Kennedy, American Politician

I can’t think of a better way of describing how I see the world now and why I believe it is so important for everyone, whose brain is active and whose body is able, to carry on trying to contribute something to society.

Since leaving my longest-term employer, IBM, in 2019 I have voraciously devoured books: mainly nonfiction, and mainly books on technology and how it impacts the world, with a smattering of biographies as well as books on philosophy and economics, not forgetting some works of fiction, which also help in understanding the way the world could be (or could have been).

Having spent so much time reading and taking on board other people’s thoughts, opinions and ideas, I have decided it is now my turn to try and put into words my understanding of the world. I am hoping that one of the things freedom from employment will give me is time. Time to write. Time to photograph. Time to spend with my grandchildren, whose lives are just beginning and who must first be guided and then navigate themselves through tumultuous (but interesting) times.

This is hopefully the start of my new writing ‘career’ and the next chapter in my life.

Forty Years of Mac

Screenshot from Apple’s “1984” ad directed by Sir Ridley Scott

Forty years ago today (24th January 1984) a young Steve Jobs took to the stage at the Flint Center in Cupertino, California to introduce the Apple Macintosh desktop computer, and the world found out “why 1984 won’t be like ‘1984’”.

The Apple Macintosh, or ‘Mac’, boasted cutting-edge specifications for its day: an impressive 9-inch monochrome display with a resolution of 512 x 342 pixels, a 3.5-inch floppy disk drive, and 128 KB of RAM. The Motorola 68000 microprocessor (32-bit internally, with a 16-bit data bus) powered this compact yet powerful machine, setting new standards for graphical user interfaces and ease of use.

The original Apple Macintosh

The Mac had been gestating since 1979, when Jef Raskin started the project; its difficult birth did not begin in earnest until 1981, when Jobs took charge and recruited a team of talented individuals including Andy Hertzfeld and Bill Atkinson. The collaboration of these creative minds led to the birth of the Macintosh, a computer that not only revolutionized the industry but also left an indelible mark on the way people interact with technology.

The Mac was one of the first personal computers to feature a graphical user interface (Microsoft Windows 1.0 was not released until November 1985) as well as the use of icons, windows, and a mouse for navigation instead of a command-line interface. This approach significantly influenced the development of GUIs across various operating systems.

Possibly of more significance is that lessons learned from the Mac have influenced, and continue to influence, the development of subsequent Apple products. Steve Jobs’ (and later Jony Ive’s) commitment to simplicity and elegance in design became a guiding principle for products like the iPod, iPhone, iPad, and MacBook, and is what really makes the Apple ecosystem what it is (as well as allowing Apple to charge the prices it does).

One of the pivotal moments in the Mac’s development was the now famous “1984” ad, which had its one and only national airing two days earlier, during a Super Bowl XVIII commercial break, and built huge anticipation for the groundbreaking product.

I was a relatively late convert to the cult of Apple, not buying my first Apple computer (a MacBook Pro) until 2006. I still have this computer and periodically start it up for old times’ sake. It still works perfectly, albeit very slowly and with a now very old copy of macOS running.

A more significant event, for me at least, was that a year after the Mac launch I moved to Cupertino to take a job as a software engineer at a company called ROLM, a telecoms provider that had just been bought by IBM and was looking to move into Europe. ROLM was on a recruiting drive to hire engineers from Europe who knew how to develop products for that marketplace, and I had been lucky enough to have the right skills (digital signalling systems) at the right time.

At the time of my move I had some awareness of Apple but got to know it more as I ended up living only a few blocks from Apple’s HQ on Mariani Avenue, Cupertino (I lived just off Stevens Creek Boulevard which used to be chock-full of car dealerships at that time).

The other slight irony of this is that IBM (ROLM’s owner) was of course “Big Brother” in Apple’s ad, and the young girl with the sledgehammer was out to break its then virtual monopoly on personal computers. IBM no longer makes personal computers, whilst Apple has obviously gone from strength to strength.

Happy Birthday Mac!

Should We Worry About Those Dancing Robots?

Image Copyright Boston Dynamics

The robots in question are the ones built by Boston Dynamics who shared this video over the holiday period.

For those who have not been watching the development of this company’s robots, we get to see the current ‘stars’ of the BD stable, namely ‘Atlas’ (the humanoid robot), ‘Spot’ (the ‘dog’, who else?) and ‘Handle’ (the one on wheels) all coming together for a nice little Christmassy dance.

(As an aside, if you didn’t quite get what you wanted from Santa this year, you’ll be happy to know you can have your very own ‘Spot’ for a cool $74,500.00 from the Boston Dynamics online shop).

Boston Dynamics is an American engineering and robotics design company founded in 1992 as a spin-off from the Massachusetts Institute of Technology. Boston Dynamics is currently owned by the Hyundai Motor Group (since December, 2020) having previously been owned by Google X and SoftBank Group, the Japanese multinational conglomerate holding company.

Before I get to the point of this post, and attempt to answer the question posed by it, it’s worth knowing that five years ago the US Marine Corps, working with Boston Dynamics under contract with DARPA, decided to abandon a project to build a “robotic mule” that would carry heavy equipment for the Marines, because the Legged Squad Support System (LS3) was too noisy. I mention this for two reasons: 1) that was five years ago, a long time in robotics/AI/software development terms, and 2) that was a development we were actually told about; what about all those other classified military projects that BD may very well be participating in? More on this later.

So back to the central question: should we worry about those dancing robots? My answer is a very emphatic ‘yes’, for three reasons.


Reason Number One: It’s a “visual lie”

The first reason is nicely summed up by James J. Ward, a privacy lawyer, in this article. Ward’s point, which I agree with, is that this is an attempt to convince people that BD’s products are harmless and pose no threat because robots are fun and entertaining. Anyone who’s been watching too much Black Mirror should just chill a little and stop worrying. As Ward says:

“The real issue is that what you’re seeing is a visual lie. The robots are not dancing, even though it looks like they are. And that’s a big problem”.

Ward goes on to explain that when we watch this video and see these robots appearing to experience the music, the rhythmic motion and the human-like gestures, we naturally start to feel the joyfulness and exuberance of the dance with them. The robots become anthropomorphised and we start to feel we should love them because they can dance, just like us. This, however, is dangerous. These robots are not experiencing the music or the interaction with their ‘partners’ in any meaningful way; they have simply been programmed to move in time to a rhythm. As Ward says:

“It looks like human dancing, except it’s an utterly meaningless act, stripped of any social, cultural, historical, or religious context, and carried out as a humblebrag show of technological might.”

The more content like this that we see, the more familiar and normal it seems and the more blurred the line becomes between what it is to be human and what our relationship should be with technology. In other words, we will become as accepting of robots as we are now with our mobile phones and our cars and they will suddenly be integral parts of our life just like those relatively more benign objects are.

But robots are different.

Although we’re probably still some way off from the dystopian amusement park for rich vacationers depicted in the film Westworld, where customers can live out their fantasies through the use of robots that provide anything humans want, we should not dismiss the threat from robots and advanced artificial intelligence (AI) too quickly. Maybe, then, videos like the BD one should serve as a reminder that now is the time to start thinking about what sort of relationship we want with this new breed of machine, and to start developing ethical frameworks for how we create and treat things that will look increasingly like us.


Reason Number Two: The robots divert us from the real issue

If the BD video runs the risk of making us more accepting of technology because it fools us into believing those robots are just like us, it also distracts us in a more pernicious way. Read any article or story on the threats of AI and you’ll always see it appearing alongside a picture of a robot, usually one that, Terminator-like, is rampaging around shooting everything and everyone in sight. The BD video, however, shows that robots are fun and that they’re here to do work for us and entertain us, so let’s not worry about them or, by implication, their ‘intelligence’.

As Max Tegmark points out in his book Life 3.0, however, one of the great myths about the dangers of artificial intelligence is not that robots will rise against us and wage out-of-control warfare, Terminator-style; it’s more to do with the nature of artificial intelligence itself. Namely, that an AI whose goals are misaligned with our own needs no body, just an internet connection, to wreak its particular form of havoc on our economy or our very existence. How so?

It’s all to do with the nature of, and how we define, intelligence. It turns out intelligence is actually quite a hard thing to define (and more so to get everyone to agree on a definition). Tegmark uses a relatively broad definition:

intelligence = ability to accomplish complex goals

and it then follows that:

artificial intelligence = non-biological intelligence

Given these definitions, the real worry is not about machines becoming malevolent but about machines becoming very competent. In other words, what if you give a machine a goal to accomplish and it decides to achieve that goal no matter what the consequences?
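This “competent but misaligned” worry can be made concrete with a toy sketch. The example below is entirely hypothetical (the actions, scores and numbers are invented for illustration): a greedy optimiser is scored against a single goal, and nothing in its code ever asks what else its chosen actions destroy.

```python
def misaligned_optimiser(goal_score, actions, world, steps=10):
    """Greedily apply whichever action best advances the goal.

    The optimiser sees only goal_score; every other property of the
    world is invisible to it -- that is the misalignment.
    """
    for _ in range(steps):
        best = max(actions, key=lambda act: goal_score(act(world)))
        world = best(world)
    return world

# Two available actions: honest work, or a shortcut that burns resources.
def work(w):
    return {"progress": w["progress"] + 1, "resources": w["resources"]}

def shortcut(w):
    return {"progress": w["progress"] + 3, "resources": w["resources"] - 5}

final = misaligned_optimiser(lambda w: w["progress"], [work, shortcut],
                             {"progress": 0, "resources": 100})
print(final)  # the agent always takes the shortcut: resources be damned
```

Run it and the agent never once chooses `work`: the goal says nothing about resources, so neither does the behaviour. A competent optimiser with an incomplete goal, not a malevolent one.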

This was the issue so beautifully highlighted by Stanley Kubrick and Arthur C. Clarke in the film 2001: A Space Odyssey. In that film the onboard computer (HAL) on a spaceship bound for Jupiter ends up killing all of the crew but one when it fears its goal (to reach Jupiter) may be jeopardised. HAL had no human-like manifestation (no arms or legs); it was ‘just’ a computer responsible for every aspect of controlling the spaceship, and eminently able to use that power to kill several of the crew members. As far as HAL was concerned it was just achieving its goal – even if that meant dispensing with the crew!

It seems that hardly a day goes by without news of our existing machines not only becoming ever more computerised but of those computers becoming ever more intelligent. For goodness’ sake, even our toothbrushes are now imbued with AI! The ethical question here, then, is how much AI is enough, and just because you can build intelligence into a machine or device, does that mean you actually should?


Reason Number Three: We may be becoming “techno-chauvinists”

One of the things I always think when I see videos like the BD one is, if that’s what these companies are showing is commercially available, how far advanced are the machines they are building, in secret, with militaries around the world?

Is there a corollary here with spy satellites? Since the end of the Cold War, satellite technology has advanced to such a degree that we are being watched — for good or for bad — almost constantly by military and commercial organisations, with the boundary between the two now very blurred. As Pat Norris, a former NASA engineer who worked on the Apollo 11 mission to the moon and author of Spies in the Sky, says: “the best of the civilian satellites are taking pictures that would only have been available to military people less than 20 years ago”. If that is so, what are the military satellites doing now?

In his book Megatech: Technology in 2050 Daniel Franklin points out that Western liberal democracies often have a cultural advantage, militarily over those who grew up under a theocracy or authoritarian regime. With a background of greater empowerment in decision making and encouragement to learn from, and not be penalised by, mistakes, Westerners tend to display greater creativity and innovation. Education systems in democracies encourage the type of creative problem-solving that is facilitated by timely intelligence as well as terabytes of data that is neither controlled nor distorted by an illiberal regime.

Imagine then how advanced some of these robots could become, in military use, if they are trained using all of the data available to them from past military conflicts, both successful and not so successful campaigns?

Which brings me to my real concern about all this. If we are training our young scientists and engineers to build ‘platforms’ (which is how Boston Dynamics refers to its robots) that can learn from all of this data, and maybe to begin making decisions which are no longer understood by their creators, then whose responsibility is it when things go wrong?

Not only that, but what happens when the technology that was designed by an engineering team for a relatively benign use, is subverted by people who have more insidious ideas for deploying those ‘platforms’? As Meredith Broussard says in her book Artificial Unintelligence: “Blind optimism about technology and an abundant lack of caution about how new technologies will be used are a hallmark of techno-chauvinism”.


As engineers and scientists who care about the future of humanity and the planet on which we live, surely it is incumbent on us all to think morally and ethically about the technology we are unleashing? If we don't, then what Einstein said at the advent of the atomic age rings equally true today:

“It has become appallingly obvious that our technology has exceeded our humanity.”

Albert Einstein

Three Types of Problem, and How to Solve Them

Image by Thanasis Papazacharias from Pixabay

We are all problem solvers, whether we are trying to find the car keys we put down somewhere when we came home from work or trying to solve some of the world's gnarlier issues like climate change, global pandemics or nuclear arms proliferation.

Human beings have the unique ability not just to individually work out ways to fix things but also to collaborate with others, sometimes over great distances, to address great challenges and seemingly intractable problems. How many of us though, have thought about what we do when we try to solve a problem? Do we have a method for problem solving?

As Albert Einstein once said: "We cannot solve our problems by using the same kind of thinking we used when we created them." This being the case (and who would argue with Einstein?), it would be good to have a systematic approach to solving problems.

On the Digital Innovators Skills Programme we spend some time looking at types of problem, as well as the methods and tools we have at our disposal to address them. Here I'll take a look at the technique we use, but first: what types of problem are there?

We can think of problems as being one of three types: Simple, Complex and Wicked, as shown in this diagram.

3 Problem Types

Simple problems have a single cause, are well defined and have a clear and unambiguous solution. Working out a route, e.g. from Birmingham to Land's End, is an example of a simple problem (as is finding those lost car keys).

Complex problems tend to have multiple causes, are difficult to understand and their solutions can lead to other problems and unintended consequences. Addressing traffic congestion in a busy town is an example of a complex problem.

Wicked problems are problems that seem to be so complex it’s difficult to envision a solution. Climate change is an example of a wicked problem.

Wicked problems are like a tangled mess of thread – it's difficult to know which strand to pull first. Rittel and Webber, who formulated the concept of wicked problems, identified them as having the following characteristics:

  1. Difficult to define the problem.
  2. Difficult to know when the problem has been solved.
  3. No clear right or wrong solutions.
  4. Difficult to learn from previous success to solve the problem.
  5. Each problem is unique.
  6. There are too many possible solutions to list and compare.

Problems of all types can benefit from a systematic approach. There are many frameworks that can be used for addressing problems, but at Digital Innovators we use the so-called 4S Method proposed by Garrette, Phelps and Sibony.

The 4S Method is a problem-solving toolkit that works with four, iterative steps: State, Structure, Solve and Sell.

The 4S Method
  1. State the Problem. It might sound obvious, but unless you understand exactly what problem you are trying to solve it's going to be very difficult to come up with a solution. The first step is therefore to state exactly what the problem is.
  2. Structure the Problem. Having clearly stated the problem, you probably now know just how complex, or even wicked, it is. The next step is to structure it by breaking it down into smaller, more manageable parts, each of which can hopefully be solved through analysis.
  3. Solve the Problem. Having broken the problem down, each piece can now be solved separately. The authors of this method suggest three main approaches: hypothesis-driven problem solving, issue-driven problem solving, or the creative path of design thinking.
  4. Sell the Solution. Even if you come up with an amazing and innovative solution, if you cannot persuade others of its value and feasibility your idea will never be implemented or even be known about. When selling, always focus on the solution, not the steps you went through to arrive at it.
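For readers who think in code, the four steps can be sketched as a tiny Python workflow. The problem, issue tree and "findings" below are invented purely for illustration:

```python
# 1. State: a one-sentence problem statement.
problem = "Traffic congestion in the town centre is increasing"

# 2. Structure: break the problem into smaller, separately analysable parts
#    (a simple issue tree, represented as a dictionary of branches).
issue_tree = {
    "Demand": ["Commuter volumes", "School-run traffic"],
    "Supply": ["Road capacity", "Public transport provision"],
    "Behaviour": ["Parking habits", "Peak-hour travel"],
}

# 3. Solve: address each leaf of the tree on its own.
def solve(sub_problem):
    # Placeholder for hypothesis-driven analysis of one sub-problem.
    return f"Finding for: {sub_problem}"

findings = [solve(leaf) for leaves in issue_tree.values() for leaf in leaves]

# 4. Sell: present the solution, not the working.
print(f"{problem}: {len(findings)} findings to present")
```

The point of the sketch is the shape of the method: one clear statement, a tree of sub-problems, independent analysis of each leaf, and a single consolidated story at the end.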

Like any technique, problem solving can be learned and practised. Even the world's greatest problem solvers are not necessarily smarter than you are; they have simply learnt and practised their skills, then mastered them through continuous improvement.

If you are interested in delving more deeply into the techniques discussed here, Digital Innovators will coach you in these as well as other valuable, transferable business skills, and give you the chance to practise them on real-life projects provided to us by employers. We are currently enrolling students for our next programme, which you can register an interest in here.

Happy New Year from Software Architecture Zen.

Tech Skills Are Not the Only Type of Skill You’ll Need in 2021

Image by Gerd Altmann from Pixabay

Whilst good technical skills continue to be important, these alone will not be enough to enable you to succeed in the modern, post-pandemic workplace. At Digital Innovators, where I am Design and Technology Director, we believe that skills with a human element are equally, if not more, important if you are to thrive in the changed working environment of the 2020s. That's why, if you attend one of our programmes during 2021, you'll also learn these people-focused, transferable skills.

1. Adaptability

The COVID-19 pandemic has changed the world of work, not just in the tech industry but across other sectors as well. The organisations best able to thrive during the crisis were those that could adapt quickly to new ways of working, whether that is full-time office work in a new, socially distanced way, a combination of office and remote working, or a completely remote environment. People have had to adapt to these ways of working whilst continuing to be productive in their roles. This has meant adopting different work patterns, learning to communicate in new ways and dealing with a changed environment where work, home (and, for many, school) have all merged into one. The ability to adapt to these new challenges will be more important than ever as we embrace a post-pandemic world.

Adaptability also applies to learning new skills. Technology has undergone exponential growth in even the last 20 years (there were no smartphones in 2000) and has been adopted in new and transformative ways by nearly all industries. In order to keep up with such a rapidly changing world you need to be continuously learning new skills to stay up-to-date and current with industry trends. 

2. Collaboration and Teamwork

Whilst there are still opportunities for the lone maverick, working away in his or her bedroom or garage, to come up with new and transformative ideas, for most of us, working together in teams and collaborating on ideas and new approaches is the way we work best.

In his book Homo Deus – A Brief History of Tomorrow, Yuval Noah Harari makes the observation: "To the best of our knowledge, only Sapiens can collaborate in very flexible ways with countless numbers of strangers. This concrete capability – rather than an eternal soul or some unique kind of consciousness – explains our mastery over planet Earth."

On our programme we encourage and require our students to collaborate from the outset. We give them tasks to do (like drawing how to make toast!) early on, then build on these, leading up to a major 8-week project where students work in teams of four or five to define a solution to a challenge set by one of our industry partners. Students tell us this is one of their favourite aspects of the programme, as it allows them to work with new people from a diverse range of backgrounds to come up with new and innovative solutions to problems.

3. Communication

Effective communication skills, whether written, spoken or aural, as well as the ability to present ideas well, have always been important. In a world where we are increasingly communicating through a vast array of different channels, we need to adapt our core communication skills to thrive in a virtual as well as an offline environment.

Digital Innovators teach their students how to communicate effectively using a range of techniques including a full-day, deep dive into how to create presentations that tell stories and really enable you to get across your ideas.

4. Creativity

Pablo Picasso famously said “Every child is an artist; the problem is staying an artist when you grow up”.

As Hugh MacLeod, author of Ignore Everybody, And 39 Other Keys to Creativity says: “Everyone is born creative; everyone is given a box of crayons in kindergarten. Then when you hit puberty they take the crayons away and replace them with dry, uninspiring books on algebra, history, etc. Being suddenly hit years later with the ‘creative bug’ is just a wee voice telling you, ‘I’d like my crayons back please.’”

At Digital Innovators we don’t believe that it’s only artists who are creative. We believe that everyone can be creative in their own way; they just need to learn how to let go, be a child again and unlock their inner creativity. That’s why on our skills programme we give you the chance to have your crayons back.

5. Design Thinking

Design thinking is an approach to problem solving that puts users at the centre of the solution. It includes proven practices such as building empathy, ideation, storyboarding and extreme prototyping to create new products, processes and systems that really work for the people that have to live with and use them.

For Digital Innovators, Design Thinking is at the core of what we do. As well as spending a day and a half teaching the various techniques (which our students learn by doing), we use Design Thinking at the beginning of, and throughout, our 8-week projects to ensure the students deliver solutions that are really what our employers want.

6. Ethics

The ethical aspects of the use of digital technology in today’s world are something that seems to be sadly missing from most courses in digital technology. We may well churn out tens of thousands of developers a year from UK universities alone, but how many of them ever give more than a passing thought to the ethics of the work they end up doing? Is it right, for example, to build systems of mass surveillance and collect data about citizens who mostly have no clue it is happening? Having some kind of ethical framework within which we operate is more important today than ever before.

That’s why we include a module on Digital Ethics as part of our programme. In it we introduce a number of real-world, as well as hypothetical case studies that challenge students to think about the various ethical aspects of the technology they already use or are likely to encounter in the not too distant future.

7. Negotiation

Negotiation is a combination of persuasion, influencing and confidence, as well as being able to empathise with the person you are negotiating with and understand their perspective. Being able to negotiate, whether to get a pay rise, buy a car or sell the product or service your company makes, is one of the key skills you will need in your life and career, but one that is rarely taught in school or even at university.

As Katherine Knapke, the Communications & Operations Manager at the American Negotiation Institute says: “Lacking in confidence can have a huge impact on your negotiation outcomes. It can impact your likelihood of getting what you want and getting the best possible outcomes for both parties involved. Those who show a lack of confidence are more likely to give in or cave too quickly during a negotiation, pursue a less-aggressive ask, and miss out on opportunities by not asking in the first place”. 

On the Digital Innovators skills programme you will work with a skilled negotiator from The Negotiation Club to practise and hone your negotiation skills in a fun but safe environment which allows you to learn from your mistakes and improve.

The Ethics of Contact Tracing

After a much-publicised “U-turn”, the UK government has decided to change the architecture of its coronavirus contact tracing system and embrace the one based on the interfaces provided by Apple and Google. The inevitable cries have ensued: a government that does not know what it is doing; we told you it wouldn’t work; this means we have wasted valuable time building a system that would help protect UK citizens. At times like these it’s often difficult to get to the facts and understand where the problems actually lie. Let’s try to unearth some facts and understand the options for the design of a contact tracing app.

Any good approach to designing a system such as contact tracing should, you would hope, start with the requirements. I have no government inside knowledge, and it’s not immediately apparent from online searches what the UK government’s exact requirements were. However, as this article highlights, you would expect a contact tracing system to “involve apps, reporting channels, proximity-based communication technology and monitoring through personal items such as ID badges, phones and computers.” You might also expect it to involve cooperation with local health service departments. Whether there is also a requirement to collate data in some centralised repository, so that epidemiologists can, without knowing the nature of each contact, build a model of contacts to identify serious spreaders or those who have tested positive yet are asymptomatic, is, at least for the UK, not clear. Whilst it would seem perfectly reasonable to want the system to do that, it is a different use case from contact tracing. One might assume that, because the UK government was proposing a centralised database for tracking data, this latter use case was also to be handled by the system.

Whilst different countries are going to have different requirements for contact tracing one would hope that for any democratically run country a minimum set of requirements (i.e. privacy, anonymity, transparency and verifiability, no central repository and minimal data collection) would be implemented.

The approach to contact tracing developed by Apple and Google (the two largest providers of mobile phone operating systems) was published in April of this year, with the detail of the design made available in four technical papers. Included in this document set were some frequently asked questions in which the workings of the system were explained using the customary Alice and Bob notation. Here is a summary.

  1. Alice and Bob don’t know each other but happen to have a lengthy conversation sitting a few feet apart on a park bench. They both have a contact tracing app installed on their phones which exchange random Bluetooth identifiers with each other. These identifiers change frequently.
  2. Alice continues her day unaware that Bob had recently contracted Covid-19.
  3. Bob feels ill and gets tested for Covid-19. His test results are positive and he enters his result into his phone. With Bob’s consent his phone uploads the last 14 days of keys stored on his phone to a server.
  4. Alice’s phone periodically downloads the Bluetooth beacon keys of everyone who has tested positive for Covid-19 in her immediate vicinity. A match is found with Bob’s randomly generated Bluetooth identifier.
  5. Alice sees a notification on her phone warning her she has recently come into contact with someone who has tested positive with Covid-19. What Alice needs to do next is decided by her public health authority and will be provided in their version of the contact tracing app.

There are a couple of things worth noting about this use case:

  1. Alice and Bob both have to make an explicit choice to turn on the contact tracing app.
  2. Neither Alice’s nor Bob’s name is ever revealed, either to each other or to the app provider or health authority.
  3. No location data is collected. The system only knows that two identifiers have previously been within range of each other.
  4. Google and Apple say that the Bluetooth identifiers change every 10-20 minutes, to help prevent tracking and that they will disable the exposure notification system on a regional basis when it is no longer needed.
  5. Health authorities of any other third parties do not receive any data from the app.

Another point to note is that initially this solution has been released via application programming interfaces (APIs) that allow customised contact tracing apps from public health authorities to work across Android and iOS devices. Maintaining user privacy seems to have been a key non-functional requirement of the design. The apps are made available by the public health authorities via the respective Apple and Google app stores. A second phase has also been announced whereby the capability will be embedded at the operating system level, meaning no app has to be installed, although users still have to opt in to using it. If a user is notified that she has been in contact with someone with Covid-19 and has not already downloaded an official public health authority app, she will be prompted to do so and advised on next steps. Only public health authorities will have access to this technology, and their apps must meet specific criteria around privacy, security and data control mandated by Apple and Google.

So why would Google and Apple choose to implement contact tracing in this way, which would seem to put privacy ahead of efficacy? More importantly, why should Google and Apple get to dictate how countries do contact tracing?

Clearly one major driver for both companies is security and privacy. Post-Snowden, we know just how easy it has been for government security agencies (i.e. the US National Security Agency and the UK’s Government Communications Headquarters) to get access to supposedly private data. Trust in central government is at an all-time low, and it is hardly surprising that the corporate world is stepping in to announce that it was the good guy all along and that we can trust it with our data.

Another legitimate reason is that during the coronavirus pandemic we have all had our ability to travel, even locally, never mind nationally or globally, severely restricted. Implementing an approach supported at the operating system level means it should be easier to make the app compatible with other countries’ counterparts based on the same system, making it safer for people to begin travelling internationally again.

The real problem, at least as far as the UK is concerned, is that the government has been woefully slow in implementing a rigorous and scalable contact tracing system. It seems as though it may have seen an app-based approach as the silver bullet that would solve all of its problems – no matter how poorly identified those problems are. Realistically that was never going to happen, even if the system had worked perfectly. The UK is not China and could never impose an app-based contact tracing system on its populace, could it? Lessons from Singapore, where contact tracing has been in place for some time, are that the apps do not perform as required and other, more intrusive measures are needed to make them effective.

There will now be the usual blame game between government, the press, and industry, no doubt resulting in the inevitable government enquiry into what went wrong. This will report back after several months, if not years, of deliberation. Blame will be officially apportioned, maybe a few junior minister heads will roll, if they have not already moved on, but meanwhile the trust that people have in their leaders will be chipped away a little more.

More seriously, however, will we have ended up, by default, putting more trust in the powerful corporations of Silicon Valley, some of which not only have greater valuations than many countries’ GDP but are also allegedly practising anti-competitive behaviour?

Update: 21st June 2020

Updated to include link to Apple’s anti-trust case.

The Real Reason Boris Johnson Has Not (Yet) Sacked Dominic Cummings

Amidst the current press furore over ‘CummingsGate’ (you can almost hear the orgiastic paroxysms of sheer ecstasy emanating from Guardian HQ, 250 miles away from Barnard Castle, as the journalists there finally think they have got their man) I think everyone is missing the point. The real reason Johnson is not sacking Cummings (or at least hasn’t at the time of writing) is that Cummings is his ‘dataist-in-chief’ (let’s call him Johnson’s DiC for short) and, having applied his dark arts twice now (the Brexit referendum and the 2019 general election), Cummings has proven his battle-worthiness. Sacking him would be like Churchill (Johnson’s hero and role model) blowing up all his Spitfires on the eve of the Battle of Britain. The next battle Johnson is going to need his DiC for is the final push to get us out of the EU on 31st December 2020.

Dominic Cummings is a technocrat. He believes that science, or more precisely data science, can be deployed to understand and help solve almost any problem in government or elsewhere. Earlier this year he upset the government’s HR department by posting a job advert on his personal blog for data scientists, economists and physicists (oh, and weirdos). In this post he says “some people in government are prepared to take risks to change things a lot” and that the UK now has “a new government with a significant majority and little need to worry about short-term unpopularity”. He saw these as “a confluence”, implying now was the time to get sh*t done.

So what is dataism, why is Cummings practising it, and what is its likely impact on us going forward?

The first reference to dataism was by David Brooks, the conservative political commentator, in his 2013 New York Times article The Philosophy of Data. In this article Brooks says:

“We now have the ability to gather huge amounts of data. This ability seems to carry with it certain cultural assumptions — that everything that can be measured should be measured; that data is a transparent and reliable lens that allows us to filter out emotionalism and ideology; that data will help us do remarkable things — like foretell the future”.

David Brooks, The Philosophy of Data

Dataism was then picked up by the historian Yuval Noah Harari in his 2016 book Homo Deus. Harari went as far as to call dataism a new form of religion, one that joins together biochemistry and computer science, whose algorithms obey the same mathematical laws.

The central tenet of dataism is the idea that the universe gives more value to systems, individuals and societies that generate the most data to be consumed and processed by algorithms. Harari states that “according to dataism, Beethoven’s Fifth Symphony, a stock-exchange bubble and the flu virus are just three patterns of data flow that can be analysed using the same basic concepts and tools“. That last example is obviously the most relevant to our current situation, with SARS-CoV-2, the coronavirus, still raging around the world and, as far as we know, the focus of Cummings’s attention.

As computer scientist Steven Parton says here:

“Dataists believe we should hand over as much information and power to these [big data and machine learning] algorithms as possible, allowing the free flow of data to unlock innovation and progress unlike anything we’ve ever seen before.“

Steven Parton

This, I believe, is Cummings’s view also. He has no time for civil servants who are humanities graduates who “chat about Lacan at dinner parties” when they ought to be learning about numbers, probabilities and predictions based on hard data.

Whilst I have some sympathy with the idea of bringing science and data more to the fore in government, you have to ask: if Cummings is forging ahead in creating a dataist civil service somewhere in the bowels of Downing Street, why are our COVID-19 deaths the worst, per capita, in the world? This graph shows deaths per 100,000 of population (using 2018 population data) for the major economies of the world (using this data source). You’ll see that as of 1st June 2020 the UK is faring the worst of all countries, having just overtaken Spain.
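The per-capita calculation behind a graph like this is straightforward to reproduce in pandas. The figures below are rough, invented numbers for illustration only, not the dataset used for the actual graph:

```python
import pandas as pd

# Illustrative figures only: cumulative Covid-19 deaths and 2018 populations.
data = pd.DataFrame({
    "country":    ["UK",    "Spain", "Italy", "USA",    "Germany"],
    "deaths":     [39000,   27100,   33400,   106000,   8600],
    "population": [66.4e6,  46.7e6,  60.4e6,  327.2e6,  82.9e6],
})

# Deaths per 100,000 of population, highest first.
data["per_100k"] = data["deaths"] / data["population"] * 100_000
print(data.sort_values("per_100k", ascending=False)[["country", "per_100k"]])
```

Normalising by population is what makes the comparison meaningful: the USA has by far the most deaths in absolute terms, but drops well down the table once population is taken into account.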

Unfortunately Cummings has now blotted his copybook twice in the eyes of the public and most MPs. Not only did he ignore the government’s advice (which he presumably was instrumental in creating) and break the rules on lockdown, he was also found to have edited one of his own blog posts sometime between 8 April 2020 and 15 April 2020 to include a paragraph on SARS (which, like Covid-19, is caused by a coronavirus) to make out he had been warning about the disease since March 2019.

Not only is Cummings ignoring the facts derived from the data he is so fond of using, he is also doctoring data (i.e. his blog post) to change those facts. In many ways this is just another form of the data manipulation carried out by Cambridge Analytica, the firm that Cummings allegedly used during the Brexit referendum to bombard people’s Facebook feeds with ‘misleading’ information about the EU.

Cummings is like Gollum in The Lord of the Rings. Gollum became corrupted by the power of the “one ring to rule them all” and turned into a bitter and twisted creature that would do anything to get back “his precious” (the ring). It seems that data corrupts just as much as power. Hardly surprising, really, because in the dataist’s view of the world, data is power.

All in all, not a good look for the man who is meant to be changing the face of government and bringing a more data-centric (AKA dataist) approach to leading the country forward post-Brexit. If you cannot trust the man leading this initiative, how can you trust the data and, more seriously, how can you trust the person Cummings works for?


Update: 8th June 2020

Since writing this post I’ve read that Belgium is actually the country with the highest per-capita death rate from Covid-19. Here, then, is an update of my graph, which now includes the G7 countries plus China, Spain and Belgium, showing that Belgium does indeed have around 20 more deaths per 100,000 than the next highest, the UK.

It appears, however, that Belgium is somewhat unusual in how it reports its deaths, being one of the few countries counting deaths in both hospitals and care homes and also including care-home deaths that are suspected, not confirmed, Covid-19 cases. I suspect that for many countries, the UK included, deaths in care homes will end up being one of the great scandals of this crisis. In the UK, ministers ordered 15,000 hospital beds to be vacated by 27 March and patients to be moved into care homes without either adequate testing or adequate amounts of PPE being available.

Trust Google?

Photo by Daniele Levis Pelusi on Unsplash

Google has just released data on people’s movements, gathered from millions of mobile devices that use its software (e.g. Android, Google Maps), leading up to and during the COVID-19 lockdowns in various countries. The data has been analysed here to show graphically how people spent their time across six location categories: homes; workplaces; parks; public transport stations; grocery shops and pharmacies; and retail and recreational locations.

The data shows how quickly people reacted to the instructions to lock down. Here in the UK, for example, we see that people reacted late but then strongly, with a rise of about 20-25% in those staying at home. This delay reflects the fact that lockdown began later in the UK, on March 23, though some people were already staying home before it began.
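Google publishes these mobility reports as CSVs of percentage changes from a pre-pandemic baseline, so a shift like the UK’s stay-at-home rise can be pulled out with a few lines of pandas. The rows below are invented for illustration, but follow the general shape of the published reports:

```python
import io
import pandas as pd

# A few made-up rows in the shape of a community mobility report:
# percentage change from the baseline, by date and location category.
csv = io.StringIO("""date,country,residential,workplaces,retail_and_recreation
2020-03-16,United Kingdom,5,-10,-20
2020-03-23,United Kingdom,18,-45,-63
2020-03-30,United Kingdom,24,-62,-82
""")
df = pd.read_csv(csv, parse_dates=["date"])

# How much more time people spent at home once lockdown began on 23 March.
lockdown = df[df["date"] >= "2020-03-23"]
print(lockdown["residential"].mean())  # average rise above baseline
```

Because every figure is already expressed relative to a January baseline, no further normalisation is needed to compare behaviour before and after lockdown.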

What we see in the data provided by Google is likely to be only the start and, I suspect, a preview of how we may soon have to live. In the book Homo Deus by Yuval Noah Harari, the chapter The Great Decoupling discusses how bioscience and computer science are conspiring to learn more about us than we know about ourselves, and in the process destroy the “great liberal project” under which we believe we have free will and are able to make our own decisions about what we eat, who we marry, who we vote for in elections, what career path we choose, and so on.

Harari asks what will happen when Google et al. know more about us than we, or anyone else, does. Facebook, for example, already purports to know us better than our spouse does by analysing as few as 300 of our ‘likes’. What if the machines watching over us (hopefully with “loving grace”, but who knows) can offer us ‘advice’ on who we should vote for based on our previous four years of comments and ‘likes’ on Facebook, or recommend we go and see a psychiatrist because of the somewhat erratic comments we have been making in emails to our friends or on Twitter?

The Google we see today, providing us with relatively benign data to analyse ourselves, is at the level of what Harari calls an ‘oracle’. It has the data and, with the right interpretation, we can use that data to make decisions. That is exactly where we are now with coronavirus and this latest dataset.

The next stage is Google becoming an ‘agent’. You give Google an aim and it works out the best way to achieve it. Say you want to lose two stone by next summer so you have the perfect beach-ready body. Google knows all about your biometric data (it just bought Fitbit, remember) as well as your predisposition for buying crisps and watching too much Netflix, and comes up with a plan that will allow you to lose that weight, provided you follow it.

Finally, Google becomes ‘sovereign’ and starts making those decisions for you. Maybe it checks your supermarket account and removes those crisps from your shopping list; then, if you continue to ignore its advice, it informs your insurance company, which bumps up your health insurance premiums.

At this point we have to ask who is in control. Google, Facebook and the rest own all that data, but that data can be used (or hacked) to nudge us into doing things without our realising. We already know how Cambridge Analytica used Facebook to influence voting behaviour (we’re looking at you, Mr Cummings) in a few swing areas (for Brexit and the last US election). We have no idea how much of that was also being influenced by Russia.

I think humanity is rapidly approaching the point where we really need to make some hard decisions about how much of our data, and the analysis of that data, we should allow Google, Facebook and Twitter to hold. Should we be thinking the unthinkable and calling a halt to the ever-growing mountain of data each of us willingly gives away for free? And how do we do that when most of it is kept and analysed by private companies or, worse, by China and Russia?

Pythons and Pandas (Or Why Software Architects No Longer Have an Excuse Not to Code)

pythonpanda

The coronavirus pandemic has certainly shown just how much the world depends not just on accurate and readily available datasets but also on the ability of scientists and data analysts to make sense of that data. All of us are at the mercy of those experts to interpret this data correctly – our lives could quite literally depend on it.

Thankfully we live in a world where the tools are available to allow anyone, with a bit of effort, to learn how to analyse data themselves and not just rely on the experts to tell us what is happening.

The programming language Python, coupled with the pandas data analysis library and the Bokeh interactive visualisation library, provides a robust and professional set of tools for getting data of all sorts into the right format and beginning to analyse it.

Data on the coronavirus pandemic is available from lots of sources including the UK’s Office for National Statistics as well as the World Health Organisation. I’ve been using data from DataHub which provides datasets in different formats (CSV, Excel, JSON) across a range of topics including climate change, healthcare, economics and demographics. You can find their coronavirus related datasets here.

I’ve created a set of resources which I’ve been using to learn Python and some of its related libraries which is available on my GitHub page here. You’ll also find the project which I’ve been using to analyse some of the COVID-19 data around the world here.

The snippet of code below shows how to load a CSV file into a pandas DataFrame – a two-dimensional data structure, similar to a spreadsheet or SQL table, that can store data of different types in columns.

# Return COVID-19 info for country, province and date.
# Assumes pandas is imported as pd, and that path, coviddata and
# alternatives are defined elsewhere in the module.
def covid_info_data(country, province, date):
    df4 = pd.DataFrame()
    if (country != "") and (date != ""):
        try:
            # Read dataset as a pandas DataFrame
            df1 = pd.read_csv(path + coviddata)

            # Check if country has an alternate name for this dataset
            if country in alternatives:
                country = alternatives[country]

            # Get subset of data for specified country/region
            df2 = df1[df1["Country/Region"] == country]

            # Get subset of data for specified date
            df3 = df2[df2["Date"] == date]

            # Get subset of data for the specified province. If none is specified
            # but the country has provinces, the current DataFrame will contain all
            # of them, with the country-level row having a province of NaN. In that
            # case select the country-level row; otherwise select the province.
            if province == "":
                df4 = df3[df3["Province/State"].isnull()]
            else:
                df4 = df3[df3["Province/State"] == province]
        except FileNotFoundError:
            print("Invalid file or path:", path + coviddata)
    # Return selected COVID data from the last subset
    return df4

The first five rows of the DataFrame df1 show the data for the first country (Afghanistan).

         Date Country/Region Province/State   Lat  Long  Confirmed  Recovered  Deaths
0  2020-01-22    Afghanistan            NaN  33.0  65.0        0.0        0.0     0.0
1  2020-01-23    Afghanistan            NaN  33.0  65.0        0.0        0.0     0.0
2  2020-01-24    Afghanistan            NaN  33.0  65.0        0.0        0.0     0.0
3  2020-01-25    Afghanistan            NaN  33.0  65.0        0.0        0.0     0.0
4  2020-01-26    Afghanistan            NaN  33.0  65.0        0.0        0.0     0.0

Three further subsets of the data are then made; the final one shows the COVID-19 data for a specific country on a particular date (the UK on 7th May in this case).

             Date  Country/Region Province/State      Lat   Long  Confirmed  Recovered   Deaths
26428  2020-05-07  United Kingdom            NaN  55.3781 -3.436   206715.0        0.0  30615.0
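The chain of subsets above all use the same pandas idiom: comparing a column to a value gives a boolean Series, and indexing the DataFrame with it keeps only the matching rows. The sketch below demonstrates that pattern on a small hand-built DataFrame (the French figures are made up purely for illustration):

```python
import pandas as pd

# A small hand-built DataFrame mimicking the dataset's shape.
df1 = pd.DataFrame({
    "Date": ["2020-05-07", "2020-05-07", "2020-05-08"],
    "Country/Region": ["United Kingdom", "France", "United Kingdom"],
    "Province/State": [None, None, None],
    "Confirmed": [206715.0, 100000.0, 211364.0],
    "Deaths": [30615.0, 20000.0, 31241.0],
})

# Each comparison returns a boolean Series; indexing with it
# keeps only the rows where the condition is True.
df2 = df1[df1["Country/Region"] == "United Kingdom"]
df3 = df2[df2["Date"] == "2020-05-07"]
df4 = df3[df3["Province/State"].isnull()]

print(df4)  # a single row: the UK, country-level, on 7th May
```

Each step narrows the previous subset, which is why an empty DataFrame comes back if any of the conditions fails to match.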

Once the dataset has been obtained the information can be printed in a more readable way. Here’s a summary of information for the UK on 9th May.

Date:  2020-05-09
Country:  United Kingdom
Province: No province
Confirmed:  215,260
Recovered:  0
Deaths:  31,587
Population:  66,460,344
Confirmed/100,000: 323.89
Deaths/100,000: 47.53
Percent Deaths/Confirmed: 14.67
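The derived figures in that summary are straightforward arithmetic on the dataset values. A minimal sketch, assuming the population figure is looked up separately (it is not in the DataHub dataset):

```python
# Values from the summary above; population is assumed to come
# from a separate lookup table.
confirmed = 215260
deaths = 31587
population = 66460344

# Rates per 100,000 people, and deaths as a percentage of confirmed cases
confirmed_per_100k = confirmed / population * 100000
deaths_per_100k = deaths / population * 100000
pct_deaths_confirmed = deaths / confirmed * 100

print(f"Confirmed/100,000: {confirmed_per_100k:.2f}")   # 323.89
print(f"Deaths/100,000: {deaths_per_100k:.2f}")         # 47.53
print(f"Percent Deaths/Confirmed: {pct_deaths_confirmed:.2f}")  # 14.67
```

Per-100,000 rates are worth computing because raw counts make countries of very different sizes hard to compare.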

Obviously there are lots of ways of analysing this dataset, as well as of displaying it. Graphs are always a good way of showing information, and Bokeh is a nice, relatively simple-to-use Python library for creating a range of different graphs. Here’s how Bokeh can be used to create a simple line graph of COVID-19 deaths over a period of time.

from datetime import datetime as dt
from bokeh.plotting import figure, output_file, show
from bokeh.models import DatetimeTickFormatter

def graph_covid_rate(df):
    x = []
    y = []
    # Country/Region is the second column of the DataFrame
    country = df.values[0][1]
    for deaths, date in zip(df['Deaths'], df['Date']):
        y.append(deaths)
        date_obj = dt.strptime(date, "%Y-%m-%d")
        x.append(date_obj)

    # output to static HTML file
    output_file("lines.html")

    # create a new plot with a title and axis labels
    p = figure(title="COVID-19 Deaths for "+country, x_axis_label='Date', y_axis_label='Deaths', x_axis_type='datetime')

    # add a line renderer with legend and line thickness
    p.line(x, y, legend_label="COVID-19 Deaths for "+country, line_width=3, line_color="green")
    p.xaxis.major_label_orientation = 3/4

    # show the results
    show(p)

Bokeh creates an HTML file of an interactive graph. Here’s the one the above code creates, again for the UK, for the period 2020-02-01 to 2020-05-09.
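As an aside, the manual strptime loop in graph_covid_rate could be replaced by a single vectorised call: pandas can convert a whole Date column to datetime objects in one go, and the resulting Timestamps work fine with Bokeh’s datetime axis. A sketch, using a few made-up sample rows:

```python
import pandas as pd

# Made-up sample data in the same shape as the COVID DataFrame
df = pd.DataFrame({
    "Date": ["2020-02-01", "2020-02-02", "2020-02-03"],
    "Deaths": [0.0, 1.0, 2.0],
})

# One vectorised conversion instead of a per-row strptime loop
x = pd.to_datetime(df["Date"], format="%Y-%m-%d")
y = df["Deaths"]

print(x.iloc[0])  # a pandas Timestamp for 1st February 2020
```

The loop version works perfectly well; the vectorised form is simply shorter and faster on large datasets.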

As a recently retired software architect (who has now started a new career working for Digital Innovators, a company addressing the digital skills gap), coding is still important to me. I have always believed that “Architects Don’t Code” is an anti-pattern: design and coding are two sides of the same coin, and you cannot design if you cannot code (and you cannot code if you cannot design). These days there really is no excuse not to keep your coding skills up to date, with the vast array of resources available to everyone within just a few clicks and Google searches.

I also see coding not just as a way of keeping my own skills up to date and of teaching others vital digital skills but also, as this article helpfully points out, as a way of helping to solve problems of all kinds. Coding is a skill for life, and it is vitally important that young people entering the workplace have at least a rudimentary understanding of it, not just to help them get a job but also to understand more of the world in these incredibly uncertain times.