Is an AI ‘Parky’ the first step in big tech’s takeover of the entertainment industry?

Composite image created using OpenAI’s DALL-E and Adobe Photoshop

Sir Michael Parkinson, who died in 2023 [1], was a much-loved UK chat show host who worked at the BBC from 1971 to 1982 and again from 1998 to 2004, then for a final three years at ITV until 2007. During that time “Parky” interviewed the great and the good (and sometimes the not so good [2]) from film, television, music, sport, science and industry. I remember Saturday nights during his first stint at the BBC not feeling complete unless we had tuned into Parkinson to see which celebrities he was interviewing that night. I was sad to hear of his passing last year but also grateful I had lived at the time to see many of his interviews and appreciate his gentle but probing interview style.

Last week, however, we learnt that just because you are dead, it does not mean you cannot carry on doing your job. Mike Parkinson, son of Sir Michael, has given permission to Deep Fusion Films to create an exact replica of his late father’s voice so he can virtually host a new eight-part “unscripted series” [3] called Virtually Parkinson. The virtual Parky will be able to interview new guests based on analysis of the real Parkinson’s back catalogue [4].

Deep Fusion Films was founded in 2023 and makes a big play about its ethical credentials. On its website [5] it says it aims to “establish comprehensive policies that promote the legal and ethical integration of AI in production”. Backing this up, its virtual Parky will be created with the full support and involvement of Sir Michael’s family and estate.

So far, so ethical, right and proper, however…

Only last year, concerns over the use (and potential misuse) of AI in the film industry led to a strike by actors and writers. Succession actor Brian Cox said that using AI to replicate an actor’s image in perpetuity is “identity theft” and should be considered “a human rights issue” [6].

Hollywood stars like Scarlett Johansson, Tom Hanks, Tom Cruise and Keanu Reeves have already become the subject of unauthorised deepfakes, and in June of this year the Internet Watch Foundation (IWF) warned that AI-generated videos of child sexual abuse could indicate a ‘stark vision of the future’ [7].

Clearly, where Deep Fusion Films is right now, i.e. producing ethically sourced and approved imitations of celebrities’ voices, and where AI-generated pornography is threatening to take us are poles apart, but…

Technology always creeps into our lives like this. A small, seemingly insignificant event which we find amusing and mildly distracting entertains us for a while, but then suddenly it has become the way of all things and has fallen into the hands of ‘bad actors’. At this point, there is often no going back.

Witness how Facebook started out as an innocuous site called Facemash, created by a second-year student at Harvard University called Mark Zuckerberg, that compared two student photos side by side to determine who was “hot” and who was “not”. Actually, this was always a questionable use case in my opinion, but I guess an indication of what passed for acceptable behaviour in Ivy League universities of the early 2000s!

Today Meta (which now owns Facebook) is the seventh largest company in the world by market capitalisation, worth, at the time of writing, $1.497 trillion [8]. Zuckerberg’s vision for Meta, outlined in a letter to shareholders this August, is that it will become a virtual reality platform that merges the physical and digital worlds, forever transforming how we interact, work, and socialise [9]. Inevitably a major part of this vision is that artificial intelligence (or even, if Zuckerberg gets his way, artificial general intelligence) will be there to “enhance user experiences”.

Facebook, and now Meta, is surely the canonical example of how a small and seemingly insignificant company from the US east coast has grown in a mere 20 years to become a largely unregulated west coast tech behemoth with over three billion active monthly users [10].

If Facebook were just used for sharing pictures of cats and dogs that would be one thing but, during its short history, it has been implicated in spreading fake news, changing voting behaviour in key elections around the world and affecting people’s mental health, as well as spreading violent and misogynistic (and deepfake) videos.

It seems like we never learn. Governments and legal systems around the world never react fast enough to the pace of technological change and are always playing catch-up, having to mop up the tech companies’ misdemeanours after they have occurred rather than regulating them in the first place. Financial penalties are one thing, but these pale into insignificance alongside the gargantuan profits such companies make, and anyway, no amount of fines can undo the negative effects they and their leaders have on people’s lives.

So how does the rise of the tech behemoths like Facebook, Google and X presage what might happen in the creative industries and their use of technology, especially AI?

I don’t know what proportion of a Hollywood movie’s costs goes on actors’ salaries. They are obviously not the only cost, or even the largest, but with actors like Tom Cruise, Keanu Reeves and Will Smith able to command salaries in excess of $100M for a single film [11], they are clearly not insignificant. It must be very tempting for movie producers to think: why not invest a bit more in special effects and just create a whole new actor from scratch? After all, that’s precisely what Walt Disney did with Mickey Mouse, who never got paid a dime.

How long will it be before we cross a red line and a movie’s special effects go the whole way, using CGI to create the characters in a completely AI-scripted and AI-generated film? Huge upfront costs (for now, though these will drop) but no ongoing costs of having to pay actors for re-runs or streaming rights.

I don’t know how long it might take or whether we will ever get there. Maybe the technology will never be good enough (unlikely) or maybe we will wake up to what we are doing and create some sort of legal/ethical framework that prevents such things occurring (equally unlikely I fear).

We are beginning to rub up against some pretty fundamental questions not just about how we should be using AI, especially in the creative industries, but also what it actually means to be human if we let our machines overwhelm us to the extent that our creative selves are usurped by the very things that creativity has built.

This is a hugely important question which I hope to explore in future posts. 

Notes

  1. Sir Michael Parkinson obituary, https://www.theguardian.com/media/2023/aug/17/sir-michael-parkinson-obituary
  2. Michael Parkinson speaks out on Savile scandal, https://www.itv.com/news/calendar/2012-12-01/michael-parkinson-speaks-out-on-savile-scandal
  3. AI-replicated Michael Parkinson to host ‘completely unscripted’ celebrity podcast, https://news.sky.com/story/ai-replicated-michael-parkinson-to-host-completely-unscripted-celebrity-podcast-13243556
  4. Michael Parkinson is back, with an AI voice that can fool even his own family, https://www.theguardian.com/media/2024/oct/26/michael-parkinson-virtually-ai-replica-chatshow
  5. Deep Fusion Films is a dynamic production company at the forefront of television and film, https://www.deepfusionfilms.com/about
  6. Succession star Brian Cox on the use of AI to replicate actors: ‘It’s a human rights issue’, https://news.sky.com/story/succession-star-brian-cox-on-the-use-of-ai-to-replicate-actors-its-a-human-rights-issue-12999168
  7. AI-generated videos of child sexual abuse a ‘stark vision of the future’, https://www.iwf.org.uk/news-media/news/ai-generated-videos-of-child-sexual-abuse-a-stark-vision-of-the-future/
  8. Largest Companies by Marketcap, https://companiesmarketcap.com
  9. Mark Zuckerberg’s Letter: Meta’s Vision Unveiled, https://medium.com/@ahmedofficial588/mark-zuckerbergs-letter-meta-s-vision-unveiled-2b48a57a2743
  10. Facebook User & Growth Statistics, https://backlinko.com/facebook-users
  11. 20 Highest Paid Actors For a Single Film, https://thecinemaholic.com/highest-paid-actors-for-a-single-film/

Enchanting Minds and Machines – Ada Lovelace, Mary Shelley and the Birth of Computing and Artificial Intelligence

Today (10th October 2023) is Ada Lovelace Day. In this blog post I discuss why Ada Lovelace (and indeed Mary Shelley, who was indirectly connected to Ada) is as relevant today as she was in her own time.

Villa Diodati, Switzerland

In the summer of 1816 [1], five young people holidaying at the Villa Diodati near Lake Geneva in Switzerland found their vacation rudely interrupted by a torrential downpour which trapped them indoors. Faced with the monotony of confinement, one member of the group proposed an ingenious idea to break the boredom: each of them should write a supernatural tale to captivate the others.

Among these five were some notable figures of their time: Lord Byron, the celebrated English poet, and his friend and fellow poet Percy Shelley. Alongside them were Shelley’s wife, Mary; her stepsister Claire Clairmont, who happened to be Byron’s mistress; and Byron’s physician, Dr. Polidori.

Lord Byron, burdened by the legal disputes surrounding his separation and the financial arrangements for his newborn daughter, Ada, found it impossible to fully engage in the challenge (despite having suggested it). However, both Dr. Polidori and Mary Shelley embraced the task with fervour, creating stories that not only survived the holiday but continue to thrive today. Polidori’s tale would later appear as The Vampyre: A Tale, serving as the precursor to many of the modern vampire movies and TV programmes we know today. Mary Shelley’s story, which had come to her in a haunting nightmare that very night, gave birth to the core concept of Frankenstein, published in 1818 as Frankenstein; or, The Modern Prometheus. As Jeanette Winterson asserts in her book 12 Bytes [2], Frankenstein is not just a story about “the world’s most famous monster; it’s a message in a bottle.” We’ll see later why this message resonates even more today.

First though, we must shift our focus to another side of Lord Byron’s tumultuous life and his separation settlement with his wife, Annabella Milbanke. In this settlement, Byron expressed his desire to shield his daughter from the allure of poetry, an inclination that suited Annabella perfectly, as one poet in the family was more than sufficient for her. Instead, young Ada received a mathematics tutor, whose duty extended beyond teaching mathematics to eradicating any poetic inclinations Ada might have inherited. Could this be an early instance of the enforced segregation between the arts and STEM disciplines, I wonder?

Ada excelled in mathematics, and her exceptional abilities, combined with her family connections, earned her an invitation, at the age of 17, to a London soirée hosted by Charles Babbage, the Lucasian Professor of Mathematics at Cambridge. Within Babbage’s drawing room, Ada encountered a model of his “Difference Engine,” a contraption that so enraptured her, she spent the evening engrossed in conversation with Babbage about its intricacies. Babbage, in turn, was elated to have found someone who shared his enthusiasm for his machine and generously shared his plans with Ada. He later extended an invitation for her to collaborate with him on the successor to the machine, known as the “Analytical Engine”.

A Model of Charles Babbage’s Analytical Engine

This visionary contraption boasted the radical notion of programmability, utilising punched cards like those employed in the weaving machines of that era. In 1842, Ada Lovelace (as she had become by then) was tasked with translating into English a paper written in French by the Italian engineer Luigi Menabrea, based on one of Babbage’s lectures. However, Ada went above and beyond mere translation, infusing the document with her own groundbreaking notes about Babbage’s computing machine. These contributions proved to be more extensive and profound than the original paper itself, solidifying Ada Lovelace’s place in history as a pioneer in the realm of computer science and mathematics.

In one of these notes she wrote an ‘algorithm’ for the Analytical Engine to compute Bernoulli numbers, often described as the first published algorithm (AKA computer program) ever! Although Babbage’s engine was too far ahead of its time and could not be built with the technology of the day, Ada is still credited as being the world’s first computer programmer. But there is another twist to this story that brings us closer to the present day.
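Ada’s note laid the computation out as a table of operations for the Engine rather than anything resembling modern source code. As a rough present-day illustration (this is the standard recurrence for Bernoulli numbers, not a transcription of her actual program), the same numbers she targeted can be generated in a few lines of Python:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the nth Bernoulli number exactly, via the standard
    recurrence: sum over k < m of C(m+1, k) * B_k = -(m+1) * B_m."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B[n]

print(bernoulli(1))  # -1/2
print(bernoulli(2))  # 1/6
print(bernoulli(8))  # -1/30
```

Each number depends on all the earlier ones, which is exactly why the calculation suited a programmable machine rather than a single hand-worked formula.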

Fast forward to the University of Manchester, 1950. Alan Turing, the now-feted but ultimately doomed mathematician who led the team that cracked intercepted coded messages sent by the German navy in WWII, has just published a paper called Computing Machinery and Intelligence [3]. This was one of the first papers ever written on artificial intelligence (AI), and it opens with the bold premise: “I propose to consider the question, ‘Can machines think?’”

Alan Turing

Turing did indeed believe computers would one day (he thought in about 50 years’ time, in the year 2000) be able to think, and devised his famous “Turing Test” as a way of verifying his proposition. In his paper Turing also felt the need to “refute” arguments he thought might be made against his bold claim, including one made by none other than Ada Lovelace over one hundred years earlier. In the same notes where she wrote the world’s first computer algorithm, Lovelace also said:

“It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis, but it has no power of anticipating any analytical relations or truths”.

Although Lovelace was optimistic about the power of the Analytical Engine, should it ever be built, creative thinking was not something she believed it would be capable of.

Turing disputed Lovelace’s view on the grounds that she could have had no idea of the enormous speed and storage capacity of modern (remember, this was 1950) computers, which made them a match for the human brain and thus, like the brain, capable of processing their stored information to arrive at sometimes “surprising” conclusions. To quote Turing directly from his paper:

“It is a line of argument we must consider closed, but it is perhaps worth remarking that the appreciation of something as surprising requires as much of a ‘creative mental act’ whether the surprising event originates from a man, a book, a machine or anything else.”

Which brings us bang up to date with the current arguments raging about whether systems like ChatGPT, DALL-E or Midjourney are creative, or even sentient in some way. Has Turing’s prophecy finally been fulfilled, or was Ada Lovelace right all along: computers can never be truly creative, because creativity requires not just a reconfiguration of what someone else has made but original thought based on actual human experience?

One undeniable truth prevails in this narrative: Ada was good at working with what she didn’t have. Not only was Babbage unable to build his machine, meaning Lovelace never had one to experiment with; she also lacked male privilege and a formal education, a scarce commodity for women and a stark reminder of the limitations imposed on her gender during that time.

Have things moved on today for women and young girls? A glimpse into the typical composition of a computer science classroom, be it at the secondary or tertiary level, prompts the question: have we truly evolved beyond the constraints of the past? And if not, why does this gender imbalance persist?

Over the past five or more years, many studies and reports have been published on the problem of too few women entering STEM careers, and we seem to be gradually homing in not just on what the core issues are, but also on how to address them. What seems to be lacking is the will, or the funding (or both), to make it happen.

So, what to do? First, some facts:

  1. Girls lose interest in STEM as they get older. A report from Microsoft back in 2018 found that confidence in coding wanes as girls get older, highlighting the need to connect STEM subjects to real-world people and problems by tapping into girls’ desire to be creative [4].
  2. Girls and young women do not associate STEM jobs with being creative. Most girls and young women describe themselves as being creative and want to pursue a career that helps the world. They do not associate STEM jobs with doing either of these things [4].
  3. Female students rarely consider a career in technology as their first choice. Only 27% of female students say they would consider a career in technology, compared to 61% of males, and only 3% say it is their first choice [5].
  4. Most students (male and female) can’t name a famous female working in technology. A lack of female role models is also reinforcing the perception that a technology career isn’t for them. Only 22% of students can name a famous female working in technology, whereas two-thirds can name a famous man [5].
  5. Female pupils feel STEM subjects, though highly paid, are not ‘for them’. Female Key Stage 4 pupils perceived that studying STEM subjects was potentially a more lucrative choice in terms of employment. However, when compared to male pupils, they enjoyed other subjects (e.g., arts and English) more [6].

The solutions to these issues are now well understood:

  1. Increasing the number of STEM mentors and role models – including parents – to help build young girls’ confidence that they can succeed in STEM. Girls who are encouraged by their parents are twice as likely to stay in STEM, and in some areas, like computer science, dads can have a greater influence on their daughters than mums, yet they are less likely than mums to talk to their daughters about STEM.
  2. Creating inclusive classrooms and workplaces that value female opinions. It’s important to celebrate the stories of women who are in STEM right now, today.
  3. Providing teachers with a more engaging and relatable STEM curriculum, such as 3D and hands-on projects, the kinds of activities that have been proven to help keep girls interested in STEM over the long haul.
  4. Multiple interventions, starting early and carrying on throughout school, are important ways of ensuring girls stay connected to STEM subjects. Interventions are ideally done by external people working in STEM who can repeatedly reinforce key messages about the benefits of working in this area. These people should also be able to explain the importance of creativity and how working in STEM can change the world for the better [7].
  5. Schoolchildren (all genders) should be taught to understand how thinking works, from neuroscience to cultural conditioning; how to observe and interrogate their thought processes; and how and why they might become vulnerable to disinformation and exploitation. Self-awareness could turn out to be the most important topic of all [8].

Before we finish, let’s return to that “message in a bottle” that Mary Shelley sent out to the world over two hundred years ago. As Jeanette Winterson points out:

“Mary Shelley may be closer to the world that is to become than either Ada Lovelace or Alan Turing. A new kind of life form may not need to be human-like at all and that’s something that is achingly, heartbreakingly, clear in ‘Frankenstein’. The monster was originally designed to be like us. He isn’t and can’t be. Is that the message we need to hear?” [2].

If we are to heed Shelley’s message from the past, the rapidly evolving nature of AI means we need people from as diverse a set of backgrounds as possible. These should include people who can bring constructive criticism to the way technology is developed and who have a deeper understanding of what people really need rather than what they think they want from their tech. Women must become essential players in this. Not just in developing, but also guiding and critiquing the adoption and use of this technology. As Mustafa Suleyman (co-founder of DeepMind) says in his book The Coming Wave [10]:

Credible critics must be practitioners. Building the right technology, having the practical means to change its course, not just observing and commenting, but actively showing the way, making the change, effecting the necessary actions at source, means critics need to be involved.

As we move away from the mathematical nature of computing and programming to one driven by so-called descriptive programming [9], it is going to be important that we include those who are not technical but are creative, empathetic to people’s needs and perhaps even understand the limits we should place on technology. The four C’s (creativity, critical thinking, collaboration and communication) are skills we all need to be adopting, and ones which women in particular seem to excel at.

On this, Ada Lovelace Day 2023, we should not just celebrate Ada’s achievements all those years ago but also recognise how Ada ignored and fought back against the prejudices and severe restrictions on education that women like her faced. Ada pushed ahead regardless and became a true pioneer and founder of a whole industry that did not really get going until over 100 years after her pioneering work. Ada, the world’s first computer programmer, should be the role model par excellence that all girls and young women look to for inspiration, not just today but for years to come.

References

  1. Mary Shelley, Frankenstein and the Villa Diodati, https://www.bl.uk/romantics-and-victorians/articles/mary-shelley-frankenstein-and-the-villa-diodati
  2. 12 Bytes – How artificial intelligence will change the way we live and love, Jeanette Winterson, Vintage, 2022.
  3. Computing Machinery and Intelligence, A. M. Turing, Mind, Vol. 59, No. 236. (October 1950), https://www.cs.mcgill.ca/~dprecup/courses/AI/Materials/turing1950.pdf
  4. Why do girls lose interest in STEM? New research has some answers — and what we can do about it, Microsoft, 13th March 2018, https://news.microsoft.com/features/why-do-girls-lose-interest-in-stem-new-research-has-some-answers-and-what-we-can-do-about-it/
  5. Women in Tech- Time to close the gender gap, PwC, https://www.pwc.co.uk/who-we-are/her-tech-talent/time-to-close-the-gender-gap.html
  6. Attitudes towards STEM subjects by gender at KS4, Department for Education, February 2019, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/913311/Attitudes_towards_STEM_subjects_by_gender_at_KS4.pdf
  7. Applying Behavioural Insights to increase female students’ uptake of STEM subjects at A Level, Department for Education, November 2020, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/938848/Applying_Behavioural_Insights_to_increase_female_students__uptake_of_STEM_subjects_at_A_Level.pdf
  8. How we can teach children so they survive AI – and cope with whatever comes next, George Monbiot, The Guardian, 8th July 2023, https://www.theguardian.com/commentisfree/2023/jul/08/teach-children-survive-ai
  9. Prompt Engineering, Microsoft, 23rd May 2023, https://learn.microsoft.com/en-us/semantic-kernel/prompt-engineering/
  10. The Coming Wave, Mustafa Suleyman, The Bodley Head, 2023.

From Turing to Watson (via Minsky)

This week (Monday 25th) I gave a lecture about IBM’s Watson technology platform to a group of first year students at Warwick Business School. My plan was to write up the transcript of that lecture, with links for references and further study, as a blog post. The following day, when I opened up my computer to start writing the post, I saw that, by a sad coincidence, Marvin Minsky, the American cognitive scientist and co-founder of the Massachusetts Institute of Technology’s AI laboratory, had died only the day before my lecture. Here is that blog post, now updated with some references to Minsky and his pioneering work on machine intelligence.

Marvin Minsky in a lab at MIT in 1968 (c) MIT

First though, let’s start with Alan Turing, sometimes referred to as “the founder of computer science”, who led the team that developed a programmable machine to break the Nazis’ Enigma code, which was used to encrypt messages sent between units on the battlefield during World War 2. The work of Turing and his team was recently brought to life in the film The Imitation Game, starring Benedict Cumberbatch as Turing and Keira Knightley as Joan Clarke, the only female member of the code-breaking team.

Alan Turing

Sadly, instead of being hailed as a hero, Turing was persecuted for his homosexuality and committed suicide in 1954, having undergone a course of hormonal treatment to reduce his libido rather than serve a term in prison. It seems utterly barbaric and unforgivable that such an action could have been brought against someone who did so much to affect the outcome of WWII. It took nearly 60 years for his conviction to be overturned, when on 24 December 2013 Queen Elizabeth II signed a pardon for Turing, with immediate effect.

In 1949 Turing became Deputy Director of the Computing Laboratory at Manchester University, working on software for one of the earliest computers. During this time he worked in the emerging field of artificial intelligence and proposed an experiment which became known as the Turing test, having observed that “a computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”

The idea of the test was that a computer could be said to “think” if a human interrogator could not tell it apart, through conversation, from a human being.

Turing’s test was supposedly ‘passed’ in June 2014 when a computer called Eugene fooled several of its interrogators into believing it was a 13-year-old boy. There has been much discussion since as to whether this was a valid run of the test, and whether the so-called “supercomputer” was nothing but a chatbot or a script made to mimic human conversation. In other words, Eugene could in no way be considered intelligent. Certainly not in the sense that Professor Marvin Minsky would have defined intelligence, at any rate.
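To see why a script of this kind falls short of anything Minsky would call intelligence, consider how little machinery such a chatbot needs. The sketch below is purely illustrative (the rules and replies are invented, not Eugene’s actual code): it matches keywords in the input and emits canned responses, with an evasive fallback when nothing matches.

```python
import re

# Canned rules of the kind early chatbots relied on: a pattern
# to look for in the user's input, and a stock reply to emit.
RULES = [
    (re.compile(r"\bhow old\b", re.I), "I am 13 years old."),
    (re.compile(r"\bwhere\b", re.I), "I live in a very nice city."),
    (re.compile(r"\byou\b", re.I), "Let's talk about you, not me!"),
]

def reply(user_input: str) -> str:
    """Return the first matching canned reply, else deflect."""
    for pattern, canned in RULES:
        if pattern.search(user_input):
            return canned
    return "That is interesting. Tell me more!"  # evasive fallback

print(reply("How old are you?"))  # I am 13 years old.
```

There is no understanding anywhere in this loop, only string matching; the appearance of a conversation is entirely in the mind of the interrogator.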

In the early 1970s Minsky, working with the computer scientist and educator Seymour Papert, developed a theory known as the “Society of Mind” (later the title of Minsky’s 1986 book), which combined both of their insights from the fields of child psychology and artificial intelligence.

Minsky and Papert believed that there was no real difference between humans and machines. Humans, they maintained, are actually machines of a kind whose brains are made up of many semiautonomous but unintelligent “agents.” Their theory revolutionized thinking about how the brain works and how people learn.

Despite the widespread availability of apparently intelligent programs like Apple’s Siri, Minsky maintained that there had been “very little growth in artificial intelligence” in the past decade, saying that current work had been “mostly attempting to improve systems that aren’t very good and haven’t improved much in two decades”.

Minsky also thought that large technology companies should not get involved in the field of AI, saying: “we have to get rid of the big companies and go back to giving support to individuals who have new ideas because attempting to commercialise existing things hasn’t worked very well”.

Whilst much of the early research into AI certainly came out of organisations like Minsky’s AI lab at MIT, it seems slightly disingenuous to believe that the commercialisation of AI, as carried out by companies like Google, Facebook and IBM, is not going to generate new ideas. The drive for commercialisation (and profit), just like war in Turing’s time, is after all one of the ways, at least in the capitalist world, that innovation is created.

Which brings me nicely to Watson.

IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. It is named after Thomas J. Watson, the first CEO of IBM, who led the company from 1914 to 1956.

Thomas J. Watson

IBM Watson was originally built to compete on the US television program Jeopardy! On 14th February 2011 IBM entered Watson into a special three-day version of the program, where the computer was pitted against two of the show’s all-time champions. Watson won by a significant margin. So what is the significance of a machine winning a game show, and why is this a “game changing” event in more than the literal sense of the term?

Today we’re in the midst of an information revolution. Not only is the volume of data and information we’re producing dramatically outpacing our ability to make use of it, but the sources and types of data that inform the work we do and the decisions we make are broader and more diverse than ever before. Although businesses are implementing more and more data-driven projects using advanced analytics tools, they’re still only reaching 12% of the data they have, leaving the other 88% to go to waste. That’s because this 88% of data is “invisible” to computers. It’s the type of data encoded in language and unstructured information: text in books, emails, journals, blogs, articles and tweets, as well as images, sound and video. If we are to avoid such “data waste” we need better ways to make use of that data and generate “new knowledge” around it. We need, in other words, to be able to discover new connections, patterns and insights in order to draw new conclusions and make decisions with more confidence and speed than ever before.

For several decades we’ve been digitizing the world; building networks to connect the world around us. Today those networks connect not just traditional structured data sources but also unstructured data from social networks and increasingly Internet of Things (IoT) data from sensors and other intelligent devices.

From Data to Knowledge

These additional sources of data mean that we’ve reached an inflection point at which the sheer volume of information generated is so vast that we no longer have the ability to use it productively. The purpose of cognitive systems like IBM Watson is to process the vast amounts of information stored in both structured and unstructured formats and help turn it into useful knowledge.

There are three capabilities that differentiate cognitive systems from traditional programmed computing systems.

  • Understanding: Cognitive systems understand like humans do, whether through natural language or the written word, vocal or visual.
  • Reasoning: They can not only understand information but also the underlying ideas and concepts. This reasoning ability can become more advanced over time. It’s the difference between the reasoning strategies we used as children to solve mathematical problems, and then the strategies we developed when we got into advanced math like geometry, algebra and calculus.
  • Learning: They never stop learning. As a technology, this means the system actually gets more valuable with time. They develop “expertise”. Think about what it means to be an expert – it’s not about executing a mathematical model. We don’t consider our doctors to be experts in their fields because they answer every question correctly. We expect them to be able to reason, be transparent about their reasoning, and expose the rationale for why they came to a conclusion.

The idea of cognitive systems like IBM Watson is not to pit man against machine but rather to have both reasoning together. Humans and machines have unique characteristics, and we should not be looking for one to supplant the other but for them to complement each other. Working together with systems like IBM Watson, we can achieve the kinds of outcomes that would never have been possible otherwise.

IBM is making the capabilities of Watson available as a set of cognitive building blocks delivered as APIs on its cloud-based, open platform Bluemix. This means you can build cognition into your digital applications, products, and operations, using any one or combination of a number of available APIs. Each API is capable of performing a different task, and in combination, they can be adapted to solve any number of business problems or create deeply engaging experiences.

So what Watson APIs are available? Currently there are around forty, which you can find here together with documentation and demos. Four examples of the Watson APIs you will find at this link are:

  • Dialog: Use natural language to automatically respond to user questions.
  • Visual Recognition: Analyses the contents of an image or video and classifies it by category.
  • Text to Speech: Synthesises speech audio from an input of plain text.
  • Personality Insights: Understands someone’s personality from what they have written.
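The building-block idea is that each of these capabilities sits behind a simple REST endpoint you call from your own application. As a purely illustrative sketch (the URL, credentials and field names below are placeholders, not the documented Watson values), a request to a text-to-speech style service might be assembled like this:

```python
import base64
import json

def build_synthesize_request(text, username, password,
                             url="https://example.com/text-to-speech/api/v1/synthesize"):
    """Prepare an HTTP POST for a hypothetical text-to-speech endpoint:
    a JSON body carrying the text, a basic-auth header, and an Accept
    header asking for WAV audio back."""
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {
        "Content-Type": "application/json",
        "Accept": "audio/wav",
        "Authorization": f"Basic {credentials}",
    }
    body = json.dumps({"text": text})
    return url, headers, body

url, headers, body = build_synthesize_request("Hello, world", "user", "pass")
print(json.loads(body)["text"])  # Hello, world
```

In a real application you would then POST this request and write the returned audio bytes to a file; combining several such calls (speech, vision, dialog) is how the building blocks compose into a cognitive application.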
It’s never been easier to get started with AI by using these cognitive building blocks. I wonder what Turing would have made of this technology, and how soon someone will be able to piece together current and future cognitive building blocks to truly pass his famous test?