Screenshot from Apple’s “1984” ad directed by Sir Ridley Scott
Forty years ago today (24th January 1984) a young Steve Jobs took to the stage at the Flint Center in Cupertino, California, to introduce the Apple Macintosh desktop computer, and the world found out “why 1984 won’t be like ‘1984’”.
The Apple Macintosh, or ‘Mac’, boasted cutting-edge specifications for its day. It had an impressive 9-inch monochrome display with a resolution of 512 x 342 pixels, a 3.5-inch floppy disk drive, and 128 KB of RAM. The 16/32-bit Motorola 68000 microprocessor powered this compact yet capable machine, setting new standards for graphical user interfaces and ease of use.
The original Apple Macintosh
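A quick back-of-the-envelope calculation (my own arithmetic, not a figure from Apple) shows just how tight that 128 KB really was: the one-bit-per-pixel screen buffer alone consumed roughly a sixth of the machine’s memory.

```python
# Rough arithmetic (my own, illustrative only): how much of the original
# Mac's 128 KB of RAM a 512 x 342, one-bit-per-pixel screen buffer needs.

width, height = 512, 342          # pixels
bits_per_pixel = 1                # monochrome display
total_ram_bytes = 128 * 1024      # 128 KB of RAM

framebuffer_bytes = width * height * bits_per_pixel // 8
print(framebuffer_bytes)                               # 21888 bytes (about 21.4 KB)
print(f"{framebuffer_bytes / total_ram_bytes:.0%}")    # roughly 17% of all RAM
```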
The Mac had been gestating in Steve Jobs’ restless and creative mind for at least five years, but its difficult birth did not begin until 1981, when Jobs took over a project started by Jef Raskin and built a team of talented individuals including the likes of Andy Hertzfeld and Bill Atkinson. The collaboration of these creative minds led to a computer that not only revolutionized the industry but also left an indelible mark on the way people interact with technology.
The Mac was one of the first personal computers to feature a graphical user interface (Microsoft Windows 1.0 was not released until November 1985), using icons, windows, and a mouse for navigation instead of a command-line interface. This approach significantly influenced the development of GUIs across other operating systems.
Possibly of more significance is that the lessons learned from the Mac have influenced, and continue to influence, the development of subsequent Apple products. Steve Jobs’ (and later Jony Ive’s) commitment to simplicity and elegance in design became a guiding principle for products like the iPod, iPhone, iPad, and MacBook, and is what really defines the Apple ecosystem (as well as allowing Apple to charge the prices it does).
One of the pivotal moments in the Mac’s development was the now famous “1984” ad, which had its one and only public airing two days earlier during a Super Bowl XVIII commercial break and built huge anticipation for the groundbreaking product.
I was a relatively late convert to the cult of Apple, not buying my first Apple computer (a MacBook Pro) until 2006. I still have that machine and periodically start it up for old times’ sake. It still works perfectly, albeit very slowly and running a now very old version of macOS.
A more significant event, for me at least, was that a year after the Mac launch I moved to Cupertino to take a job as a software engineer at a company called ROLM, a telecoms provider that had just been bought by IBM and was looking to move into Europe. ROLM was on a recruiting drive to hire engineers from Europe who knew how to develop products for that marketplace, and I had been lucky enough to have the right skills (digital signalling systems) at the right time.
At the time of my move I had some awareness of Apple, but I got to know the company better as I ended up living only a few blocks from its HQ on Mariani Avenue, Cupertino (I lived just off Stevens Creek Boulevard, which was chock-full of car dealerships at the time).
The other slight irony is that IBM (ROLM’s owner) was of course “Big Brother” in Apple’s ad, and the young woman with the sledgehammer was out to break its then near-monopoly on personal computers. IBM no longer makes personal computers, whilst Apple has obviously gone from strength to strength.
It’s hard to believe that this year marks the 30th anniversary of Tim Berners-Lee’s great invention, the World Wide Web, and that much of the technology that enabled his creation is still less than 60 years old. Here’s a brief history of the Internet and the Web, and how we got to where we are today, in ten significant events.
#1: 1963 – Ted Nelson begins developing a model for creating and using linked content he calls hypertext and hypermedia. Hypertext is born.
#2: 1969 – The first message is sent over the ARPANET from computer science Professor Leonard Kleinrock’s laboratory at University of California, Los Angeles to the second network node at Stanford Research Institute. The Internet is born.
#3: 1969 – Charles Goldfarb, leading a small team at IBM, develops the first markup language, called Generalized Markup Language, or GML. Markup languages are born.
#4: 1989 – Tim Berners-Lee, whilst working at CERN, publishes his paper Information Management: A Proposal. The World Wide Web (WWW) is born.
#5: 1993 – Mosaic, a graphical browser aiming to bring multimedia content to non-technical users (images and text on the same page), is created by Marc Andreessen and Eric Bina. The web browser is born.
#6: 1995 – Jeff Bezos launches Amazon, “earth’s biggest bookstore”, from a garage near Seattle. E-commerce is born.
#7: 1998 – Google is officially incorporated by Larry Page and Sergey Brin to market Google Search. Web search is born.
#8: 2003 – Facebook (initially called FaceMash and renamed TheFacebook a year later) is founded by Mark Zuckerberg with his fellow Harvard University student Eduardo Saverin. Social media is born.
#9: 2007 – Steve Jobs launches the iPhone at the Macworld Expo in San Francisco. Mobile computing is born.
#10: 2018 – Tim Berners-Lee instigates act II of the web when he announces a new initiative called Solid, to reclaim the Web from corporations and return it to its democratic roots. The web is reborn?
I know there have been countless events that have enabled the development of our modern Information Age, and you will no doubt think others should be included in preference to some of my suggestions. Also, I suspect that many people will not have heard of my last choice (unless you are a fairly hardcore computer type). The reason I have included it is that I think, and hope, it will start to address one of the existential threats of our age: how we survive in a world awash with data (our data) that is being mined and used without us knowing, much less understanding, the impact of such usage. Rather than living in an open society in which ideas and data are freely exchanged and used to everyone’s benefit, we instead find ourselves in an age of surveillance capitalism which, according to this source, is defined as being:
…the manifestation of George Orwell’s prophesied Memory Hole combined with the constant surveillance, storage and analysis of our thoughts and actions, with such minute precision, and artificial intelligence algorithmic analysis, that our future thoughts and actions can be predicted, and manipulated, for the concentration of power and wealth of the very few.
In her book The Age of Surveillance Capitalism, Shoshana Zuboff provides a sweeping (and worrying) overview and history of the techniques that the large tech companies are using to spy on us, in ways that even George Orwell would have found alarming, not least because we have voluntarily given up all of this data about ourselves in exchange for what are sometimes the flimsiest of benefits. As Zuboff says:
Thanks to surveillance capitalism the resources for effective life that we seek in the digital realm now come encumbered with a new breed of menace. Under this new regime, the precise moment at which our needs are met is also the precise moment at which our lives are plundered for behavioural data, and all for the sake of others’ gain.
Tim Berners-Lee invented the World Wide Web and then gave it away so that all might benefit. Sadly some have benefited more than others, not just financially but also by coming to know more about us than most of us would ever wish. I hope, for all our sakes, that the work Berners-Lee and his small group of supporters are doing makes enough progress to reverse the worst excesses of surveillance capitalism before it is too late.
Ten years ago this week (on 9th January 2007) the late Steve Jobs, then at the height of his powers at Apple, introduced the iPhone to an unsuspecting world. The history of that little device (which has got both smaller and bigger in the intervening ten years) is writ large over the entire Internet so I’m not going to repeat it here. However, it’s worth looking at the above video on YouTube, not just to remind yourself what a monumental moment in tech history this was, even though few of us realised it at the time, but also to see a masterclass in how to launch a new product.
Within two minutes of Jobs walking on stage he has the audience shouting and cheering as if he’s a rock star rather than a CEO. At around 16:25, when he has unveiled his new baby and shows for the first time how to scroll through a list on a screen (hard to believe that ten years ago no one knew this was possible), they are practically eating out of his hand, and he still has over an hour to go!
This iPhone keynote, probably one of the most important in the whole of tech history, is a case study in how to deliver a great presentation. Indeed, Nancy Duarte, in her book Resonate, uses it as one of her case studies for how to “present visual stories that transform audiences”. In the book she analyses the whole event to show how Jobs uses all of the classic techniques of storytelling: establish what is and what could be, build suspense, keep your audience engaged, make them marvel, and finally show them a new bliss.
The iPhone product launch, though hugely important, is not what this post is really about. Rather, it’s about how, ten years later, the iPhone has kept pace with innovations in technology to not only remain relevant (and much copied) but also continue to influence (for better and worse) the way people interact, communicate and indeed live. A number of ideas and technologies, some introduced at launch and some since, have made this possible. What are they, what can we learn from the example set by Apple, and how can we improve on them?
Open systems generally beat closed systems
At launch the iPhone shipped with a small set of native apps created by Apple, and third-party developers were not able to write their own. According to Jobs, it was an issue of security. “You don’t want your phone to be an open platform,” he said. “You don’t want it to not work because one of the apps you loaded that morning screwed it up. Cingular doesn’t want to see their West Coast network go down because of some app. This thing is more like an iPod than it is a computer in that sense.”
Jobs soon went back on that decision, which is one of the factors that has led to the overwhelming success of the device. There are now 2.2 million apps available for download in the App Store, with over 140 billion downloads made since it opened in 2008.
Claiming your system is open does not mean developers will flock to extend it; that will only happen if doing so is both easy and potentially profitable. Further, the second of these is unlikely unless the first is in place.
Today, with new systems being built around cognitive computing, the Internet of Things (IoT) and blockchain, companies both large and small are vying with each other to provide easy-to-use but secure ecosystems that allow these new technologies to flourish and grow, hopefully to the benefit of business and society as a whole. There will be casualties along the way, but this competition, and the recognition that systems need to be built right rather than simply being the right system for the moment, is what matters.
Open systems must not mean insecure systems
One of the reasons Jobs gave for not initially making the iPhone an open platform was his concern over security and the potential for hackers to break into those systems and wreak havoc. These concerns have not gone away; they have become even more prominent. IoT and artificial intelligence, when embedded in everyday objects like cars and kitchen appliances, as well as in our logistics and defence systems, have the potential to cause their own unique and disastrous kind of damage.
The average cost of a single data breach alone is estimated at $3.8 to $4 million, and that is without even considering the wider reputational loss companies face. Organisations need to monitor how security threats are evolving year to year and get well-informed insights about the impact they can have on their business and reputation.
Ethics matter too
With all the recent press coverage of how fake news may have affected the US election and may impact the upcoming German and French elections, as well as the implications of driverless cars making life-and-death decisions for us, the ethics of cognitive computing is becoming an increasingly serious topic for public discussion, as well as for potential government intervention.
In October last year the White House released a report called Preparing for the Future of Artificial Intelligence. The report looked at the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy, and made a number of recommendations for further action. These included:
Prioritising open training data and open data standards in AI.
Industry should work with government to keep it updated on the general progress of AI, including the likelihood of milestones being reached.
The Federal government should prioritise basic and long-term AI research.
Partly in response to the White House report, this week a group of private investors, including LinkedIn co-founder Reid Hoffman and eBay founder Pierre Omidyar, launched a $27 million research fund called the Ethics and Governance of Artificial Intelligence Fund. The group’s purpose is to foster the development of artificial intelligence for social good by approaching technological developments with input from a diverse set of viewpoints, such as policymakers, faith leaders, and economists.
I have discussed before how transformative technologies like the World Wide Web have impacted all of our lives, and not always for the good. I hope that initiatives like that of the US government (which will hopefully continue under the new leadership) will enable a good and rational public discourse on how we allow these new systems to shape our lives for the next ten years and beyond.
As software architects we often get wrapped up in ‘the moment’ and are so focused on the immediate project deliverables and achieving the next milestone or sale that we rarely step back to consider the bigger picture and the wider ethical implications of what we are doing. I doubt many of us stop to ask whether the application or system we are contributing to is one we should be involved in, or indeed one that should be built at all.
To be clear, I’m not just talking here about software systems for the defence industry such as guided missiles, fighter planes or warships, which clearly have one very definite purpose. I’m assuming that people who work on such systems have thought, at least at some point in their lives, about the implications of what they are doing and have justified it to themselves. Most times this will be something along the lines of these systems being used for defence: if we don’t have them, the bad guys will surely come and get us. After all, the doctrine of mutual assured destruction (MAD) fuelled the Cold War in this way for the best part of fifty years.
Instead, I’m talking about systems which, whilst on the face of it perfectly innocuous, over time grow into behemoths far bigger than was ever intended and evolve into something completely different from their original purpose.
Obviously the biggest system we are all dealing with, and the one which has had a profound effect on all of our lives, whether we work to develop it or just use it, is the World Wide Web.
The Web is now in its third decade, so it is well clear of those tumultuous teenage years of trying to figure out its purpose in life and should be entering a period of growing maturity and understanding of where it fits in the world. It should be pretty much ‘grown up’, in fact. The problem with growing up, however, is that in your early years at least you are greatly influenced, for better or worse, by your parents.
As the Web’s ‘parent’, Tim Berners-Lee, put it:
“I articulated the vision, wrote the first Web programs, and came up with the now pervasive acronyms URL, HTTP, HTML, and, of course, World Wide Web. But many other people, most of them unknown, contributed essential ingredients, in much the same, almost random fashion. A group of individuals holding a common dream and working together at a distance brought about a great change.”
One of those “unknown” people (at least outside the field of information technology) was Ted Nelson. Nelson coined the term hypertext in his 1965 paper Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate and founded Project Xanadu (in 1960), in which all the world’s information could be published in hypertext, and all quotes, references and so on would be linked to more information and to the original source of that information. Most crucially for Nelson, because every quotation had a link back to its source, the original author of that quotation could be compensated in some small way (i.e. using what we now term micro-payments). Berners-Lee borrowed Nelson’s vision of hypertext, which is what allows all the links you see in this post to work, but with one important omission.
Nelson himself has stated that some aspects of Project Xanadu are being fulfilled by the Web, but sees it as a gross over-simplification of his original vision:
“HTML is precisely what we were trying to PREVENT— ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management.”
The last of these omissions (i.e. no rights management) is possibly one of the greatest oversights in the otherwise beautiful idea of the Web. Why?
Jaron Lanier, the computer scientist, composer and author, explains the difference between the Web and what Nelson proposed in Project Xanadu in his book Who Owns the Future as follows:
“A core technical difference between a Nelsonian network and what we have become familiar with online is that [Nelson’s] network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. … Two-way linking would preserve context. It’s a small simple change in how online information should be stored that couldn’t have vaster implications for culture and the economy.”
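To make the distinction concrete, here is a minimal, hypothetical sketch (my own illustration, not code from Xanadu, the Web, or Lanier’s book) of the difference between the Web’s one-way links and the two-way links Nelson envisaged, where every node also records who links to it:

```python
# A minimal, hypothetical sketch of one-way vs two-way links.
# This is my own illustration of the idea, not code from Xanadu,
# the Web, or Lanier's book.

class Node:
    """A piece of content in a linked network of documents."""

    def __init__(self, name: str):
        self.name = name
        self.outbound: set[str] = set()  # links this node makes (all the Web records)
        self.inbound: set[str] = set()   # links made *to* this node (what a Nelsonian network adds)


def link(source: Node, target: Node) -> None:
    """Create a two-way link: the target also learns who referenced it."""
    source.outbound.add(target.name)
    target.inbound.add(source.name)


# Usage: a blog post quoting an original paper keeps a path back to its origin.
original = Node("nelson-1965-paper")
blog_post = Node("my-blog-post")
link(blog_post, original)

print(original.inbound)  # {'my-blog-post'}: context is preserved, and in Nelson's
                         # vision the quoted author could be identified for
                         # attribution or a micro-payment.
```

With the inbound set populated, any quotation can be followed back to its source; the Web’s one-way links record only the outbound half, and it is that missing half which loses the context (and, in Nelson’s view, the compensation) Lanier describes.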
So what are the cultural and economic implications that Lanier describes?
In both Who Owns the Future and his earlier book You Are Not a Gadget, Lanier articulates a number of concerns about how technology, and more specifically certain technologists, are leading us down a road to a dystopian future in which not only will most middle-class jobs be almost completely wiped out but we will all be subservient to a small number of what Lanier terms siren servers. Lanier defines a siren server as “an elite computer or coordinated collection of computers, on a network characterised by narcissism, hyper amplified risk aversion, and extreme information asymmetry”. He goes on to make the following observation about them:
“Siren servers gather data from the network, often without having to pay for it. The data is analysed using the most powerful available computers, run by the very best available technical people. The results of the analysis are kept secret, but are used to manipulate the rest of the world to advantage.”
Lanier’s two books tend to ramble a bit but nonetheless contain a number of important ideas.
Idea #1: As stated above, because we essentially rushed into building the Web without thinking through the implications of what we were doing, we have built up a huge amount of technical debt which may well be impossible to eradicate.
Idea #2: The really big siren servers (i.e. Facebook, Google, Twitter et al) have encouraged us to upload the most intimate details of our lives and in return have given us an apparently ‘free’ service. This, however, has conditioned us not to want to pay for services, or to pay very little for them, which makes it difficult for the workers who create the now digitised information (e.g. journalists, photographers and musicians) to earn a decent living. This is ultimately an economically unsustainable situation, because once those information creators are put out of business, who will create original content? The world cannot run on Facebook posts and tweets alone. As the musician David Byrne says here:
“The Internet has laid out a cornucopia of riches before us. I can read newspapers from all over the world, for example—and often for free!—but I have to wonder if that feast will be short-lived if no one is paying for the production of the content we are gorging on.”
Idea #3: The world is becoming overly machine-centric and people are too ready to hand over a large part of their lives to the new tech elite. These new sirenic entrepreneurs, as Lanier calls them, not only know far too much about us but can use the data we provide to modify our behaviour, whether deliberately, as in the case of an infamous experiment carried out by Facebook, or in unintended ways we as a society are only just beginning to understand.
Idea #4: The siren servers are imposing a commercial asymmetry on all of us. When we bought our information packaged in a physical form it was ours to do with as we wished. If we wanted to share a book, give away a CD or even sell a valuable record for a profit, we were perfectly at liberty to do so. Now that all information is digital, however, we can no longer do that. As Lanier says, “with an ebook you are no longer a first-class commercial citizen but instead have tenuous rights within someone else’s company store.” If you want to use a different reading device or connect over a different cloud, in most cases you will lose access to your purchase.
There can be little doubt that the Web has had a huge transformative impact on all of our lives in the 21st century. We now have access to more information than we could assimilate even the tiniest fraction of in a human lifetime. We can reach out to almost any citizen in almost any part of the world at any time of the day or night. We can perform commercial transactions faster than would have been thought possible even 25 years ago, and we have access to new tools and processes that genuinely are transforming our lives for the better. All of this, however, comes at a cost, even when access to these bounties is apparently free. As architects and developers who help shape this brave new world, should we not take responsibility not only for pointing out where we may be going wrong but also for suggesting ways in which we can improve things? This is something I intend to look at in future posts.