It’s hard to believe that this year is the 30th anniversary of Tim Berners-Lee’s great invention, the World-Wide Web, and that much of the technology that enabled his creation is still less than 60 years old. Here’s a brief history of the Internet and the Web, and how we got to where we are today, in ten significant events.
#1: 1963 – Ted Nelson begins developing a model for creating and using linked content he calls hypertext and hypermedia. Hypertext is born.
#2: 1969 – The first message is sent over the ARPANET from computer science Professor Leonard Kleinrock’s laboratory at University of California, Los Angeles to the second network node at Stanford Research Institute. The Internet is born.
#3: 1969 – Charles Goldfarb, leading a small team at IBM, develops the first markup language, called Generalized Markup Language, or GML. Markup languages are born.
#4: 1989 – Tim Berners-Lee, a computer scientist at CERN, proposes an information management system linking hypertext documents over the Internet. The World-Wide Web is born.
#5: 1993 – Mosaic, a graphical browser aiming to bring multimedia content (images and text on the same page) to non-technical users, is created by Marc Andreessen. The web browser is born.
#6: 1995 – Jeff Bezos launches Amazon “earth’s biggest bookstore” from a garage in Seattle. E-commerce is born.
#7: 1998 – The Google company is officially launched by Larry Page and Sergey Brin to market Google Search. Web search is born.
#8: 2003 – Facebook (then called FaceMash but changed to The Facebook a year later) is founded by Mark Zuckerberg with his college roommate and fellow Harvard University student Eduardo Saverin. Social media is born.
#9: 2007 – Steve Jobs launches the iPhone at MacWorld Expo in San Francisco. Mobile computing is born.
#10: 2018 – Tim Berners-Lee instigates act II of the web when he announces a new initiative called Solid, to reclaim the Web from corporations and return it to its democratic roots. The web is reborn?
I know there have been countless events that have enabled the development of our modern Information Age and you will no doubt think others should be included in preference to some of my suggestions. Also, I suspect that many people will not have heard of my last choice (unless you are a fairly hardcore computer type). The reason I have added this one is because I think/hope it will start to address what is becoming one of the existential threats of our age, namely how we survive in a world awash with data (our data) that is being mined and used without us knowing, much less understanding, the impact of such usage. Rather than living in an open society in which ideas and data are freely exchanged and used to everyone’s benefit, we instead find ourselves in an age of surveillance capitalism which, according to this source, is defined as being:
…the manifestation of George Orwell’s prophesied Memory Hole combined with the constant surveillance, storage and analysis of our thoughts and actions, with such minute precision, and artificial intelligence algorithmic analysis, that our future thoughts and actions can be predicted, and manipulated, for the concentration of power and wealth of the very few.
In her book The Age of Surveillance Capitalism, Shoshana Zuboff provides a sweeping (and worrying) overview and history of the techniques that the large tech companies are using to spy on us in ways that even George Orwell would have found alarming. Not least because we have voluntarily given up all of this data about ourselves in exchange for what are sometimes the flimsiest of benefits. As Zuboff says:
Thanks to surveillance capitalism the resources for effective life that we seek in the digital realm now come encumbered with a new breed of menace. Under this new regime, the precise moment at which our needs are met is also the precise moment at which our lives are plundered for behavioural data, and all for the sake of others’ gain.
Tim Berners-Lee invented the World-Wide Web then gave it away so that all might benefit. Sadly some have benefited more than others, not just financially but also by knowing more about us than most of us would ever want or wish. I hope for all our sakes that the work Berners-Lee and his small group of supporters are doing makes enough progress to reverse the worst excesses of surveillance capitalism before it is too late.
Today (12th March, 2018) is the World Wide Web’s 29th birthday. Sir Tim Berners-Lee (the “inventor of the world-wide web”), in an interview with the Financial Times and in this Web Foundation post, has used this anniversary to raise awareness of how the web behemoths Facebook, Google and Twitter are “promoting misinformation and ‘questionable’ political advertising while exploiting people’s personal data”. Whilst I admire hugely Tim Berners-Lee’s universe-denting invention, it has to be said he himself is not entirely without fault in the way he bequeathed us his invention. In his defence, hindsight is a wonderful thing of course; no one could have possibly predicted at the time just how the web would take off and transform our lives, both for better and for worse.
If, as Marc Andreessen famously said in 2011, software is eating the world then many of those powerful tech companies are consuming us (or at least our data, and I’m increasingly unsure there is any difference between us and the data we choose to represent ourselves by).
Here are five recent examples of some of the negative ways software is eating up our world.
Over the past 40+ years the computer software industry has undergone some fairly major changes. Individually these were significant (to those of us in the industry at least) but if we look at these changes with the benefit of hindsight we can see how they have combined to bring us to where we are today. A world of cheap, ubiquitous computing that has unleashed seismic shocks of disruption which are overthrowing not just whole industries but our lives and the way our industrialised society functions. Here are some highlights for the 40 years between 1976 and 2016.
I have written before about how I believe that we, as software architects, have a responsibility, not only to explain the benefits (and there are many) of what we do but also to highlight the potential negative impacts of software’s voracious appetite to eat up our world.
This is my 201st post on Software Architecture Zen (2016/17 were barren years in terms of updates). This year I plan to spend more time examining some of the issues raised in this post and look at ways we can become more aware of them and hopefully not become so seduced by those sirenic entrepreneurs.
I know it’s a new year and that is generally a time to make resolutions, give things up, do something different with your life and so on, but that is not the reason I have decided to become a Facebook refusenik.
Let’s be clear, I’ve never been a huge Facebook user, amassing hundreds of ‘friends’ and spending half my life on there. I’ve tended to use it to keep in touch with a few family members and ‘real’ friends and also as a means of contacting people with a shared interest in photography. I’ve never found the user experience of Facebook particularly satisfying and indeed have found it completely frustrating at times; especially when posts seem to come and go, seemingly at random. I also hated the ‘feature’ that meant videos started playing as soon as you scrolled them into view. I’m sure there was a way of preventing this but I was never interested enough to figure out how to disable it. I could probably live with these foibles however as, by and large, the benefits outweighed the unsatisfactory aspects of Facebook’s usability.
What’s finally decided me to deactivate my account (and yes I know it’s still there just waiting for me to break and log back in again) is the insidious way in which Facebook is creeping into our lives and breaking down all aspects of privacy and even our self-determination. How so?
First off was the news in June 2014 that Facebook had conducted a secret study involving 689,000 users in which friends’ postings were moved to influence moods. Various tests were apparently performed. One test manipulated users’ exposure to their friends’ “positive emotional content” to see how it affected what they posted. The study found that emotions expressed by friends influence our own moods and was the first experimental evidence for “massive-scale emotional contagion via social networks”. What’s so terrifying about this is the question, as Clay Johnson, the co-founder of Blue State Digital, asked via Twitter: “could the CIA incite revolution in Sudan by pressuring Facebook to promote discontent? Should that be legal? Could Mark Zuckerberg swing an election by promoting Upworthy (see later) posts two weeks beforehand? Should that be legal?”
As far as we know this has been a one-off, which Facebook apologised for, but the mere fact they thought they could get away with such a tactic is, to say the least, breathtaking in its audacity, and does not suggest an organisation I am comfortable entrusting my data to.
Next was the article by Tom Chatfield called The Attention Economy in which he discusses the idea that “attention is an inert and finite resource, like oil or gold: a tradable asset that the wise manipulator (i.e. Facebook and the like) auctions off to the highest bidder, or speculates upon to lucrative effect. There has even been talk of the world reaching ‘peak attention’, by analogy to peak oil production, meaning the moment at which there is no more spare attention left to spend.” Even though I didn’t believe Facebook was grabbing too much of my attention, I was starting to become a little concerned that Facebook was often the first site I visited in the morning and that I was even becoming diverted by some of those posts in my newsfeed with titles like “This guy went to collect his mail as usual but you won’t believe what he found in his mailbox”. Research is beginning to show that doing more than one task at a time, especially more than one complex task, takes a toll on productivity and that the mind and brain were not designed for heavy-duty multitasking. As Danny Crichton argues here, “we need to recognize the context that is distracting us, changing what we can change and advocating for what we can hopefully convince others to do.”
The final straw that has made me throw in the Facebook towel however was reading The Virologist by Andrew Marantz in The New Yorker magazine about Emerson Spartz, the so-called ‘king of clickbait’. Spartz is twenty-seven and has been successfully launching Web sites for more than half his life. In 1999, when Spartz was twelve, he built MuggleNet, which became the most popular Harry Potter fan site in the world. Spartz’s latest venture is Dose, a photo- and video-aggregation site whose posts are collections of images designed to tell a story. The posts have names like “You May Feel Bad For Laughing At These 24 Accidents…But It’s Too Funny To Look Away”. Dose gets most of its traffic through Facebook. A bored teenager absent-mindedly clicking links will eventually end up on a site like Dose. Spartz’s goal is to make the site so “sticky”—attention-grabbing and easy to navigate—that the teenager will stay for a while. Money is generated through ads (sometimes there are as many as ten on a page) and Spartz hopes to develop traffic-boosting software that he can sell to publishers and advertisers. Here’s the slightly disturbing thing though. Algorithms for analysing users’ behaviour are “baked in” to the sites Spartz builds. When a Dose post is created, it initially appears under as many as two dozen different headlines, distributed at random to different Facebook users. An algorithm measures which headline is attracting clicks most quickly, and after a few hours, when a statistically significant threshold is reached, the “winning” headline automatically supplants all others. Hence users are “click-bait”, unknowingly taking part in a “test” to see how quickly they respond to a headline.
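Marantz doesn’t describe how Spartz’s software is actually implemented, but the headline test he reports can be sketched in a few lines. The sketch below is my own illustration, not Dose’s code: headline variants are served at random, clicks are counted, and a winner is declared once the gap between the top two click-through rates passes a significance threshold (I have assumed a two-proportion z-test with a 1.96 cut-off; the thresholds are invented).

```python
import math
import random

def z_stat(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-statistic for the gap between two click-through rates."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return 0.0 if se == 0 else (p_a - p_b) / se

def pick_headline(true_ctrs, rng, min_views=500, z_crit=1.96, max_views=20_000):
    """Serve headline variants at random, count clicks, and declare a
    winner once the top two variants differ significantly."""
    n = len(true_ctrs)
    views, clicks = [0] * n, [0] * n
    while sum(views) < max_views:
        i = rng.randrange(n)                 # distribute variants at random
        views[i] += 1
        if rng.random() < true_ctrs[i]:      # simulated reader click
            clicks[i] += 1
        if min(views) >= min_views:
            ranked = sorted(range(n), key=lambda j: clicks[j] / views[j],
                            reverse=True)
            a, b = ranked[0], ranked[1]
            if z_stat(clicks[a], views[a], clicks[b], views[b]) > z_crit:
                return a                     # significant winner found
    return max(range(n), key=lambda j: clicks[j] / views[j])
```

With click-through rates that differ as sharply as clickbait headlines often do, a winner emerges from only a few hundred impressions per variant, which is why a few hours of Facebook traffic is more than enough.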
The final, and most sinister aspect to what Spartz is trying to do with Dose and similar sites is left to the end of Marantz’s article when Spartz gives his vision of the future of media:
“The lines between advertising and content are blurring,” he said. “Right now, if you go to any Web site, it will know where you live, your shopping history, and it will use that to give you the best ad. I can’t wait to start doing that with content. It could take a few months, a few years—but I am motivated to get started on it right now, because I know I’ll kill it.”
The ‘content’ that Spartz talks about is news. In other words he sees his goal as feeding us the news articles his algorithms calculate we will like. We will no longer be reading the news we want to read but rather that which some computer program thinks we should be reading, coupled of course with the ads the same program thinks we are most likely to respond to.
If all of this is not enough to concern you about what Facebook is doing (and the sort of companies it collaborates with) then the recent announcement of ‘keyword’ or ‘graph’ search might. Keyword search allows you to search content previously shared with you by entering a word or phrase. Privacy settings aren’t changing, and keyword search will only bring up content shared with you, like posts by friends or that friends commented on, not public posts or ones by Pages. But if a friend wanted to easily find posts where you said you were “drunk”, now they could. That accessibility changes how “privacy by obscurity” effectively works on Facebook. Rather than your posts being effectively lost in the mists of time (unless your friends want to methodically step through all your previous posts, that is) your previous confessions and misdemeanours are now just a keyword search away. Maybe now is the time to take a look at your Timeline or search for a few dubious words with your name to check for anything scandalous before someone else does? As this article points out, there are enormous implications of Facebook indexing trillions of our posts, some of which we can see now but others we can only begin to guess at as ‘Zuck’ and his band of researchers do more and more to mine our collective consciousness.
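Facebook’s real search infrastructure is vastly more sophisticated, but the principle that makes “privacy by obscurity” collapse is simple: an inverted index makes every old post as cheap to find as a new one, with only a visibility filter standing between a searcher and your history. A minimal sketch (the post store and its field names are invented for illustration):

```python
from collections import defaultdict

# Hypothetical post store: (post_id, author, audience, text), where
# 'audience' is the set of users the post was shared with.
POSTS = [
    (1, "alice", {"bob", "carol"}, "Great night out, slightly drunk"),
    (2, "alice", {"carol"},        "Quiet evening with a book"),
    (3, "bob",   {"alice"},        "Drunk on success after the release"),
]

def build_index(posts):
    """Map each word to the set of posts containing it (an inverted index)."""
    index = defaultdict(set)
    for post_id, _author, _audience, text in posts:
        for word in text.lower().split():
            index[word.strip(",.!?")].add(post_id)
    return index

def keyword_search(index, posts, viewer, word):
    """Posts matching `word`, filtered to those shared with `viewer`."""
    visible = {pid for pid, author, audience, _ in posts
               if viewer == author or viewer in audience}
    return sorted(index.get(word.lower(), set()) & visible)
```

In this toy example a single lookup for “drunk” surfaces both a friend’s old confession and the searcher’s own post at once; nothing has to be stepped through chronologically, which is exactly the obscurity that keyword search removes.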
So that’s why I have decided to deactivate my Facebook account. For now my main social media interactions will be through Twitter (though that too is obviously working out how it can make money out of better and more targeted advertising of course). I am also investigating Ello which bills itself as “a global community that believes that a social network should be a place to empower, inspire, and connect — not to deceive, coerce, and manipulate.” Ello takes no money from advertising and reckons it will make money from value added services. It is early days for Ello yet and it still receives venture capital money for its development. Who knows where it will go but if you’d like to join with me on there I’m @petercripps (contact me if you want an invite).
I realise this is a somewhat different post from my usual ones on here. I have written posts before on privacy in the internet age but I believe this is an important topic for software architects and one I hope to concentrate on more this year.
I started my career in the telecommunications division of the General Electric Company (GEC) as a software engineer designing digital signalling systems for Private Branch Exchanges based on the Digital Private Network Signalling System. As part of that role I represented GEC on the working party that defined the DPNSS standard, which was owned by British Telecom. I remember at one of the meetings the head of the working party, whose name I unfortunately forget, posed the question: what would have happened if regimes such as those of Nazi Germany or the Stalinist Soviet Union had access to the powerful (sic) technology we were developing? When I look back at that time (the early ’80s) such “powerful technology” looks positively antiquated – we were actually talking about little more than the ability to know who was calling whom using calling line identification! However that question was an important one to ask and is one we should be asking more than ever today. One of the roles of the architect is to ask the questions that others tend to either forget about or purposely don’t ask because the answer is “too hard”. Questions like:
So you expect 10,000 people to use your website but what happens if it really takes off and the number of users is 10 or 100 times that?
So you’re giving your workforce mobile devices that can be used to access your sales systems, what happens when one of your employees leaves their tablet on a plane/train/taxi?
So we are buying database software from a new vendor who will help us migrate from our old systems but what in-house skills do we have to manage and operate this new software?
In many ways these are the easy questions; for a slightly harder question, consider this one posed by Nicholas Carr in this blog post.
So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?
Pity the poor architect who has to design for that particular use case (and probably several hundred others not yet thought of)! Whilst this might seem to be some way off, the future, as they say, is actually a lot closer than you think. As Carr points out, the US Department of Defense has just issued guidelines designed to:
Minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.
Guidelines which presumably software architects and designers, amongst others, need to get their heads around.
For anyone who has even the remotest knowledge of the genre of science fiction this is probably going to sound familiar. As far back as 1942 the author Isaac Asimov formulated his famous three laws of robotics which current and future software architects may well be minded to adopt as an important set of architectural principles. These three laws, as stated in Asimov’s 1942 short story Runaround, are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
As stated here these laws are beautifully concise and unambiguous; the devil, of course, will be in the implementation. Asimov himself went on to make quite a career of writing stories that tussled with some of the ambiguities that could arise from the conflicts between these laws.
So back to the point of this blog. As our systems become ever more complex and infringe on more and more of our everyday lives, are ethical or moral requirements such as these going to be another set of things that software architects need to deal with? I would say absolutely yes. More than ever we need to understand not just the impact on humanity of those systems we are building but also of those systems (and tools) we are using every day. As Douglas Rushkoff says in his book Program or be Programmed:
If you don’t know what the software you’re using is for, then you’re not using it but being used by it.
In a recent blog post Seth Godin poses a number of questions about what freedom in a digital world really means. Many of these are difficult moral questions with no easy answer, and yet the systems we are building now, today, are implicitly or explicitly embedding assumptions around some of these questions whether we like it or not. One could argue that we should always question whether a particular system should be built or not (just because we can do something does not necessarily mean we should) but often by the time you realise you should be asking such questions it’s already too late. Many of the systems we have today were not built as such, but rather grew or emerged. Facebook may have started out as a means of connecting college friends but now it’s a huge interconnected world of relationships and likes and dislikes and photographs and timelines and goodness knows what else that can be ‘mined’ for all sorts of purposes not originally envisaged.
One of the questions architects and technologists alike must surely be asking is how much mining (of personal data) is it right to do? Technology exists to track our digital presence wherever we go, but how much use should we be making of that data, and to what end? The story of how the US retailer Target found out a teenage girl was pregnant before her father did has been doing the rounds for a while now. Apart from the huge embarrassment to the girl and her family this story probably had a fairly harmless outcome, but what if that girl had lived in a part of the world where such behaviour was treated with less sympathy?
It is of course up to each of us to decide what sort of systems we are or are not prepared to work on in order to earn a living. Each of us must make a moral and ethical judgment based on our own values and beliefs. We should also take care in judging others that create systems we do not agree with or think are “wrong”. What is important however is to always question the motives and the reasons behind those systems and be very clear why you are doing what you are doing and are able to sleep easy having made your decision.
Bob Metcalfe, the founder of 3Com and co-inventor of Ethernet, has said:
Be prepared to learn how the growth of exponential and disruptive technologies will impact your industry, your company, your career and your life.
The term disruptive technology has been widely used as a synonym of disruptive innovation, but the latter is now preferred, because market disruption has been found to be a function usually not of technology itself but rather of its changing application. Wikipedia defines a disruptive innovation (a term first coined by Clayton Christensen) as:
An innovation that helps create a new market and value network, and eventually goes on to disrupt an existing market and value network (over a few years or decades), displacing an earlier technology.
Examples of disruptive innovations (and what they have disrupted/displaced) are:
Digital media (CDs/DVDs)
Desktop publishing (traditional publishing)
Digital photography (chemical/film photography)
LCD televisions (CRT televisions)
Wikipedia (traditional encyclopedias)
Tablet computers (personal computers, maybe)
The above are all examples of technologies/innovations that have disrupted existing business models, or even whole industries. However there is another class or type of disruptive innovation which not only disrupts a market but creates a whole new ecosystem upon which a new industry can be created. Examples of these are the likes of Facebook, Twitter and iTunes. What these have provided, as well, is a platform upon which providers, complementors, users and suppliers co-exist to support, nurture and grow the ecosystem of the platform, creating a disruptive technology platform (DTP). Here’s a system context diagram for such a platform.
The four actors in this system context play the following roles:
Provider – Develops and provides the core platform. Providers need to ensure the platform exposes interfaces (that Complementors can use) and also ensure standards are defined that allow the platform to grow in a controlled way.
Complementor – Supplements the platform with new features, services and products that increase the value of the platform to End Users (and draw more of them in to use the platform).
End User – As well as performing the obvious ‘using the platform’ role, End Users also drive demand that Complementors help fulfil. Also there are likely to be more End Users if there are more Complementors providing new features. A well-architected platform also allows End Users to interact with each other.
Supplier – Usually enters into a contract with the core platform provider to provide a known product or service or technology. Probably not innovating in the same way as the complementor would.
If we use Facebook (the platform) as a real instance of the above then the provider is Facebook (the company), who have created a platform that is extensible through a well-defined set of interfaces. Complementors are the many third-party providers who have developed new features to extend the underlying platform (e.g. Airbnb and The Guardian). End users are, of course, the 800 million or so people who have Facebook accounts. Suppliers would be the companies who, for example, provide the hardware and software infrastructure upon which Facebook runs.
Of course, just because you are providing a new technology platform does not mean it will automatically be a disruptive technology platform. Looking at some of the technology platforms that are currently out there and have disrupted, or are in the process of disrupting, businesses or whole industries, we can see some common themes however. Here are some of these (in no particular order of priority):
A DTP has a well defined set of open interfaces which complementors can use, possibly in ways not originally envisaged by the platform provider.
The DTP needs to build up a critical mass of both end users and complementors, each of which feeds off the other in a positive feedback loop so the platform grows.
The DTP must be both scalable and extremely robust.
The DTP must provide an intrinsic value which cannot be obtained elsewhere, or if it can, must give additional benefits which make users come to the DTP rather than go elsewhere. iTunes, for example, made music cheap enough and easy enough to obtain that users were kept from going to free file-sharing sites.
End users must be allowed to interact amongst themselves, again in ways that may not have been originally envisaged.
Complementors must be provided with the right level of contract that allows them to innovate, but without actually breaking the platform (Apple’s contract with App Store developers is an example). The DTP provider needs to retain some level of control.
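The positive feedback loop between End Users and Complementors mentioned above can be illustrated with a toy growth model. The growth rates and the market cap below are invented numbers, chosen purely to show the shape of the dynamic, not a claim about any real platform:

```python
def simulate_platform(users=1_000.0, complementors=10.0, steps=20, cap=1e7):
    """Toy cross-side network-effect model: each side grows in
    proportion to the size of the other, damped as the market saturates."""
    history = []
    for _ in range(steps):
        room = 1.0 - users / cap                        # headroom left in the market
        users += 0.002 * complementors * users * room   # complementors attract users
        complementors += 0.0001 * users                 # developers follow the audience
        history.append((round(users), round(complementors)))
    return history
```

Seeded with even a handful of complementors, each side’s growth compounds the other’s; started with none, the user base barely moves. That is exactly the critical-mass problem a new DTP has to solve before the feedback loop takes over.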
These are just some of the attributes I would expect a DTP to have; there must be more. Feel free to comment and provide some observations on what you think constitutes a DTP.
The arguments for and against the rise of user-generated content on the web continue to rage. Depending on which side of the debate you take, we are either on an amoral downward spiral of increasingly meaningless content being generated by amateurs (for free) that is putting professional writers, musicians, software developers, photographers and so on out of work as well as ruining our brains, or we are entering a new age where the combination of an unbounded publishing engine and the cognitive surplus many people now have means we are able to build a better and more cooperative world. Like most things in life the truth will not be at one of these polar opposites but somewhere in between. Seth Godin makes an interesting point in a recent blog entry that “lifestyle media isn’t a fad it’s what human beings have been doing forever, with a brief, recent interruption for a hundred years of professional media along the way”. He goes on to say “we shouldn’t be surprised when someone chooses to publish their photos, their words, their art or their opinions. We should be surprised when they don’t.”
After all, given the precarious nature of the press in the UK at the moment with stories being obtained through all sorts of dubious means the professionals can hardly be seen as holding the ethical or moral high ground.
The possibilities for creativity, and for building interesting and innovative solutions out of this mixed bag of social media self-publishing, are going to give architects fertile ground over the coming years. A nice example of this is Flipboard which, if you have an iPhone or iPad, you should definitely download. This free app is a “social magazine” that turns links your friends and contacts are sharing on Facebook, Twitter, LinkedIn, 500px and others into beautifully packaged “articles”. It can also pull in content from a raft of other online sources. It’s a great example of what architects should be doing, namely taking existing components and assembling them in interesting and innovative ways.
A perfect storm is defined as being: a critical or disastrous situation created by a powerful concurrence of factors. A perfect storm is certainly what RIM, makers of the BlackBerry, have been experiencing recently. For three days, starting on 10th October, a problem caused by a router in an unassuming two-storey building in Slough, UK affected almost every one of its users around the world. Not only were users unable to use Twitter or Facebook; more seriously, those users who rely on their BlackBerrys for email to do their business may have lost valuable work. Whilst many commentators may have made light of the situation, because people could no longer tweet their every movement, there is a far more serious message here, which is that as a civilisation we are now completely dependent on software and hardware technology that runs our daily lives.
Here’s what Blackberry had to say on their service bulletin board on 11th October, mid-way through the crisis:
The messaging and browsing delays that some of you are still experiencing were caused by a core switch failure within RIM’s infrastructure. Although the system is designed to failover to a back-up switch, the failover did not function as previously tested…
Unfortunately for BlackBerry it was not only this technical and process failure that formed part of their perfect storm: two other factors, which they could not have hoped to predict, also occurred recently. One was the launch of the latest iPhone 4S from Apple, which was released the very same week as BlackBerry’s network failure. The other is the allegation that BlackBerrys, or more precisely the BlackBerry Messenger service (BBM), were implicated in the riots that took place in London and other UK cities in the summer. For many teens armed with a BlackBerry, BBM has replaced text messaging because it is free, instant and part of a much larger community than regular SMS. Also, unlike Twitter or Facebook, many BBM messages are untraceable by the authorities.
From an IT architecture point of view, clearly the technical and process failure of such a crucial data centre should just not have been allowed to happen. In some ways BlackBerry has been a victim of its own success, with the number of users growing from 10 million in 2005 to 70 million now without a corresponding increase in network capacity or a fully functioning failover facility. However the more interesting, and in some ways more intractable, problem is the competitive, sociological and even ethical aspects of the situation. When Apple launched the first iPhone back in 2007 they changed forever the way people interacted with their phones. Some people have observed that the tactile way in which people “stroke” an iPhone rather than jab at tiny buttons has led to their more widespread adoption. Clearly a case of getting the human-computer interface right paying great dividends. Who would have thought however that the very aspect that once made BlackBerrys so popular with business users (their security) could backfire on them in quite such a significant way? Architecture (and design) is not just about getting the right features at the right price; it is also about thinking through the likely impact of those features in contexts that may not initially have been envisaged.
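RIM’s admission that “the failover did not function as previously tested” is worth dwelling on: a failover path is itself code, and the branch that only runs during a disaster is precisely the branch that production traffic never exercises. A minimal sketch of the pattern (the class and its health check are invented for illustration, not RIM’s design):

```python
class Switch:
    """A network switch with a (simulated) health probe."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def health_check(self):
        return self.healthy


def route_traffic(primary, backup):
    """Route via the primary switch; fail over to the backup when the
    primary's health check fails. The second branch, and the 'both
    down' case, are the paths a failover drill must exercise."""
    if primary.health_check():
        return primary.name
    if backup.health_check():
        return backup.name
    raise RuntimeError("both primary and backup paths are down")
```

The lesson from Slough is that showing the second branch works in a test environment is not the same as proving it can carry 70 million users’ traffic; failover drills need to fail the primary deliberately, under realistic load.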