Trust Google?

Photo by Daniele Levis Pelusi on Unsplash

Google has just released data on people’s movements, gathered from millions of mobile devices that use its software (e.g. Android, Google Maps, etc.), leading up to and during the COVID-19 lockdown in various countries. The data has been analysed here to show graphically how people divided their time across six location categories: homes; workplaces; parks; public transport stations; grocery shops and pharmacies; and retail and recreational locations.

The data shows how quickly people reacted to the instructions to lock down. Here in the UK, for example, we see that people reacted late but then strongly, with a rise of about 20-25% in those staying at home. The delay reflects the fact that lockdown began relatively late in the UK, on 23 March, though some people were already staying home before it formally began.
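For anyone wanting to reproduce this kind of analysis, Google publishes the underlying mobility reports as a single CSV. Here is a minimal Python sketch of loading it and plotting the UK stay-at-home trend; the URL and column names are assumptions based on the dataset as published at the time of writing and may since have changed.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Assumed location of the published report; check google.com/covid19/mobility.
    URL = "https://www.gstatic.com/covid19/mobility/Global_Mobility_Report.csv"

    df = pd.read_csv(URL, parse_dates=["date"], low_memory=False)

    # Country-level rows have blank sub-regions.
    uk = df[(df["country_region"] == "United Kingdom") & (df["sub_region_1"].isna())]

    # 'residential' is the "staying at home" category; values are percentage
    # changes from a pre-pandemic baseline, so a rise of roughly 20-25% is
    # what the lockdown effect described above looks like in this column.
    uk.plot(x="date", y="residential_percent_change_from_baseline",
            legend=False, title="UK: time spent at home vs. baseline (%)")
    plt.show()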

What we see in the data provided by Google is likely to be only the start and, I suspect, a preview of how we may soon have to live. In his book Homo Deus, Yuval Noah Harari’s chapter ‘The Great Decoupling’ discusses how bioscience and computer science are conspiring to learn more about us than we know about ourselves, and in the process to destroy the “great liberal project”: the belief that we have free will and are able to make our own decisions about what we eat, whom we marry, how we vote in elections, what career path we choose, and so on.

Harari asks what will happen when Google et al. know more about us than we, or anyone else, do. Facebook, for example, already purports to know more about us than our spouse does by analysing as few as 300 of our ‘likes’. What if those machines that are watching over us (hopefully with “loving grace”, but who knows) could offer us ‘advice’ on whom to vote for based on our previous four years’ comments and ‘likes’ on Facebook, or recommend we go and see a psychiatrist because of the somewhat erratic comments we have been making in emails to our friends or on Twitter?

The Google we see today, providing relatively benign data for us to analyse ourselves, is at the level of what Harari calls an ‘oracle’. It has the data and, with the right interpretation, we can use that data to make decisions. That is exactly where we are now with coronavirus and this latest dataset.

The next stage is Google becoming an ‘agent’. You give Google an aim and it works out the best way to achieve that aim. Say you want to lose two stone by next summer so you have the perfect beach-ready body. Google knows all about your biometric data (it just bought Fitbit, remember), as well as your predisposition for buying crisps and watching too much Netflix, and comes up with a plan that will allow you to lose that weight, provided you follow it.

Finally, Google becomes ‘sovereign’ and starts making those decisions for you. So maybe it checks your supermarket account and recommends removing those crisps from your shopping list; then, if you continue to ignore its advice, it informs your insurance company, which bumps up your health insurance premium.

At this point we have to ask who is in control. Google, Facebook and the rest own all that data, but that data can be influenced (or hacked) to nudge us in ways we don’t realise. We already know how Cambridge Analytica used Facebook to influence voting behaviour (we’re looking at you, Mr Cummings) in a few swing areas, for both Brexit and the last US election. We have no idea how much of that was also being influenced by Russia.

I think humanity is rapidly approaching the point where we need to make some hard decisions about how much of our data, and the analysis of that data, we should allow Google, Facebook and Twitter to hold. Should we start to think the unthinkable and call a halt to this ever-growing mountain of data each of us willingly gives away for free? But how do we do that when most of it is being kept and analysed by private companies or, worse, by China and Russia?

The story so far…

 

Photo by Joshua Sortino on Unsplash

It’s hard to believe that this year is the 30th anniversary of Tim Berners-Lee’s great invention, the World-Wide Web, and that much of the technology that enabled his creation is still less than 60 years old. Here’s a brief history of the Internet and the Web, and how we got to where we are today, in ten significant events.

 

#1: 1963 – Ted Nelson begins developing a model for creating and using linked content he calls hypertext and hypermedia. Hypertext is born.

#2: 1969 – The first message is sent over the ARPANET from computer science Professor Leonard Kleinrock’s laboratory at University of California, Los Angeles to the second network node at Stanford Research Institute. The Internet is born.

#3: 1969 – Charles Goldfarb, leading a small team at IBM, develops the first markup language, called Generalized Markup Language, or GML. Markup languages are born.

#4: 1989 – Tim Berners-Lee, whilst working at CERN, publishes his paper Information Management: A Proposal. The World Wide Web (WWW) is born.

#5: 1993 – Mosaic, a graphical browser aiming to bring multimedia content to non-technical users (images and text on the same page), is invented by Marc Andreessen. The web browser is born.

#6: 1995 – Jeff Bezos launches Amazon, “earth’s biggest bookstore”, from a garage in Seattle. E-commerce is born.

#7: 1998 – The Google company is officially launched by Larry Page and Sergey Brin to market Google Search. Web search is born.

#8: 2003 – Facebook (then called FaceMash but changed to The Facebook a year later) is founded by Mark Zuckerberg with his college roommate and fellow Harvard University student Eduardo Saverin. Social media is born.

#9: 2007 – Steve Jobs launches the iPhone at MacWorld Expo in San Francisco. Mobile computing is born.

#10: 2018 – Tim Berners-Lee instigates act II of the web when he announces a new initiative called Solid, to reclaim the Web from corporations and return it to its democratic roots. The web is reborn?

I know there have been countless events that have enabled the development of our modern Information Age, and you will no doubt think others should be included in preference to some of my suggestions. I also suspect that many people will not have heard of my last choice (unless you are a fairly hardcore computer type). The reason I have included it is that I think (and hope) it will start to address what is becoming one of the existential threats of our age, namely how we survive in a world awash with data (our data) that is being mined and used without us knowing, much less understanding, the impact of such usage. Rather than living in an open society in which ideas and data are freely exchanged and used to everyone’s benefit, we instead find ourselves in an age of surveillance capitalism which, according to this source, is defined as being:

…the manifestation of George Orwell’s prophesied Memory Hole combined with the constant surveillance, storage and analysis of our thoughts and actions, with such minute precision, and artificial intelligence algorithmic analysis, that our future thoughts and actions can be predicted, and manipulated, for the concentration of power and wealth of the very few.

In her book The Age of Surveillance Capitalism, Shoshana Zuboff provides a sweeping (and worrying) overview and history of the techniques that the large tech companies are using to spy on us in ways that even George Orwell would have found alarming, not least because we have voluntarily given up all of this data about ourselves in exchange for what are sometimes the flimsiest of benefits. As Zuboff says:

Thanks to surveillance capitalism the resources for effective life that we seek in the digital realm now come encumbered with a new breed of menace. Under this new regime, the precise moment at which our needs are met is also the precise moment at which our lives are plundered for behavioural data, and all for the sake of others’ gain.

Tim Berners-Lee invented the World-Wide Web and then gave it away so that all might benefit. Sadly, some have benefited more than others, not just financially but also by coming to know more about us than most of us would ever wish. I hope for all our sakes that the work Berners-Lee and his small group of supporters are doing makes enough progress to reverse the worst excesses of surveillance capitalism before it is too late.

Software is Eating the World and Some Tech Companies are Eating Us

Today (12th March, 2018) is the World Wide Web’s 29th birthday. Sir Tim Berners-Lee (the “inventor of the world-wide web”), in an interview with the Financial Times and in this Web Foundation post, has used the anniversary to raise awareness of how the web behemoths Facebook, Google and Twitter are “promoting misinformation and ‘questionable’ political advertising while exploiting people’s personal data”. Whilst I hugely admire Tim Berners-Lee’s universe-denting invention, it has to be said he himself is not entirely without fault in the way he bequeathed it to us. In his defence, hindsight is a wonderful thing of course; no one could possibly have predicted at the time just how the web would take off and transform our lives, both for better and for worse.

If, as Marc Andreessen famously said in 2011, software is eating the world, then many of those powerful tech companies are consuming us (or at least our data, and I’m increasingly unsure there is any difference between us and the data we choose to represent ourselves by).

Here are five recent examples of some of the negative ways software is eating up our world.

Over the past 40+ years the computer software industry has undergone some fairly major changes. Individually these were significant (to those of us in the industry at least), but if we look at them with the benefit of hindsight we can see how they have combined to bring us to where we are today: a world of cheap, ubiquitous computing that has unleashed seismic shocks of disruption, overthrowing not just whole industries but our lives and the way our industrialised society functions. Here are some highlights from the 40 years between 1976 and 2016.

(Figure: waves of technology change, 1976–2016)

And yet all of this is just the beginning. This year we will see technologies like serverless computing, blockchain, cognitive and quantum computing become more and more embedded in our lives in ways we are only just beginning to understand. Doubtless the fallout from some of the issues I highlight above will continue to make itself felt, and new technologies currently bubbling under the radar will start to make themselves known.

I have written before about how I believe that we, as software architects, have a responsibility, not only to explain the benefits (and there are many) of what we do but also to highlight the potential negative impacts of software’s voracious appetite to eat up our world.

This is my 201st post on Software Architecture Zen (2016/17 were barren years in terms of updates). This year I plan to spend more time examining some of the issues raised in this post and to look at ways we can become more aware of them and, hopefully, less seduced by those sirenic entrepreneurs.

Why I Became a Facebook Refusenik

I know it’s a new year, and that is generally a time to make resolutions, give things up, or do something different with your life, but that is not the reason I have decided to become a Facebook refusenik.

Image Copyright http://www.keepcalmandposters.com

Let’s be clear: I’ve never been a huge Facebook user, amassing hundreds of ‘friends’ and spending half my life on there. I’ve tended to use it to keep in touch with a few family members and ‘real’ friends, and also as a means of contacting people with a shared interest in photography. I’ve never found the user experience of Facebook particularly satisfying, and indeed have found it completely frustrating at times, especially when posts seem to come and go seemingly at random. I also hated the ‘feature’ that meant videos started playing as soon as you scrolled them into view. I’m sure there was a way of preventing this but I was never interested enough to figure out how to disable it. I could probably live with these foibles, however, as by and large the benefits outweighed the unsatisfactory aspects of Facebook’s usability.

What finally decided me to deactivate my account (and yes, I know it’s still there, just waiting for me to break and log back in again) is the insidious way in which Facebook is creeping into our lives and breaking down all aspects of privacy and even our self-determination. How so?

First off was the news in June 2014 that Facebook had conducted a secret study involving 689,000 users in which friends’ postings were manipulated to influence moods. Various tests were apparently performed. One test altered users’ exposure to their friends’ “positive emotional content” to see how it affected what they posted. The study found that emotions expressed by friends influence our own moods, and was the first experimental evidence for “massive-scale emotional contagion via social networks”. What’s so terrifying about this is the question Clay Johnson, co-founder of Blue State Digital, asked via Twitter: “could the CIA incite revolution in Sudan by pressuring Facebook to promote discontent? Should that be legal? Could Mark Zuckerberg swing an election by promoting Upworthy posts two weeks beforehand? Should that be legal?”

As far as we know this was a one-off, which Facebook apologised for, but the mere fact they thought they could get away with such a tactic is, to say the least, breathtaking in its audacity, and does not make Facebook an organisation I am comfortable entrusting my data to.

Next was the article by Tom Chatfield called The Attention Economy, in which he discusses the idea that “attention is an inert and finite resource, like oil or gold: a tradable asset that the wise manipulator (i.e. Facebook and the like) auctions off to the highest bidder, or speculates upon to lucrative effect. There has even been talk of the world reaching ‘peak attention’, by analogy to peak oil production, meaning the moment at which there is no more spare attention left to spend.” Even though I didn’t believe Facebook was grabbing too much of my attention, I was starting to become a little concerned that it was often the first site I visited in the morning, and that I was even becoming diverted by some of those posts in my newsfeed with titles like “This guy went to collect his mail as usual but you won’t believe what he found in his mailbox”. Research is beginning to show that doing more than one task at a time, especially more than one complex task, takes a toll on productivity, and that the mind and brain were not designed for heavy-duty multitasking. As Danny Crichton argues here, “we need to recognize the context that is distracting us, changing what we can change and advocating for what we can hopefully convince others to do.”

The final straw that made me throw in the Facebook towel, however, was reading The Virologist by Andrew Marantz in The New Yorker magazine, about Emerson Spartz, the so-called ‘king of clickbait’. Spartz is twenty-seven and has been successfully launching Web sites for more than half his life. In 1999, when Spartz was twelve, he built MuggleNet, which became the most popular Harry Potter fan site in the world. Spartz’s latest venture is Dose, a photo- and video-aggregation site whose posts are collections of images designed to tell a story. The posts have names like “You May Feel Bad For Laughing At These 24 Accidents…But It’s Too Funny To Look Away”. Dose gets most of its traffic through Facebook. A bored teenager absent-mindedly clicking links will eventually end up on a site like Dose. Spartz’s goal is to make the site so “sticky” (attention-grabbing and easy to navigate) that the teenager will stay for a while. Money is generated through ads (sometimes there are as many as ten on a page) and Spartz hopes to develop traffic-boosting software that he can sell to publishers and advertisers. Here’s the slightly disturbing thing though: algorithms for analysing users’ behaviour are “baked in” to the sites Spartz builds. When a Dose post is created, it initially appears under as many as two dozen different headlines, distributed at random to different Facebook users. An algorithm measures which headline is attracting clicks most quickly, and after a few hours, when a statistically significant threshold is reached, the “winning” headline automatically supplants all others. Hence users are the ‘click-bait’, unknowingly taking part in a ‘test’ to see how quickly they respond to a headline.
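To make the mechanics concrete, here is a much-simplified Python sketch of the kind of headline test described above: serve variants at random, count views and clicks, and promote a ‘winner’ once one variant pulls clearly ahead. The class name and thresholds are illustrative, not Dose’s actual implementation, and a real system would use a proper statistical significance test rather than the crude margin used here.

    import random
    from collections import defaultdict

    class HeadlineTest:
        """Rotate headline variants at random until one pulls clearly ahead."""

        def __init__(self, headlines, min_views=1000, lead=0.02):
            assert len(headlines) >= 2
            self.headlines = list(headlines)
            self.views = defaultdict(int)    # headline -> times shown
            self.clicks = defaultdict(int)   # headline -> times clicked
            self.winner = None
            self.min_views = min_views       # sample size needed before judging
            self.lead = lead                 # CTR margin standing in for a real significance test

        def serve(self):
            """Headline to show next: the winner once decided, else random."""
            choice = self.winner or random.choice(self.headlines)
            self.views[choice] += 1
            return choice

        def record_click(self, headline):
            self.clicks[headline] += 1
            self._maybe_promote()

        def _maybe_promote(self):
            if self.winner or any(self.views[h] < self.min_views for h in self.headlines):
                return
            ctr = {h: self.clicks[h] / self.views[h] for h in self.headlines}
            ranked = sorted(ctr.values(), reverse=True)
            if ranked[0] - ranked[1] >= self.lead:
                # This headline now supplants all others for every reader.
                self.winner = max(ctr, key=ctr.get)

Every reader who clicks is, in effect, an unpaid participant in this experiment.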

The final, and most sinister, aspect of what Spartz is trying to do with Dose and similar sites is left to the end of Marantz’s article, when Spartz gives his vision of the future of media:

“The lines between advertising and content are blurring,” he said. “Right now, if you go to any Web site, it will know where you live, your shopping history, and it will use that to give you the best ad. I can’t wait to start doing that with content. It could take a few months, a few years—but I am motivated to get started on it right now, because I know I’ll kill it.”

The ‘content’ that Spartz talks about is news. In other words, he sees his goal as feeding us the news articles his algorithms calculate we will like. We will no longer be reading the news we want to read but rather what some computer program thinks we should be reading, coupled of course with the ads the same program thinks we are most likely to respond to.

If all of this is not enough to concern you about what Facebook is doing (and the sort of companies it collaborates with), then the recent announcement of ‘keyword’ or ‘graph’ search might. Keyword search allows you to search content previously shared with you by entering a word or phrase. Privacy settings aren’t changing, and keyword search will only bring up content shared with you, like posts by friends or that friends commented on, not public posts or ones by Pages. But if a friend wanted to easily find posts where you said you were “drunk”, now they could. That accessibility changes how “privacy by obscurity” effectively works on Facebook. Rather than your posts being effectively lost in the mists of time (unless your friends want to methodically step through all your previous posts, that is), your previous confessions and misdemeanours are now just a keyword search away. Maybe now is the time to take a look at your Timeline, or search for a few dubious words alongside your name, to check for anything scandalous before someone else does? As this article points out, there are enormous implications to Facebook indexing trillions of our posts; some we can see now, but others we can only begin to guess at as ‘Zuck’ and his band of researchers do more and more to mine our collective consciousness.
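Why does indexing change things so fundamentally? A toy inverted index, sketched below in Python, shows the mechanism: once every post is indexed by word, a years-old post is exactly as findable as yesterday’s, and ‘obscurity’ disappears. This is purely illustrative and reflects nothing of Facebook’s actual graph search implementation.

    from collections import defaultdict

    index = defaultdict(set)   # word -> ids of posts containing it
    posts = {}                 # post id -> original text

    def add_post(post_id, text):
        posts[post_id] = text
        for word in text.lower().split():
            index[word.strip(".,!?")].add(post_id)

    def search(word):
        """Every post ever made containing the word, however old."""
        return [posts[pid] for pid in sorted(index[word.lower()])]

    add_post(1, "Great night out, utterly drunk again!")   # posted years ago
    add_post(2, "Quiet evening in with a good book.")      # posted yesterday
    print(search("drunk"))   # the old confession is one lookup away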

So that’s why I have decided to deactivate my Facebook account. For now my main social media interactions will be through Twitter (though that too is obviously working out how it can make money out of better and more targeted advertising). I am also investigating Ello, which bills itself as “a global community that believes that a social network should be a place to empower, inspire, and connect — not to deceive, coerce, and manipulate.” Ello takes no money from advertising and reckons it will make money from value-added services. It is early days for Ello yet and it still receives venture capital money for its development. Who knows where it will go, but if you’d like to join me on there I’m @petercripps (contact me if you want an invite).

I realise this is a somewhat different post from my usual ones on here. I have written posts before on privacy in the internet age but I believe this is an important topic for software architects and one I hope to concentrate on more this year.

The Moral Architect

I started my career in the telecommunications division of the General Electric Company (GEC) as a software engineer designing digital signalling systems for Private Branch Exchanges based on the Digital Private Network Signalling System (DPNSS). As part of that role I represented GEC on the working party that defined the DPNSS standard, which was owned by British Telecom. I remember at one of the meetings the head of the working party, whose name I unfortunately forget, posed the question: what would have happened if regimes such as those of Nazi Germany or the Stalinist Soviet Union had had access to the powerful (sic) technology we were developing? When I look back at that time (the early 1980s) such “powerful technology” looks positively antiquated; we were actually talking about little more than the ability to know who was calling whom using calling line identification! However, that question was an important one to ask, and is now one we should be asking more than ever today.

One of the roles of the architect is to ask the questions that others tend to either forget about or purposely don’t ask because the answer is “too hard”. Questions like:

  • So you expect 10,000 people to use your website but what happens if it really takes off and the number of users is 10 or 100 times that?
  • So you’re giving your workforce mobile devices that can be used to access your sales systems, what happens when one of your employees leaves their tablet on a plane/train/taxi?
  • So we are buying database software from a new vendor who will help us migrate from our old systems but what in-house skills do we have to manage and operate this new software?
  • Etc

In many ways these are the easy questions. For a slightly harder one, consider this posed by Nicholas Carr in this blog post.

So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

Pity the poor architect who has to design for that particular use case (and probably several hundred others not yet thought of)! Whilst this might seem some way off, the future, as they say, is actually a lot closer than you think. As Carr points out, the US Department of Defense has just issued guidelines designed to:

Minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.

Guidelines which presumably software architects and designers, amongst others, need to get their heads around.

For anyone who has even the remotest knowledge of the genre of science fiction this is probably going to sound familiar. As far back as 1942 the author Isaac Asimov formulated his famous three laws of robotics which current and future software architects may well be minded to adopt as an important set of architectural principles. These three laws, as stated in Asimov’s 1942 short story Runaround, are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As stated here these laws are beautifully concise and unambiguous; the devil, of course, will be in the implementation. Asimov himself went on to make quite a career of writing stories that tussled with some of the ambiguities that could arise from the conflicts between these laws.
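To see how quickly the ambiguity bites, here is a deliberately naive Python sketch that treats the three laws as ordered architectural constraints filtering a set of candidate actions. Everything here is illustrative: reducing ‘harm’ to a number is precisely the judgement a real system cannot honestly make, which is rather the point.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Action:
        name: str
        human_harm: float    # estimated harm to humans (0 = none)
        obeys_order: bool    # consistent with the humans' orders?
        self_harm: float     # estimated harm to the robot/vehicle itself

    def choose(actions) -> Optional[Action]:
        # First Law: discard anything that harms a human.
        safe = [a for a in actions if a.human_harm == 0]
        # Second Law: among safe actions, prefer the obedient ones.
        obedient = [a for a in safe if a.obeys_order] or safe
        # Third Law: among what remains, minimise harm to self.
        return min(obedient, key=lambda a: a.self_harm, default=None)

    # Carr's bridge scenario: every option harms someone, so the laws fall silent.
    print(choose([
        Action("swerve off bridge", human_harm=0.9, obeys_order=True, self_harm=1.0),
        Action("run over children", human_harm=1.0, obeys_order=True, self_harm=0.0),
    ]))   # -> None: the "requirements" give the architect no answer at all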

So back to the point of this blog. As our systems become ever more complex and infringe on more and more of our everyday lives, are ethical or moral requirements such as these going to be another set of things that software architects need to deal with? I would say absolutely yes. More than ever we need to understand the impact on humanity not just of the systems we are building but also of the systems (and tools) we are using every day. As Douglas Rushkoff says in his book Program or be Programmed:

If you don’t know what the software you’re using is for, then you’re not using it but being used by it.

In a recent blog post Seth Godin poses a number of questions about what freedom in a digital world really means. Many of these are difficult moral questions with no easy answer, and yet the systems we are building today are implicitly or explicitly embedding assumptions about them whether we like it or not. One could argue that we should always question whether a particular system should be built at all (just because we can do something does not necessarily mean we should), but often by the time you realise you should be asking such questions it’s already too late. Many of the systems we have today were not built as such, but rather grew or emerged. Facebook may have started out as a means of connecting college friends, but now it’s a huge interconnected world of relationships and likes and dislikes and photographs and timelines and goodness knows what else that can be ‘mined’ for all sorts of purposes not originally envisaged.

One of the questions architects and technologists alike must surely be asking is: how much mining (of personal data) is it right to do? Technology exists to track our digital presence wherever we go, but how much should we be making use of that data, and to what end? The story of how the US retailer Target found out a teenage girl was pregnant before her father did has been doing the rounds for a while now. Apart from the huge embarrassment to the girl and her family this story probably had a fairly harmless outcome, but what if that girl had lived in a part of the world where such behaviour was treated with less sympathy?

It is of course up to each of us to decide what sort of systems we are or are not prepared to work on in order to earn a living. Each of us must make a moral and ethical judgment based on our own values and beliefs. We should also take care in judging others who create systems we do not agree with or think are “wrong”. What is important, however, is always to question the motives and the reasons behind those systems, to be very clear why you are doing what you are doing, and to be able to sleep easy having made your decision.

Architecting Disruptive Technology Platforms

Bob Metcalfe, the founder of 3Com and co-inventor of Ethernet, has said:

Be prepared to learn how the growth of exponential and disruptive technologies will impact your industry, your company, your career and your life.

The term disruptive technology has been widely used as a synonym of disruptive innovation, but the latter is now preferred, because market disruption has been found to be a function usually not of technology itself but rather of its changing application. Wikipedia defines a disruptive innovation (a term first coined by Clayton Christensen) as:

An innovation that helps create a new market and value network, and eventually goes on to disrupt an existing market and value network (over a few years or decades), displacing an earlier technology.

Examples of disruptive innovations (and what they have disrupted/displaced) are:

  • Digital media (CDs/DVDs)
  • Desktop publishing (traditional publishing)
  • Digital photography (chemical/film photography)
  • LCD televisions (CRT televisions)
  • Wikipedia (traditional encyclopedias)
  • Tablet computers (personal computers, maybe)

The above are all examples of technologies/innovations that have disrupted existing business models, or even whole industries. However, there is another class of disruptive innovation which not only disrupts a market but creates a whole new ecosystem upon which a new industry can be built. Examples are the likes of Facebook, Twitter and iTunes. What these also provide is a platform upon which providers, complementors, users and suppliers co-exist to support, nurture and grow the ecosystem of the platform, creating a disruptive technology platform (DTP). Here’s a system context diagram for such a platform.

The four actors in this system context play the following roles:

  • Provider – Develops and provides the core platform. Providers need to ensure the platform exposes interfaces (that Complementors can use) and also ensure standards are defined that allow the platform to grow in a controlled way.
  • Complementor – Supplements the platform with new features, services and products that increase the value of the platform to End Users (and draw more of them in to use the platform).
  • End User – As well as performing the obvious ‘using the platform’ role, End Users also drive demand that Complementors help fulfil. There are also likely to be more End Users if there are more Complementors providing new features. A well-architected platform also allows End Users to interact with each other.
  • Supplier – Usually enters into a contract with the core platform provider to provide a known product or service or technology. Probably not innovating in the same way as the complementor would.

If we use Facebook (the platform) as a real instance of the above then the provider is Facebook (the company), who have created a platform that is extensible through a well defined set of interfaces. Complementors are the many third party providers who have developed new features to extend the underlying platform (e.g. Airbnb and The Guardian). End users are, of course, the 800 million or so people who have Facebook accounts. Suppliers would be the companies who, for example, provide the hardware and software infrastructure upon which Facebook runs.

Of course, just because you are providing a new technology platform does not mean it will automatically be a disruptive technology platform. Looking at some of the technology platforms that are currently out there and have disrupted, or are in the process of disrupting, businesses or whole industries, we can see some common themes. Here are some of them (in no particular order of priority):

  • A DTP has a well defined set of open interfaces which complementors can use, possibly in ways not originally envisaged by the platform provider.
  • The DTP needs to build up a critical mass of both end users and complementors, each of which feeds off the other in a positive feedback loop so the platform grows.
  • The DTP must be both scalable and extremely robust.
  • The DTP must provide an intrinsic value which cannot be obtained elsewhere, or, if it can, must give additional benefits which make users come to the DTP rather than go elsewhere. iTunes providing music cheaply and conveniently enough to stop users defecting to free file-sharing sites is an example.
  • End users must be allowed to interact amongst themselves, again in ways that may not have been originally envisaged.
  • Complementors must be provided with the right level of contract that allows them to innovate, but without actually breaking the platform (Apple’s contract to App store developers is an example). The DTP provider needs to retain some level of control.

These are just some of the attributes I would expect a DTP to have; there must be more. Feel free to comment and provide some observations on what you think constitutes a DTP.
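As a thought experiment, the provider/complementor relationship can be expressed in code: the provider owns the core and commits to a published interface, and complementors add value only through that interface. Here is a minimal Python sketch under those assumptions; all names are illustrative rather than drawn from any real platform.

    from typing import Callable, Protocol

    class PlatformAPI(Protocol):
        """The open, versioned interface the Provider commits to."""
        def publish_feature(self, name: str, handler: Callable) -> None: ...

    class CorePlatform:
        """The Provider's core. Suppliers sit behind it, invisible to everyone else."""
        def __init__(self):
            self.features: dict[str, Callable] = {}
            self.user_inboxes: list[list[str]] = []

        def register_user(self) -> list[str]:
            inbox: list[str] = []
            self.user_inboxes.append(inbox)
            return inbox

        def publish_feature(self, name: str, handler: Callable) -> None:
            self.features[name] = handler
            # New features draw more End Users in: the positive feedback loop.
            for inbox in self.user_inboxes:
                inbox.append(f"New feature available: {name}")

    class Complementor:
        """Adds value, but only through the published interface."""
        def __init__(self, api: PlatformAPI):
            self.api = api

        def launch(self) -> None:
            self.api.publish_feature("photo-filters", handler=lambda image: image)

    platform = CorePlatform()
    alice = platform.register_user()
    Complementor(platform).launch()
    print(alice)   # ['New feature available: photo-filters']

The design point is the Protocol: the provider retains control by deciding what the interface exposes, while complementors remain free to innovate behind it.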

Architecting and Social Media

The arguments for and against the rise of user generated content on the web continue to rage. Depending on which side of the debate you take, we are either on an amoral downward spiral of increasingly meaningless content being generated by amateurs (for free) that is putting professional writers, musicians, software developers and photographers out of work, as well as ruining our brains, or we are entering a new age where the combination of an unbounded publishing engine and the cognitive surplus many people now have means we are able to build a better and more cooperative world.

Like most things in life the truth will not be at one of these polar opposites but somewhere in between. Seth Godin makes an interesting point in a recent blog entry that “lifestyle media isn’t a fad it’s what human beings have been doing forever, with a brief, recent interruption for a hundred years of professional media along the way”. He goes on to say “we shouldn’t be surprised when someone chooses to publish their photos, their words, their art or their opinions. We should be surprised when they don’t.”

After all, given the precarious nature of the press in the UK at the moment with stories being obtained through all sorts of dubious means the professionals can hardly be seen as holding the ethical or moral high ground.

The possibilities for creativity, and for building interesting and innovative solutions out of this mixed bag of social media self-publishing, are going to provide fertile ground for architects over the coming years. A nice example of this is Flipboard, which, if you have an iPhone or iPad, you should definitely download. This free app is a “social magazine” that turns links your friends and contacts are sharing on Facebook, Twitter, LinkedIn, 500px and others into beautifully packaged “articles”. It can also pull in content from a raft of other online sources. It’s a great example of what architects should be doing, namely taking existing components and assembling them in interesting and innovative ways.

Blackberry’s Perfect Storm

A perfect storm is defined as being: a critical or disastrous situation created by a powerful concurrence of factors. A perfect storm is certainly what RIM, makers of the BlackBerry, have been experiencing recently. For three days, starting on 10th October, a problem caused by a router in an unassuming two-storey building in Slough, UK affected almost every one of its users around the world. Not only were users unable to tweet or update Facebook; more seriously, those users who rely on their BlackBerrys for email to do their business may have lost valuable work. Whilst many commentators made light of the situation, because people could no longer tweet their every movement, there is a far more serious message here: as a civilisation we are now completely dependent on the software and hardware technology that runs our daily lives.

Here’s what Blackberry had to say on their service bulletin board on 11th October, mid-way through the crisis:

The messaging and browsing delays that some of you are still experiencing were caused by a core switch failure within RIM’s infrastructure. Although the system is designed to failover to a back-up switch, the failover did not function as previously tested…

Unfortunately for BlackBerry it was not only this technical and process failure that formed part of their perfect storm; two other factors, which they could not have hoped to predict, also occurred recently. One was the launch of the latest iPhone 4S from Apple, released the very same week as BlackBerry’s network failure. The other was the allegation that BlackBerrys, or more precisely the BlackBerry Messaging Service (BBM), were implicated in the riots that took place in London and other UK cities in the summer. For many teens armed with a BlackBerry, BBM has replaced text messaging because it is free, instant and part of a much larger community than regular SMS. Also, unlike Twitter or Facebook, many BBM messages are untraceable by the authorities.

From an IT architecture point of view, the technical and process failure of such a crucial data centre should simply not have been allowed to happen. In some ways BlackBerry has been a victim of its own success, with the number of users growing from 10 million in 2005 to 70 million now without a corresponding increase in the capacity of its network and a fully functioning failover facility. However, the more interesting, and in some ways more intractable, problem is the competitive, sociological and even ethical aspects of the situation. When Apple launched the first iPhone back in 2007 they changed forever the way people interacted with their phones. Some people have observed that the tactile way in which people “stroke” an iPhone rather than jab at tiny buttons has led to their more widespread adoption. Clearly a case of getting the human-computer interface right paying great dividends. Who would have thought, however, that the very aspect that once made BlackBerrys so popular with business users (their security) could backfire on them in quite such a significant way? Architecture (and design) is not just about getting the right features at the right price; it is also about thinking through the likely impact of those features in contexts that may not initially have been envisaged.
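The failover pattern RIM’s bulletin describes is simple to state and notoriously hard to get right in practice. Here is a minimal Python sketch of the active/standby promotion logic, purely illustrative of the pattern rather than of RIM’s infrastructure; the lesson of the outage is that the promotion path must be exercised regularly under realistic load, because a standby you never test is a standby you do not really have.

    import time

    def alert(message: str) -> None:
        print(f"[ALERT] {message}")   # stand-in for paging the operations team

    class Switch:
        def __init__(self, name: str, healthy: bool = True):
            self.name = name
            self.is_healthy = healthy

        def healthy(self) -> bool:
            return self.is_healthy    # in reality: heartbeats, link and load checks

    def monitor(active: Switch, standby: Switch, check_interval: float = 5.0) -> None:
        """Promote the standby when the active switch fails its health check."""
        while True:
            if not active.healthy():
                if not standby.healthy():
                    alert("Both switches down: total outage")   # RIM's three days
                    return
                # This promotion path is exactly what "previously tested"
                # failover drills must cover, under production-like conditions.
                active, standby = standby, active
                alert(f"Failed over to {active.name}")
            time.sleep(check_interval)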

Five Software Architectures That Changed The World

Photo by Kobu Agency on Unsplash

“Software is the invisible thread and hardware is the loom on which computing weaves its fabric, a fabric that we have now draped across all of life”.

Grady Booch

Software, although an “invisible thread”, has certainly had a significant impact on our world and now pervades pretty much all of our lives. Some software, and in particular some software architectures, have had a significance beyond the everyday and have truly changed the world.

But what constitutes a world changing architecture? For me it is one that meets all of the following:

  1. It must have had an impact beyond the field of computer science or a single business area and must have woven its way into people’s lives.
  2. It may not have introduced any new technology but may instead have used some existing components in new and innovative ways.
  3. The architecture itself may be relatively simple, but the way it has been deployed may be what makes it “world changing”.
  4. It has extended the lexicon of our language, either literally (as in “I tried googling that word”) or indirectly in what we do (e.g. the way we now use App stores to get our software).
  5. The architecture has emergent properties and has been extended in ways the architect(s) did not originally envisage.

Based on these criteria here are five architectures that have really changed our lives and our world.

World Wide Web
When Tim Berners-Lee published his innocuous sounding paper Information Management: A Proposal in 1989 I doubt he could have had any idea what an impact his “proposal” was going to have. This was the paper that introduced us to what we now call the world wide web and has quite literally changed the world forever.

Apple’s iTunes
There has been much talk in cyberspace and in the media in general on the effect and impact Steve Jobs has had on the world. When Apple introduced the iPod in October 2001, although it had the usual cool Apple design makeover, it was, when all was said and done, just another MP3 player. What really made the iPod take off and changed everything was iTunes. It not only turned the music industry upside down and inside out, but gave us the game-changing concept of the ‘App Store’ as a way of consuming digital media. The impact of this is still ongoing and is driving the whole idea of cloud computing and the way we will consume software.

Google
When Google was founded in 1998 it was just another company building a search engine. As Douglas Edwards says in his book I’m Feeling Lucky, “everybody and their brother had a search engine in those days”. When Sergey Brin was asked how he was going to make money (out of search) he said “Well…, we’ll figure something out”. Clearly, more than a decade later, they have figured out that something and become one of the fastest growing companies ever. What Google did was not only create a better, faster, more complete search engine than anyone else but also figure out how to pay for it, and all the other Google applications, through advertising. They have created a new market and value network (in other words a disruptive technology) that has changed the way we seek out and use information.

Wikipedia
Before Wikipedia there was a job called encyclopedia salesman: someone who walked from door to door selling knowledge packed between bound leather covers. Now such people have been banished to the great redundancy home in the sky, along with typesetters and comptometer operators.

If you do a Wikipedia on Wikipedia you get the following definition:

Wikipedia is a multilingual, web-based, free-content encyclopedia project based on an openly editable model. The name “Wikipedia” is a portmanteau of the words wiki (a technology for creating collaborative websites, from the Hawaiian word wiki, meaning “quick”) and encyclopedia. Wikipedia’s articles provide links to guide the user to related pages with additional information.

From an architectural point of view Wikipedia is “just another wiki”; what it has brought to the world, however, is community participation on a massive scale and an architecture to support that collaboration (400 million unique visitors monthly, more than 82,000 active contributors, working on more than 19 million articles in over 270 languages). Wikipedia clearly meets all of the above criteria (and more).

Facebook
To many people Facebook is social networking. Not only has it seen off all competitors, it makes it almost impossible for new ones to join. Whilst the jury is still out on Google+, it is difficult to see how it can ever reach the 800 million people Facebook has. Facebook is also the largest photo-storing site on the web and has developed its own system to store and serve its photographs. See this article on Facebook architecture as well as this presentation (slightly old now but interesting nonetheless).

I’d like to thank both Grady Booch and Peter Eeles for providing input to this post. Grady has been doing great work on software archaeology and knows a thing or two about software architecture. Peter is my colleague at IBM as well as co-author on The Process of Software Architecting.