You can always tell when a technology has reached a certain level of maturity: it gets its own slot on the BBC Radio 4 news programme ‘Today’, which runs here in the UK every weekday morning from 6am to 9am.
Yesterday (Tuesday 19th January) morning saw the UK government’s Chief Scientific Advisor, Sir Mark Walport, talking about blockchain (AKA distributed ledger) and advocating its use for a variety of (government) services. The interview was to publicise a new government report on distributed ledger technology (the Blackett review) which you can find here.
The report has a number of recommendations including the creation of a distributed ledger demonstrator and calls for collaboration between industry, academia and government around standards, security and governance of distributed ledgers.
As you would expect, there are a number of startups as well as established companies working on applications of distributed ledger technology, including R3CEV, whose head of technology is Richard Gendal Brown, an ex-colleague of mine from IBM. Richard tweets on all things blockchain here and has a great blog on the subject here. If you want to understand blockchain you could take a look at Richard’s writings on the topic here, and if you want an extremely interesting weekend read on the current state of bitcoin and blockchain technology this is a great article.
IBM, recognising the importance of this technology and the impact it could have on society, is throwing its weight behind the Linux Foundation’s project that aims to advance this technology following the open source model.
From a software architecture perspective I think this topic is going to be huge and is ripe for some first-mover advantage. Those architects who can steal a lead in not only understanding but also explaining this technology are going to be in high demand, and if you can help with applying it in new and innovative ways you are definitely going to be a rock star!
As software architects we often get wrapped up in ‘the moment’ and are so focused on the immediate project deliverables and achieving the next milestone or sale that we rarely step back to consider the bigger picture and the wider ethical implications of what we are doing. I doubt many of us really think about whether the application or system we are contributing to is one we should be involved in, or indeed one that should be built at all.
To be clear, I’m not just talking here about software systems for the defence industry such as guided missiles, fighter planes or warships, which clearly have one very definite purpose. I’m assuming that people who work on such systems have thought, at least at some point in their lives, about the implications of what they are doing and have justified it to themselves. Most of the time the justification will be something along the lines of these systems being used for defence: if we don’t have them, the bad guys will surely come and get us. After all, the doctrine of mutual assured destruction (MAD) fuelled the cold war in this way for the best part of fifty years.
Instead, I’m talking about systems which, whilst on the face of it perfectly innocuous, over time grow into behemoths far bigger than was ever intended and evolve into something completely different from their original purpose.
Obviously the biggest system we are all dealing with, and the one which has had a profound effect on all of our lives, whether we work to develop it or just use it, is the World Wide Web.
The Web is now in its third decade so is well clear of those tumultuous teenage years of trying to figure out its purpose in life and should now be entering a period of growing maturity and understanding of where it fits in the world. It should be pretty much ‘grown up’ in fact. However the problem with growing up is that in your early years at least you are greatly influenced, for better or worse, by your parents.
As the Web’s principal ‘parent’, Tim Berners-Lee, put it:
“I articulated the vision, wrote the first Web programs, and came up with the now pervasive acronyms URL, HTTP, HTML, and, of course, World Wide Web. But many other people, most of them unknown, contributed essential ingredients, in much the same, almost random fashion. A group of individuals holding a common dream and working together at a distance brought about a great change.”
One of the “unknown” people (at least outside of the field of information technology) was Ted Nelson. Ted coined the term hypertext in his 1965 paper Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate and founded Project Xanadu (in 1960), in which all the world’s information could be published in hypertext and all quotes, references etc would be linked to more information and to the original source of that information. Most crucially for Nelson, because every quotation had a link back to its source, the original author of that quotation could be compensated in some small way (i.e. using what we now term micro-payments). Berners-Lee borrowed Nelson’s vision for hypertext, which is what allows all the links you see in this post to work, albeit with one important omission.
Nelson himself has stated that some aspects of Project Xanadu are being fulfilled by the Web, but sees it as a gross over-simplification of his original vision:
“HTML is precisely what we were trying to PREVENT— ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management.”
The last of these omissions (i.e. no rights management) is possibly one of the greatest oversights in the otherwise beautiful idea of the Web. Why?
Jaron Lanier, the computer scientist, composer and author, explains the difference between the Web and what Nelson proposed in Project Xanadu in his book Who Owns the Future as follows:
“A core technical difference between a Nelsonian network and what we have become familiar with online is that [Nelson’s] network links were two-way instead of one-way. In a network with two-way links, each node knows what other nodes are linked to it. … Two-way linking would preserve context. It’s a small simple change in how online information should be stored that couldn’t have vaster implications for culture and the economy.”
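To make Lanier’s point concrete, here is a minimal sketch (in Python, with invented page names) of what a Nelson-style link store looks like: every link is written in both directions, so any page can always answer the question “who links to me?”, which is precisely what HTML’s one-way anchors cannot do.

```python
from collections import defaultdict

class TwoWayLinkRegistry:
    """Toy model of a Nelson-style link store: every link is recorded
    in both directions, so a target always knows its referrers."""

    def __init__(self):
        self.outbound = defaultdict(set)  # page -> pages it links to
        self.inbound = defaultdict(set)   # page -> pages that link to it

    def add_link(self, source, target):
        # One write updates both indexes, preserving context both ways.
        self.outbound[source].add(target)
        self.inbound[target].add(source)

    def links_from(self, page):
        return self.outbound[page]

    def links_to(self, page):
        # The query the one-way Web cannot answer without crawling everything.
        return self.inbound[page]

if __name__ == "__main__":
    registry = TwoWayLinkRegistry()
    registry.add_link("my-blog-post", "original-essay")
    print(registry.links_to("original-essay"))  # {'my-blog-post'}
```

With an index like this, tracing a quotation back to its source, and in principle compensating its author via micro-payments, becomes a simple lookup rather than a crawl of the entire Web.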
So what are the cultural and economic implications that Lanier describes?
In both Who Owns the Future and his earlier book You Are Not a Gadget Lanier articulates a number of concerns about how technology, and more specifically certain technologists, are leading us down a road to a dystopian future where not only will most middle class jobs be almost completely wiped out but we will all be subservient to a small number of what Lanier terms siren servers. Lanier defines a siren server as “an elite computer or coordinated collection of computers, on a network characterised by narcissism, hyper amplified risk aversion, and extreme information asymmetry”. He goes on to make the following observation about them:
“Siren servers gather data from the network, often without having to pay for it. The data is analysed using the most powerful available computers, run by the very best available technical people. The results of the analysis are kept secret, but are used to manipulate the rest of the world to advantage.”
Lanier’s two books tend to ramble a bit but nonetheless contain a number of important ideas.
Idea #1: The one stated above: because we essentially rushed into building the Web without thinking through the implications of what we were doing, we have built up a huge amount of technical debt which could well be impossible to eradicate.
Idea #2: The really big siren servers (i.e. Facebook, Google, Twitter et al) have encouraged us to upload the most intimate details of our lives and in return given us an apparently ‘free’ service. This, however, has encouraged us not to want to pay for services at all, or to pay very little for them, which makes it difficult for the workers who create the now digitised information (e.g. journalists, photographers and musicians) to earn a decent living. This is ultimately an economically unsustainable situation, because once those information creators are put out of business who will create original content? The world cannot run on Facebook posts and tweets alone. As the musician David Byrne says here:
“The Internet has laid out a cornucopia of riches before us. I can read newspapers from all over the world, for example—and often for free!—but I have to wonder if that feast will be short-lived if no one is paying for the production of the content we are gorging on.”
Idea #3: The world is becoming overly machine-centric and people are too ready to hand over a large part of their lives to the new tech elite. These new sirenic entrepreneurs, as Lanier calls them, not only know far too much about us but can use the data we provide to modify our behaviour. This may be done deliberately, as in the case of an infamous experiment carried out by Facebook, or in unintended ways we as a society are only just beginning to understand.
Idea #4: The siren servers are imposing a commercial asymmetry on all of us. When we used to buy our information packaged in a physical form it was ours to do with as we wished. If we wanted to share a book, give away a CD or even sell a valuable record for a profit we were perfectly at liberty to do so. Now that all information is digital, however, we can no longer do that. As Lanier says, “with an ebook you are no longer a first-class commercial citizen but instead have tenuous rights within someone else’s company store.” If you want to use a different reading device or connect over a different cloud, in most cases you will lose access to your purchase.
There can be little doubt that the Web has had a huge transformative impact on all of our lives in the 21st century. We now have access to more information than we could assimilate even the tiniest fraction of in a human lifetime. We can reach out to almost any citizen in almost any part of the world at any time of the day or night. We can perform commercial transactions faster than would have been thought possible even 25 years ago, and we have access to new tools and processes that genuinely are transforming our lives for the better. All of this, however, comes at a cost, even when access to these bounties is apparently free. As architects and developers who help shape this brave new world, should we not take responsibility not only to point out where we may be going wrong but also to suggest ways in which we could improve things? This is something I intend to look at in some future posts.
… for everyone to predict what will be happening in the world of tech in 2016. Here’s a roundup of some of the cloud and wider IT predictions that have been hitting my social media feeds over the last week or so.
Hybrid will become the next-generation infrastructure foundation.
Security will continue to be a concern.
We’re entering the second wave of cloud computing where cloud native apps will be the new normal.
Compliance will no longer be such an issue meaning barriers to entry onto the cloud for most enterprises, and even governments, will be lowered or even disappear.
Containers will become mainstream.
Use of cloud storage will grow (companies want to push the responsibility of managing data, especially its security, to third parties).
Momentum of IoT will pick up.
Use of hyper-converged (software defined infrastructure) platforms will increase.
Next up, IBM’s Thoughts on Cloud site has a whole slew of predictions including 5 reasons 2016 will be the year of the ‘new IT’ and 5 digital business predictions for 2016. In summary, these two sets of predictions suggest that the business will increasingly “own the IT” as web-scale architectures become available to all and there is increasing pressure on CIOs to move to a consumption-based model. At the fore of all CxOs’ minds will be that digital business strategy, corporate innovation and the digital customer experience are all mantras that must be followed. More ominous is the prediction that there will be a cyber attack or data breach in the cloud during 2016 as more and more data is moved to that environment.
No overview of the predictors would be complete without looking at some of the analyst firms, of course. Gartner did their 2016 predictions back in October but hedged their bets by saying they were for 2016 and beyond (actually until 2020). Most notable among Gartner’s predictions, in my view, are:
By 2018, six billion connected things will be requesting support.
By 2018, two million employees will be required to wear health and fitness tracking devices as a condition of employment.
Through 2020, 95 percent of cloud security failures will be the customer’s fault.
Forrester also hedged their predictive bets a little by talking about shifts rather than hard predictions.
Shift #1 – Data and analytics energy will continue to drive incremental improvement.
Shift #2 – Data science and real-time analytics will collapse the insights time-to-market.
Shift #3 – Connecting insight to action will only be a little less difficult.
To top off the analysts we have IDC. According to IDC Chief Analyst Frank Gens:
“We’ll see massive upshifts in commitment to DX [digital transformation] initiatives, 3rd Platform IT, the cloud, coders, data pipelines, the Internet of Things, cognitive services, industry cloud platforms, and customer numbers and connections. Looked at holistically, the guidance we’ve shared provides a clear blueprint for enterprises looking to thrive and lead in the DX economy.”
Predictions are good fun, especially if you go back to them at the end of the year and see how many things you actually got right. Simon Wardley, in his excellent blog Bits or pieces?, has his own predictions here, with the added challenge that these are predictions for things you absolutely should do but will ignore in 2016. Safe to say none of these will come true then!
The role of the Security Chief will include risk and culture.
Process, process, process will become a fundamental aspect of your security strategy.
Phishing-Data Harvesting will grow in sophistication and catch out even more people.
The ‘insider threat’ continues to haunt businesses.
Internet of Things and ‘digital exhaust’ will render the ‘one policy fits all’ approach defunct.
Finally here’s not so much a prediction but a challenge for 2016 for possibly one of the most hyped technologies of 2015: Why Blockchain must die in 2016.
So what should we make of all this?
In a world of ever tighter cost control, and with IT having to be more responsive than ever before, it’s not hard to imagine that the business will be seeking more direct control of infrastructure so it can deploy applications faster and be more responsive to its customers. This will accentuate more than ever two-speed IT, where legacy systems are supported by the traditional IT shop and new web, mobile and IoT applications get delivered on the cloud by the business. For this to happen the cloud must effectively ‘disappear’. To paraphrase a quote I read here,
“Ultimately, like mobile, like the internet, and like computers before that, Cloud is not the thing. It’s the thing that enables the thing.”
Once the cloud really does become a utility (and I’m not just talking about the IaaS layer here but the PaaS layer as well) then we can really focus on enabling new applications faster, better, cheaper and not have to worry about the ‘enabling thing’.
Part of making the cloud truly utility-like means we must be able to implicitly trust it. That is to say, it will be secure, it will respect our privacy and it will always be there.
Hopefully 2016 will be the year when the cloud disappears and we can focus on enabling business value in a safe and secure environment.
This of course leaves us as architects with a more interesting question. In this brave new world where the business is calling the shots and IT is losing control over more and more of its infrastructure, as well as its people, where does that leave the role of the humble architect? That’s a topic I hope to look at in some upcoming posts in 2016.
Happy New Year!
2015-12-31: Updated to add reference to Simon Wardley’s 2016 predictions.
I’ve lost track of the number of times I’ve been asked this question over the last 12 months. Everyone from CIOs of large organisations through small startups and entrepreneurs, academics and even family members has asked me this when I tell them what I do. Not surprisingly it gets asked a lot more when hacking is on the 10 o’clock news, as it has been a number of times over the last year or so with attacks on companies like TalkTalk, iCloud, Fiat Chrysler and, most infamously, Ashley Madison.
I’ve decided therefore to research the facts around cloud and security and even if I cannot come up with the definitive answer (the traditional answer from an architect about any hard question like this usually being “it depends”) at least point people who ask it to somewhere they can find out more information and hopefully be more informed. That is the purpose of this post.
Let’s start with a definition. The US National Institute of Standards and Technology (NIST) defines cloud computing as follows:
“Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
Note that this definition makes no statement about who the cloud service provider actually is. It allows for clouds to be completely on premise (that is, within a company’s own data centre) and managed by companies whose business is not primarily IT, just as much as it covers the big ‘public’ cloud service providers such as Microsoft, IBM, Amazon and Google, to name but four. As long as there is network access and resources can be rapidly provisioned then it is a cloud as far as NIST is concerned. Of course I suspect the subtleties around this are lost when most people ask questions about security and the cloud. What they are really asking is “is it safe to store my data out on the internet”, to which the answer very much is “it depends”.
So, let’s try to get some hard data on this. The website Hackmageddon tracks cyber attacks around the world and publishes twice-monthly statistics on who is being hacked by whom (if known). Taking the month of August 2015 at random, Hackmageddon recorded 79 cyber attacks (which, as the website points out, could well be the tip of a very large iceberg, as many companies do not report attacks). Of these, none appear to have been against systems provided by public cloud service providers, but the rub here of course is that it is difficult to know who is actually hosting a site and whether or not they are clouds in the NIST sense of the word.
To take one example from the August 2015 data, the UK website Mumsnet suffered both a distributed denial of service (DDoS) attack and a hack in which some user data was compromised. Mumsnet is built and hosted by the company DSC, a hosting company rather than a provider of cloud services according to the NIST definition. Again, this is probably academic as far as the people affected by this attack are concerned. All they know is that their data may have been compromised and the website was temporarily offline during the DDoS attack.
Whilst looking at one month of hacking activity is by no stretch of the imagination representative, it does seem that most attacks identified were against private or public companies, that is organisations or individuals that either manage their own servers or use a hosting provider. The fact is that when you give your data away to an organisation you have no real way of knowing where they will be storing that data or how much security that organisation has in place (or even who they are). As this post points out, the biggest threat to your privacy can often come from the (mis)practices of small (and even not so small) firms who are not only keeping sensitive client information on their own servers but also moving it onto the cloud, even though some haven’t the foggiest notion of what they’re doing.
As individuals and companies start to think more about storing information out in the cloud they should really be asking how cloud service providers are using people, processes and technology to defend against attackers and keep their data safe. Here are a few things you should ask or try to find out about your cloud service provider before entrusting them with your data.
Let’s start with people. According to IBM’s 2014 Cyber Security Intelligence Index, 95% of all security incidents involve human error. These incidents tend to be security attacks from external agents who use “human weakness” to lure insiders within organisations into unwittingly providing them with access to sensitive information. A white paper from the data security firm Vormetric says that the impacts of successful security attacks involving insiders are exposure of sensitive data, theft of intellectual property and the introduction of malware. Whilst human weakness can never be completely eradicated (well, not until humans themselves are removed from data centres), there are security controls that can be put in place. For example, insider threats can be protected against by adopting best practice around the following (a small illustrative sketch follows the list):
User activity monitoring
Proactive privileged identity management
Separation-of-duty enforcement
Implementing background checks
Conducting security training
Monitoring suspicious behaviour
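As a flavour of what the first and last of those items mean in practice, here is a deliberately simple, hypothetical sketch of user activity monitoring: it scans a log of login events and flags anything outside normal working hours or from a previously unseen country. Real products are far more sophisticated, but the principle is the same.

```python
from datetime import datetime

# Hypothetical login events: (user, ISO timestamp, source country)
LOGINS = [
    ("alice", "2015-11-02T09:14:00", "GB"),
    ("alice", "2015-11-03T03:22:00", "RU"),
    ("bob",   "2015-11-03T10:05:00", "GB"),
]

WORK_HOURS = range(7, 20)          # 07:00 to 19:59 considered normal
KNOWN_LOCATIONS = {"alice": {"GB"}, "bob": {"GB"}}

def suspicious(user, timestamp, country):
    """Flag logins outside working hours or from an unfamiliar country."""
    hour = datetime.fromisoformat(timestamp).hour
    return hour not in WORK_HOURS or country not in KNOWN_LOCATIONS.get(user, set())

for user, ts, country in LOGINS:
    if suspicious(user, ts, country):
        print(f"ALERT: unusual login by {user} at {ts} from {country}")
```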
Next, cloud providers need to have effective processes in place to ensure that the correct governance, controls, compliance and risk management approaches are taken to cloud security. Ideally these processes will have evolved over time and taken into account multiple different types of cloud deployment, to be as robust as possible. They also need to be continuously evolving. As you would expect, there are multiple standards (e.g. ISO 27001, ISO 27018, CSA and PCI) that must be followed, and good cloud providers will publish which standards they adhere to as well as how they comply.
Finally, what about technology? It’s often been said that internet security is a bit like an arms race in which the good guys have to continuously play catch-up to make sure they have better weapons and defences than the bad guys. As hacking groups get better organised, better financed and more knowledgeable, so security technology must be continuously updated to stay ahead of the hackers. At the very least your cloud service provider must:
Manage Access: Multiple users spanning employees, vendors and partners require quick and safe access to cloud services but at the same time must have the right security privileges and only have access to what they are authorised to see and do.
Protect Data: Sensitive data must be identified and monitored so developers can find vulnerabilities before attackers do.
Ensure Visibility: To remain ahead of attackers, security teams must understand security threats happening within cloud services and correlate those events with activity across traditional IT infrastructures.
Optimize Security Operations: The traditional security operations center (SOC) can no longer operate by building a perimeter firewall to keep out attackers as the cloud by definition must be able to let in outsiders. Modern security practices need to rely on things like big data analytics and threat intelligence capabilities to continuously monitor what is happening and respond quickly and effectively to threats.
Hopefully your cloud service provider will have deployed the right technology to ensure all of the above are adequately dealt with.
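To illustrate the first of those points, here is a minimal, hypothetical sketch of ‘deny by default’ access management, the basic posture you would hope a provider takes: a user can only do what their role explicitly grants.

```python
# Hypothetical role definitions: role -> set of (resource, action) permissions
ROLES = {
    "developer": {("app-logs", "read"), ("app-config", "read")},
    "operator":  {("app-logs", "read"), ("app-config", "write"), ("vm", "restart")},
}

USERS = {"carol": "developer", "dave": "operator"}

def is_allowed(user, resource, action):
    """Deny by default: only permissions explicitly granted to the
    user's role are honoured."""
    role = USERS.get(user)
    return role is not None and (resource, action) in ROLES.get(role, set())

print(is_allowed("carol", "app-config", "write"))  # False - not granted
print(is_allowed("dave", "vm", "restart"))         # True
```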
So how do we summarise all this and condense the answer into a nice sentence or two that you can say when you find yourself in the dreaded elevator with the CIO of some large company (preferably without saying “it depends”)? How about this:
The cloud is really a data centre that provides network access to a pool of resources in a fast and efficient way. Like any data centre it must ensure that the right people, processes and technology are in place to protect those resources from unauthorised access. When choosing a cloud provider you need to ensure they are fully transparent and publish as much information as they can about all of this so you can decide whether they meet your particular security requirements.
So, the future has finally arrived and today is ‘Back to the Future Day’. Just in case you have missed any of the newspaper, internet and television reports that have been ‘flying’ around this week, today is the day that Marty McFly and Doc Brown travel to in the 1980s movie Back To The Future II, as dialled into the very high-tech (I love the Dymo labels) console of the modified (i.e. made to fly) DeLorean DMC-12 motor car. As you can see, the official time we can expect Marty and Doc Brown to arrive is (or was) 04:29 (presumably that’s Pacific Time).
Back to the Future Delorean Display
Depending on when you read this therefore you might still get a chance to watch one of the numerous Marty McFly countdown clocks hitting zero.
Most of the articles have focussed on how its creators did or didn’t get the technology right. Whilst things like electric cars, wearable tech, drones and smart glasses have come to fruition, what’s more interesting is what the film completely missed, i.e. the Internet, smartphones and all the gadgets which we now take for granted thanks to a further 30 years (i.e. since 1985, when the first film came out) of Moore’s Law.
Coincidentally one day before ‘Back to the Future’ day I gave a talk to a group of university students which was focussed on how technology has changed in the last 30 years due to the effects of Moore’s Law. It’s hard to believe that back in 1985, when the first Back to the Future film was released, a gigabyte of hard disk storage cost $71,000 and a megabyte of RAM cost $880. Today those costs are 5 cents and a lot less than 1 cent respectively. This is why it’s now possible for all of us to be walking around carrying smart devices which have more compute power and storage than even the largest and fastest super computers of a decade or so ago.
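Using the figures quoted above (the ‘today’ prices are rough approximations from the talk, not authoritative market data), a couple of lines of Python show just how dramatic that collapse in price has been:

```python
# Approximate prices, per the figures quoted above
HDD_1985_PER_GB = 71_000.00   # dollars per gigabyte in 1985
HDD_2015_PER_GB = 0.05        # roughly 5 cents per gigabyte today
RAM_1985_PER_MB = 880.00      # dollars per megabyte in 1985
RAM_2015_PER_MB = 0.005       # well under a cent per megabyte today (assumed)

print(f"Disk is ~{HDD_1985_PER_GB / HDD_2015_PER_GB:,.0f}x cheaper per GB")
print(f"RAM  is ~{RAM_1985_PER_MB / RAM_2015_PER_MB:,.0f}x cheaper per MB")
# Disk is ~1,420,000x cheaper per GB
# RAM  is ~176,000x cheaper per MB
```

A factor of a million or so over 30 years is roughly what twenty doublings, one every 18 months or so, would give you, which is Moore’s Law in a nutshell.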
It’s also why the statement made by Jim Deters, founder of the education community Galvanise, is so true, namely that today:
“Two guys in a Starbucks can have access to the same computing power as a Fortune 500 company.”
Today anyone with a laptop, a good internet connection and the right tools can set themselves up to disrupt whole industries that once seemed secure and impenetrable to newcomers. These are the disruptors who are building new business models that are driving new revenue streams and providing great, differentiated client experiences (I’m talking about the likes of Uber, Netflix and, further back, Amazon and Google). People use the term ‘digital Darwinism’, meaning the phenomenon of technology and society evolving faster than an organisation can adapt, to try and describe what is happening here. As Charles Darwin said:
“It’s not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change.”
Interestingly IBM is working with Galvanise in San Francisco at its Bluemix Garage where it brings together entrepreneurs and start ups, as well as established enterprises, to work with new platform as a service (PaaS) tools like IBM Bluemix, Cloudant and Watson to help them create and build new and disruptive applications. IBM also recently announced its Bluemix Garage Method which aims to combine industry best practices on Design Thinking, Lean Startup, Agile Development, DevOps, and Cloud to build and deliver innovative and disruptive solutions.
There are a number of Bluemix Garages opening around the world (currently they are in London, Toronto, Nice and Melbourne) as well as local pop-up garages. If you can’t get to a garage and want to have a play with Bluemix yourself you can sign up for a free registration here.
It’s not clear how long Moore’s Law has left to run, or whether non-silicon technologies that sidestep some of the physical limits now threatening the continued exponential growth in transistor counts will ever be viable. It’s also not clear how relevant Moore’s Law actually is in the age of cloud computing. One thing that is certain, however, is that we already have access to enough technology and tools that we are only limited by our ideas and imaginations in creating new and disruptive business models.
Now, where did I leave my hoverboard so I can get off to my next meeting?
In his book Fearless Genius the photographer Doug Menuez has produced a photographic essay on the “digital revolution” that took place between 1985 and 2000 in Silicon Valley, the area of California some 50 miles south of San Francisco that is home to some of the world’s most successful technology companies.
Fearless Genius by Doug Menuez
You can see a review of this book in my other blog here. Whilst the book covers a number of technology companies that were re-shaping the world during that tumultuous period, it focuses pretty heavily on Steve Jobs during the time he had been forced out of Apple and was trying to build his NeXT Computer.
Steve Jobs Enjoying a Joke
In this video Doug Menuez discusses his photo journalism work during the period that the book documents and at the end poses these three, powerful questions:
Computers will gain consciousness; shouldn’t we be having a public dialogue about that?
On education – who will be the next Steve Jobs, and where will she come from?
Why are all investments today so short term?
Where Will the Next Steve Jobs Come From?
All of which are summed up in the following wonderful quote:
If anything in the future is possible, how do we create the best possible future?
Here in the UK we are about to have an election and choose our leader(s) for the next five years. I find it worrying that there has been practically no debate on the impact that technology is likely to have during this time and how, as citizens of this country, we can get involved in trying to “create the best possible future”.
Martha Lane Fox made the same argument in her Richard Dimbleby lecture:
“We’re still wasting colossal fortunes on bad processes and bad technologies. In a digital world, it is perfectly possible to have good public services, keep investing in frontline staff and spend a lot less money. Saving money from the cold world of paper and administration and investing more in the warm hands of doctors, nurses and teachers.”
Martha Lane Fox Delivering Her Richard Dimbleby Lecture
I urge everyone to take a look at both Doug and Martha’s inspirational talks and, if you are here in the UK, to go to change.org and sign the petition to “create a new institution and make Britain brilliant at the internet” and ensure we here in the UK have a crack at developing our own fearless genius like Steve Jobs, wherever she may now be living.
Please note that all images in this post, apart from the last one, are (c) Doug Menuez and used with permission of the photographer.
So we finally have the official price of privacy. AT&T (one of the largest telecommunications companies in America) have announced that their GigaPower super-fast broadband service can be obtained at a discount if customers “let us use your individual Web browsing information, like the search terms you enter and the web pages you visit, to tailor ads and offers to your interests.” The cost of not letting AT&T do this? $29 a month. And don’t think you can use your browser’s privacy settings to stop AT&T tracking your browsing history or search requests. It looks like they use deep packet inspection to examine the data packets that pass through their network, allowing them to eavesdrop on your data.
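For readers wondering what deep packet inspection actually involves, here is a minimal, hypothetical sketch using the Python scapy library (something you should only run against traffic you are authorised to inspect): an on-path observer can read application data, such as HTTP Host headers, straight out of unencrypted packets. An ISP sitting in the network path can do the equivalent at scale; encrypted (HTTPS) traffic would not expose its payload this way, though some metadata still leaks.

```python
from scapy.all import sniff, TCP, Raw  # requires scapy and root privileges

def inspect(packet):
    """Pull the Host header out of any unencrypted HTTP request we see."""
    if packet.haslayer(TCP) and packet.haslayer(Raw):
        payload = bytes(packet[Raw].load)
        for line in payload.split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                print("Visited:", line.split(b":", 1)[1].strip().decode(errors="replace"))

# Sniff plain HTTP on the local interface; an ISP does the equivalent in-network.
sniff(filter="tcp port 80", prn=inspect, store=False)
```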
So far so bad but it gets worse. It is not at all clear what GigaPower subscribers get when they pay their $29 fee to opt out of the snooping service. AT&T says that it “may collect and use web browsing information for other purposes, as described in our Privacy Policy, even if you do not participate in the Internet Preferences program.” In other words even if you pay your ‘privacy tax’ there is no actual guarantee that AT&T won’t snoop on you anyway!
The even worse thing about this, as Bruce Schneier points out here, is that “privacy becomes a luxury good”, meaning only those who can afford the tax can have their privacy respected, thereby driving even more of a wedge between the digital haves and have-nots.
In many ways, of course, AT&T are at least being transparent: they tell you what they do and give you the option of opting out (whatever that actually means) or of not taking their service at all (assuming you don’t live in a part of the country where they have a virtual monopoly). Google, on the other hand, offers a ‘free’ email service on the basis that it scans your emails to display what it considers relevant ads, in the hope that the user is more likely to click on them and generate more advertising revenue. This is a service you cannot opt out of. Maybe it’s time for us Gmail users to switch to services like those offered by Apple, which has a different business model that does not rely on building “a profile based on your email content or web browsing habits to sell to advertisers”. They just make a fortune selling us nice, shiny gadgets.
I know it’s a new year and that generally is a time to make resolutions, give things up, do something different with your life etc but that is not the reason I have decided to become a Facebook refusenik.
Let’s be clear, I’ve never been a huge Facebook user, amassing hundreds of ‘friends’ and spending half my life on there. I’ve tended to use it to keep in touch with a few family members and ‘real’ friends, and also as a means of contacting people with a shared interest in photography. I’ve never found the user experience of Facebook particularly satisfying and indeed have found it completely frustrating at times, especially when posts seem to come and go, seemingly at random. I also hated the ‘feature’ that meant videos started playing as soon as you scrolled them into view. I’m sure there was a way of preventing this but I was never interested enough to figure out how to disable it. I could probably live with these foibles, however, as by and large the benefits outweighed the unsatisfactory aspects of Facebook’s usability.
What’s finally decided me to deactivate my account (and yes I know it’s still there just waiting for me to break and log back in again) is the insidious way in which Facebook is creeping into our lives and breaking down all aspects of privacy and even our self-determination. How so?
First off was the news in June 2014 that Facebook had conducted a secret study involving 689,000 users in which friends’ postings were manipulated to influence moods. Various tests were apparently performed. One test manipulated users’ exposure to their friends’ “positive emotional content” to see how it affected what they posted. The study found that emotions expressed by friends influence our own moods, and was the first experimental evidence for “massive-scale emotional contagion via social networks”. What’s so terrifying about this is the question Clay Johnson, the co-founder of Blue State Digital, asked via Twitter: “could the CIA incite revolution in Sudan by pressuring Facebook to promote discontent? Should that be legal? Could Mark Zuckerberg swing an election by promoting Upworthy (see later) posts two weeks beforehand? Should that be legal?”
As far as we know this has been a one-off, which Facebook apologised for, but the mere fact they thought they could get away with such a tactic is, to say the least, breathtaking in its audacity, and it is not an organisation I am comfortable entrusting my data to.
Next was the article by Tom Chatfield called The Attention Economy in which he discusses the idea that “attention is an inert and finite resource, like oil or gold: a tradable asset that the wise manipulator (i.e. Facebook and the like) auctions off to the highest bidder, or speculates upon to lucrative effect. There has even been talk of the world reaching ‘peak attention’, by analogy to peak oil production, meaning the moment at which there is no more spare attention left to spend.” Even though I didn’t believe Facebook was grabbing too much of my attention I was starting to become a little concerned that Facebook was often the first site I visited in the morning and was even becoming diverted by some of those posts in my newsfeed with titles like “This guy went to collect his mail as usual but you won’t believe what he found in his mailbox”. Research is beginning to show that doing more than one task at a time, especially more than one complex task, takes a toll on productivity and that the mind and brain were not designed for heavy-duty multitasking. As Danny Crichton argues here “we need to recognize the context that is distracting us, changing what we can change and advocating for what we can hopefully convince others to do.”
The final straw that made me throw in the Facebook towel, however, was reading The Virologist by Andrew Marantz in The New Yorker magazine, about Emerson Spartz, the so-called ‘king of clickbait’. Spartz is twenty-seven and has been successfully launching Web sites for more than half his life. In 1999, when Spartz was twelve, he built MuggleNet, which became the most popular Harry Potter fan site in the world. Spartz’s latest venture is Dose, a photo- and video-aggregation site whose posts are collections of images designed to tell a story. The posts have names like “You May Feel Bad For Laughing At These 24 Accidents…But It’s Too Funny To Look Away“. Dose gets most of its traffic through Facebook. A bored teenager absent-mindedly clicking links will eventually end up on a site like Dose. Spartz’s goal is to make the site so “sticky”—attention-grabbing and easy to navigate—that the teenager will stay for a while. Money is generated through ads (sometimes there are as many as ten on a page) and Spartz hopes to develop traffic-boosting software that he can sell to publishers and advertisers. Here’s the slightly disturbing thing though. Algorithms for analysing users’ behaviour are “baked in” to the sites Spartz builds. When a Dose post is created, it initially appears under as many as two dozen different headlines, distributed at random to different Facebook users. An algorithm measures which headline is attracting clicks most quickly, and after a few hours, when a statistically significant threshold is reached, the “winning” headline automatically supplants all others. Hence users are “click-bait”, unknowingly taking part in a test to see how quickly they respond to a headline.
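What Marantz describes is essentially an automated headline A/B test. Here is a minimal sketch of how such a mechanism might work; the thresholds and the statistics are made up for illustration and have nothing to do with Dose’s actual code:

```python
import math
import random

class HeadlineTest:
    """Toy version of the headline experiment described above."""

    def __init__(self, headlines):
        self.stats = {h: {"shown": 0, "clicked": 0} for h in headlines}
        self.winner = None

    def choose(self):
        # Until a winner emerges, every reader is unknowingly a test subject.
        if self.winner:
            return self.winner
        headline = random.choice(list(self.stats))
        self.stats[headline]["shown"] += 1
        return headline

    def record_click(self, headline):
        self.stats[headline]["clicked"] += 1
        self._maybe_pick_winner()

    def _ctr(self, headline):
        s = self.stats[headline]
        return s["clicked"] / s["shown"] if s["shown"] else 0.0

    def _maybe_pick_winner(self):
        # Crude two-proportion z-test between the best and second-best headline.
        ranked = sorted(self.stats, key=self._ctr, reverse=True)
        best, runner_up = ranked[0], ranked[1]
        n1, n2 = self.stats[best]["shown"], self.stats[runner_up]["shown"]
        if min(n1, n2) < 100:          # wait for enough impressions
            return
        p1, p2 = self._ctr(best), self._ctr(runner_up)
        pooled = (self.stats[best]["clicked"] + self.stats[runner_up]["clicked"]) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        if se and (p1 - p2) / se > 1.96:   # roughly 95% confidence
            self.winner = best             # this headline supplants all others
```

Every reader who sees the post before a winner is declared is, whether they know it or not, a participant in the experiment.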
The final, and most sinister, aspect of what Spartz is trying to do with Dose and similar sites is left to the end of Marantz’s article, when Spartz gives his vision of the future of media:
“The lines between advertising and content are blurring,” he said. “Right now, if you go to any Web site, it will know where you live, your shopping history, and it will use that to give you the best ad. I can’t wait to start doing that with content. It could take a few months, a few years—but I am motivated to get started on it right now, because I know I’ll kill it.”
The ‘content’ that Spartz talks about is news. In other words, his goal is to feed us the news articles his algorithms calculate we will like. We will no longer be reading the news we want to read but rather what some computer program thinks we should be reading, coupled of course with the ads the same program thinks we are most likely to respond to.
If all of this is not enough to concern you about what Facebook is doing (and the sort of companies it collaborates with) then the recent announcement of ‘keyword’ or ‘graph’ search might. Keyword search allows you to search content previously shared with you by entering a word or phrase. Privacy settings aren’t changing, and keyword search will only bring up content shared with you, like posts by friends or that friends commented on, not public posts or ones by Pages. But if a friend wanted to easily find posts where you said you were “drunk”, now they could. That accessibility changes how “privacy by obscurity” effectively works on Facebook. Rather than your posts being effectively lost in the mists of time (unless your friends want to methodically step through all your previous posts, that is), your previous confessions and misdemeanours are now just a keyword search away. Maybe now is the time to take a look at your Timeline, or search for a few dubious words alongside your name, to check for anything scandalous before someone else does. As this article points out, there are enormous implications to Facebook indexing trillions of our posts, some of which we can see now but others we can only begin to guess at as ‘Zuck’ and his band of researchers do more and more to mine our collective consciousness.
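To see why ‘privacy by obscurity’ evaporates once content is indexed, consider how little code it takes to make years of forgotten posts instantly searchable. This toy inverted index (made-up posts, nothing like Facebook’s real implementation) is all that is conceptually required:

```python
from collections import defaultdict

# Hypothetical historic posts that would otherwise be buried in a timeline
POSTS = {
    1: "great night out, slightly drunk but home safe",
    2: "quiet weekend reading about software architecture",
    3: "another drunk karaoke disaster with the team",
}

# Build the inverted index: word -> set of post ids containing it
index = defaultdict(set)
for post_id, text in POSTS.items():
    for word in text.lower().split():
        index[word].add(post_id)

# Any friend can now surface every old confession with one query
print(sorted(index["drunk"]))   # [1, 3]
```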
So that’s why I have decided to deactivate my Facebook account. For now my main social media interactions will be through Twitter (though that too is obviously working out how it can make money out of better and more targeted advertising). I am also investigating Ello, which bills itself as “a global community that believes that a social network should be a place to empower, inspire, and connect — not to deceive, coerce, and manipulate.” Ello takes no money from advertising and reckons it will make money from value-added services. It is early days for Ello yet and it still relies on venture capital money for its development. Who knows where it will go, but if you’d like to join me on there I’m @petercripps (contact me if you want an invite).
I realise this is a somewhat different post from my usual ones on here. I have written posts before on privacy in the internet age but I believe this is an important topic for software architects and one I hope to concentrate on more this year.
The UK government, under the auspices of Francis Maude and his Cabinet Office colleagues, have instigated a fundamental rethink of how government does IT following the arrival of the coalition in May 2010. You can find a brief summary here of what has happened since then (and why).
One of the approaches that the Cabinet Office favours is the idea of services built on a shared core, otherwise known as Government as a Platform (GaaP). In the government’s own words:
A platform provides essential technology infrastructure, including core applications that demonstrate the potential of the platform. Other organisations and developers can use the platform to innovate and build upon. The core platform provider enforces “rules of the road” (such as the open technical standards and processes to be used) to ensure consistency, and that applications based on the platform will work well together.
The UK government sees the adoption of platform based services as a way of breaking down the silos that have existed in governments, pretty much since the dawn of computing, as well as loosening the stranglehold it thinks the large IT vendors have on its IT departments. This is a picture from the Government Digital Service (GDS), part of the Cabinet Office, that shows how providing a platform layer, above the existing legacy (and siloed) applications, can help move towards GaaP.
In a paper on GaaP, Tim O’Reilly sets out a number of lessons learnt from previous (successful) platforms which are worth summarising here:
Platforms must be built on open standards. Open standards foster innovation as they let anyone play more easily on the platform. “When the barriers to entry to a market are low, entrepreneurs are free to invent the future. When barriers are high, innovation moves elsewhere.”
Don’t abuse your power as the provider of the platform. Platform providers must not abuse their privileged position or market power otherwise the platform will decline (usually because the platform provider has begun to compete with its developer ecosystem).
Build a simple system and let it evolve. As John Gall wrote: “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true. A complex system designed from scratch never works and cannot be made to work. You have to start over beginning with a working simple system.”
Design for participation. Participatory systems are often remarkably simple—they have to be, or they just don’t work. But when a system is designed from the ground up to consist of components developed by independent developers (in a government context, read countries, federal agencies, states, cities, private sector entities), magic happens.
Learn from your hackers. Developers may use APIs in unexpected ways. This is a good thing. If you see signs of uses that you didn’t consider, respond quickly, adapting the APIs to those new uses rather than trying to block them.
Harness implicit participation. On platforms like Facebook and Twitter people give away their information for free (or more precisely to use those platforms for free). They are implicitly involved therefore in the development (and funding) of those platforms. Mining and linking datasets is where the real value of platforms can be obtained. Governments should provide open government data to enable innovative private sector participants to improve their products and services.
Lower the barriers to experimentation. Platforms must be designed from the outset not as a fixed set of specifications, but as being open-ended to allow for extensibility and revision by the marketplace. Platform thinking is an antidote to the complete specifications that currently dominate the government approach not only to IT but to programs of all kinds.
Lead by example. A great platform provider does things that are ahead of the curve and that take time for the market to catch up to. It’s essential to prime the pump by showing what can be done.
In IBM, and elsewhere, we have been talking for a while about so called disruptive business platforms (DBP). A DBP has four actors associated with it:
Provider – Develops and provides the core platform. Providers need to ensure the platform exposes interfaces (that Complementors can use) and also ensure standards are defined that allow the platform to grow in a controlled way.
Complementor – Supplements the platform with new features, services and products that increase the value of the platform to End Users (and draw more of them in to use the platform).
End User – As well as performing the obvious ‘using the platform’ role, End Users also drive demand that Complementors help fulfil. There are also likely to be more End Users if there are more Complementors providing new features. A well-architected platform also allows End Users to interact with each other.
Supplier – Usually enters into a contract with the core platform provider to provide a known product or service or technology. Probably not innovating in the same way as the complementor would.
Walled Garden at Chartwell – Winston Churchill’s Home
We can see platform architectures as being the ideal balance between the two political extremes: those who want to see a fully stripped-back government that privatises all of its services and those who want central government to provide and manage all of these services. Platforms, if managed properly, provide the ideal ‘walled garden’ approach which is often attributed to the Apple iTunes and App Store way of doing business. Apple did not build all of the apps out there on the App Store. Instead they provided the platform on which others could provide the apps and create a diverse and thriving “app economy”.
It’s early days to see if this could work in a government context. What’s key is applying some of the principles suggested above by Tim O’Reilly to enforce the rules that others must comply with. There also, of course, need to be the right business models in place to encourage people to invest in the platform in the first place and to allow new start-ups to grow and thrive.
Every major change in technology comes with an inevitable upheaval in the job market. New jobs appear, existing ones go away and others morph into something different. When the automobile came along and gradually replaced the horse-drawn carriage, I’m sure carriage designers and builders were able to apply their skills to designing the new horseless carriage (at least initially), whilst engine design was a completely new role that had to be invented. The role of the blacksmith, however, declined rapidly as far fewer horses were needed to pull carriages. The business of IT has clearly gone through several transformational stages since the modern age of commercial computing began 55 years ago with the introduction of the IBM 1401, the world’s first fully transistorized computer. By the mid-1960s almost half of all computer systems in the world were 1401-type machines. During the subsequent 50 years we have gone through a number of different ages of computing, each corresponding to the major underlying architecture which was dominant during that period. The ages, with their (very) approximate time spans, are:
The Age of the Mainframe (1960 – 1975)
The Age of the Mini Computer (1975 – 1990)
The Age of Client-Server (1990 – 2000)
The Age of the Internet (2000 – 2010)
The Age of Mobile (2010 – 20??)
Of course, the technologies from each age have never completely gone away; they are just not the predominant driving IT force any more. For example there are still estimated to be some 15,000 mainframe installations world-wide, so mainframe programmers are not about to see the end of their careers any time soon. Similarly, there are other technologies bubbling under the surface, running alongside and actually overlapping these major waves. For example, networking has evolved from providing the ability to connect a “green screen” to a centralised mainframe, and then a mini, to the ability to connect thousands, then millions and now billions of devices. The client-server age and internet age were dependent on cheap and ubiquitous desktop personal computers, whilst the age of mobile is driven by offspring of the PC, now unshackled from the desktop, which run the same applications (and much, much more) on smaller and smaller devices.
The current mobile age is about far more than the ubiquitous smart devices which we now all own. It’s also driven by the technologies of cloud, analytics and social media but more than anything, it’s about how these technologies are coming together to form a perfect storm that promises to take us beyond computing as just a utility, which serves up the traditional corporate data from systems of record, to systems of engagement where our devices become an extension of ourselves that anticipate our needs and help us get what we want, when we want it. If the first three ages helped us define our systems of record the last two have not just moved us to systems of engagement they have also created what has been termed the age of context – an always on society where pervasive computing is reshaping our lives in ways that could not have been possible as little as ten years ago.
For those of us that work in IT what does this new contextual age mean in terms of our jobs and the roles we play in interacting with our peers and our clients? Is the shift to cloud, analytics, mobile and social just another technology change or does it represent something far more fundamental in how we go about doing the business of IT?
In 2012 the IBM Distinguished Engineer John Easton produced a thought leadership white paper, Exploring the impact of Cloud on IT roles and responsibilities, which used an IBM-patented technique called Component Business Modeling to map out the key functions of a typical IT department and look at how each of these might change when the delivery of IT services is moved to a cloud provider. Not entirely surprisingly, John’s paper concluded that “many roles will move from the enterprise to the cloud provider” and that “the responsibilities and importance of the surviving IT roles will change in the new world.”
As might be expected the roles that are likely to be no longer needed are the ones that are today involved in the building and running of IT systems, those to do with the development and deployment aspects of IT and those in ancillary functions like support operations and planning.
Some functions, whilst they still exist, are likely to be dramatically reduced in scope. Things like risk and compliance, information architecture and security, privacy and data protection fall into this category. These are all functions which the enterprise at least needs to have some say in but which will largely be dictated by the cloud provider and have to be taken or not depending on the service levels needed by the enterprise.
The most interesting category of functions affected by moving to the cloud are those that grow in importance. These by and large are in the competencies of customer relationship and business strategy and administration. These cover areas like enterprise architecture, portfolio & service management and demand & performance planning. In other words the areas that are predicted to grow in importance are those that involve IT talking to the business to understand what it is they want both in terms of functionality and service levels as well as ensuring the enterprise has a vision of how it can use IT to maintain competitive advantage.
Back in 2005 the research firm Gartner predicted that demand for IT specialists could shrink as much as 40 percent within the next five years. It went on to coin the term “IT versatilist”, people who are not only specialized in IT, but who demonstrate business competencies by handling multidisciplinary assignments. According to the research firm, businesses will increasingly look to employ versatilists saying “the long-term value of today’s IT specialists will come from understanding and navigating the situations, processes and buying patterns that characterize vertical industries and cross-industry processes”. In 2005 the concept of cloud computing was still in its infancy; the term did not really enter popular usage until a year later when Amazon introduced the Elastic Compute Cloud. What had been talked about before this was the concept of utility computing and indeed as far back as 1961 the computer scientist John McCarthy predicted that “computation may someday be organized as a public utility.”
Fast forward to 2014 and cloud computing is very much here to stay. IT professionals are in the midst of a fundamental change that, just as with the advent of the “horseless carriage” (AKA the motor car), is going to remove some job roles altogether but at the same time open up new and exciting opportunities that allow us to focus on our clients’ real needs and craft IT solutions that provide new and innovative ways of doing business. The phrase “may you live in interesting times” has been taken to mean “may you experience much disorder and trouble in your life”. I prefer to interpret it as “may you experience much disruption and amazement in your life”, for that is most certainly what this age of context seems to be creating.
A slightly edited version of this post also appears here.