The Price of Privacy

So we finally have the official price of privacy. AT&T (one of the largest telecommunications companies in America) has announced that its GigaPower super-fast broadband service can be obtained at a discount if customers “let us use your individual Web browsing information, like the search terms you enter and the web pages you visit, to tailor ads and offers to your interests.” The cost of not letting AT&T do this? $29 a month. And don’t think you can use your browser’s privacy settings to stop AT&T tracking your browsing history or search requests. It appears they use deep packet inspection to examine the data packets that pass through their network, which allows them to eavesdrop on your data.

So far so bad, but it gets worse. It is not at all clear what GigaPower subscribers get when they pay their $29 fee to opt out of the snooping service. AT&T says that it “may collect and use web browsing information for other purposes, as described in our Privacy Policy, even if you do not participate in the Internet Preferences program.” In other words, even if you pay your ‘privacy tax’ there is no actual guarantee that AT&T won’t snoop on you anyway!

What makes this even worse, as Bruce Schneier points out here, is that “privacy becomes a luxury good”: only those who can afford the tax have their privacy recognised, driving an even bigger wedge between the digital haves and have-nots.

In many ways, of course, at least AT&T are being transparent: they tell you what they do and give you the option of opting out (whatever that means) or of not taking their service at all (assuming you don’t live in a part of the country where they have a virtual monopoly). Google, on the other hand, offers a ‘free’ email service on the basis that it scans your emails to display what it considers are relevant ads, in the hope that the user is more likely to click on them and generate more advertising revenue. This is a service you cannot opt out of. Maybe it’s time for us Gmail users to switch to services like those offered by Apple, which has a different business model that does not rely on building “a profile based on your email content or web browsing habits to sell to advertisers”. They just make a fortune selling us nice, shiny gadgets.

Why I Became a Facebook Refusenik

I know it’s a new year and that is generally a time to make resolutions, give things up, do something different with your life and so on, but that is not the reason I have decided to become a Facebook refusenik.

Image Copyright http://www.keepcalmandposters.com

Let’s be clear, I’ve never been a huge Facebook user amassing hundreds of ‘friends’ and spending half my life on there. I’ve tended to use it to keep in touch with a few family members and ‘real’ friends, and also as a means of contacting people with a shared interest in photography. I’ve never found the user experience of Facebook particularly satisfying and indeed have found it completely frustrating at times, especially when posts seem to come and go, seemingly at random. I also hated the ‘feature’ that meant videos started playing as soon as you scrolled them into view. I’m sure there was a way of preventing this but I was never interested enough to figure out how to disable it. I could probably live with these foibles, however, as by and large the benefits outweighed the unsatisfactory aspects of Facebook’s usability.

What finally decided me to deactivate my account (and yes, I know it’s still there just waiting for me to break and log back in again) is the insidious way in which Facebook is creeping into our lives and breaking down all aspects of privacy and even our self-determination. How so?

First off was the news in June 2014 that Facebook had conducted a secret study involving 689,000 users in which friends’ postings were manipulated to influence moods. Various tests were apparently performed. One test manipulated a user’s exposure to their friends’ “positive emotional content” to see how it affected what they posted. The study found that emotions expressed by friends influence our own moods and was the first experimental evidence for “massive-scale emotional contagion via social networks”. What’s so terrifying about this is the question Clay Johnson, the co-founder of Blue State Digital, asked via Twitter: “Could the CIA incite revolution in Sudan by pressuring Facebook to promote discontent? Should that be legal? Could Mark Zuckerberg swing an election by promoting Upworthy (see later) posts two weeks beforehand? Should that be legal?”

As far as we know this was a one-off, which Facebook apologised for, but the mere fact they thought they could get away with such a tactic is, to say the least, breathtaking in its audacity, and not the mark of an organisation I am comfortable entrusting my data to.

Next was the article by Tom Chatfield called The Attention Economy in which he discusses the idea that “attention is an inert and finite resource, like oil or gold: a tradable asset that the wise manipulator (i.e. Facebook and the like) auctions off to the highest bidder, or speculates upon to lucrative effect. There has even been talk of the world reaching ‘peak attention’, by analogy to peak oil production, meaning the moment at which there is no more spare attention left to spend.” Even though I didn’t believe Facebook was grabbing too much of my attention I was starting to become a little concerned that Facebook was often the first site I visited in the morning and was even becoming diverted by some of those posts in my newsfeed with titles like “This guy went to collect his mail as usual but you won’t believe what he found in his mailbox”. Research is beginning to show that doing more than one task at a time, especially more than one complex task, takes a toll on productivity and that the mind and brain were not designed for heavy-duty multitasking. As Danny Crichton argues here “we need to recognize the context that is distracting us, changing what we can change and advocating for what we can hopefully convince others to do.”

The final straw that made me throw in the Facebook towel, however, was reading The Virologist by Andrew Marantz in The New Yorker magazine about Emerson Spartz, the so-called ‘king of clickbait’. Spartz is twenty-seven and has been successfully launching Web sites for more than half his life. In 1999, when Spartz was twelve, he built MuggleNet, which became the most popular Harry Potter fan site in the world. Spartz’s latest venture is Dose, a photo- and video-aggregation site whose posts are collections of images designed to tell a story. The posts have names like “You May Feel Bad For Laughing At These 24 Accidents…But It’s Too Funny To Look Away“. Dose gets most of its traffic through Facebook. A bored teenager absent-mindedly clicking links will eventually end up on a site like Dose. Spartz’s goal is to make the site so “sticky”—attention-grabbing and easy to navigate—that the teenager will stay for a while. Money is generated through ads – sometimes there are as many as ten on a page – and Spartz hopes to develop traffic-boosting software that he can sell to publishers and advertisers. Here’s the slightly disturbing thing though. Algorithms for analysing users’ behaviour are “baked in” to the sites Spartz builds. When a Dose post is created, it initially appears under as many as two dozen different headlines, distributed at random to different Facebook users. An algorithm measures which headline is attracting clicks most quickly, and after a few hours, when a statistically significant threshold is reached, the “winning” headline automatically supplants all others. Hence users are the “click-bait”, unknowingly taking part in a “test” to see how quickly they respond to a headline.
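
Marantz doesn’t publish Dose’s actual algorithm, but the mechanism described above (serve a post under several headlines at random, measure click-through, promote the winner once its lead is statistically significant) is essentially a headline A/B test. Below is a minimal, hypothetical Python sketch of that idea; the headlines, click rates, significance threshold and check interval are all made up for illustration and are not Dose’s real values.

```python
import random
from math import sqrt

def z_score(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is headline A's click rate convincingly higher than B's?"""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_a - p_b) / se if se else 0.0

def pick_winner(stats, threshold=1.96):
    """Return the leading headline once its lead over the runner-up is significant."""
    ranked = sorted(stats.items(), key=lambda kv: kv[1][0] / max(kv[1][1], 1), reverse=True)
    (best, (c1, v1)), (_, (c2, v2)) = ranked[0], ranked[1]
    return best if v1 and v2 and z_score(c1, v1, c2, v2) >= threshold else None

# Entirely made-up headlines and click-through rates, purely for illustration.
headlines = ["You Won't Believe...", "24 Accidents That...", "This Guy Went To..."]
true_rates = dict(zip(headlines, (0.05, 0.03, 0.08)))  # unknown to the algorithm in reality
stats = {h: [0, 0] for h in headlines}                 # headline -> [clicks, views]

winner = None
while winner is None:
    h = random.choice(headlines)        # each impression gets a randomly chosen headline
    stats[h][1] += 1                    # record the impression
    if random.random() < true_rates[h]:
        stats[h][0] += 1                # record the click
    if sum(views for _, views in stats.values()) % 1000 == 0:
        winner = pick_winner(stats)     # periodically test for a significant leader

print("Winning headline:", winner)
```

Every reader who clicks (or doesn’t) is unwittingly supplying a data point to a test like this, which is precisely the sense in which the users themselves become the ‘click-bait’.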

The final, and most sinister, aspect of what Spartz is trying to do with Dose and similar sites is left to the end of Marantz’s article, when Spartz gives his vision of the future of media:

“The lines between advertising and content are blurring,” he said. “Right now, if you go to any Web site, it will know where you live, your shopping history, and it will use that to give you the best ad. I can’t wait to start doing that with content. It could take a few months, a few years—but I am motivated to get started on it right now, because I know I’ll kill it.”

The ‘content’ that Spartz talks about is news. In other words, he sees his goal as feeding us the news articles his algorithms calculate we will like. We will no longer be reading the news we want to read but rather that which some computer program thinks we should be reading, coupled of course with the ads the same program thinks we are most likely to respond to.

If all of this is not enough to concern you about what Facebook is doing (and the sort of companies it collaborates with) then the recent announcement of ‘keyword’ or ‘graph’ search might. Keyword search allows you to search content previously shared with you by entering a word or phrase. Privacy settings aren’t changing, and keyword search will only bring up content shared with you, like posts by friends or posts that friends commented on, not public posts or ones by Pages. But if a friend wanted to easily find posts where you said you were “drunk”, now they could. That accessibility changes how “privacy by obscurity” effectively works on Facebook. Rather than your posts being effectively lost in the mists of time (unless your friends want to methodically step through all your previous posts, that is), your previous confessions and misdemeanours are now just a keyword search away. Maybe now is the time to take a look at your Timeline, or search for a few dubious words alongside your name, to check for anything scandalous before someone else does? As this article points out, there are enormous implications to Facebook indexing trillions of our posts, some of which we can see now and others we can only begin to guess at as ‘Zuck’ and his band of researchers do more and more to mine our collective consciousness.
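
Facebook has not published how graph search is implemented, but the privacy point rests on a simple idea: once every old post sits in an index keyed by the words it contains, a six-year-old confession is one lookup away rather than buried under years of scrolling. Here is a toy sketch of such an inverted index; the posts, dates and structure are entirely hypothetical.

```python
from collections import defaultdict

def build_index(posts):
    """Toy inverted index: maps each word to the set of post ids that contain it."""
    index = defaultdict(set)
    for post_id, text in posts.items():
        for word in text.lower().split():
            index[word.strip(".,!?")].add(post_id)
    return index

# Made-up timeline of posts, previously protected only by the effort of scrolling back.
posts = {
    "2009-07-14": "Great night out, drunk again",
    "2012-03-02": "Lovely walk in the hills today",
    "2015-01-05": "New year, new me",
}

index = build_index(posts)
print(index.get("drunk", set()))  # {'2009-07-14'} - the old post is one query away
```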

So that’s why I have decided to deactivate my Facebook account. For now my main social media interactions will be through Twitter (though that too is working out how it can make money out of better and more targeted advertising, of course). I am also investigating Ello, which bills itself as “a global community that believes that a social network should be a place to empower, inspire, and connect — not to deceive, coerce, and manipulate.” Ello takes no money from advertising and reckons it will make money from value-added services. It is early days for Ello yet and it still receives venture capital money for its development. Who knows where it will go, but if you’d like to join me on there I’m @petercripps (contact me if you want an invite).

I realise this is a somewhat different post from my usual ones on here. I have written posts before on privacy in the internet age but I believe this is an important topic for software architects and one I hope to concentrate on more this year.

Government as a Platform

The UK government, under the auspices of Francis Maude and his Cabinet Office colleagues, has instigated a fundamental rethink of how government does IT following the arrival of the coalition in May 2010. You can find a brief summary here of what has happened since then (and why).

One of the approaches that the Cabinet Office favours is the idea of services built on a shared core, otherwise known as Government as a Platform (GaaP). In the government’s own words:

A platform provides essential technology infrastructure, including core applications that demonstrate the potential of the platform. Other organisations and developers can use the platform to innovate and build upon. The core platform provider enforces “rules of the road” (such as the open technical standards and processes to be used) to ensure consistency, and that applications based on the platform will work well together.

The UK government sees the adoption of platform-based services as a way of breaking down the silos that have existed in governments pretty much since the dawn of computing, as well as loosening the stranglehold it thinks the large IT vendors have on its IT departments. This is a picture from the Government Digital Service (GDS), part of the Cabinet Office, that shows how providing a platform layer, above the existing legacy (and siloed) applications, can help move towards GaaP.

In a paper on GaaP, Tim O’Reilly sets out a number of lessons learnt from previous (successful) platforms which are worth summarising here:

  1. Platforms must be built on open standards. Open standards foster innovation as they let anyone play more easily on the platform. “When the barriers to entry to a market are low, entrepreneurs are free to invent the future. When barriers are high, innovation moves elsewhere.”
  2. Don’t abuse your power as the provider of the platform. Platform providers must not abuse their privileged position or market power otherwise the platform will decline (usually because the platform provider has begun to compete with its developer ecosystem).
  3. Build a simple system and let it evolve. As John Gall wrote: “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true. A complex system designed from scratch never works and cannot be made to work. You have to start over beginning with a working simple system.”
  4. Design for participation. Participatory systems are often remarkably simple—they have to be, or they just don’t work. But when a system is designed from the ground up to consist of components developed by independent developers (in a government context, read countries, federal agencies, states, cities, private sector entities), magic happens.
  5. Learn from your hackers. Developers may use APIs in unexpected ways. This is a good thing. If you see signs of uses that you didn’t consider, respond quickly, adapting the APIs to those new uses rather than trying to block them.
  6. Harness implicit participation. On platforms like Facebook and Twitter people give away their information for free (or more precisely to use those platforms for free). They are implicitly involved therefore in the development (and funding) of those platforms. Mining and linking datasets is where the real value of platforms can be obtained. Governments should provide open government data to enable innovative private sector participants to improve their products and services.
  7. Lower the barriers to experimentation. Platforms must be designed from the outset not as a fixed set of specifications, but as being open-ended  to allow for extensibility and revision by the marketplace. Platform thinking is an antidote to the complete specifications that currently dominate the government approach not only to IT but to programs of all kinds.
  8. Lead by example. A great platform provider does things that are ahead of the curve and that take time for the market to catch up to. It’s essential to prime the pump by showing what can be done.

In IBM, and elsewhere, we have been talking for a while about so-called disruptive business platforms (DBPs). A DBP has four actors associated with it:

  • Provider – Develops and provides the core platform. Providers need to ensure the platform exposes interfaces (that Complementors can use) and also ensure standards are defined that allow the platform to grow in a controlled way.
  • Complementor – Supplements the platform with new features, services and products that increase the value of the platform to End Users (and draw more of them in to use the platform).
  • End User – As well as performing the obvious ‘using the platform’ role, End Users also drive demand that Complementors help fulfil. There are also likely to be more End Users if there are more Complementors providing new features. A well-architected platform also allows End Users to interact with each other.
  • Supplier – Usually enters into a contract with the core platform provider to provide a known product, service or technology. Probably not innovating in the same way as a Complementor would.
Walled Garden at Chartwell – Winston Churchill’s Home

We can see platform architectures as the ideal balance between the two political extremes: those who want to see a fully stripped-back government that privatises all of its services, and those who want central government to provide and manage all of these services. Platforms, if managed properly, provide the ideal ‘walled garden’ approach often attributed to the Apple iTunes and App Store way of doing business. Apple did not build all of the apps out there on the App Store. Instead they provided the platform on which others could provide the apps and create a diverse and thriving “app economy”.

It’s early days to see if this could work in a government context. What’s key is applying some of the above principles suggested by Tim O’Reilly to enforce the rules that others must comply with. There also, of course, need to be the right business models in place that encourage people to invest in the platform in the first place and that allow new start-ups to grow and thrive.

How Cloud is Changing Roles in IT

Every major change in technology comes with an inevitable upheaval in the job market. New jobs appear, existing ones go away and others morph into something different. When the automobile came along and gradually replaced the horse-drawn carriage, I’m sure carriage designers and builders were able to apply their skills to designing the new horseless carriage (at least initially), whilst engine design was a completely new role that had to be invented. The role of the blacksmith however declined rapidly as far fewer horses were needed to pull carriages.
Smedje i Hornbæk, 1875
The business of IT has clearly gone through several transformational stages since the modern age of commercial computing began 55 years ago with the introduction of the IBM 1401, the world’s first fully transistorized computer. By the mid-1960s almost half of all computer systems in the world were 1401-type machines.

IBM 1401
During the subsequent 50 years we have gone through a number of different ages of computing, each corresponding to the major underlying architecture which was dominant during that period. The ages, with their (very) approximate time spans, are:

  1.  The Age of the Mainframe (1960 – 1975)
  2. The Age of the Mini Computer (1975 – 1990)
  3. The Age of Client-Server (1990 – 2000)
  4. The Age of the Internet (2000 – 2010)
  5. The Age of Mobile (2010 – 20??)

Of course, the technologies from each age have never completely gone away; they are just not the predominant driving force in IT any more. For example, there are still estimated to be some 15,000 mainframe installations world-wide, so mainframe programmers are not about to see the end of their careers any time soon. Similarly, there are other technologies bubbling under the surface, running alongside and actually overlapping these major waves. For example, networking has evolved from providing the ability to connect a “green screen” to a centralised mainframe, and then mini, to the ability to connect thousands, then millions and now billions of devices. The client-server age and internet age were dependent on cheap and ubiquitous desktop personal computers, whilst the age of mobile is driven by offspring of the PC, now unshackled from the desktop, which run the same applications (and much, much more) on smaller and smaller devices.

The current mobile age is about far more than the ubiquitous smart devices which we now all own. It’s also driven by the technologies of cloud, analytics and social media but more than anything, it’s about how these technologies are coming together to form a perfect storm that promises to take us beyond computing as just a utility, which serves up the traditional corporate data from systems of record, to systems of engagement where our devices become an extension of ourselves that anticipate our needs and help us get what we want, when we want it. If the first three ages helped us define our systems of record the last two have not just moved us to systems of engagement they have also created what has been termed the age of context – an always on society where pervasive computing is reshaping our lives in ways that could not have been possible as little as ten years ago.

For those of us that work in IT what does this new contextual age mean in terms of our jobs and the roles we play in interacting with our peers and our clients? Is the shift to cloud, analytics, mobile and social just another technology change or does it represent something far more fundamental in how we go about doing the business of IT?

In 2012 the IBM Distinguished Engineer John Easton produced a thought-leadership white paper, Exploring the impact of Cloud on IT roles and responsibilities, which used an IBM-patented technique called Component Business Modeling to map out the key functions of a typical IT department and look at how each of these might change when the delivery of IT services is moved to a cloud provider. Not entirely surprisingly, John’s paper concluded that “many roles will move from the enterprise to the cloud provider” and that “the responsibilities and importance of the surviving IT roles will change in the new world.”

As might be expected, the roles that are likely to be no longer needed are the ones that are today involved in the building and running of IT systems: those to do with the development and deployment aspects of IT and those in ancillary functions like support operations and planning.

Some functions, whilst they still exist, are likely to be dramatically reduced in scope. Things like risk and compliance, information architecture and security, privacy and data protection fall into this category. These are all functions which the enterprise at least needs to have some say in but which will largely be dictated by the cloud provider and have to be taken or not depending on the service levels needed by the enterprise.

The most interesting category of functions affected by moving to the cloud are those that grow in importance. These by and large are in the competencies of customer relationship and business strategy and administration. These cover areas like enterprise architecture, portfolio & service management and demand & performance planning. In other words the areas that are predicted to grow in importance are those that involve IT talking to the business to understand what it is they want both in terms of functionality and service levels as well as ensuring the enterprise has a vision of how it can use IT to maintain competitive advantage.

Back in 2005 the research firm Gartner predicted that demand for IT specialists could shrink as much as 40 percent within the next five years. It went on to coin the term “IT versatilist”, people who are not only specialized in IT, but who demonstrate business competencies by handling multidisciplinary assignments. According to the research firm, businesses will increasingly look to employ versatilists saying “the long-term value of today’s IT specialists will come from understanding and navigating the situations, processes and buying patterns that characterize vertical industries and cross-industry processes”. In 2005 the concept of cloud computing was still in its infancy; the term did not really enter popular usage until a year later when Amazon introduced the Elastic Compute Cloud.  What had been talked about before this was the concept of utility computing and indeed as far back as 1961 the computer scientist John McCarthy predicted that “computation may someday be organized as a public utility.”

Fast forward to 2014 and cloud computing is very much here to stay. IT professionals are in the midst of a fundamental change that, just as with the advent of the “horseless carriage” (AKA the motor car), is going to remove some job roles altogether but at the same time open up new and exciting opportunities that allow us to focus on our clients’ real needs and craft IT solutions that provide new and innovative ways of doing business. The phrase “may you live in interesting times” has been taken to mean “may you experience much disorder and trouble in your life”. I prefer to interpret the phrase as “may you experience much disruption and amazement in your life”, for that is most certainly what this age of context seems to be creating.

A slightly edited version of this post also appears here.

Software Architecture Zen is Five Years Old

So Software Architecture Zen is five years old (actually on 28th July, I missed my own birthday).

I started this blog on the Blogger platform and moved to WordPress earlier this year. Whilst my WordPress following has not built up to the same level I had on Blogger I far prefer the tools and whole look and feel offered by WordPress so do not regret the move.

I’ve had just over 103,000 hits in total across both platforms in five years. Not quite up there with the Joel on Software blogs of the world but not too shabby either I think. Here are my top five posts of the last five years:

  1. Architecture vs. Design
  2. How to Create Effective Technical Presentations
  3. On Thinking Architecturally
  4. Two Diagrams All Software Architects Need
  5. The Moral Architect

I’m pleased the last one is up there as it’s a topic dear to my heart and one I plan to blog on more in the future. Here’s to the next five years!

 

The Wicked Problems of Government

The dichotomy of our age is surely that as our machines become more and more intelligent the problems that we need them to solve are becoming ever more difficult and intractable. They are indeed truly wicked problems, no more so than in our offices of power where the addition of political and social ‘agendas’ would seem to make some of the problems we face even more difficult to address.

A Demonstration Against the Infamous ‘Poll Tax’

In their book The Blunders of Our Governments the authors Anthony King and Ivor Crewe recall some of the most costly mistakes made by British governments over the last three decades. These include policy blunders such as the so-called poll tax introduced by the Thatcher government in 1990, which led to rioting on the streets of many UK cities (above). Like the poll tax, many, in fact most, of the blunders recounted are not IT related; however, the authors do devote a whole chapter (chapter 13, rather appropriately) to the more egregious examples of successive governments’ IT blunders. These include:

  • The Crown Prosecution Service, 1989 – A computerised system for tracking prosecutions. Meant to be up and running by 1993-94, abandoned in 1997 following a critical report from the National Audit Office (NAO).
  • The Department of Social Security, 1994 – A system to issue pensions and child benefits using swipe cards rather than the traditional books which were subject to fraud and also inefficient. The government cancelled the project in 1999 after repeated delays and disputes between the various stakeholders and following another critical report by the NAO.
  • The Home Office (Immigration and Nationality Directorate), 1996 – An integrated casework system to deal with asylum, refugee and citizenship applications. The system was meant to be live by October 1998 but was cancelled in 1999 at a cost to the UK taxpayer of at least £77 million. The backlog of asylum and citizenship cases which the system had been meant to address got worse, not better.

Whilst the authors don’t offer any cast-iron solutions for how to solve these problems, they do highlight a number of factors these blunders had in common. Many of these were highlighted in a joint Royal Academy of Engineering and British Computer Society report published 10 years ago this month, called The Challenges of Complex IT Projects. The major reasons found for why complex IT projects fail included:

  • Lack of agreed measures of success.
  • Lack of clear senior management ownership.
  • Lack of effective stakeholder management.
  • Lack of project/risk management skills.
  • Evaluation of proposals driven by price rather than business benefits.
  • Projects not broken into manageable steps.

In an attempt to address at least some of the issues around the procurement and operation of government IT systems (issues not restricted to the UK, of course), in particular those citizen-facing services delivered over the internet, the coalition government that came to power in May 2010 commissioned a strategic review of its online delivery of public services by the UK Digital Champion, Martha Lane Fox. Her report, published in November 2010, recommended:

  • Provision of a common look and feel for all government departments’ transactional online services to citizens and business.
  • The opening up of government services and content, using application programming interfaces (APIs), to third parties.
  • Putting a new central team in Cabinet Office that is in absolute control of the overall user experience across all digital channels and that commissions all government online information from other departments.
  • Appointing a new CEO for digital in the Cabinet Office with absolute authority over the user experience across all government online services and the power to direct all government online spending.

Another government report, published in July 2011 by the Public Administration Select Committee and entitled Government and IT – “a recipe for rip-offs” – time for a new approach, proposed 33 recommendations on how government could improve its woeful record for delivering IT. These included:

  • Developing a strategy to either replace legacy systems with newer, less costly systems, or open up the intellectual property rights to competitors.
  • Contracts to be broken up to allow for more effective competition and to increase opportunities for SMEs.
  • The Government must stop departments specifying IT solutions and ensure they specify what outcomes they wish to achieve.
  • Having a small group within government with the skills to both procure and manage a contract in partnership with its suppliers.
  • Senior Responsible Owners (SROs) should stay in post to oversee the delivery of the benefits for which they are accountable and which the project was intended to deliver.

At least partly as a result of these reports and their recommendations the Government Digital Service (GDS) was established in April 2011 under the leadership of Mike Bracken (previously Director of Digital Development at The Guardian newspaper). GDS works in three core areas:

  • Transforming 25 high volume key exemplars from across government into digital services.
  • Building and maintaining the consolidated GOV.UK website, which brings government services together in one place.
  • Changing the way government procures IT services.

To the large corporates that have traditionally provided IT software, hardware and services to government, GDS has had a big impact on how they do business. Not only does most business now have to be transacted through the government’s own CloudStore, but GDS also encourages a strong bias in favour of:

  • Software built on open source technology.
  • Systems that conform to open standards.
  • Using the cloud where it makes sense to do so.
  • Agile based development.
  • Working with small to medium enterprises (SMEs) rather than the large corporates seen as “an oligarchy that is ripping off the government“.

There can be no doubt that the sorry litany of public sector IT project failures has, rightly or wrongly, caused the pendulum to swing strongly in the direction that favours the above approach when procuring IT. However, some argue that the pendulum has now swung a little too far. Indeed the UK Labour party has launched its own digital strategy review, led by shadow Cabinet Office minister Chi Onwurah. She talks about a need to be more context-driven rather than transaction-focused, saying that while the GDS focus has been on redesigning 25 “exemplar” transactions, Labour feels this misses the complexity of delivering public services to the individual. Labour is also critical of GDS’s apparent hostility to large IT suppliers, saying it is an “exaggeration” that big IT suppliers are “the bogeymen of IT”. While Labour supports competition and creating opportunities for SMEs, she said that large suppliers “shouldn’t be locked out, but neither should they be locked in”.

The establishment of the GDS has certainly provided a wake-up call for the large IT providers. However, and here I agree with the views expressed by Ms Onwurah, context is crucial and it’s far too easy to take an overly simplistic approach to trying to solve government IT issues. A good example of this is open source software. Open source software is certainly not free and is often not dramatically cheaper than proprietary software (which is often built using some elements of open source anyway) once support costs are taken into account. The more serious problem with open source is where the support for it comes from. As the recent Heartbleed security issue with OpenSSL has shown, there are dangers in entrusting mission-critical enterprise software to people who are not accountable (and possibly even unknown).

One aspect of ‘solving’ wicked problems is to bring more of a multi-disciplinary approach to the table. I have blogged before about the importance of a versatilist approach in solving such problems. Like it or not, the world cannot be viewed in high-contrast black and white terms. One of the attributes of a wicked problem is that there is often no right or wrong answer, and addressing one aspect of the problem can often introduce other issues. Understanding context and making smart architecture decisions is one aspect of this. Another is whether the so-called SMAC (social, mobile, analytics and cloud) technologies can bring a radically new approach to the way government makes use of IT. This is something for discussion in future blog posts.

“I’ll Send You the Deck”

Warning, this is a rant!

I’m sure we’ve all been here. You’re in a meeting, or on a conference call, or just having a conversation with a colleague, discussing some interesting idea or proposal which he or she has previous experience of, and at some point they utter the immortal words “I’ll send you the deck”. The “deck” in question is usually an (at least) 20-page presentation, maybe with lots of diagrams so quite large, some of which may, if you’re lucky, relate to what you were actually talking about, but most of which won’t. Now, I’m not sure about you but I find this hugely annoying for several reasons. Here are some:

  1. A presentation is for, well, presenting. It’s not for relaying information after the event with no speaker to justify its existence. That’s what documents are for. We need to make careful decisions about the tools we use for conveying information, recognising that the choice of tool can detract from, as well as enhance, the information being presented.
  2. Sending a presentation in an email just clogs up your inbox with useless megabytes of data. Not only that but you are then left with the dilemma of what to do with the presentation. Do you detach it and store it somewhere in the hope you will find it later or just leave it in the email to ultimately get lost or forgotten?
  3. Chances are that only a small part of the presentation is actually relevant to what was being discussed, so you are left trying to work out which part of the presentation is important and which is largely irrelevant.

So, what is the alternative to “sending a deck”? In this age of social media the alternatives are almost overwhelming, but here are a few.

  • If your presentation contains just a few core ideas then take the time to extract the relevant ones and place them in the email itself.
  • If the information is actually elsewhere on the internet (or your company intranet) then send a link. If it’s not commercially sensitive and available externally to your organisation why not use Twitter? That way you can also socialize the message more widely.
  • Maybe the content you need to send is actually worth creating as a blog post for a wider, and more permanent distribution (I actually create a lot of my posts like that).
  • Many large organisations are now investing in enterprise social software. Technology such as IBM Connections provides on-premise, hybrid and cloud-based software that not only seamlessly integrates email, instant messaging, blogs, wikis and files but also delivers the information to virtually any mobile device. Enterprise social software allows people to share content and collaborate in new and more creative ways and avoids the loss of information in the ‘tar pits‘ of our hard drives and mail inboxes.

Finally, here’s the last word from Dilbert, who is spot on the money as usual.

Dilbert PowerPoint

(c) 2010 Scott Adams Inc

The Times They Are A-Changin’

Come senators, congressmen
Please heed the call
Don’t stand in the doorway
Don’t block up the hall
For he that gets hurt
Will be he who has stalled
There’s a battle outside and it is ragin’
It’ll soon shake your windows and rattle your walls
For the times they are a-changin’

So sang Bob Dylan in The Times They Are a-Changin’, from his third album of the same name, released in early 1964, which makes it 50 years old this year.

These are certainly epoch-changing times, as we all try to understand the combined effect that social, mobile, analytics and cloud computing are going to have on the world, and how we as software architects should react to it.

You may have noticed a lack of posts on this blog recently. This is partly due to my own general busyness but also due to the fact that I have been trying to understand and assimilate what impact these changes are likely to have on this profession of ours. Is it more of the same, just with the underlying technology changing (again), or is it really a fundamental change in the way the world is going to work from now on? Whichever it is, these are some of the themes I will be covering in upcoming posts on this (hopefully) reinvigorated blog.

I’d like to welcome you to the new home of Software Architecture Zen on the WordPress blogging platform. I’ve been running this blog over on Blogger for getting on for five years now but have decided this year to gradually move over here. I hope my readers will follow me here, but for now I aim to put posts in both places.

Eponymous Laws and the Invasion of Technology

Unless you’ve had your head buried in a devilish software project that has consumed your every waking hour over the last month or so, you cannot help but have noticed that technology has been getting a lot of bad press lately. Here are some recent news stories that make one wonder whether our technology may be running away from us.

Is this just the internet reaching a level of maturity that past technologies from the humble telephone, the VCR and the now ubiquitous games consoles have been through or is there something really sinister going on here? What is the implication of all this on the software architect, should we care or do we just stick our head in the sand and keep on building the systems that enable all of the above, and more, to happen?

Here are three eponymous laws* which I think could have been used to predict much of this:

  • Metcalfe’s law (circa 1980): “The value of a system grows as approximately the square of the number of users of the system.” A variation on this is Sarnoff’s law: “The value of a broadcast network is proportional to the number of viewers.”
  • Though I’ve never seen this described as an eponymous law, my feeling is it should be. It’s a quote from Marshall McLuhan (from his book Understanding Media: The Extensions of Man, published in 1964): “We become what we behold. We shape our tools and then our tools shape us.”
  • Clarke’s third law (from 1962): “Any sufficiently advanced technology is indistinguishable from magic.” This is from Arthur C. Clarke’s book Profiles of the Future.

Whilst Metcalfe’s law talks of the value of a system growing as the number of users increases, I suspect the same law applies to the disadvantage or detriment of such systems. As more people use a system, the more of them there will be to seek out ways of misusing that system. If only 0.1% of the 2.4 billion people who use the internet use it for illicit purposes, that still makes a whopping 2.4 million. And that number is set to grow just as the number of online users grows.
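
To make the numbers concrete, here is a small illustrative Python snippet contrasting Sarnoff’s linear value with Metcalfe’s quadratic value, and repeating the back-of-the-envelope sum above; the constant of proportionality is arbitrary and the 0.1% misuse rate is, of course, purely hypothetical.

```python
def sarnoff_value(users, k=1.0):
    """Sarnoff's law: value grows linearly with the number of users."""
    return k * users

def metcalfe_value(users, k=1.0):
    """Metcalfe's law: value grows roughly with the square of the number of users."""
    return k * users ** 2

internet_users = 2_400_000_000   # ~2.4 billion internet users (the figure quoted above)
illicit_share = 0.001            # hypothetical 0.1% using the network for illicit purposes

print(f"Potential misusers: {internet_users * illicit_share:,.0f}")  # 2,400,000
for n in (1_000, 1_000_000, internet_users):
    print(f"{n:>13,} users -> Sarnoff {sarnoff_value(n):.2e}, Metcalfe {metcalfe_value(n):.2e}")
```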

As to Marshall McLuhan’s law, isn’t the stage we are at with the internet just that? The web is (possibly) beginning to shape us in terms of the way we think and behave. Should we be worried? Possibly. It’s probably too early to tell and there is a lack of hard scientific evidence either way to decide. I suspect this is going to be ripe ground for PhD theses for some years to come. In the meantime there are several more popular theses from the likes of Clay Shirky, Nicholas Carr, Aleks Krotoski and Baroness Susan Greenfield who describe the positive and negative aspects of our online addictions.

And so to Arthur C. Clarke. I’ve always loved both his non-fiction and science fiction writing, and this is possibly one of his most incisive prophecies. It feels to me that technology has probably reached the stage where most of the population really do perceive it as “magic”. And therein lies the problem. Once we stop understanding how something works we start to believe in it almost unquestioningly. How many of us give a second thought when we climb aboard an aeroplane or train, or give ourselves up to doctors and nurses treating us with drugs unimagined even a few years ago?

In his essay PRISM is the dark side of design thinking, Sam Jacob asks what America’s PRISM surveillance program tells us about design thinking and concludes:

Design thinking annexes the perceived power of design and folds it into the development of systems rather than things. It’s a design ideology that is now pervasive, seeping into the design of government and legislation (for example, the UK Government’s Nudge Unit which works on behavioral design) and the interfaces of democracy (see the Design of the Year award-winning .gov.uk). If these are examples of ways in which design can help develop an open-access, digital democracy, Prism is its inverted image. The black mirror of democratic design, the dark side of design thinking.

Back in 1942 the science fiction author Isaac Asimov proposed the three laws of robotics as an inbuilt safety feature of what was then thought likely to become the dominant technology of the latter part of the 20th century, namely intelligent robots. Robots, at least in the form Asimov predicted, have not yet come to pass; however, in the internet, we have probably built a technology even more powerful and with more far-reaching implications. Maybe, as at least one person has suggested, we should be considering the equivalent of Asimov’s three laws for the internet? Maybe it’s time that we as software architects, the main group of people building these systems, began thinking about some inbuilt safety mechanisms for the systems we are creating?

*An eponym is a person or thing, whether real or fictional, after which a particular place, tribe, era, discovery, or other item is named. So-called eponymous laws are succinct observations or predictions named after a person (either by that person themselves or by someone else ascribing the law to them).