I Think Therefore I Blog

I recently delivered a short presentation called “I Think Therefore I Blog”. Whilst this does not specifically have anything to do with software architecture, I hope it might provide some encouragement to colleagues and others out there in the blogosphere as to why blogging can be good for you and why it’s worth pursuing, sometimes in the face of little or no feedback!

Reason #1: Blogging helps you think (and reflect)
The author Joan Didion once said, “I don’t know what I think until I try to write it down.” Amazon CEO Jeff Bezos preaches the value of writing long-form prose to clarify thinking. Blogging, as a form of self-expression (and I’m not talking about blogs that just post references to other material), forces you to think by writing down your arguments and assumptions. This is the single biggest reason to do it, and I think it alone makes it worthwhile.

You have a lot of opinions and I’m sure you hold some of them pretty strongly. Pick one and write it up in a post — I’m sure your opinion will change somewhat, or at least become more nuanced. Putting something down on ‘paper’ strips away a lot of the uncertainty and vagueness, leaving you to defend your position for yourself. Even if no one else reads or comments on your blog (and they often don’t), you still get the chance to clarify your thoughts in your own mind, and as you write, they become even clearer.

The more you blog, the better you become at writing for your audience, managing your arguments, defending your position, and thinking critically. I find that if I don’t understand something very well and want to learn more about it, writing a blog post about that topic focuses my thinking and helps me learn it better.

Reason #2: Blogging enforces discipline
A blog is a broadcast, not a publication. It is not static. Like a shark, if it stops moving, it dies. If you want your blog to last and grow you need to write regularly; it therefore enforces some form of discipline on your life.

Although I don’t always achieve this, I do find that writing a little, often, is better than trying to write a whole post in one go. Start a post with an idea, write it down, then add to it as your thoughts develop; you’ll soon have something you are happy with and are ready to publish. The key thing is to start as soon as you have an idea: capture it straight away before you forget it, then expand on it.

Reason #3: Blogging gives you wings
If you persist with blogging, you will discover that you develop new and creative ways to articulate what you want to say. As I write, I often search for alternative ways to express myself. This can be through images, quotes, a retelling of old experiences through stories, videos, audio, or useful hyperlinks to related web resources.

You have many ways to convey your ideas, and you are only limited by your own imagination. Try out new ways of communicating and take risks. Blogging is the platform that allows you to be creative.

Reason #4: Blogging creates personal momentum
Blogging puts you out there, for all the world to see, to be judged and criticized for both your words and how you structure them. It’s a bit intimidating, but I know the only way to become a better writer is to keep doing it.

Once you have started blogging, and you realise that you can actually do it, you will probably want to develop your skills further. Blogging can be time consuming, but the rewards are ultimately worth it. In my experience, I find myself breaking out of inertia to create some forward movement in my thinking, especially when I blog about topics that may be emotive, controversial or challenging. The photographer Henri Cartier-Bresson said “your first 10,000 photos are your worst”; a similar rule probably applies to blog posts!

I also believe blogging makes me better at my job. I can’t share my expertise or ideas if I don’t have any. My commitment to write 2-4 times per month keeps me motivated to experiment and discover new things that help me develop at work and personally.

Conversely, if I am not blogging regularly then I need to ask myself why that is. Is it because I’m not getting sufficient stimulus or ideas from what I am doing, and if so, what can I do to change that?

Reason #5: Blogging gives you (more) eminence
Those of us who work in the so-called knowledge economy need to build and maintain, for want of a better word, our ‘eminence’. Eminence is defined as “a position of superiority, high rank or fame”. What I mean by eminence here is having a position which others look to for guidance, expertise or inspiration. You are known as someone who can offer a point of view or an opinion. A blog gives you that platform and also allows you to engage in the real world.

So, there you have it, my reasons for blogging. As a postscript to this I fortuitously came across this post as I was writing which adds some kind of perspective to the act of blogging. I suggest you give the post a read but here is a quote which gives a good summary:

…if you start blogging thinking that you’re well on your way to achieving Malcolm Gladwell’s career, you are setting yourself for disappointment. It will suck the enjoyment out of writing. Every completed post will be saddled with a lot of time staring at traffic stats that refuse to go up. It’s depressing.

I have to confess to doing the occasional bit of TSS (traffic stat staring) myself but at the same time have concluded there is no point in chasing the ratings as they might have said in more traditional broadcast media. If you want to blog, do it for its own sake and (some of) the reasons above, don’t do it because you think you will become famous and/or rich (though don’t entirely close the door to that possibility).

This is for Everyone

Twenty years ago today, on 30th April 1993, CERN published a brief statement that made World Wide Web technology available on a royalty-free basis and changed the world forever. Here’s the innocuous piece of paper that shows this and that truly allowed Tim Berners-Lee, at the fantastic London 2012 Olympics opening ceremony, to claim “this is for everyone”. Over the past twenty years the web has become embedded in all of our lives in ways which most of us could never have dreamed of, and has probably given many of us in the software industry quite a secure (and for some, lucrative) living during that time.

How fitting then that yesterday, almost 20 years to the day since CERN’s historic announcement, IBM announced a new appliance called IBM MessageSight, designed to help organizations manage and communicate with the billions of mobile devices and sensors found in systems such as automobiles, traffic management systems, smart buildings and household appliances, the so-called Internet of Things.

I’ve no idea what this announcement means in terms of capabilities, other than what is available in the press release, however it is comforting to note that foundational to IBM MessageSight is its support of MQTT, which was recently proposed to become an OASIS standard, providing a lightweight messaging transport for communication in machine to machine (M2M) and mobile environments. Today more than ever enterprises and governments are demanding compliance with open standards rather than proprietary ones so it is good to see that platforms such as MessageSight will be adhering to such standards.
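Part of what makes MQTT such a lightweight transport is how frugal its wire format is. As a small illustration (my own sketch, not anything from the press release), here is the variable-length “Remaining Length” encoding from the MQTT 3.1.1 specification, in which each byte carries 7 bits of length plus a continuation bit:

```python
def encode_remaining_length(n: int) -> bytes:
    """Encode an MQTT 'Remaining Length' field: 7 data bits per byte,
    with the high bit set while more bytes follow (maximum 4 bytes,
    i.e. lengths up to 268,435,455)."""
    if not 0 <= n <= 268_435_455:
        raise ValueError("out of range for MQTT Remaining Length")
    out = bytearray()
    while True:
        n, digit = divmod(n, 128)
        if n > 0:
            digit |= 0x80  # continuation bit: more bytes follow
        out.append(digit)
        if n == 0:
            return bytes(out)
```

A small sensor reading thus needs only a single length byte of framing, which hints at why the protocol suits machine-to-machine environments with constrained bandwidth and power.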

A Step Too Far?

The trouble with technology, especially, it seems, computer technology, is that it keeps “improving”. I’ve written before about the ethics of the job that we as software architects do and whether or not we should always accept what we do without asking questions, not least of which should be: is the technology I am building, or being asked to build, a step too far?

Three articles have caught my eye this week which have made me ponder this question again.

The first is from the technology watcher and author Nicholas Carr, who talks about the Glass Collective, an investment syndicate made up of three companies: Google Ventures, Andreessen Horowitz and Kleiner Perkins Caufield & Byers, whose collective aim is to provide seed funding to entrepreneurs in the Glass ecosystem to help jump-start their ideas.

For those not in the know about Glass it is, according to the Google blog, all about “getting technology out of the way” and has the aim of building technology that is “seamless, beautiful and empowering”. Glass’s first manifestation is to be Internet-connected glasses that take photos, record video and offer hands-free Internet access right in front of a user’s eyes.

Clearly the type of augmented reality that Glass opens up could have huge educational benefits (think of walking around a museum or art gallery and getting information on what you are looking at piped right to you as you look at different works of art) as well as very serious privacy implications. For another view on this read the excellent blog post from my IBM colleague Rick Robinson on privacy in digital cities.

In his blog post Carr refers to a quote from Marshall McLuhan, made a half century ago and now seeming quite prescient:

Once we have surrendered our senses and nervous systems to the private manipulation of those who would try to benefit by taking a lease on our eyes and ears and nerves, we don’t really have any rights left.

The next thing to catch my eye (or actually several thousand things) was around the whole sorry tale of the Boston bombings. This post in particular from the Wall Street Journal discusses the role of Boston’s so called fusion center that “helps investigators scour for connections among potential suspects, by mining hundreds of law enforcement sources around the region, ranging from traffic violations, to jail records and criminal histories, along with public data like property records.”

Whilst I doubt anyone would question the validity of using data in this way to track down people who have committed atrocities such as we saw in Boston, it does highlight just how much data is now collected on us and about us, much of which we have no control over being broadcast to the world.

Finally, on a much lighter note, we learn that the contraceptive maker Durex has released their “long distance, sexy time fundawear“. I’ll let you watch the first live trial video of this at your leisure (warning: not entirely work safe) but let’s just say here that it adds a whole new dimension to stroking the screen on your smartphone. Whilst this one has no immediate privacy issues (providing the participants don’t wear their Google Glass at the same time as playing in their fundawear, at least), it does raise some interesting questions about how much we will let technology impinge on the most intimate parts of our lives.

So where does this latest foray of mine into digital privacy take us and what conclusions, if any, can we draw? Back in 2006 IBM Fellow and Chief Scientist Jeff Jonas posted a comment on his blog called Responsible Innovation: Designing for Human Rights in which he asks two questions: what if we are creating technologies that go in the face of the Universal Declaration of Human Rights and what if systems are designed without the essential characteristics needed to support basic privacy and civil liberties principles?

Jeff argues that if technologies could play a role in any of the arrest, detention, exile, interference, attacks or deprivation mentioned in the Universal Declaration of Human Rights then they must support disclosure of the source upon which such invasions are predicated. He suggests that systems that could affect one’s privacy or civil liberties should have a number of design characteristics built in that allow for some level of auditability as well as ensuring accuracy of the data they hold: characteristics such as every data point being associated with its data source, and every data point being associated with its author. Given this was written in 2006, when Facebook was only two years old and still largely confined to use in US universities, this is a hugely prescient and thoughtful piece of insight (which is why Jeff is an IBM Fellow of course).
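As an illustration only (the field names here are my own, not Jonas’s), those two design characteristics amount to every record carrying its own provenance:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DataPoint:
    """Hypothetical record shape illustrating auditable design:
    every value carries the source it came from and the author who
    asserted it, so any later use of the data can be traced and,
    if necessary, corrected."""
    value: str
    source: str        # the system or document the value came from
    author: str        # who asserted it
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

point = DataPoint(value="lives at 10 High St",
                  source="electoral-roll-2006",
                  author="registrar@example.org")
```

A system built on records like this can answer “on what basis was this decision made?”, which is precisely the disclosure Jonas argues for.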

So, there’s an idea! New technologies, when they come along, should be examined to ensure they have built-in safeguards so that the rights granted to us all in the Universal Declaration of Human Rights are not infringed or taken away from us. How would this be done and, more importantly of course, what bodies or organisations would we empower to ensure such safeguards were both effective and enforceable? There are no easy or straightforward answers here, but it is certainly a topic for some discussion I believe.

A Tale of Two Cities

It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair…

So began A Tale of Two Cities, written by Charles Dickens in 1859. The novel depicts the plight of the French peasantry demoralized by their aristocracy in the years leading up to the revolution, and draws many unflattering social parallels with life in London during the same period.

This week I’ve come across two very interesting and contrasting views of what smarter cities might look like, which could well be summed up by the opening words of Dickens’ novel. One very new (February 2013) and one quite old (September 2008), they offer respectively a utopian and a dystopian view of the future of our cities.

The first view comes from the engineering and architectural consultancy Arup. Their internal think-tank, Foresight + Innovation, has produced a report called It’s Alive – Can you imagine the urban building of the future? In the report the author, Josef Hargrave, imagines what life will be like in 2050 if, as is predicted, 75% of the planet’s 9 billion population are living in cities. Hargrave asks:

As city living takes center stage, what will we come to expect from the design and function of urban structures and buildings?

In the future cities of Hargrave’s view:

  • The buildings in our cities will be manipulated in real time, and the components they are made up from will be part of the Internet of Things. They will be flexible structures whose components can be upgraded and rearranged over time.
  • Buildings will understand an individual’s personal preferences, possibly at the level of their genetic composition.
  • Buildings will be more akin to living organisms and react to external conditions through a series of feedback loops. They will function as a “synthetic and highly sensitive nervous system”.
  • Buildings will not only be made from sustainable resources but will become an integral component of urban food production, containing areas for food production as well as bio-fuel cells that provide energy for the building.
  • Buildings will be integrated with the systems around them (green spaces, public transport and smart energy grids).

All of the above will obviously require a smart infrastructure of sensors generating data that can be analysed in real time and reacted to, both by the building’s systems and by the individuals who live and work in them. A nice job not only for the building architect but also for the IT architect who needs to design those systems and make sure they all work together.
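At its simplest, the “feedback loop” a living building runs is the kind of control loop engineers have used for decades. Here is a toy sketch (entirely hypothetical numbers and names, mine, not from the Arup report) of a proportional controller nudging a room towards a target temperature from sensor readings:

```python
def control_step(reading: float, target: float, gain: float = 0.5) -> float:
    """Proportional controller: return the adjustment an actuator
    (e.g. a heater) should apply, based on the current sensor error."""
    return gain * (target - reading)

def simulate(temp: float, target: float, steps: int) -> float:
    """Apply the feedback loop repeatedly; the room converges on the
    target because each step closes half of the remaining gap."""
    for _ in range(steps):
        temp += control_step(temp, target)
    return temp

final = simulate(temp=16.0, target=21.0, steps=10)  # converges near 21.0
```

A real building would run thousands of such loops across heating, lighting and ventilation, fed by the sensor infrastructure described above, which is exactly where the IT architect’s integration job begins.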

The other future vision I stumbled across this week is not quite as reassuring or cozy. Written in 2008, The Internet of Things – A critique of the ambient technology and the all-seeing network of RFID is a series of essays describing a slightly more alarming world in which large numbers of interconnected devices (AKA the Internet of Things) are used for more surreptitious monitoring of the earth’s citizens.

Mark Weiser, Chief Scientist at Xerox PARC and the so-called father of ubiquitous computing, once said:

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.

As the name of the paper suggests, it largely focuses on the threat of ubiquitous RFID devices. At the time the paper was written, smartphones like the iPhone, introduced one year earlier, were only just taking off, and the tracking and monitoring capabilities of these devices were in their infancy. The paper provides a series of warnings about what might happen when computers disappear completely and become fully integrated into our lives.

For example, at one level there might be benefits in tracking John, who goes to shop A and buys object B, then visits shop C and buys object D, because we know those objects happen to be the ingredients for making a bomb. For some governments, however, if shop A happens to be the offices of an “illegal” human rights organisation and shop C is actually an outside public space where an organised march is taking place, arresting John may be for a different purpose.

I guess the second city tale could be summed up by asking: when the environment becomes the interface, where is the off switch?

Whichever vision comes to pass (and it is most likely to be some combination of the two), as technologists we have it within our power to shape our future for the better, not the worse. In the United Kingdom, where I live, we sometimes work ourselves into a bit of a frenzy over the machinations of government and industry, whether it be the latest sex scandal, expenses misconduct or banking wrongdoing. We do, however, compared with many countries, have a relatively free press through which we eventually learn of these scandals. We also have unfettered access to the internet and tools like this where we can make our voices heard. It is incumbent on us all, therefore, to express concerns where they are valid and to make sure our governments and business leaders are held to account and use technology wisely. I certainly know which of these two cities I would rather live in.

The Art of What’s Possible (and What’s Not)

One of the things Apple are definitely good at is giving us products we didn’t know we needed (e.g. the iPad). Steve Jobs, who died a year ago this week, famously said “You’ve got to start with the customer experience and work back to the technology — not the other way around”  (see this video at around 1:55 as well as this interview with Steve Jobs in Wired).

The subtle difference from the “normal” requirements gathering process here is that, rather than asking what the customer wants, you are looking at the customer experience you want to create and then trying to figure out how available technology can realise that experience. In retrospect, we can all see why a device like the iPad is so useful (movies and books on the go, a cloud enabled device that lets you move data between it and other devices, mobile web on a screen you can actually read etc, etc). Chances are however that it would have been very difficult to elicit a set of requirements from someone that would have ended up with such a device.

Jobs goes on to say “you can’t start with the technology and try to figure out where you’re going to try and sell it”. In many ways this is a restatement of the well known “golden hammer” anti-pattern (to a man with a hammer, everything appears as a nail) from software development, the misapplication of a favored technology, tool or concept in solving a problem.

Whilst all this is true and would seem to make sense, at least as far as Apple is concerned, there is still another subtlety at play when building truly successful products that people didn’t know they wanted. As an illustration of this consider another, slightly more infamous Apple product, the Newton Message Pad.

In many ways the Newton was an early version of the iPad or iPhone, some 25 years ahead of its time. One of its goals was to “reinvent personal computing”. There were many reasons why the Newton did not succeed (including its large, clunky size and poor handwriting recognition system), however one of them must surely have been that the device was just too far ahead of the technology available at the time in terms of processing power, memory, battery life and display technology. Sometimes ideas can be really great but the technology is just not there to support them.

So, whilst Jobs is right in saying you cannot start with the technology and then decide how to sell it, equally you cannot start with an idea if the technology is not there to support it, as was the case with the Newton. So what does this mean for architects?

A good understanding of technology, how it works and how it can be used to solve business problems is, of course, a key skill of any architect. Equally important, however, is an understanding of what is not possible with current technology. It is sometimes too easy to be seduced by technology and to overstate what it is capable of. Looking out for this, especially when there may be pressure on to close a sale, is something we must all do, and we must be forceful in calling it out when we think something is not possible.

What Business Leaders Want

I’m not a big fan of Mel Gibson (in fact, not a fan at all) but this week I have been reminded of the film What Women Want, in which he played the lead role. For those who have not seen it (and I’m not recommending it, by the way), the film revolves around a chauvinistic executive (Gibson) who, after an accident, gains the ability to hear what women are really thinking. This reminder came about during a conference I have just attended on the role that architects play (or, more to the point, should be playing) in industry today. I guess the conference could have been called: What business leaders want (from us architects).

Here’s something I drew during one of the sessions showing the dichotomy we face when trying to build and deliver solutions to a business whose key drivers are less cost, more value.

The perception is that value is only obtained if solutions can be built quickly and cheaply. To a business this usually means within a financial year (or less). For an architect brought up on the importance of delivering integrity and solutions that adhere to best practice and standards, that equates to “fast and dirty”, which gives us the black curve. To be clear, value is what the business wants, and it often comes (in the eyes of the architect) at a cost both to their own integrity and to that of the systems they are building. The “trick” then is how to deliver both integrity and value (i.e. the green line)? Here’s my take:

  1. Value can be delivered quickly, but only if it’s done in increments. Plan to deliver something quick (within a financial quarter) but not dirty.
  2. Create a hassle map and focus on the big and nasty hassles first.
  3. Don’t throw out everything you’ve learnt about architectural integrity, but learn to focus on what matters in the short term. For example, architecting for every possible change case may not be relevant if the entire nature of the business is likely to change within the lifetime of the system. Maybe throwing out and starting again is actually an option.
  4. Adopt a “bring your own” rather than “build your own” philosophy. Learn how to prove the business value of bringing rather than building.
  5. Do build for scalability. Be optimistic that the business will flourish and require more, not less, of your solution. Take advantage of cloud technology to smooth temporary blips in workload.

These were five things I thought of straight away, there must be loads more (tell me). What is clear is that in troubled times such as these, we must look at adapting our approach to building systems so that we deliver measurable business value more quickly than ever, or we won’t be around to enjoy the next Mel Gibson tale!

The Changing Nature of Use Cases

It might be just me but I have detected a subtle, but significant, change in the meaning of the term use case of late. Something I have found I’ve been going along with but not without feeling slightly uncomfortable whilst doing so. The two definitions I’ve always used for use cases, namely a business use case and system use case come from Alistair Cockburn’s excellent and pretty much definitive guide: Writing Effective Use Cases (shame on you if this is not in your library).

A business use case is one in which the design scope is business operations. It is about an actor outside the organisation achieving a goal with respect to the organisation. The business use case often contains no mention of technology, since it is concerned with how the business operates.

A system use case is one in which the design scope is the computer system to be designed. It is about an actor achieving a goal with the computer system; it is about technology.

For a discussion on the differences see this blog post. Ivar Jacobson, the person who pretty much invented the term ‘use case’, defines it in the following way:

When a user uses the system, she or he will perform a behaviorally related sequence of transactions in a dialogue with the system. We call such a special sequence a use case.

The way in which I am increasingly seeing use case being used is in a more informal, or less precise way, which is best described in terms of how I actually hear them being used. Here are some examples:

  • I need to understand the use case(s) for how we would use that product.
  • Is that a use case our operations people would support?
  • We need to run some use cases through that solution to test out some of the assumptions.
  • That use case calls for a larger number of users than we had envisaged.

In all of these examples the term ‘use case’ is pretty much synonymous with the term ‘requirement’. The scope may be a product, a system, a solution or an organisation, and the actor may or may not be clearly identified. The use cases may imply some function of those entities or some quality (such as the number of users in the last example above).
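To make the contrast concrete, here is a sketch (my own illustration, not taken from Cockburn or Jacobson) of the formal shape as a simple data structure:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UseCase:
    """The formal, Cockburn-style shape: a named actor pursuing a
    goal through an ordered sequence of interactions with a system."""
    name: str
    actor: str
    goal: str
    steps: List[str]

withdraw = UseCase(
    name="Withdraw cash",
    actor="Account holder",
    goal="Obtain cash from own account",
    steps=["Insert card", "Enter PIN", "Choose amount",
           "Take cash and card"],
)
```

The informal usage in the examples above typically supplies only something like the goal (or a quality, such as the number of users), leaving the actor and steps to be filled in by later analysis.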

Does this matter? At first the purist in me said it did. A use case has to have a well defined actor and a clearly stated set of ‘steps’ that actor performs when interacting with the system or organisation. However on reflection I’ve become slightly more relaxed about it on the basis that:

  • Any term which becomes a common, de-facto currency of understanding is better than not having one.
  • Adding the prefix ‘system’ or ‘business’ still gives us the more formal definitions most architects (business, solution or application) would recognise, so why not relax the use of the term when it does not have one of these prefixes?

Entering into the spirit of this ‘relaxed’ use of the term, here is a set of use cases I’ve recently used when trying to articulate some of the uses of ‘Big Data’. This maps use cases onto two of the three Vs of the so-called “Big Data 3-V” framework (velocity and variety; volume is missing here).

Each of the boxes represents a ‘use case’ (no prefix) which has an informal description as in the following example.

Use Case: Social media analysis: Although basic insights into social media can tell you what people are saying and how sentiment is trending, they cannot answer what is ultimately a more important question: Why are people saying what they are saying and behaving the way they are behaving? Answering this type of question requires enriching the social media feeds with additional data residing in other enterprise systems. In other words, linking behaviour, and the driver of that behaviour, requires relating social media analytics back to traditional data repositories.

In the traditional context of a business or system use case this could be decomposed into a number of these more detailed and precise types of use case. What it does do however is provide a basic scope for such a further, more detailed, analysis to be performed.
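As a rough sketch of the enrichment idea in the social media example (all data and field names invented purely for illustration), relating a sentiment feed back to a traditional customer repository can be as simple as a keyed join:

```python
# Hypothetical social media feed: what people are saying and how they feel.
social_feed = [
    {"handle": "@anna", "sentiment": -0.8, "topic": "checkout"},
    {"handle": "@ben",  "sentiment": 0.6,  "topic": "delivery"},
]

# Hypothetical enterprise repository, keyed by the same (assumed) handle.
crm = {
    "@anna": {"segment": "premium",  "lifetime_value": 4200},
    "@ben":  {"segment": "standard", "lifetime_value": 310},
}

# Enrich each post with what the enterprise already knows about the author.
enriched = [{**post, **crm.get(post["handle"], {})} for post in social_feed]

# Linking behaviour to its driver: a premium customer voicing strongly
# negative sentiment is a more important signal than sentiment alone.
at_risk = [r for r in enriched
           if r.get("segment") == "premium" and r["sentiment"] < -0.5]
```

In practice the hard part is the join key itself, since matching social identities to enterprise records is rarely this clean, but the shape of the analysis is the same.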

It’s Pretty Interactive, Yeah

I have said a number of times in this space that I believe Tim Berners-Lee to be one of the greatest software architects of all time. This conversation, as recorded in Wired, not only reiterates this belief but also shows how incredibly humble and self-effacing Berners-Lee is, as well as being the grand master of the understatement.

Last week in a place called Tyler, in eastern Texas, a scene which could have come straight out of a Woody Allen film was played out. For background on the case see here but, in a nutshell, a company called Eolas claims it owns patents that entitle it to royalties from anyone whose website uses “interactive” features, like pictures that the visitor can manipulate, or streaming video. The claim, by Eolas’s owner, one Michael Doyle, is that his was the first computer program enabling an “interactive web”. Tim Berners-Lee was called as an expert witness and was cross-examined by Jennifer Doan, a Texas lawyer representing two of the defendants, Yahoo and Amazon. This is how part of the cross-examination went.

“When Berners-Lee invented the web, did he apply for a patent on it?” Doan asked.

“No,” said Berners-Lee.

“Why not?” asked Doan.

“The internet was already around. I was taking hypertext, and it was around a long time too. I was taking stuff we knew how to do…. All I was doing was putting together bits that had been around for years in a particular combination to meet the needs that I have.” [My italics]

Doan: “And who owns the web?”

Berners-Lee: “We do.”

Doan: “The web we all own, is it ‘interactive’?”

“It is pretty interactive, yeah,” said Berners-Lee, smiling.

I just love this. Here’s the guy who has given us one of the most game-changing technologies of all time FOR NO PERSONAL GAIN TO HIMSELF, finding himself in an out-of-the-way courtroom explaining one of the fundamental tenets of software architecture: putting together bits that have been around.

Setting aside the whole thorny question of software patents and whether they are actually evil, this is surely one of the greatest and most understated descriptions of what we, as software architects, actually do, by the master himself. Thank you Tim.

You’re Building Me a What?

This week I’ve been attending a cloud architecture workshop. Not to architect a cloud for anyone in particular, but to learn what the approach to architecting clouds should be. This being an IBM workshop there was, of course, lots of Tivoli this, WebSphere that and Power the other. Whilst the workshop was full of good advice I couldn’t help thinking of this cartoon from 2008:

Courtesy geekandpoke.typepad.com

Just replace the word ‘SOA’ with ‘cloud’ (as ‘SOA’ could have been replaced by ‘client-server’ in the early nineties) and you get the idea. As software architects it is very easy to get seduced by technology, especially when it is new and your vendors, consultants and analysts are telling you this really is the future. However if you cannot explain to your client why you’re building him a cloud and what business benefit it will bring him then you are likely to fail just as much with this technology as people have with previous technology choices.

What Now for Internet Piracy?

So SOPA is to be kicked into the long grass, which means it is at least postponed if not killed altogether. For those who have not been following the Stop Online Piracy Act debate, this is the bill proposed by a U.S. Republican Representative to expand the ability of U.S. law enforcement to fight online trafficking in copyrighted intellectual property (IP) and counterfeit goods. Supporters of SOPA said it would protect IP as well as the jobs and livelihoods of people (and organisations) involved in creating books, films, music, photographs and so on. Opponents reckoned the legislation threatened free speech and innovation, and would enable law enforcement officers to block access to entire internet domains as well as violating the First Amendment. Inevitably much of the digerati came out in flat opposition to SOPA and staged an internet blackout on 18th January, where many sites “went dark” and Wikipedia was unavailable altogether. Critics of SOPA cited the fact that the bill was supported by the music and movie industries as an indication that it was just another way for these industry dinosaurs to protect their monopoly over content distribution. So, a last-minute victory for the new digital industry over the old analogue one?

And yet…

Check out this TED talk by digital commentator Clay Shirky called Why SOPA is a bad idea. Shirky, in his usual compelling way, puts a good case for why SOPA is bad (the talk was published before the recent announcement of the bill being postponed), but the real interest for me in this talk was in the comments about it. There are many people saying that yes, SOPA may be a bad bill, but there is nonetheless a real problem with content being given away that should otherwise be paid for, and that content creators (whether they be software developers, writers or photographers) are simply losing their livelihoods because people are stealing their work.

Sure, there are copyright laws that are meant to prevent this sort of thing happening, but who can really chase down the websites and peer-to-peer networks that “share” content they have not created or paid for? SOPA may have been a bad bill, and may really have been about protecting the interests of large corporations who just want to carry on doing what they have always done without having to adapt or innovate. However, without some sort of regulation that protects the interests of individuals or small start-ups wishing to earn a living from their art, killing SOPA has not moved us forward in any way and certainly has not protected their interests. Unfortunately some sort of internet regulation is inevitable.

For a historical perspective of why this is likely to be so, see the TED talk by the Liberal Democrat Paddy Ashdown called The global power shift. Ashdown argues that “where power goes governance must follow” and that there is plenty of historical evidence showing what happens when this is not the case (the recent/current financial meltdown to name but one).

So SOPA may be dead but something needs to replace it and if we are to get the right kind of governance we must all participate in the debate else the powerful special interest groups will get their own way. Clay Shirky argued that if SOPA failed to be passed it would be replaced by something else. Now then is our chance to ensure that whatever that is, is right for content creators as well as distributors.