Architecting and Social Media

The arguments for and against the rise of user-generated content on the web continue to rage. Depending on which side of the debate you take, we are either on an amoral downward spiral of increasingly meaningless content being generated by amateurs (for free) that is putting professional writers, musicians, software developers, photographers and so on out of work, as well as ruining our brains; or we are entering a new age where the combination of an unbounded publishing engine and the cognitive surplus many people now have means we are able to build a better and more cooperative world. Like most things in life the truth will not be at one of these polar opposites but somewhere in between. Seth Godin makes an interesting point in a recent blog entry that “lifestyle media isn’t a fad it’s what human beings have been doing forever, with a brief, recent interruption for a hundred years of professional media along the way”. He goes on to say “we shouldn’t be surprised when someone chooses to publish their photos, their words, their art or their opinions. We should be surprised when they don’t.”

After all, given the precarious nature of the press in the UK at the moment, with stories being obtained through all sorts of dubious means, the professionals can hardly be seen as holding the ethical or moral high ground.

The possibilities for creativity, and for building interesting and innovative solutions out of this mixed bag of social media self-publishing, are going to provide architects with fertile ground over the coming years. A nice example of this is Flipboard which, if you have an iPhone or iPad, you should definitely download. This free app is a “social magazine” that expands links your friends and contacts are sharing on Facebook, Twitter, LinkedIn, 500px and others into beautifully packaged “articles”. It can also pull in content from a raft of other online sources. It’s a great example of what architects should be doing, namely taking existing components and assembling them in interesting and innovative ways.

Steve Jobs 1955 – 2011

During the coming days and weeks millions of words will be written about Steve Jobs, many of them on devices he created. Why does the world care so much about an American CEO and computer nerd? For those of us that work with technology, and hope to use it to make the world a better place, the reason Steve Jobs was such a role model is that he not only had great vision and a brilliant understanding of design but also knew how to deliver technology in a form that was usable by everyone, not just technophiles, nerds and developers. Steve Jobs and Apple have transformed the way we interact with data, and the way that we think about computing, moving it from the desktop to the palm of our hands. As IT becomes ever more pervasive we could all learn from that and maybe even hope to emulate Steve Jobs a little.

Happy Birthday WWW

Today is the 20th anniversary of the World Wide Web; or at least the anniversary of the first web page. On this day in 1991 Tim Berners-Lee posted a short summary of the World Wide Web project on the alt.hypertext newsgroup:

The World Wide Web (WWW) project aims to allow all links to be made to any information anywhere. […] The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data. Collaborators welcome!

He certainly found a lot of collaborators!

As I’ve said before I believe the WWW is one of the greatest feats of software architecture ever performed. Happy Birthday WWW!

The Tools We Use

Back in 1964 Marshall McLuhan said “We shape our tools and afterwards our tools shape us”. McLuhan was actually talking about the media when he said this but much of what he said then has a great deal of relevance in today’s mixed-up media world too. It occurs to me that McLuhan’s quote equally applies to the tools we use, or misuse, as software architects. PowerPoint (or Keynote for that matter) has received pretty bad press over the years as a tool that inhibits rather than enhances our creativity. Whilst this does not have to be the case, too many people take tools such as PowerPoint and use them in ways I’m pretty sure their creators never intended. Here are some common tool (mis)uses I’ve observed over the years (anti-patterns for tools, if you like):

  1. Spreadsheets as databases. Too many people seem to use spreadsheets as a sort of global repository for dumping ideas, data and information in general because they give them the ability to easily sort and categorise information. Spreadsheets are good at numbers and presenting analytical data but not at capturing textual information.
  2. Presentations as documents. Sometimes what started out as a presentation to illustrate a good idea seems to grow into a more detailed description of that idea and eventually turns into a full-blown specification! The excuse for doing this being “we can use this to present to the client as well as leaving it with them at the end of the project as the design of the system”. Bad idea!
  3. Presentations as a substitute for presenting. The best presenters present “naked”. Minimal presentations (where sometimes minimal = 0), in which the presenter is at the fore and his or her slides illustrate the key ideas, are what presenting is, or should be, about. Did John F. Kennedy, Winston Churchill or Martin Luther King rely on PowerPoint to get their big ideas across? I think not!
  4. Word processors as presentations. This is the opposite of number 2. Whilst not so common, people have been known, in my experience, to ‘present’ their documents on a screen in a meeting. It goes without saying, or should do, that 12pt (or less) text does not come across well on a screen.
  5. Word processors as web sites. Although most word processors are capable of generating HTML, this is not a good reason for using them to build web sites. There is a multitude of free, open and paid-for tools that do a far better job of this.
  6. Emails as documents. This is a variant (a generalisation, really) of one of my favourite anti-patterns. E-mails are one of the greatest sources of unstructured data in the world today. There must be, literally, terabytes of data stored in this medium that should otherwise be captured in a more readily consumable and accessible form. E-mails clearly have a place for forming ideas but not for capturing outcomes and persisting those ideas so others can see them and learn from them.

The Legacy Issue

Much of the work we do as architects involves dealing with the dreaded “legacy systems”. Of course legacy actually means the last system built, not one that is necessarily 5, 10, 20 or more years old. As soon as a system goes into production it is basically “legacy”. As soon as new features get added that legacy system gets harder to maintain and more difficult to understand; entropy (in the sense of an expression of disorder or randomness) sets in.

Apple have recently been in the news again for the wrong reasons because some of the latest iPods do not work with previous versions of Mac OS X. Users have been complaining that they are being forced to upgrade to the latest version of OS X in order to get their shiny new iPods to work. To make matters worse, however, Apple do support the relatively ancient Windows XP. Apple have always taken a fairly hard line when it comes to legacy, not supporting backwards compatibility particularly well when their OS gets upgraded. The upside is that the operating system does not suffer from the “OS bloat” that Windows seems to (the last version of OS X actually had a smaller footprint than the previous one).

As architects it is difficult to focus both on maintaining legacy systems and on figuring out how to replace them. As Seth Godin says: “Driving with your eyes on the rearview mirror is difficult indeed”. At some point you need to figure out whether it is better to abandon the legacy system and replace it or to soldier on supporting an ever harder to maintain system. There comes a point where the effort and cost of maintaining legacy is greater than that needed to replace the system entirely. I’m not aware of any formal methods that would help answer this particularly hard architectural question, but it’s one I think any architect should try to answer before embarking on a risky upgrade programme that involves updating existing systems.
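In the absence of a formal method, a back-of-the-envelope model can at least frame the decision. The sketch below assumes maintenance costs grow by a fixed percentage each year (the entropy effect) and asks when the cumulative spend overtakes a one-off replacement cost; all the figures are illustrative assumptions, not real project data.

```python
def breakeven_year(initial_maintenance, growth_rate, replacement_cost, horizon=20):
    """Return the first year in which cumulative maintenance spend exceeds
    the one-off cost of replacing the system, or None within the horizon."""
    cumulative = 0.0
    annual = initial_maintenance
    for year in range(1, horizon + 1):
        cumulative += annual
        if cumulative > replacement_cost:
            return year
        annual *= 1 + growth_rate  # entropy: maintenance gets dearer each year
    return None

# e.g. £100k/year maintenance growing at 15% a year versus a £750k rewrite
print(breakeven_year(100_000, 0.15, 750_000))  # → 6
```

The real decision is of course muddier than this (replacement carries delivery risk, and the new system starts accruing its own entropy), but even a crude crossover calculation is better than none.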

The Next Generation?

Demographers, social scientists and new media watchers are fond of dividing people into generations based on what recent (i.e. post-World War II) period of history they were born in. Whilst there are no consistent definitions of when these generations begin and end they roughly fall into these periods:

  • Baby-boomers: 1940 – 1960. Those born during the post–World War II demographic boom in births. This generation more than any other rejected the moral and religious beliefs of their parents and created its own set of values. This is the generation that invented sex, drugs and rock’n’roll and is still largely the one ruling the roost, so to speak (President Obama, born in 1961, catching the tail end of this particular demographic).
  • Generation X (post-boomers): 1960 – 1980. This term was apparently coined by the great Magnum photographer Robert Capa in the early 1950s. He used it as the title for a photo-essay about young men and women growing up immediately after the Second World War. Sometimes referred to as the “unknown” or “lost” generation, the label signified people without identity who face an uncertain, ill-defined (and perhaps hostile) future. This is the generation that grew up during the fall of the Berlin Wall, the end of the Cold War and various economic crises (such as the 1979 oil crisis) and was most likely to be the children of divorced parents.
  • Generation Y (the Millennial generation): 1980 – 2000. This is the culturally liberal generation that witnessed the start and widespread adoption of the internet and are the children of the baby-boomers. This is the generation that owns, and is most comfortable using, computers, mobile phones and MP3 players.

So what is the next generation, born during the last 10 years and possibly the next 10, to be called? The obvious name would be “Generation Z”, although this would mean we would have run out of letters, leaving us with a problem naming the post-2020 generation. Rather than following the obvious trend, therefore, how about naming this upcoming generation, who will be entering the higher education system and workforce during the next 10 years, “Generation V”: the versatilist generation? These are the people, more than any others, who will need to adopt a whole new set of skills if they are to survive and prosper during their lifetimes. These are the ones who will be suffering the after-shocks of the baby-boom, X and Y generations and who will need to fix the wicked problems those generations have left in their wake. This is the generation that will probably have more jobs, in their lifetimes, than the other three generations put together and who will, as Daniel Pink has suggested, have to survive in a world dominated by the three A’s:

  • Automation – Jobs can be done faster and more efficiently by computers.
  • Abundance – We have more stuff than we know what to do with and it is increasingly being produced at cheaper and cheaper rates.
  • Asia (or Africa) – More and more work is outsourced to these low-cost economies.
The skills that this generation will need to adopt will be many and varied and include:
  • Objectively viewing experiences and roles, learning from these (failures as well as successes) and using this knowledge to gain new roles.
  • Looking outside the confines of current roles, regions, employers or business units. The more informed a professional is about a company, its industry segment and the forces that affect it, the greater chance the person will have to predict and survive economic downturns.
  • Laying out opportunities and assignments methodically. Focusing on the areas and challenges that fall outside the comfort zone; those areas generally will be the areas of greatest growth.
  • Exploring possibilities outside the world of large, corporate business. Charities, startup companies, government agencies, even your own web-startup offer new and interesting ways to build experiences, learn new skills and maybe even modify behaviours.
  • Enrolling in advanced education courses to expand perspective, preferably outside your current discipline and area of expertise.
  • Targeting companies, projects, assignments, education and training courses that will increase professional value and make you more marketable.

Sadly, Gartner seem to have coined “Generation V” already, where V is for virtual. A pity, as they also coined the term “virtualist”; a missed opportunity, I reckon.

Watson, Turing and Clarke

So what do these three have in common?

  • Thomas J. Watson Sr, CEO and founder of IBM (100 years old this year). Currently has a computer named after him.
  • Alan Turing, mathematician and computer scientist (100 years old next year). Has a famous test named after him.
  • Arthur C. Clarke, scientist and writer (100 years old in 2017). Has a set of laws named after him (and is also the creator of the fictional HAL computer in 2001: A Space Odyssey).

Unless you have moved into a hut, deep in the Amazon rain forest you cannot have missed the publicity over IBM’s ‘Watson’ computer having competed in, and won, the American TV quiz show Jeopardy. I have to confess that until last week I’d not heard of Jeopardy, possibly because a) I’m not a fan of quizzes, b) I’m not American and c) I don’t watch that much television. To those as ignorant as me on these matters the unique thing about Jeopardy is that contestants are presented with clues in the form of answers, and must phrase their responses in the form of a question.

This, it turns out, is what makes this particular quiz such a hard nut for a computer to crack. The clues in the ‘question’ rely on subtle meanings, puns and riddles; something humans excel at and computers do not. Unlike IBM’s previous game challenger Deep Blue, which defeated chess world champion Garry Kasparov, it is not sufficient to rely on raw computing ‘brute force’; this time the computer has to interpret meaning and the nuances of human language. So has Watson achieved, met or passed the Turing test (which is basically a measure of whether a computer can demonstrate intelligence)?

The answer is almost certainly ‘no’. Turing’s test is a measure of a machine’s ability to exhibit human intelligence. The test, as originally proposed by Turing, was that a questioner should ask a series of questions of both a human being and a machine and see whether he can tell which is which through the answers they give. The idea being that if the two were indistinguishable then the machine and the human must both appear to be as intelligent as each other.

As far as I know Turing never stipulated any constraint on the range or type of questions that could be asked, which leads us to the nub of the problem. Watson is supremely good at answering Jeopardy-type questions, just as Deep Blue was good at playing chess. However, neither could do what the other does (at least not as well); they have been programmed for a given task. Given that Watson is actually a cluster of POWER7 servers, any suitably general-purpose computer that could win at Jeopardy, play chess and exhibit the full range of human emotions and frailties needed to fool a questioner would presumably occupy the area of several football pitches and consume the power of a small city.

That, however, misses the point completely. The ability of a computer to almost flawlessly answer a range of questions, phrased in a particular way, across different subject areas, blindingly fast, has enormous potential in the fields of medicine, law and other disciplines where questions based on a huge foundation of knowledge built up over decades need to be answered quickly (for example in accident and emergency departments, where quick diagnoses may literally be a matter of life and death). This indeed is one of IBM’s Smarter Planet goals.

Which brings us to Clarke’s third law, which states that “any sufficiently advanced technology is indistinguishable from magic”. This is surely something that is attributable to Watson. The other creation of Clarke, of course, is HAL, the computer aboard the spaceship Discovery One, which, on a trip to Saturn, becomes overwhelmed by guilt at having to keep secret the true nature of the spaceship’s mission and starts killing members of the crew. The point of Clarke’s story (or one of them) being that the downside of a computer that is indistinguishable from a human being is that the computer may also end up mimicking human frailties and weaknesses. Maybe it’s a good job Watson hasn’t passed Turing’s test then?

Software Development’s Best Kept Secret

A few people have asked what I meant in my previous entry when I said we should be “killing off the endless debates of agile versus waterfall”. Don’t get me wrong, I’m a big fan of doing development in as efficient a way as possible; after all, why would you want to be doing things in a ‘non-agile’ way! However, I think that the agile versus waterfall debate really does miss the point. If you have ever worked on anything but the most trivial of software development projects you will quickly realise that there is no such thing as a ‘one size fits all’ software delivery lifecycle (SDLC) process. Each project is different and each brings its own challenges in terms of the best way to specify, develop, deliver and run it. Which brings me to the topic of this entry, the snappily titled Software and Systems Process Engineering Metamodel, or ‘SPEM’ (but not SSPEM).

SPEM is a standard owned by the Object Management Group (OMG), the body that also owns the Unified Modeling Language (UML), the Systems Modeling Language (SysML) and a number of other open standards. Essentially SPEM gives you the language (the metamodel) for defining software and system processes in a consistent and repeatable way. SPEM also allows vendors to build tools that automate the way processes are defined and delivered. Just like vendors have built system and software modeling tools based around UML so too can vendors build delivery process modeling tools built around SPEM.

So what exactly does SPEM define and why should you be interested in it? For me there are two reasons why you should look at adopting SPEM on your next project.

  1. SPEM separates out what you create (i.e. the content) from how you create it (i.e. the process) whilst at the same time providing instructions for how to do these two things (i.e. guidance).
  2. SPEM (or at least tools that implement SPEM) allows you to create a customised process by varying what you create and when you create it.

Here’s a diagram to explain the first of these.

SPEM Method Framework
The SPEM Method Framework represents a consistent and repeatable approach to accomplishing a set of objectives based on a collection of well-defined techniques and best practices. The framework consists of three parts:
  • Content: represents the primary reusable building blocks of the method that exist outside of any predefined lifecycle. These are the work products that are created as a result of roles performing tasks.
  • Process: assembles method content into a sequence or workflow (represented by a work breakdown structure) used to organise the project and develop a solution. Process includes the phases that make up an end-to-end SDLC, the activities that phases are broken down into as well as reusable chunks of process referred to as ‘capability patterns’.
  • Guidance: is the ‘glue’ which supports content development and process execution. It describes techniques and best-practice for developing content or ‘executing’ a process.

As well as giving us the ‘language’ for building our own processes SPEM also defines the rules for building those processes. For example phases consist of other phases or activities, activities group tasks, tasks take work products as input and output other work products and so on.
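Those structural rules can be sketched in a few lines of code. The classes below are a minimal illustration of the containment relationships just described (phases contain activities, activities group tasks, tasks consume and produce work products performed by roles); the class and attribute names are my own shorthand, not the official SPEM 2 metamodel element names.

```python
from dataclasses import dataclass, field

@dataclass
class WorkProduct:   # method content: what gets produced
    name: str

@dataclass
class Role:          # method content: who does the work
    name: str

@dataclass
class Task:          # tasks take work products as input and output others
    name: str
    performed_by: Role
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

@dataclass
class Activity:      # process: activities group tasks
    name: str
    tasks: list = field(default_factory=list)

@dataclass
class Phase:         # process: phases contain activities (or sub-phases)
    name: str
    contents: list = field(default_factory=list)

# Assemble a small fragment of a process
architect = Role("Architect")
sad = WorkProduct("Software Architecture Document")
outline = Task("Outline the architecture", architect, outputs=[sad])
elaboration = Phase("Elaboration",
                    contents=[Activity("Define architecture", tasks=[outline])])
print(elaboration.contents[0].tasks[0].outputs[0].name)
# → Software Architecture Document
```

A real SPEM tool holds exactly this kind of object graph, which is what lets it publish, reorder and reuse process chunks rather than treating a method as one monolithic document.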

This is all well and good, you might say, but I don’t want to have to laboriously build a whole process every time I run a project. This is where the second advantage of using SPEM comes in. A number of vendors (IBM and Sparx to name two) have built tools that not only automate the process of building a process but also contain one or more ‘ready-rolled’ processes to get you started. You can either use those ‘out of the box’, extend them by adding your own content, or start from scratch (not recommended for novices). What’s more, the Eclipse Foundation has developed an open-source tool, called the Eclipse Process Framework (EPF), that not only gives you a tool for building processes but also comes with a number of existing processes, including OpenUP (an open version of the Rational Unified Process) as well as Scrum and DSDM.

If you download and install EPF together with the appropriate method libraries you can use these as the basis for creating your own processes. Here’s what EPF looks like when you open the OpenUP SDLC.

EPF and OpenUP

The above view shows the browsing perspective of EPF; however, there is also an authoring perspective which allows you not only to reconfigure a process to suit your own project but also to add and remove content (i.e. roles, tasks and work products). Once you have made your changes you can republish the new process (as HTML) and anyone with a browser can then view the process together with all of its work products and, most crucially, associated guidance (i.e. examples, templates, guidelines etc.) that allows you to use the process in an effective way.

This is, I believe, the true power of using a tool like EPF (or IBM’s Rational Method Composer which comes preloaded with the Rational Unified Process). You can take an existing SDLC (one you have created or one you have obtained from elsewhere) and customise it to meet the needs of your project. The amount of agility and number of iterations etc that you want to run will depend on the intricacies of your project and not what some method guru tells you that you should be using!

By the way for an excellent introduction and overview of EPF see here and here. The Eclipse web site also contains a wealth of information on EPF. You can also download the complete SPEM 2 specification from the OMG web site here.

Five Inspirational Videos

As a follow-up to my six non-IT books, here are five videos I have found some inspiration in recently (plus one that, whilst it cannot be described as inspirational, is at least amusing in a vaguely nerdy programmer kind of way):

  1. Steve Jobs (A CEO): How to Live Before You Die Steve Jobs steps out from his usual Apple presentation mode and delivers this keynote to students at Stanford. He highlights three things which have had a major impact on his life and how important it is to learn from such life experiences.
  2. Winston Royce (A Methodologist): The Rise and Fall of Waterfall Not actually by Winston Royce but a humorous look at how we ended up with waterfall. An example of how to get a point across by telling a story (and using wonderfully simple graphics).
  3. Grady Booch (A Software Architect): The Promise, The Limits, The Beauty of Software Grady is an inspirational speaker on all things software related. We were lucky enough to get him to write the foreword to our book (which I’m sure has done its sales the world of good).
  4. Sir Ken Robinson (An Innovator and Educationalist): Do Schools Kill Creativity SKR (as he calls himself on his website) has some strong views on how our present education system is letting down youngsters. For a great rendition of another of Sir Ken’s talks see here.
  5. David Eustace (A Photographer): In Search of Eustace Nothing to do with IT but related to one of my other passions. This simple and beautifully filmed video set to music will resonate with anyone on life’s journey.

And finally…

  1. Lady Gaga (A Singer) Lookalike: Sings About Java Programming An example of how creativity (the video production) can be used to improve even the worst ideas (the song). What else can I say!

Social Networking and All That Jazz

I was recently asked what I thought the impact of Web 2.0 and social networking has had, or is about to have, on our profession. Here is my take:

  • The current generation of students going through secondary school and university (that will be hitting the employment market over the next few years) have spent most of their formative years using Web 2.0. For these people instant messaging, having huge groups of “friends” and organising events online is as second nature as sending emails and using computers to write documents is to us. How will this change the way we do our jobs and software and services companies do business?
    • Instant and informal networks (via Twitter, Facebook etc.) will be set up, share information and disappear again. This will allow vendors and customers to work together in new ways and more quickly than ever before.
    • Devices like advanced smart phones and tablets which can be carried anywhere and are always connected will speed up even more how quickly information gets disseminated and used.
    • Whilst the current generation berates the upcoming one for the time wasted sending pointless messages to friends and creating blog entries hardly anyone reads, they are at least doing something different and liberating: creating, as opposed to simply consuming, content. So what if 99.99% of that content is rubbish? 0.01% or even 0.001% amongst a population of several billion is still a lot of potentially good and innovative thoughts and ideas. The challenge, of course, is finding the good stuff.
  • Email as an effective communication aid is coming to its natural end. The new generation who have grown up on blogs, Twitter and Facebook will laugh at the amount of time we spend sweating over mountains of email. New tools will need to be available that provide effective ways of quickly and accurately searching the content that is published via Web 2.0 to find the good stuff (and also to detect early potential good stuff).
  • More 20th century content distributors (newspapers, TV companies, book and magazine publishers) will go the way of the music industry if they cannot find a new business model to earn money. This is both an opportunity (we can help them create the new opportunities) and a threat (loss of a large customer base if they go under) to IT professionals and service companies.
  • The upcoming generation will not have loyalties to their employers but only to the network they happen to be a part of at the time. This is the natural progression from the outsourcing of labour, the destruction of company pension schemes and everyone being treated as freelancers. Whilst this has been hard for the people who have gone through that shift, the new workers in their late teens and early 20s will know nothing else and will forge new relationships and ways of working using the new tools at their disposal. Employee turnover and the rate at which people change jobs will increase tenfold according to some pundits (google ‘Shift Happens’ for some examples).
  • Formal classroom type teaching is essentially dead. New devices with small cameras will allow virtual classrooms to spring up anywhere. Plus the speed with which information changes will mean material will be out of date anyway by the time a formal course is prepared. This coupled with further education institutions having to keep raising fees to support increasing numbers of students will lead to a collapse in the traditional ways of delivering learning.
  • The real value of networks comes from sharing information between as diverse a group of people as possible. Given that companies will be relying less on permanent employees and more on freelancers these networks will increasingly use the internet. This provides some interesting challenges around security of information and managing intellectual capital. The domain of enterprise architecture has therefore just increased exponentially as the enterprise has just become the internet. How will companies manage and govern a network most of which they have no or little control over?
  • The new models for distributing software and services (e.g. application stores, cloud providers), as well as existing ones such as open source, will mark the end of the traditional package and product software vendors. Apple overtook Microsoft earlier this year in terms of size as measured by market capitalisation and is now second only to Exxon. Much of this revenue was, I suspect, driven by the innovative ways Apple have devised to create and distribute software (i.e. third parties, sometimes individuals, create it and Apple distribute it through their App Store).

For two good opposing views on what the internet is doing to our brains read the latest books by Clay Shirky and Nicholas Carr.