Non-Functional Requirements and the Cloud

As discussed here, the term non-functional requirements really is a misnomer. Who would, after all, build a system based on requirements that were “not functional”? Non-functional requirements refer to the qualities a system should have and the constraints under which it must operate. They are sometimes referred to as the “ilities” because so many of them end in “ility”: availability, reliability, maintainability and so on.

Non-functional requirements will, of course, have an impact on the functionality of the system. A system may satisfy every functional requirement specified for it, but if it is unavailable at certain times of the day it is quite useless, however functionally ‘complete’ it may be.

Non-functional requirements are not abstract things to be written down when considering the design of a system and then ignored; they must be engineered into the system’s design just as functional requirements are. They can have a bigger impact on a system’s design than functional requirements, and getting them wrong leads to more costly rework. Missing a functional requirement usually means adding it later or doing some rework; getting a non-functional requirement wrong can lead to very expensive rework or even cancelled projects, with all the knock-on effects that has on reputation.

From an architect’s point of view, defining how a system will address its non-functional requirements mainly (though not exclusively) boils down to how the compute platforms (processors, storage and networking) are specified and configured to satisfy the qualities and constraints required of it. As more and more workloads move to the cloud, how much control do we as architects have in specifying the non-functional requirements for our systems, and which non-functionals should concern us most?

As ever, the answer to this question is “it depends”. Every situation is different and in each case some things will matter more than others. If you are a bank or a government department holding sensitive customer data, the security of your provider’s cloud may be uppermost in your mind. If, on the other hand, you are an online retailer who wants customers to be able to shop at any time of day, then availability may be most important. If you are seeking a cloud platform on which to develop new services and products, then perhaps the ease of use of the development tools is key. The real question is therefore not so much which are the important non-functional requirements, but which ones should I be considering in the context of a cloud platform?

Below are some of the key NFRs I would normally expect to be taken into consideration when looking at moving workloads to the cloud. These apply whether the clouds are public, private or a mix of the two, and to any layer of the cloud stack (i.e. Infrastructure, Platform or Software as a Service), though they will have an impact on different users. For example, the availability (or lack thereof) of a SaaS service is likely to have more of an impact on business users than on developers or IT operations, whereas the availability of the infrastructure will affect all users.

  • Availability – What percentage of time does the cloud vendor guarantee cloud services will be available (including scheduled maintenance down-times)? Bear in mind that although 99% availability may sound good, it actually equates to over 3.5 days of potential downtime a year; even 99.9% allows nearly 9 hours, and 99.99% still allows almost an hour (see the short calculation after this list). Also consider the disaster recovery aspects of availability and, if more than one physical data centre is used, where those data centres reside. The latter matters especially where data residency is an issue and your data needs to reside on-shore for legal or regulatory reasons.
  • Elasticity (Scalability) – How easy is it to bring on line or take down compute resources (CPU, memory, network) as workload increases or decreases?
  • Interoperability – If using services from multiple cloud providers, how easy is it to move workloads between them? (Hint: open standards help here.) And what if you want to migrate from one cloud provider to another entirely? (Hint: open standards help here as well.)
  • Security – What security levels and standards are in place? For public or private clouds not in your own data centre, also consider the physical security of the cloud provider’s data centres as well as its networks. Data residency again needs to be considered as part of this.
  • Adaptability – How easy is it to extend, add to or grow services as business needs change? For example, if I want to change my business processes or connect to new back-end or external APIs, how easy would that be?
  • Performance – How well suited is my cloud infrastructure to supporting the workloads that will be deployed onto it, particularly as workloads grow?
  • Usability – This will differ depending on who the client is (i.e. business users, developers/architects or IT operations). In all cases, however, you need to consider the ease of use of the software and how well designed its interfaces are. IT is no longer hidden inside your own company; your systems of engagement are out there for all the world to see, and effective design of those systems is more important than ever before.
  • Maintainability – More of an IT operations and developer concern: how easy is it to manage (and develop) the cloud services?
  • Integration – In a world of hybrid cloud where some workloads and data need to remain in your own data centre (usually systems of record) whilst others need to be deployed in public or private clouds (usually systems of engagement) how those two clouds integrate is crucial.
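To put the availability figures above into perspective, here is a minimal sketch in Python (nothing cloud-provider specific is assumed; it simply uses a 365-day year and ignores how a particular provider measures, or excludes, maintenance windows) that converts an SLA percentage into the downtime it still permits:

```python
# Convert an availability SLA percentage into the downtime it still allows.
# A rough sketch for comparing SLA figures; assumes a 365-day year and
# ignores how a provider actually measures or excludes maintenance windows.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours


def allowed_downtime_hours(availability_percent: float) -> float:
    """Return the hours per year a given availability percentage permits."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)


if __name__ == "__main__":
    for sla in (99.0, 99.9, 99.99, 99.999):
        hours = allowed_downtime_hours(sla)
        print(f"{sla}% availability -> {hours:.2f} hours "
              f"({hours * 60:.0f} minutes) of downtime per year")
```

Running it shows that 99% still allows over three and a half days of downtime a year, 99.9% nearly nine hours and 99.99% a little under an hour, which is why the extra ‘nines’ in a provider’s SLA matter.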

I mentioned at the beginning of this post that non-functional requirements should be considered in terms of both the qualities you want from your IT system and the constraints you will be operating under. The decision to move to cloud in many ways adds a constraint to what you are doing. You don’t have complete free rein to do whatever you want if you choose an off-premise cloud operated by a vendor; you have to align with the service levels they provide. An added bonus (or complication, depending on how you look at it) is that you can choose from different service levels to match what you want, and change these as and when your requirements change. Probably one of the most important decisions you need to make when choosing a cloud provider is whether they have the ability to expand with you without locking you in to their cloud architecture too much. This is a topic I’ll be looking at in a future post.

Consideration of non-functional requirements does not go away in the world of cloud. Cloud providers have very different capabilities, and some will be more relevant to you than others. This, coupled with the fact that you also need to architect for on-premise as well as off-premise clouds, actually makes some of the architecture decisions more, not less, difficult. It seems the advent of cloud computing is not about to make us architects redundant just yet.

For a more detailed discussion of non-functional requirements and cloud computing see this article on IBM’s developerWorks site.

A Cloudy Conversation with My Mum

Traditionally (and I’m being careful not to over-generalise here) the parents of the Baby Boomer generation are not as tech savvy as the Boomers themselves (age 50 – 60), the Gen X’ers (35 – 49) and certainly the Millennials (21 – 34). This is the generation that grew up with “the wireless”, corded telephones (with a rotary dial) and black and white televisions with diminutive screens. Technology, however, is encroaching more and more on their lives as ‘webs’, ‘tablets’ and ‘clouds’ creep into what they read and hear.

IT, like any profession, is guilty of creating its own language, supposedly to help those in the know understand each other in a shorthand form, but often at the expense of confusing the hell out of those on the outside. As hinted at above, IT is worse than most other professions because, rather than create new words, it seems particularly good at hijacking existing ones and then changing their meaning completely!

‘The Cloud’ is one of the more recent terms to jump from the mainstream into IT and is now making its way back into the mainstream with its new meaning. This being the case, and given my recent new job*, I thought the following imaginary conversation between myself and my mum (a Boomer parent) might be fun to envisage. Here’s how it might start…

Cloud Architect and Mum

Here’s how it might carry on…

Me: “Ha, ha very funny mum but seriously, that is what I’m doing now”.

Mum: “Alright then dear what does a ‘Cloud Architect’ do?”

Me: “Well ‘cloud computing’ is what people are talking about now for how they use computers and can get access to programs. Rather than companies having to buy lots of expensive computers for their business they can get what they need, when they need it from the cloud. It’s meant to be cheaper and more flexible.”

Mum: “Hmmm, but why is it called ‘the cloud’ and I still don’t understand what you are doing with it?”

Me: “Not sure where the name came from to be honest mum, I guess it’s because the computers are now out there and all around us, just like clouds are”. At this point I look out of the window and see a clear blue sky without a cloud in sight but quickly carry on. “People compare it with how you get your electricity and water – you just flick a switch or turn on the tap and it’s there, ready and waiting for when you want to use it.”

Mum: “Yes I need to talk to you about my electricity, I had a nice man on the phone the other day telling me I was probably paying too much for that, now where did I put that bill I was going to show you…”

Me: “Don’t worry mum, I can check that on the Internet, I can find out if there are any better deals for you.”

Mum: “So will you do that using one of these clouds?”

Me: “Well the company that I contact to do the check for you might well be using computers and programs that are in the cloud, yes. It would mean they don’t have to buy and maintain lots of expensive computers themselves but let someone else deal with that.”

Mum: “Well it all sounds a bit complicated to me dear and anyway, you still haven’t told me what you are doing now?”

Me: “Oh yes. Well I’m supposed to be helping people work out how they can make use of cloud computing and helping them move the computers they might have in their own offices today to make use of ones IBM have in the cloud. It’s meant to help them save money and do things a bit quicker.”

Mum: “I don’t know why everyone is in such a rush these days – people should slow down a bit, walk not run everywhere.”

Me: “Yes, you’re probably right about that mum but anyway have a look at this. It’s a video some of my colleagues from IBM made and it explains what cloud computing is.”

Mum: “Alright dear, but it won’t be on long will it – I want to watch Countdown in a minute.”

*IBM has gone through another of its tectonic shifts of late creating a number of new business units as well as job roles, including that of ‘Cloud Architect’.

Government as a Platform

The UK government, under the auspices of Francis Maude and his Cabinet Office colleagues, have instigated a fundamental rethink of how government does IT following the arrival of the coalition in May 2010. You can find a brief summary here of what has happened since then (and why).

One of the approaches that the Cabinet Office favours is the idea of services built on a shared core, otherwise known as Government as a Platform (GaaP). In the government’s own words:

A platform provides essential technology infrastructure, including core applications that demonstrate the potential of the platform. Other organisations and developers can use the platform to innovate and build upon. The core platform provider enforces “rules of the road” (such as the open technical standards and processes to be used) to ensure consistency, and that applications based on the platform will work well together.

The UK government sees the adoption of platform-based services as a way of breaking down the silos that have existed in government pretty much since the dawn of computing, as well as loosening the stranglehold it thinks the large IT vendors have on its IT departments. This is a picture from the Government Digital Service (GDS), part of the Cabinet Office, that shows how providing a platform layer, above the existing legacy (and siloed) applications, can help move towards GaaP.

In a paper on GaaP, Tim O’Reilly sets out a number of lessons learnt from previous (successful) platforms which are worth summarising here:

  1. Platforms must be built on open standards. Open standards foster innovation as they let anyone play more easily on the platform. “When the barriers to entry to a market are low, entrepreneurs are free to invent the future. When barriers are high, innovation moves elsewhere.”
  2. Don’t abuse your power as the provider of the platform. Platform providers must not abuse their privileged position or market power otherwise the platform will decline (usually because the platform provider has begun to compete with its developer ecosystem).
  3. Build a simple system and let it evolve. As John Gall wrote: “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true. A complex system designed from scratch never works and cannot be made to work. You have to start over beginning with a working simple system.”
  4. Design for participation. Participatory systems are often remarkably simple—they have to be, or they just don’t work. But when a system is designed from the ground up to consist of components developed by independent developers (in a government context, read countries, federal agencies, states, cities, private sector entities), magic happens.
  5. Learn from your hackers. Developers may use APIs in unexpected ways. This is a good thing. If you see signs of uses that you didn’t consider, respond quickly, adapting the APIs to those new uses rather than trying to block them.
  6. Harness implicit participation. On platforms like Facebook and Twitter people give away their information for free (or more precisely to use those platforms for free). They are implicitly involved therefore in the development (and funding) of those platforms. Mining and linking datasets is where the real value of platforms can be obtained. Governments should provide open government data to enable innovative private sector participants to improve their products and services.
  7. Lower the barriers to experimentation. Platforms must be designed from the outset not as a fixed set of specifications, but as being open-ended  to allow for extensibility and revision by the marketplace. Platform thinking is an antidote to the complete specifications that currently dominate the government approach not only to IT but to programs of all kinds.
  8. Lead by example. A great platform provider does things that are ahead of the curve and that take time for the market to catch up to. It’s essential to prime the pump by showing what can be done.

In IBM, and elsewhere, we have been talking for a while about so-called disruptive business platforms (DBPs). A DBP has four actors associated with it:

  • Provider – Develops and provides the core platform. Providers need to ensure the platform exposes interfaces (that Complementors can use) and that standards are defined which allow the platform to grow in a controlled way.
  • Complementor – Supplements the platform with new features, services and products that increase the value of the platform to End Users (and draw more of them in to use the platform).
  • End User – As well as performing the obvious ‘using the platform’ role, End Users also drive the demand that Complementors help fulfil, and there are likely to be more End Users if there are more Complementors providing new features. A well-architected platform also allows End Users to interact with each other.
  • Supplier – Usually enters into a contract with the core platform provider to supply a known product, service or technology. Probably not innovating in the same way as the Complementor would.
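To make these roles a little more concrete, here is a minimal, purely illustrative sketch in Python (the class, interface and extension names are hypothetical and not based on any real platform) of a Provider publishing interfaces, a Complementor registering an extension against them, and an End User seeing the resulting catalogue:

```python
# A toy model of the disruptive-business-platform actors.
# Names and structure are illustrative only.

from dataclasses import dataclass, field


@dataclass
class Platform:
    """Core platform owned by the Provider: open interfaces plus the
    extensions Complementors have built against them."""
    interfaces: set[str] = field(default_factory=set)
    extensions: dict[str, str] = field(default_factory=dict)  # extension -> interface used

    def publish_interface(self, name: str) -> None:
        # Provider role: expose an open, documented interface.
        self.interfaces.add(name)

    def register_extension(self, extension: str, interface: str) -> None:
        # Complementor role: add value, but only via published interfaces
        # (the "rules of the road" enforced by the Provider).
        if interface not in self.interfaces:
            raise ValueError(f"{interface!r} is not a published interface")
        self.extensions[extension] = interface

    def catalogue(self) -> list[str]:
        # End User role: the more Complementors, the richer the catalogue.
        return sorted(self.extensions)


if __name__ == "__main__":
    platform = Platform()
    platform.publish_interface("payments-api")                      # Provider
    platform.register_extension("tax-filing-app", "payments-api")   # Complementor
    print(platform.catalogue())                                     # End User view
```

The point of the sketch is simply that the Provider’s “rules of the road” (only published interfaces may be built upon) are what let Complementors add value without the platform descending into chaos.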
Walled Garden at Chartwell – Winston Churchill’s Home

We can see platform architectures as the ideal balance between the two political extremes: those who want to see a fully stripped-back government that privatises all of its services, and those who want central government to provide and manage all of those services itself. Platforms, if managed properly, provide the ‘walled garden’ approach often attributed to the Apple iTunes and App Store way of doing business. Apple did not build all of the apps out there on the App Store; instead it provided the platform on which others could provide the apps and create a diverse and thriving “app economy”.

It’s early days, and it remains to be seen whether this can work in a government context. What’s key is applying some of the principles suggested by Tim O’Reilly above to enforce the rules that others must comply with. There also, of course, need to be the right business models in place: ones that encourage people to invest in the platform in the first place and that allow new start-ups to grow and thrive.

Wardley Maps

A Wardley map (invented by Simon Wardley who works for the Leading Edge Forum, a global research and thought leadership community within CSC) is a model which helps companies understand and communicate their business/IT value chains.

The basic premise of value chain mapping is that pretty much every product and service can be viewed in terms of a lifecycle which starts from an early genesis stage and proceeds through to eventually being standardised and becoming a commodity.

From a system perspective – when the system is made up of a number of loosely coupled components with one or more dependencies between them – it is interesting and informative to show where those components are in terms of their individual lifecycle or evolution. Some components will be new and leading edge and therefore in the genesis stage, whilst others will be more mature and therefore commoditised.

At the same time, some components will be of higher value in that they are closer to what the customer actually sees and interacts with, whereas others will be ‘hidden’ as part of the infrastructure the customer does not see, but are nonetheless important because they are the ‘plumbing’ which makes the system actually work.

A Wardley map is a neat way of visualising these two aspects of a system (i.e. their ‘value’ and their ‘evolutionary stage’). An example Wardley map is shown below. This comes from Simon Wardley’s blog Bits or pieces?; in particular this blog post.

Wardley Map 2

The above map is actually for the proposed High Speed 2 (HS2) rail system which will run from London to Birmingham. Mapping components according to their value and their stage of evolution allows a number of useful questions to be asked which might help avoid future project issues (if the map is produced early enough). For example:

  1.  Are we making good and proper use of commoditised components and procuring or outsourcing them in the right way?
  2. Where components are new or first of a kind have we put into place the right development techniques to build them?
  3. Where a component has lots of dependencies (i.e. lines going in and out) have we put into place the right risk management techniques to ensure that component is delivered in time and does not delay the whole project?
  4. Are the user needs properly identified, and are we devoting enough time and energy to building what could be the important differentiating components for the company?

Wardley has captured an extensive list of the advantages of building value chain maps, which can be found here. He also describes a simple and straightforward process for creating them, which can be found here. Finally, a more detailed account of value chain maps can be found in the workbook The Future is More Predictable Than You Think, written by Simon Wardley and David Moschella.

The power of Wardley maps seems to be that although they are relatively simple to produce, they convey a lot of useful information. Once created, they allow ‘what if’ questions to be asked by moving components around: what would happen, for example, if we built this component from scratch rather than using an existing product – would it give us any business advantage?
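As a rough illustration of that ‘what if’ idea, here is a minimal sketch in Python (the component names are invented and not taken from the actual HS2 map) that treats a Wardley map as a set of components, each with a user-visibility value, an evolution stage and some dependencies, and then flags heavily-depended-upon components that are still at the genesis stage – the sort of thing question 3 above is probing:

```python
# A toy representation of a Wardley map: each component has a visibility
# score (how close it is to the user need) and an evolution stage, and may
# depend on other components. Component names here are purely illustrative.

from dataclasses import dataclass, field
from enum import Enum


class Evolution(Enum):
    GENESIS = 1
    CUSTOM_BUILT = 2
    PRODUCT = 3
    COMMODITY = 4


@dataclass
class Component:
    name: str
    visibility: float            # 0.0 = hidden plumbing, 1.0 = what the user sees
    evolution: Evolution
    dependencies: list[str] = field(default_factory=list)


def risky_components(components: list[Component], min_dependents: int = 2) -> list[str]:
    """Flag genesis-stage components that several other components depend on."""
    dependents: dict[str, int] = {}
    for c in components:
        for dep in c.dependencies:
            dependents[dep] = dependents.get(dep, 0) + 1
    return [c.name for c in components
            if c.evolution is Evolution.GENESIS
            and dependents.get(c.name, 0) >= min_dependents]


if __name__ == "__main__":
    the_map = [
        Component("online booking", 0.9, Evolution.PRODUCT, ["ticketing engine"]),
        Component("journey planner", 0.8, Evolution.CUSTOM_BUILT, ["ticketing engine"]),
        Component("ticketing engine", 0.5, Evolution.GENESIS, ["compute", "power"]),
        Component("compute", 0.2, Evolution.COMMODITY),
        Component("power", 0.1, Evolution.COMMODITY),
    ]
    print(risky_components(the_map))   # -> ['ticketing engine']
```

Moving a component’s evolution stage (deciding, say, to buy the hypothetical ‘ticketing engine’ as a product rather than build it) and re-running the check is the code equivalent of the ‘what if’ conversation Wardley encourages around a whiteboard.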

Finally, Wardley suggests that post-it notes and whiteboards are the best tools for building a map. The act of creating the map therefore becomes a collaborative process and encourages discussion and debate early on. As Wardley says:

With a map, it becomes possible to learn how to play the game, what techniques work and what failed – the map itself is a vehicle for learning.

Complexity is Simple

I was taken with this cartoon and the comments put up by Hugh MacLeod last week over at his gapingvoid.com blog, so I hope he doesn’t mind me reproducing it here.

Complexity is Simple (c) Hugh Macleod 2014

Complex isn’t complicated. Complex is just that, complex.

Think about an airplane taking off and landing reliably day after day. Thousands of little processes happening all in sync. Each is simple. Each adds to the complexity of the whole.

Complicated is the other thing, the thing you don’t want. Complicated is difficult. Complicated is separating your business into silos, and then none of those silos talking to each other.

At companies with a toxic culture, even what should be simple can end up complicated. That’s when you know you’ve really got problems…

I like this because it resonates perfectly well with a blog post I put up almost four years ago now called Complex Systems versus Complicated Systems, where I make the point that “whilst complicated systems may be complex (and exhibit emergent properties) it does not follow that complex systems have to be complicated”. A good architecture avoids complicated systems by building them out of lots of simple components whose interactions can certainly create a complex system, but not one that needs to be overly complicated.

Avoiding the Legacy of the Future

Kerrie Holley is a software architect and IBM Fellow. The title of IBM Fellow is not won easily. The Fellows are a group that includes a Kyoto Prize winner and five Nobel Prize winners, and they have fostered some of the company’s most stunning technical breakthroughs – from the Fortran programming language, to the systems that helped put the first man on the moon, to the Scanning Tunneling Microscope, the first instrument to image atoms. These are people who are big thinkers and don’t shy away from tackling some of the world’s wicked problems.

Listening to Kerrie give an inspirational talk called New Era of Computing at an IBM event recently, I was struck by a comment he made which is exactly the kind of hard question I would expect an IBM Fellow to ask. It was:

The challenge we have is to avoid the legacy of the future. How do we avoid applications becoming an impediment to business change?

Estimates vary, but it is reckoned that most organisations spend between 70% and 80% of their IT budget on maintenance and only 20% to 30% on innovation. When 80% of a company’s IT budget is spent just keeping the existing systems running, how is it to deploy new capabilities that keep it competitive? That, by any measure, is surely an “impediment to business change”. So, what to do? Here are a few things that might help avoid the legacy of the future and show how we as architects can play our part in addressing the challenge posed in Kerrie’s question.

  1. Avoid (or reduce) technical debt. Technical debt is what you get when you release not-quite-right code out into the world; every minute spent on not-quite-right code counts as interest on that debt, and entire engineering organisations can be brought to a standstill under the debt load of an error-prone deployment. Reducing the amount of technical debt clearly reduces the amount of money you have to spend on finding and patching buggy code – including code that is “buggy” because it is not doing what was intended of it. The more you can do to ensure “right-first-time” deployments, the lower your maintenance costs and the more of your budget you’ll have to spend on innovation rather than maintenance. Agile is one tried and tested approach to producing better, less buggy code that meets the original requirements, but traditionally agile has focused on only part of the software delivery lifecycle: the development part. DevOps uses the best of the agile approach but extends it into operations. DevOps works by engaging and aligning all participants in the software delivery lifecycle – business teams; architects, developers and testers; and IT operations and production – around a single, shared goal: sustained innovation, fuelled by continuous delivery and shaped by continuous feedback.
  2. Focus on your differentiators. It’s tempting for CIOs and CTOs to think all of the technology they use is somehow going to give them competitive advantage and must therefore be bespoke, or at least a highly customised package. This means more effort in supporting those systems once they are deployed. Better is to focus on those aspects of the business’s IT which truly give real business advantage and concentrate IT budget on those. For the rest, use COTS packages or put as much as possible into the cloud and standardise as much as possible. One of the implications of standardisation is that your business needs to change to match the systems you use rather than the other way around. This can often be a hard pill for a business to swallow, as they tend to think their processes are unique. Rarely is this the case, however, so recognising it and adopting standard processes is a good way of freeing up IT time and budget to focus on stuff that really is novel.
  3. Adopt open standards and componentisation. Large monolithic packages which purport to do everything, with appropriate levels of customisation, are not only expensive to build in the first place but are also likely to be more expensive to run, as they cannot easily be updated in a piecemeal fashion. If you want to upgrade the user interface or open up the package to different user channels, it may be difficult if interfaces are not published or the packages themselves do not have replaceable parts; very often you may have to replace the whole package or wait for the vendor to come up with the updates. Building applications from a mix of COTS and bespoke components and services which talk through open APIs allows more of a mix-and-match approach to procuring and operating business systems. It also makes it easier to retire services that are no longer required or used. The term API economy is usually used to refer to how a business can expose its business functions (as APIs) to external parties, but there is no reason why an internal API economy should not exist. This allows business functionality to be quickly subscribed to or unsubscribed from, making the business more agile by driving a healthy competition for business function (a minimal sketch of what such an API might look like follows this list).
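As a minimal illustration of exposing a single business function behind an API, here is a sketch using Flask (the endpoint, payload and ‘business function’ are entirely made up; a real internal API would also need authentication, versioning and documentation):

```python
# A toy internal API exposing one business function ("quote a price") so
# that other teams can subscribe to it -- or swap in a competing
# implementation -- without touching the code behind it.
# Requires Flask (pip install flask); names and logic are illustrative only.

from flask import Flask, jsonify, request

app = Flask(__name__)

# Pretend catalogue standing in for whatever system of record really owns prices.
PRICES = {"widget": 9.99, "gadget": 24.50}


@app.route("/api/v1/quote/<product>", methods=["GET"])
def quote(product: str):
    """Return a price quote for a product, with an optional quantity parameter."""
    if product not in PRICES:
        return jsonify({"error": f"unknown product {product!r}"}), 404
    quantity = int(request.args.get("quantity", 1))
    return jsonify({"product": product,
                    "quantity": quantity,
                    "total": round(PRICES[product] * quantity, 2)})


if __name__ == "__main__":
    app.run(port=8080)  # e.g. GET http://localhost:8080/api/v1/quote/widget?quantity=3
```

Because consumers only ever see the URL and the JSON contract, the implementation behind it can be replaced, retired or put out to competition without the consumers having to change.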

Businesses will always need to devote some portion of their IT budget to “keeping the lights on”, but there is no reason why, with the adoption of one or more of these practices, the split between maintenance and innovation budgets should not move much closer to 50:50 than the current, highly imbalanced, 70:30 or worse!

Architect Salary Survey

The first ever architecture-specific salary survey has just been published in the UK by FMC Technology. In total, over 1,000 architects responded to the survey. The report looks at architect roles in six main areas:

  • Architecture Management
  • Enterprise Architecture
  • Business Architecture
  • Information Architecture
  • Application Architecture
  • Technology Architecture

It also looks at pay rises (in 2013) and regional and industry differences, as well as motivational factors.

The results can be viewed here.

The Art of the Possible

This is an edited version of a talk I recently gave to a client. The full talk used elements of my “Let’s Build a Smarter Planet” presentation which you can find starting here.

The author, entrepreneur, marketer, public speaker and blogger Seth Godin has a wonderful definition for what architects do:

Architects take existing components and assemble them in interesting and important ways.

Software architects today have at their disposal a number of ‘large grain’ components, the elements of which we can assemble in a multitude of “interesting and important” ways to make fundamental changes to the world and truly build a smarter planet. These components are shown in the diagram below.

The authors Robert Scoble and Shel Israel, in their book Age of Context, describe the coming together of these components (actually their components are mobile, social, data, sensors and location) as a perfect storm, comparing them with the forces of nature that occasionally converge to whip up a fierce tropical storm.

Of course, like any technological development, there is a down side to all this. As Scoble and Israel point out in their book:

The more the technology knows about you, the more benefits you will receive. That can leave you with the chilling sensation that big data is watching you…

I’ve taken a look at some of this myself here.

Predicting the future is, of course, a notoriously tricky business. As the late, great science fiction author Arthur C. Clarke said:

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

The future, even five years hence, is likely to be very different from the present, and predicting what might or might not be, even that far ahead, is not an exact science. Despite the perils of making such predictions, IBM Research’s so-called 5 in 5 predictions for this year describe five innovations that will change the way we live, from classrooms that learn to cyber guardians, within the next five years. Here are five YouTube videos describing these innovations; further information on 5 in 5 can be found here.

  1. The classroom will learn you.
  2. Buying local will beat online.
  3. Doctors will routinely use your DNA to keep you well.
  4. The city will help you live in it.
  5. A digital guardian will protect you online.

We already have the technology to make our planet ‘smarter’. How we use that technology is limited only by our imagination…

Let’s Build a Smarter Planet – Part IV

This is the fourth and final part of the transcript of a lecture I recently gave at the University of Birmingham in the UK. In Part I of this set of four posts I tried to give you a flavour of what IBM is and what it is trying to do to make our planet smarter. In Part II I looked at my role in IBM and in Part III I looked at what kind of attributes IBM looks for in its graduate entrants. In this final part I take a look at what I see as some of the challenges we face in a world of open and ubiquitous data, where potentially anyone can know anything about us, and the implications that has for the people who design the systems that allow it to happen.

So let’s begin with another apocryphal tale…

Target is the second largest discount retail store in America (behind Walmart). Using advanced analytics software, one of Target’s data analysts identified 25 products that, when purchased together, indicate a woman is likely to be pregnant. The value of this information was that Target could send coupons to the pregnant woman at an expensive and habit-forming period of her life.

In early 2012 a man walked into a Target store outside Minneapolis and demanded to see the manager. He was clutching coupons that had been sent to his daughter, and he was angry, according to an employee who participated in the conversation. “My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”

The manager didn’t have any idea what the man was talking about. He looked at the mailer. Sure enough, it was addressed to the man’s daughter and contained advertisements for maternity clothing, nursery furniture and pictures of smiling infants. The manager apologized and then called a few days later to apologize again.

On the phone, though, the father was somewhat abashed. “I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”

Two of the greatest inventions of our time are the internet and the mobile phone. When Tim Berners-Lee appeared from beneath the semi-detached house that lifted up from the ground of the Olympic stadium during the London 2012 opening ceremony, and the words “this is for everyone” flashed up around the edge of the stadium, there can surely be little doubt that he had earned his place there. However, as with any technology, there is a downside as well as an upside. A technology that gives anyone, anywhere access to anything they choose has to be treated with great care and responsibility (as Spiderman’s uncle said, “with great power comes great responsibility”). The data analyst at Target was only trying to improve his company’s profits by identifying potential new consumers of its baby products; inadvertently, however, he was uncovering information that previously would have been kept very private and known only to a few people. What should companies do to balance a person’s right to privacy with a company’s right to identify new customers?

There is an interesting book out at the moment called Age of Context in which the authors examine the combined effects of five technological ‘forces’ that they see coming together to form a ‘perfect storm’ which they believe is going to change our world forever. These five forces are mobile, social media, (big) data, sensors and location-aware services. As the authors state:

The more the technology knows about you, the more benefits you will receive. That can leave you with the chilling sensation that big data is watching you…

In the Internet of Things paradigm, data is gold. Making that data available, however, relies on a ‘contract’ between suppliers (usually large corporations) and consumers (usually members of the public). Corporations provide a free or nominally-priced service in exchange for a consumer’s personal data, which is either sold to advertisers or used to develop further products or services useful to consumers. Third-party applications that build off the core service can, however, eventually poach customers (and the related customer data) from the established networks and large corporations behind it, which can be a detrimental practice for them. In such a scenario, large corporations need to balance their approach to open source with commercial considerations.

Companies know that there is a difficult balancing act between doing what is commercially advantageous and doing what is ethically right. As the saying goes, a reputation takes years to build but can be destroyed in a matter of minutes.

IBM has an organisation within it called the Academy of Technology (AoT), whose membership is around 1,000 IBMers drawn from its technical community. The job of the AoT is to focus on “uncharted business and technical opportunities” that help to “facilitate IBM’s technical development” as well as “more tightly integrate the company’s business and technical strategy”. As an example of the way IBM concerns itself with issues like those highlighted by the Target story, one of the studies the Academy looked at recently was into the ethics of big data and how the company should approach the problems mentioned here. Out of that study came a recommendation for a framework the company should follow when pursuing such activities.

This ethical framework is articulated as a series of questions that should be asked when embarking on a new or challenging business venture.

  1. What do we want to do?
  2. What does the technology allow us to do?
  3. What is legally allowable?
  4. What is ethically allowable?
  5. What does the competition do?
  6. What should we do?

As an example of this, consider the insurance industry.

  • The Insurance Industry provides a service to society by enabling groups of people to pool risk and protect themselves against catastrophic loss.
  • There is a duty to ensure that claims are legitimate.
  • More information could enable groups with lower risk factors to reduce their cost basis but those in higher risk areas would need to increase theirs.
  • Taken to the extreme, individuals may no longer be able to buy insurance – e.g. using genetic information to determine medical insurance premium.

How far should we take using technology to support this extreme case? Whilst it may not be breaking any laws to raise someone’s insurance premium to a level where they cannot afford it, is it ethically the right thing to do? Make no mistake, the challenges we face in making our planet smarter through the proper and considered use of information technology are considerable. We need to address questions such as how we build the systems we need, where the skilled and creative workforce that can do this will come from, and how we approach problems in new and innovative ways whilst at the same time doing what is legally and ethically right.

The next part is up to you…

Thank you for your time this afternoon. I hope I have given you a little more insight into the type of company IBM is, how and why it is trying to make the planet smarter and what you might do to help if you choose to join us. You can find more information about IBM and its graduate scheme here and you can find me on Twitter and Linkedin if you’d like to continue the conversation (and I’d love it if you did).

Thank you!

Let’s Build a Smarter Planet – Part III

This is the third part of the transcript of a lecture I recently gave at the University of Birmingham in the UK. In Part I of this set of four posts I tried to give you a flavour of what IBM is and what it is trying to do to make our planet smarter.

In Part II I looked at my role in IBM, and here I look at what kind of attributes IBM looks for in its graduate entrants.

When I found out I was going to be doing this lecture, one of the things I realised was that there was a danger I would appear too remote and disconnected from where you are today. After all, it was nearly 35 years ago that I was sitting where you are, and I suspect the thought of listening to an old timer like me going on for an hour was not very enticing. This being the case, I asked a few of my (much) younger colleagues, graduate entrants, what their thoughts were on IBM and why they had joined.

One person in particular, a young zoology graduate from Cardiff University whom I work with at the moment, said that, as well as IBM’s good R&D record and its whole smarter planet agenda, the reason she joined IBM was that she:

“Wanted to be part of an organisation that cared about the world and was making an effort to change things for the better. With a fast growing and aging population we need to prepare cities, towns, hospitals, transport systems etc to be able to cope with the change. IBM seemed to understand that and seemed to be involved in trying to work out what options there are.”

I hope this shows you that IBM’s smarter planet agenda is not just marketing hype but is also about genuinely trying to make a difference to the way the world works through the intelligent application of information technology.  In order to do that it needs people who can solve some of the wicked problems there are out in the world today as well as challenge conventional wisdom. Here’s another story to show this…

In the early ’70s stores needed a quick way of entering product data into their systems so they knew what they had in stock. There were a number of competing standards for what were referred to as Universal Product Codes, or UPCs. An IBM engineer was asked to write a technical paper in support of a circular code from the company RCA, to be presented to executives to get the go-ahead to support that standard and develop scanning hardware. The engineer, however, investigated its feasibility and realised it would not work: the error rate when scanning this pattern was too high. He went against what his management had asked him to do and went for this format instead…

Something very familiar to you all I’m sure. The point being that even then anybody in IBM could challenge their managers and be listened to provided they had the right evidence to back it up. Challenging conventional wisdom is something that is and always has been valued in this company.

Today we can challenge conventional wisdom and question things more easily than ever. Thanks to technology anyone can get to anyone.  There are no boundaries, no real hierarchies, in a world where we are all just a few Facebook friends or LinkedIn connections away from nearly everyone. You no longer have to worry about where you sit in a hierarchy, instead you just need to concentrate on what your contribution is going to be (how are you going to make that dent in the universe).

As the blogger Hugh MacLeod says:

“So your job title and job description is not what matters anymore.  A smart recruiter is not going to ask you what your title is.  They are going to ask you what have you actually done lately.  What have you accomplished? More importantly what do you want to do? Who and what will you challenge?”

Here’s a set of characteristics that the most successful people in IBM share…
  • Adaptability. How do you cope with changing demands and stress? Are you flexible? Have you successfully completed several projects with competing deadlines?
  • Communication. Do you present information clearly, precisely and succinctly? Adapt the way you communicate to your audience? And listen to others?
  • Client focus. Can you see a situation from a client’s viewpoint, whether that’s colleagues or customers? Can you anticipate their needs?
  • Creative problem solving. Do you use ingenuity, supported by logical methods and analysis, to propose solutions? Can you anticipate problems? Do you put forward innovative ideas?
  • Drive. Will you proactively learn new skills – even if they’re beyond the scope of your current job? Will you put in the time and energy needed to achieve results?
  • Passion for IBM. Do you know what IBM does and what our most recent achievements are? Are you up to speed with the latest trends in our industry? What are the biggest challenges we face? You’ll need the facts at your fingertips and the enthusiasm to match.
  • Teamwork. How do you work with others to achieve shared goals? Do you easily build relationships with others? Are you a team player?
  • Taking ownership. Do you take responsibility for tasks/decisions? And implement decisions with speed? Can you show when you’ve worked to correct your mistakes?

You can find more detail on what IBM is looking for in its graduates and how to apply if you are interested by going here.

Part IV of this talk is here.