The ethics of contact tracing

After a much publicised “U-turn” the UK government has decided to change the architecture of its coronavirus contact tracing system and to embrace the one based on the interfaces provided by Apple and Google. The inevitable cries have ensued: the government does not know what it is doing, we told you it wouldn’t work, and valuable time has been wasted building a system that was meant to help protect UK citizens. At times like these it’s often difficult to get to the facts and understand where the problems actually lie. Let’s try to unearth some facts and understand the options for the design of a contact tracing app.

Any good approach to designing a system such as contact tracing should, you would hope, start with the requirements. I have no inside knowledge of government and it’s not immediately apparent from online searches what the UK government’s exact requirements were. However, as this article highlights, you would expect a contact tracing system to “involve apps, reporting channels, proximity-based communication technology and monitoring through personal items such as ID badges, phones and computers.” You might also expect it to involve cooperation with local health service departments. What is not clear, at least for the UK, is whether there was also a requirement to collate data in some centralised repository so that epidemiologists, without knowing the nature of the contacts, could build a model of contacts to see which are serious spreaders or which have tested positive yet are asymptomatic. Whilst it would seem perfectly reasonable to want the system to do that, it is a different use case from contact tracing itself. One might assume that, because the UK government was proposing a centralised database for tracking data, this latter use case was also to be handled by the system.

Whilst different countries are going to have different requirements for contact tracing, one would hope that any democratically run country would implement a minimum set of requirements: privacy, anonymity, transparency and verifiability, no central repository and minimal data collection.

The approach to contact tracing developed by Google and Apple (the two largest providers of mobile phone operating systems) was published in April of this year, with the detail of the design made available in four technical papers. Included in this document set were some frequently asked questions where the details of how the system would work were explained using the familiar Alice and Bob notation. Here is a summary.

  1. Alice and Bob don’t know each other but happen to have a lengthy conversation sitting a few feet apart on a park bench. They both have a contact tracing app installed on their phones which exchange random Bluetooth identifiers with each other. These identifiers change frequently.
  2. Alice continues her day unaware that Bob had recently contracted Covid-19.
  3. Bob feels ill and gets tested for Covid-19. His test results are positive and he enters his result into his phone. With Bob’s consent his phone uploads the last 14 days of keys stored on his phone to a server.
  4. Alice’s phone periodically downloads the Bluetooth beacon keys of everyone who has tested positive for Covid-19 in her immediate vicinity. A match is found with Bob’s randomly generated Bluetooth identifier.
  5. Alice sees a notification on her phone warning her she has recently come into contact with someone who has tested positive with Covid-19. What Alice needs to do next is decided by her public health authority and will be provided in their version of the contact tracing app.

There are a couple of things worth noting about this use case:

  1. Alice and Bob both have to make an explicit choice to turn on the contact tracing app.
  2. Neither Alice’s nor Bob’s name is ever revealed, either between themselves or to the app provider or health authority.
  3. No location data is collected. The system only knows that two identifiers have previously been within range of each other.
  4. Google and Apple say that the Bluetooth identifiers change every 10-20 minutes, to help prevent tracking and that they will disable the exposure notification system on a regional basis when it is no longer needed.
  5. Neither health authorities nor any other third parties receive any data from the app.
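
To make steps 1 to 4 of the use case above a little more concrete, here is a deliberately simplified sketch of decentralised exposure matching. It is not the real Apple/Google design (which derives rotating Bluetooth identifiers from cryptographic daily keys and matches against uploaded keys); all the names are invented, and it only illustrates the idea that identifiers are exchanged locally, only the data of people who test positive is uploaded, and matching happens on each person’s own phone.

```python
import secrets

# Simplified, illustrative sketch only - NOT the real Apple/Google protocol.
# All class and function names here are invented for illustration.

def new_rolling_identifier() -> str:
    """A random identifier a phone broadcasts for a short period (step 1)."""
    return secrets.token_hex(16)

class Phone:
    def __init__(self):
        self.my_identifiers = []   # identifiers this phone has broadcast
        self.observed = set()      # identifiers heard from nearby phones

    def broadcast(self) -> str:
        identifier = new_rolling_identifier()
        self.my_identifiers.append(identifier)
        return identifier

    def hear(self, identifier: str) -> None:
        self.observed.add(identifier)

    def report_positive(self, server: list) -> None:
        """Step 3: with the user's consent, upload recent identifiers."""
        server.extend(self.my_identifiers)

    def check_exposure(self, server: list) -> bool:
        """Step 4: download reported identifiers and match locally."""
        return any(identifier in self.observed for identifier in server)

# Alice and Bob sit near each other on the park bench (step 1)
alice, bob, server = Phone(), Phone(), []
alice.hear(bob.broadcast())
bob.hear(alice.broadcast())

bob.report_positive(server)           # step 3: Bob tests positive
print(alice.check_exposure(server))   # True -> Alice gets a notification (step 5)
```

The important property the real protocol shares with this sketch is that the server never learns who met whom; it only ever sees the identifiers of people who chose to report a positive test.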

Another point to note is that initially this solution has been released via application programming interfaces (APIs) that allow customised contact tracing apps from public health authorities to work across Android and iOS devices. Maintaining user privacy seems to have been a key non-functional requirement of the design. The apps are made available by the public health authorities via the respective Apple and Google app stores. A second phase has also been announced whereby the capability will be embedded at the operating system level, meaning no app has to be installed, although users still have to opt into using the capability. If users are notified they have been in contact with someone with Covid-19 and have not already downloaded an official public health authority app, they will be prompted to do so and advised on next steps. Only public health authorities will have access to this technology and their apps must meet specific criteria around privacy, security and data control as mandated by Apple and Google.

So why would Google and Apple choose to implement their contact tracing system in this way, which would seem to put privacy ahead of efficacy? More importantly, why should Google and Apple get to dictate how countries do contact tracing?

Clearly one major driver for both companies is security and privacy. Post-Snowden we know just how easy it has been for government security agencies (i.e. the US National Security Agency and the UK’s Government Communications Headquarters) to get access to supposedly private data. Trust in central government is at an all-time low and it is hardly surprising that the corporate world is stepping in to announce that they were the good guys all along and can be trusted with your data.

Another legitimate reason is that during the coronavirus pandemic we have all had our ability to travel, even locally, never mind nationally or globally, severely restricted. Implementing an approach that is supported at the operating system level should make it easier for apps to interoperate with other countries’ counterparts based on the same system, making it safer for people to begin travelling internationally again.

The real problem, at least as far as the UK is concerned, is that the government has been woefully slow in implementing a rigorous and scalable contact tracing system. It seems as though they may have been looking to an app-based approach as the silver bullet that would solve all of their problems – no matter how poorly identified those problems are. Realistically that was never going to happen, even if the system had worked perfectly. The UK is not China and could never impose an app-based contact tracing system on its populace, could it? Lessons from Singapore, where contact tracing has been in place for some time, are that the apps do not perform as required and other, more intrusive, measures are needed to make them effective.

There will now be the usual blame game between government, the press, and industry, no doubt resulting in the inevitable government enquiry into what went wrong. This will report back after several months, if not years, of deliberation. Blame will be officially apportioned, maybe a few junior minister heads will roll, if they have not already moved on, but meanwhile the trust that people have in their leaders will be chipped away a little more.

More seriously however, will we have ended up, by default, putting more trust in the powerful corporations of Silicon Valley, some of whom not only have valuations greater than many countries’ GDP but are also allegedly practising anti-competitive behaviour?

Update: 21st June 2020

Updated to include link to Apple’s anti-trust case.

The real reason Boris Johnson has not (yet) sacked Dominic Cummings

Amidst the current press furore over ‘CummingsGate’ (you can almost hear the orgiastic paroxysms of sheer ecstasy emanating from Guardian HQ, 250 miles away from Barnard Castle, as the journalists there finally think they have got their man) I think everyone really is missing the point. The real reason Johnson is not sacking Cummings (or at least hasn’t at the time of writing) is because Cummings is his ‘dataist-in-chief’ (let’s call him Johnson’s DiC for short) and, having applied his dark arts twice now (the Brexit referendum and the 2019 General Election), Cummings has proven his battle worthiness. It would be like Churchill (Johnson’s hero and role model) blowing up all his Spitfires on the eve of the Battle of Britain. The next battle Johnson is going to need his DiC for is the final push to get us out of the EU on 31st December 2020.

Dominic Cummings is a technocrat. He believes that science, or more precisely data science, can be deployed to understand and help solve almost any problem in government or elsewhere. Earlier this year he upset the government’s HR department by posting a job advert on his personal blog for data scientists, economists and physicists (oh, and weirdos). In that post he says “some people in government are prepared to take risks to change things a lot” and the UK now has “a new government with a significant majority and little need to worry about short-term unpopularity”. He saw these as being “a confluence”, implying now was the time to get sh*t done.

So what is dataism, why is Cummings practising it, and what is its likely impact on us going to be?

The first reference to dataism was by David Brooks, the conservative political commentator, in his 2013 New York Times article The Philosophy of Data. In this article Brooks says:

“We now have the ability to gather huge amounts of data. This ability seems to carry with it certain cultural assumptions — that everything that can be measured should be measured; that data is a transparent and reliable lens that allows us to filter out emotionalism and ideology; that data will help us do remarkable things — like foretell the future”.

David Brooks, The Philosophy of Data

Dataism was then picked up by the historian Yuval Noah Harari in his 2016 book Homo Deus. Harari went as far as to call dataism a new form of religion, one which joins together biochemistry and computer science on the grounds that the algorithms of both obey the same mathematical laws.

The central tenet of dataism is the idea that the universe gives more value to systems, individuals and societies that generate the most data to be consumed and processed by algorithms. Harari states that “according to dataism, Beethoven’s Fifth Symphony, a stock-exchange bubble and the flu virus are just three patterns of data flow that can be analysed using the same basic concepts and tools”. That last example is obviously the most relevant to our current situation, with SARS-CoV-2, the coronavirus, still raging around the world and being the problem which Cummings, as far as we know, is focused on.

As computer scientist Steven Parton says here:

“Dataists believe we should hand over as much information and power to these [big data and machine learning] algorithms as possible, allowing the free flow of data to unlock innovation and progress unlike anything we’ve ever seen before.”

Steven Parton

This, I believe, is Cummings’ view also. He has no time for civil servants who are humanities graduates who “chat about Lacan at dinner parties” when they ought to be learning about numbers, probabilities and predictions based on hard data.

Whilst I have some sympathy with the idea of bringing science and data more to the fore in government, you have to ask: if Cummings is forging ahead in creating a dataist civil service somewhere in the bowels of Downing Street, why are our COVID-19 deaths the worst, per capita, in the world? This graph shows the data for deaths per 100,000 of population (using 2018 population data) for the major economies of the world (using this data source). You’ll see that as of 1st June 2020 the UK is faring the worst of all countries, having just overtaken Spain.
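
For clarity, the per-capita figure in the graph is simple arithmetic: deaths per 100,000 is total deaths divided by population, multiplied by 100,000. The sketch below shows only the calculation; the death figures in it are placeholders, not the data behind the graph.

```python
# Illustrative only: the death counts are placeholders, not the data used in the graph.
populations = {"UK": 66_400_000, "Spain": 46_700_000}   # approximate 2018 populations
deaths = {"UK": 39_000, "Spain": 27_000}                 # invented figures for illustration

for country in populations:
    per_100k = deaths[country] / populations[country] * 100_000
    print(f"{country}: {per_100k:.1f} deaths per 100,000")
```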

Unfortunately Cummings has now blotted his copybook twice in the eyes of the public and most MPs. Not only did he ignore the government’s advice (which he presumably was instrumental in creating) and break the rules on lockdown, he was also found to have edited one of his own blog posts sometime between 8 April 2020 and 15 April 2020 to include a paragraph on SARS (which, along with Covid-19, is also caused by a coronavirus) to make out he had been warning about the disease since March of 2019.

Not only is Cummings ignoring the facts derived from the data he is so fond of using, he is also doctoring data (i.e. his blog post) to change those facts. In many ways this is just another form of the data manipulation that was carried out by Cambridge Analytica, the firm that Cummings allegedly used during the Brexit referendum, to bombard people’s Facebook feeds with ‘misleading’ information about the EU.

Cummings is like Gollum in The Lord of the Rings. Gollum became corrupted by the power of the “one ring to rule them all” and turned into a bitter and twisted creature that would do anything to get back “his precious” (the ring). It seems that data corrupts just as much as power. Hardly surprising really, because in the dataist’s view of the world data is power.

All in all not a good look for the man that is meant to be changing the face of government and bringing a more data-centric (AKA dataist) approach to lead the country forward post-Brexit. If you cannot trust the man who is leading this initiative how can you trust the data and, more seriously, how can you trust the person who Cummings works for?


Update: 8th June 2020

Since writing this post I’ve read that Belgium is actually the country with the highest per-capita death rate from Covid-19. Here then is an update of my graph, which now includes the G7 countries plus China, Spain and Belgium, showing that Belgium does indeed have around 20 more deaths per 100,000 of population than the next highest country, the UK.

It appears however that Belgium is somewhat unusual in how it reports its deaths, being one of the few countries counting deaths in both hospitals and care homes and also including deaths in care homes that are suspected, but not confirmed, Covid-19 cases. I suspect that for many countries, the UK included, deaths in care homes are going to end up being one of the great scandals of this crisis. In the UK, ministers ordered 15,000 hospital beds to be vacated by 27 March and for patients to be moved into care homes without either adequate testing or adequate amounts of PPE being available.

Do Startups Need Enterprise Architectures?

And by implication, do they need Enterprise Architects?

For the last few years I have been meeting with startup and scaleup founders, in and around my local community of Birmingham in the United Kingdom, offering thoughts and advice on how they should be thinking about the software and systems architectures of the applications and platforms they are looking to build. Inevitably, as the discussions proceed and we move from what they need now (the minimum viable architecture, if you like) to what they might need in the future as they hopefully scale and grow, the question arises of why they need to worry about architecture now and whether it can be left until later, when they’ll have a bit more investment and be able to hire people to do it “properly”.

To my mind this is a bit of a chicken-and-egg question. Do they lay the groundwork properly now, in the hope that this will form the foundation on which they grow, or do they just hack something together “quick and dirty” to get paying customers onto the platform and only then think about architecture, going through the pain of moving onto a more robust, future-proof platform? Or do they try to have their cake and eat it and somehow do a bit of both? Spending too much time on pointless architecture may mean you never get to launch because you run out of time or money. On the other hand, if you don’t have at least the basics of an architecture you might launch but quickly collapse if your system cannot support an unexpected surge in users or a security breach.

These are important questions to consider, and ones which need to be addressed early in the startup cycle, at least to the degree that if an enterprise architecture (EA) is not laid out, the reasons for not doing so are clear. In other words, has an explicit architecture decision been made about whether or not to have an EA?

No startup being formed today should consider IT and business to be separate endeavours. Whether you are a new cake shop selling patisseries made with locally sourced ingredients or a company launching a new online social media enterprise that hopes to take on Facebook, IT will be fundamental to your business model. Decisions you make when starting out could live with you for a long, long time (and remember, just five years is a long time in the IT world).

Interestingly it’s often the business folk who understand the need for doing architecture early on rather than the IT people. Possibly this is because business people are looking at costs not just today but over a five-year period and want to minimise the amount of IT rework they need to do. IT folk may just see it as bringing in new cool toys to play with every year or so (and may not even be around in five years’ time anyway, as they are more likely to be contract staff).

Clearly the amount of EA a startup or scaleup needs has to be in proportion to the size and ambitions of the business. If they are a one or two man band who just needs a minimum viable product to show to investors then maybe a simple solution architecture captured using, for example, the C4 model for software architecture will suffice.
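
The C4 model itself is notation- and tool-agnostic, but it helps to see just how little effort a first, container-level picture can take. Below is one possible sketch using the open-source Python diagrams package to keep a simple context/container view in the same repository as the code; the choice of tool is mine rather than anything the C4 model prescribes, the node names are invented, and the package assumes Graphviz is installed.

```python
# A minimal architecture-as-code sketch (assumes: pip install diagrams, plus Graphviz).
# The system and component names below are illustrative only.
from diagrams import Diagram
from diagrams.onprem.client import Users
from diagrams.onprem.compute import Server
from diagrams.onprem.database import PostgreSQL

with Diagram("Minimum viable architecture", show=False):
    customers = Users("Customers")
    web_app = Server("Web application")
    database = PostgreSQL("Orders database")

    customers >> web_app >> database   # who uses what, and what it depends on
```

Even a picture this small records the key structural decisions, and it can be versioned and reviewed alongside the code as the architecture evolves.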

For businesses who are beyond their first seed round of funding and are into series A or B investment, I do believe they should be thinking seriously about using part of that investment to build some elements of an EA. Whether you use TOGAF or Zachman or any other framework doesn’t really matter. What does matter is that you capture the fundamental organisation of the system you wish to build, to help offset the amount of technical debt you are prepared to take on.

Here are some pointers and guidelines for justifying an EA to your founders, investors and employees.

Reducing Your Technical Debt

Technical debt is what you get when you release not-quite-right code out into the world. Every minute spent on not-quite-right code counts as interest on that debt. The more you can do to ensure “right-first-time” deployments, the lower your maintenance costs and the more of your budget you’ll have to spend on innovation rather than maintenance. The lower, in other words, are your technical debt interest repayments. For a scaleup this is crucial. Every dollar of your investment funding that goes on marketing or innovating your platform, rather than servicing technical debt, leads to an improved bottom line, more customers and, overall, a business that is more attractive to potential buyers. This leads to…

Enhancing Your Exit Strategy

Many, if not most, startups have an exit strategy which usually involves being bought out by a larger company, making the founders and initial investors rich beyond their wildest dreams and enabling them to go on and found other startups (or retire onto a yacht in the Caribbean). Those large companies are going to want to see what they are getting for their money, and having a well thought through and documented EA is a step on the way to proving that. The large company does not want to become another HP, which got its fingers badly burnt in the infamous Autonomy buyout.

To Infinity and Beyond

Maybe your founder is another Mark Zuckerberg who refuses the advances of other companies and instead decides to go it alone in conquering the world. If successful there is going to be some point at which your user base grows beyond what you ever thought possible. If that’s the case hopefully you architected your system to be extensible as well as manageable to support all those users. Whilst there are no guarantees at least having the semblance of an EA will reduce the chances of having to go back to square one and rebuilding your whole technology base from scratch.

Technology Changes

Hardly an Earth-shattering statement but often one which you tend not to consider when starting out. As technologists, we all know the ever-increasing pace of change of the technology we deal with and the almost impossible task of keeping up with such change. Whilst there may be no easy way to deal with the complete left-field and unexpected (who’d have thought blockchain would be a thing just 10 years ago?), having a well thought through business, data, application and technology architecture, containing loosely coupled components with well understood functionality and relationships, at least means you can change or upgrade those components when the technology improves.

K.I.S.S (Keep It Simple Stupid)

It is almost certain that the first version of the system you come up with will be more complicated than it needs to be (which is different from saying the system is complex). Anyone who invests in V1.0 or even V2.0 of a system is almost certainly buying more complexity than they need. That complexity will manifest itself in how easy (or not) it is to maintain or upgrade the solution, or how often it fails or under-performs. By following at least part of a well thought through architecture development method (ADM), which you iterate over a few times in the early stages of the development process, you should be able to spot complexity, redundant components, or components which should not be built by you at all but procured instead as commercial off-the-shelf (COTS) products. Bear in mind of course that COTS products are not always what they seem, and using an EA and an ADM to evaluate such products can save much grief further down the road.

It’s All About Strategy

For a startup, or even a scaleup, getting your strategy right is key. Investors often want to see a three or even five year plan which shows how you will execute on your strategy. Without having a strategy which maps out where you want to go how will you ever be able to build a plan that shows how you will get there?

Henry Mintzberg defined the so-called 5 Ps of Strategy as:

  1. Strategy as Planning – large planning exercises, defining the future of the organisation.
  2. Strategy as a Ploy – to act to influence a competitor or market.
  3. Strategy as a Position – to act to take a chosen place in the chosen market.
  4. Strategy as Pattern – the strategy that has evolved over time.
  5. Strategy as a Perspective – basing the strategy on cultural values or similar non-tangible concepts.

For startups, where it is important to understand early on your ploy (who are you going to disrupt), your position (where do you want to place yourself and what differentiates you in the market) and perspective (what are your cultural values that makes you stand out), an EA can help you formulate these. Your business architecture can be developed to show how it supports these P’s and your technology architecture can show how it implements them.

As an aside, one of the most effective tools I have used for understanding and mapping strategy, and for deciding what should be built versus what should be bought, is the Wardley map. Not usually part of an EA or an ADM, but worthwhile investigating.

So, in summary, whilst having an EA for a startup will in no way guarantee its success, I believe not having one could inhibit its smooth growth and in the long run cost more in rework and refactoring (i.e. to reduce the amount of technical debt) than the cost of putting together at least an outline EA in the first place. How much EA is right for your startup/scaleup is up to you, but it’s worthwhile giving this some thought as part of your initial planning.

 

 

How could blockchain drive a more responsible approach to engaging with the arts?

Image Courtesy of Tran Mai Khanh

This is the transcript of a talk I gave at the 2018 Colloquium on Responsibility in Arts, Heritage, Non-profit and Social Marketing at the University of Birmingham Business School  on 17th September 2018.

Good morning everyone. My name is Peter Cripps and I work as a Software Architect for IBM in its Blockchain Division.

As a Software Architect my role is to help IBM’s clients understand blockchain and to architect systems built on this exciting new technology.

My normal world is that of finance, commerce and government computer systems that we all interact with on a day to day basis. In this talk however I’d like to discuss something a little bit different from my day to day role. I would like to explore with you how blockchain could be used to build a trust based system for the arts world that I believe could lead to a more responsible way for both creators and consumers of art to interact and transact to the mutual benefit of all parties.

First however let’s return to the role of the Software Architect and explain how two significant architectures have got us to where we are today (showing that the humble Software Architect really can change the world).

Architects take existing components and…

Seth on Architects

This is one of my favourite definitions of what architects do. Although Seth was talking about architects in the construction industry, it’s a definition that very aptly applies to Software Architects as well. By way of illustration here are two famous examples of how architects took some existing components and assembled them in very interesting ways.

1989: Tim Berners-Lee invents the World Wide Web

Tim Berners-Lee and the World Wide Web

The genius of Tim Berners-Lee, when he invented the World Wide Web in 1989, was that he brought together three arcane technologies (hypertext, mark-up languages and Internet communication protocols) in a way no one had thought of before and literally transformed the world by democratising information. Recently however, as Berners-Lee discusses in an interview in Vanity Fair, the web has begun to reveal its dark underside, with issues of trust, privacy and so-called fake news dominating the headlines over the past two years.

Interestingly, another invention some 20 years later, promises to address some of the problems now being faced by a society that is increasingly dependent on the technology of the web.

2008: Satoshi Nakamoto invents Bitcoin

Satoshi Nakamoto and Bitcoin

Satoshi Nakamoto’s paper Bitcoin: A Peer-to-Peer Electronic Cash System, which introduced the world to Bitcoin in 2008, also used three existing ideas (distributed databases, cryptography and proof-of-work) to show how a peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. His genius idea, in a generalised form, was a platform that creates a new basis of trust for business transactions and that could ultimately lead to a simplification and acceleration of the economy. We call this blockchain. Could blockchain, the technology that underpins Bitcoin, be the next great enabling technology that not only changes the world (again) but also puts back some of the trust in the World Wide Web?

Blockchain: Snake oil or a miracle cure?

Miracle Cure or Snake Oil?

Depending on your point of view, and personal agenda, blockchain either promises to be a game-changing technology that will help address issues such as the world’s refugee crisis and the management of health supply chains, or is the most over-hyped, terrifying and foolish technology ever. Like any new technology we need to be very careful to separate the hype from the reality.

What is blockchain?

Setting aside the hype, blockchain, at its core, is all about trust. It provides a mechanism that allows parties on the internet, who not only don’t trust each other but may not even know each other, to exchange ‘assets’ in a secure and traceable way. These assets can be anything from physical items like cars and diamonds to intangible ones such as music or financial instruments.

Here’s a definition of what a blockchain is.

An append-only, distributed system of record (a ledger) shared across a business network that provides transaction visibility to all involved participants.

Let’s break this down:

  1. A blockchain is ‘append only’. That means once you’ve added a record (a block) to it you cannot remove it.
  2. A blockchain is ‘distributed’ which means the ledger, or record book, is not just sitting in one computer or data centre but is typically spread around several.
  3. A ‘system of record’ means that, at its heart, a blockchain is a record book describing the state of some asset. For example that a car with a given VIN is owned by me.
  4. A blockchain is ‘shared’ which means all participants get their own copy, kept up to date and synchronised with all other copies.
  5. Because it’s shared all participants have ‘visibility’ of the records or transactions of everyone else (if you want them to).
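
To ground the definition above, here is a toy, append-only ledger in a few lines of Python. It is only a sketch of the ‘system of record’ idea – a real business blockchain adds distribution across participants, consensus and smart contracts on top – and every name in it is invented for illustration.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash of a block's contents; each new block records its predecessor's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.chain = [{"index": 0, "previous_hash": "0" * 64,
                       "timestamp": time.time(), "record": {}}]   # genesis block

    def append(self, record: dict) -> None:
        """Append-only: blocks are added, never removed or edited."""
        self.chain.append({"index": len(self.chain),
                           "previous_hash": block_hash(self.chain[-1]),
                           "timestamp": time.time(),
                           "record": record})

    def history(self, asset_id: str) -> list:
        """Provenance: the state of an asset is the sum of blocks that mention it."""
        return [b for b in self.chain if b["record"].get("asset") == asset_id]

ledger = Ledger()
ledger.append({"asset": "VIN123", "owner": "Alice"})
ledger.append({"asset": "VIN123", "owner": "Bob"})   # change of ownership; the old record stays
print([b["record"]["owner"] for b in ledger.history("VIN123")])   # ['Alice', 'Bob']
```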

A business blockchain network has four characteristics…

Business blockchains can be characterised as having these properties:

Consensus

All parties in the network have to agree that a transaction is valid and that it can be added as a new block on the ledger. Gaining such agreement is referred to as consensus, and various ways of reaching consensus are available. One such technique, used by the Bitcoin blockchain, is proof-of-work. In Bitcoin, proof-of-work is referred to as mining, a highly compute-intensive process in which miners compete to solve a mathematically complex problem to earn new coins. Because of its complexity, mining uses large amounts of computing power. In 2015 it was estimated that the Bitcoin mining network consumed about the same amount of energy as the whole of Ireland!
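
The essence of proof-of-work is easy to show, even though the real Bitcoin computation is vastly harder: a miner searches for a nonce that makes the hash of the block data start with a required number of zeros, which is expensive to find but trivial for everyone else to verify. The sketch below is illustrative only and is not the actual Bitcoin algorithm.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose hash, combined with the block data, starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce          # costly to find, cheap for everyone else to verify
        nonce += 1

print(mine("some agreed set of transactions"))   # raising `difficulty` rapidly increases the work
```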

Happily, however, not all blockchain networks suffer from this problem as they do not all use proof-of-work as a consensus mechanism. Hyperledger, an open source software project owned and operated by the Linux Foundation, provides several different technologies that do not require proof-of-work as a consensus mechanism and so have vastly reduced energy requirements. Hyperledger was formed by over 20 founding companies in December 2015. Hyperledger blockchains are finding favour in the worlds of commerce, government and even the arts! Further, because Hyperledger is an open source project, anyone with access to the right skillset can build and operate their own blockchain network.

Immutability

This means that once you add a new block onto the ledger, it cannot be removed. It’s there for ever and a day. If you do need to change something then you must add a new record saying what that change is. The state of an asset on a blockchain is then the sum of all blocks that refer to that asset.

Provenance

Because you can never remove a block from the ledger, you can always trace back in time the history of any asset described on the ledger and therefore determine, for example, where it originated or how its ownership has changed over time.

Finality

The shared ledger is the place that all participants agree stores ‘the truth’. Because the ledger’s records cannot be removed, and everyone has agreed to them being recorded there, the ledger is the final source of truth.

… with smart contracts controlling who does what

Another facet of blockchain is the so-called ‘smart contract’. Smart contracts are pieces of code that run autonomously on the blockchain, in response to some event, without the intervention of a human being. Smart contracts change the state of the blockchain and are responsible for adding new blocks to the chain. In a smart contract the computer code is law and, provided all parties have agreed the validity of that code in advance, once it has run the changes it makes to the blockchain cannot be undone; they are immutable. The blockchain therefore acts as a source of permanent knowledge about the state of an asset and allows the provenance of any asset to be understood. This is a fundamental difference between a blockchain and an ordinary database: once a record is entered it cannot be removed.
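
As a conceptual illustration only (real chaincode for a platform such as Hyperledger Fabric is typically written in Go, Java or Node.js, and Ethereum contracts in Solidity), a smart contract can be thought of as a deterministic function that validates a proposed transaction against the current ledger state and, if the agreed rules are satisfied, appends a new, immutable record. The sketch below reuses the toy Ledger from the earlier sketch and is not real chaincode.

```python
def transfer_ownership(ledger: "Ledger", asset_id: str, seller: str, buyer: str) -> None:
    """Toy 'contract': only the current owner may transfer the asset."""
    history = ledger.history(asset_id)
    if not history:
        raise ValueError("unknown asset")
    current_owner = history[-1]["record"]["owner"]
    if current_owner != seller:
        raise ValueError("seller does not own this asset")   # the contract rule, enforced in code
    ledger.append({"asset": asset_id, "owner": buyer, "transferred_from": seller})

transfer_ownership(ledger, "VIN123", "Bob", "Carol")   # succeeds: Bob is the current owner
```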

Some blockchain examples

Finally, for this quick tour of blockchain, let’s take a look at a couple of industry examples that IBM has been working on with its clients.

The first is a new company called Everledger, which aims to record on a blockchain the provenance of high-value assets such as diamonds. This allows people to know where assets have come from and how ownership has changed over time, helping to avoid fraud and issues around so-called ‘blood diamonds’, which can be used to finance terrorism and other illegal activities.

The second example is the IBM Food Trust Network, a consortium made up of food manufacturers, processors, distributors, retailers and others that allows food to be tracked from ‘farm to fork’. This allows, for example, the origin of a particular food item to be quickly determined in the case of contamination or an outbreak of disease, and for only the affected items to be taken out of the supply chain.

What issues can blockchain address in the arts world?

In the book Artists Re:Thinking the Blockchain various of the contributors discuss how blockchain could be used to create new funding models for the arts by the “renegotiation of the economic and social value of art” as well as helping artists “to engage with new kinds of audiences, patrons and participants.” (For another view of blockchain and the arts see the documentary The Blockchain and Us).

I also believe blockchain could help tackle some of the current problems around trust and lack of privacy on the web, as well as address issues around the accumulation of large amounts of user-generated content, at virtually no cost to the owners, in what the American computer scientist Jaron Lanier calls “siren servers”.

Let’s consider two aspects of the art world that blockchain could address:

Trust

As a creator how do I know people are using my art work legitimately? As a consumer how do I know the creator of the art work is who they say they are and the art work is authentic?

Value

As a creator how do I get the best price for my art work? As a consumer how do I know I am not paying too much for an art work?

Challenges/issues of the global art market (and how blockchain could address them)

Let’s drill down a bit more into what some of these issues are and how a blockchain network could begin to address them. Here’s a list of nine key issues that various players in the world of arts say impacts the multi-billion pound art market and which blockchain could help address in the ways I suggest.

Art Issues
To be clear, not all of these issues will be addressed by technology alone. Any system that is introduced needs not only the buy-in of the existing players but also a sometimes radical change to the business model that has grown up around the current system. ArtChain is one such company that is looking to use blockchain to address some of these issues. Another is the online photography community YouPic.

Introducing YouPic Blockchain

YouPic is an online community for photographers which allows them not only to share their images but also to receive payment for them. YouPic is in the process of implementing a blockchain that allows photographers to retain more control over their images. For example:

  1. Copyright attribution.
  2. Image tracking and copyright tools.
  3. Smart contracts for licensing

Every image has a unique fingerprint, so when you look up the fingerprint (or a version of the image) it surfaces all of the licensing information the creator has listed.

The platform could, for example, search the web to identify illicit uses of images and if identified contact the creator to notify them of a potential copyright breach.

You could also use smart contracts to manage your images automatically, e.g. to receive payments in different currencies, to distribute payment to other contributors, or to file a claim if your image is used without your consent.
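
As an illustration of the fingerprint idea (the details of YouPic’s actual scheme are not given in this talk, so the function and field names below are invented), hashing an image’s bytes yields an identifier that licensing terms can be keyed against. A production system would more likely use perceptual hashing so that resized or re-encoded versions of an image still match; a plain SHA-256 is used here only to keep the sketch short.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Illustrative fingerprint: a hash of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

licence_register = {}   # in a real system this mapping would live on the blockchain

def register_image(image_bytes: bytes, creator: str, licence: str) -> None:
    licence_register[fingerprint(image_bytes)] = {"creator": creator, "licence": licence}

def look_up(image_bytes: bytes):
    """Anyone holding a copy of the image can look up the licensing terms the creator listed."""
    return licence_register.get(fingerprint(image_bytes))

register_image(b"...image bytes...", creator="Alice", licence="CC BY-NC 4.0")
print(look_up(b"...image bytes..."))
```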

ArtLedger


ArtLedger is a sandbox I am developing for exploring some of these ideas. It’s open source and available on GitHub. I have a very rudimentary proof of concept running in the IBM Cloud that allows you to interact with a blockchain network with some of these actors.

I’m encouraging people to go onto my GitHub project, take a look at the code and the instructions for getting it working, and have a play with the live system. I will be adapting it over time to add more functions, to see how the issues discussed earlier could be addressed, and to explore funding models for how such a network could become established and grow.

Summary

So, to summarise:

  • Blockchains can provide a system that engenders trust through the combined attributes of: Consensus; Immutability; Provenance; Finality.
  • Consortiums of engaged participants should build networks where all benefit.
  • Many such networks are at the early stages of development. It is still early days for the technology but results are promising and, for the right use cases, systems based on blockchain have the promise of another step change in driving the economy in a fairer and more just way.
  • For the arts world blockchain holds the promise of engaging with new kinds of audiences, patrons and participants and maybe even the creation of new funding models.

The Fall and Rise of the Full Stack Architect


Almost three years ago to the day on here I wrote a post called Happy 2013 and Welcome to the Fifth Age! The ‘ages’ of (commercial) computing discussed there were:

  • First Age: The Mainframe Age (1960 – 1975)
  • Second Age: The Mini Computer Age (1975 – 1990)
  • Third Age: The Client-Server Age (1990 – 2000)
  • Fourth Age: The Internet Age (2000 – 2010)
  • Fifth Age: The Mobile Age (2010 – 20??)

One of the things I wrote in that article was this:

“Until a true multi-platform technology such as HTML5 is mature enough, we are in a complex world with lots of new and rapidly changing technologies to get to grips with as well as needing to understand how the new stuff integrates with all the old legacy stuff (again). In other words, a world which we as architects know and love and thrive in.”

So, three years later, are we any closer to having a multi-platform technology? Where does cloud computing fit into all of this and is multi-platform technology making the world get more or less complex for us as architects?

In this post I argue that cloud computing is actually taking us to an age where rather than having to spend our time dealing with the complexities of the different layers of architecture we can be better utilised by focussing on delivering business value in the form of new and innovative services. In other words, rather than us having to specialise as layer architects we can become full-stack architects who create value rather than unwanted or misplaced technology. Let’s explore this further.

The idea of the full stack architect.

Vitruvius, the Roman architect and civil engineer, defined the role of the architect thus:

“The ideal architect should be a [person] of letters, a mathematician, familiar with historical studies, a diligent student of philosophy, acquainted with music, not ignorant of medicine, learned in the responses of jurisconsults, familiar with astronomy and astronomical calculations.”

Vitruvius also believed that an architect should focus on three central themes when preparing a design for a building: firmitas (strength), utilitas (functionality), and venustas (beauty).

Vitruvian Man by Leonardo da Vinci

For Vitruvius, then, the architect was a multi-disciplined person, knowledgeable in both the arts and the sciences. Architecture was not just about functionality and strength but beauty as well. If such a person actually existed then they had a fairly complete picture of the whole ‘stack’ of things that needed to be considered when architecting a new structure.

So how does all this relate to IT?

In the first age of computing (roughly 1960 – 1975) life was relatively simple. There was a mainframe computer hidden away in the basement of a company, managed by a dedicated team of operators who guarded their prized possession with great care and controlled who had access to it and when. You were limited in what you could do with these systems not only by cost and availability but also by the fact that their architectures were fixed, and the choice of programming languages (Cobol, PL/I and assembler come to mind) to make them do things was also pretty limited. The architect (should such a role have actually existed then) had a fairly simple task, as their options were relatively limited and the number of architectural decisions that needed to be made was correspondingly small. Like Vitruvius’ architect, one could fairly easily understand the full compute stack upon which business applications needed to run.

Indeed, as the understanding of these computing engines increased you could imagine that the knowledge of the architects and programmers who built systems around these workhorses of the first age reached something of a ‘plateau of productivity’*.

Architecture Stacks 3

However things were about to get a whole lot more complicated.

The fall of the full stack architect.

As IT moved into its second age and beyond (i.e. with the advent of mini computers, personal computers, client-server, the web and early days of the internet) the breadth and complexity of the systems that were built increased. This is not just because of the growth in the number of programming languages, compute platforms and technology providers but also because each age has built another layer on the previous one. The computers from a previous age never go away, they just become the legacy that subsequent ages must deal with. Complexity has also increased because of the pervasiveness of computers. In the fifth age the number of people whose lives are now affected by these machines is orders of magnitude greater than it was in the first age.

All of this has led to niches and specialisms that were inconceivable in the early age of computing. As a result, architecting systems also became more complex giving rise to what have been termed ‘layer’ architects whose specialities were application architecture, infrastructure architecture, middleware architecture and so on.

Architecture Stacks

Whole professions have been built around these disciplines leading to more and more specialisation. Inevitably this has led to a number of things:

  1. The need for communications between the disciplines (and for them to understand each other’s ‘language’).
  2. As more knowledge accrues in one discipline, and people specialise in it more, it becomes harder for inter-disciplinary understanding to happen.
  3. Architects became hyper-specialised in their own discipline (layer) leading to a kind of ‘peak of inflated expectations’* (at least amongst practitioners of each discipline) as to what they could achieve using the technology they were so well versed in but something of a ‘trough of disillusionment’* to the business (who paid for those systems) when they did not deliver the expected capabilities and came in over cost and behind schedule.

Architecture Stacks 4

So what of the mobile and cloud age which we now find ourselves in?

The rise of the full stack architect.

As the stack we need to deal with has become more ‘cloudified’ and we have moved from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) it has become easier to understand the full stack as an architect. We can, to some extent, take for granted the lower, specialised parts of the stack and focus on the applications and data that are the differentiators for a business.

Architecture Stacks 2

We no longer have to worry about what type of server to use or even what operating system or programming environments have to be selected. Instead we can focus on what the business needs and how that need can be satisfied by technology. With the right tools and the right cloud platforms we can hopefully climb the ‘slope of enlightenment’ and reach a new ‘plateau of productivity’*.

Architecture Stacks 5

As Neal Ford, Software Architect at Thoughtworks says in this video:

“Architecture has become much more interesting now because it’s become more encompassing … it’s trying to solve real problems rather than play with abstractions.”

 

I believe that the fifth age of computing really has the potential to take us to a new plateau of productivity and hopefully allow all of us to be the kind of architect described by this great definition from the author, marketeer and blogger Seth Godin:

“Architects take existing components and assemble them in interesting and important ways.”

What interesting and important things are you going to do in this age of computing?

* Diagrams and terms borrowed from Gartner’s hype cycle.

Non-Functional Requirements and the Cloud

As discussed here the term non-functional requirements really is a complete misnomer. Who would, after all, create a system based on requirements that were “not functional”? Non-functional requirements refer to the qualities that a system should have and the constraints under which it must operate. They are sometimes referred to as the “ilities”, because many end in “ility”: availability, reliability, maintainability and so on.

Non-functional requirements will of course have an impact on the functionality of the system. For example, a system may quite easily address all of the functional requirements that have been specified for it, but if that system is not available at certain times during the day then it is quite useless, even though it may be functionally ‘complete’.

Non-functional requirements are not abstract things to be written down when considering the design of a system and then ignored; they must be engineered into the system’s design just like functional requirements. Non-functional requirements can have a bigger impact on a system’s design than functional requirements, and getting them wrong can lead to far more costly rework. Missing a functional requirement usually means adding it later or doing some rework. Getting a non-functional requirement wrong can lead to very costly rework or even cancelled projects, with the knock-on effect that has on reputation.

From an architect’s point of view, when defining how a system will address non-functional requirements, it mainly (though not exclusively) boils down to how the compute platforms (whether processors, storage or networking) are specified and configured to satisfy the qualities and constraints expected of them. As more and more workloads get moved to the cloud, how much control do we as architects have in specifying the non-functional requirements for our systems, and which non-functionals are the ones that should concern us most?

As ever the answer to this question is “it depends”. Every situation is different and for each case some things will matter more than others. If you are a bank or a government department holding sensitive customer data, the security of your provider’s cloud may be uppermost in your mind. If on the other hand you are an online retailer who wants customers to be able to shop at any time of the day, then availability may be most important. If you are seeking a cloud platform on which to develop new services and products, then maybe the ease of use of the development tools is key. The question therefore is not so much which are the important non-functional requirements, but which ones should be considered in the context of a cloud platform.

Below are some of the key NFRs I would normally expect to be taken into consideration when looking at moving workloads to the cloud. They apply whether the clouds are public, private or a mix of the two, and to any of the layers of the cloud stack (i.e. Infrastructure, Platform or Software as a Service), though they will have an impact on different users. For example, lack of availability of a SaaS service is likely to have more of an impact on business users than on developers or IT operations, whereas availability of the infrastructure will affect all users.

  • Availability – What percentage of time does the cloud vendor guarantee cloud services will be available (including scheduled maintenance down-times)? Bear in mind that although 99% availability may sound good, it actually equates to over 3.5 days of potential downtime a year, and even 99.9% could mean almost 9 hours of downtime (a short calculation follows this list). Also consider as part of this the Disaster Recovery aspects of availability and, if more than one physical data centre is used, where they reside. The latter is especially important where data residency is an issue, i.e. if your data needs to reside on-shore for legal or regulatory reasons.
  • Elasticity (Scalability) – How easy is it to bring on line or take down compute resources (CPU, memory, network) as workload increases or decreases?
  • Interoperability – If using services from multiple cloud providers, how easy is it to move workloads between them? (Hint: open standards help here.) Also, what if you want to migrate from one cloud provider to another? (Hint: open standards help here as well.)
  • Security – What security levels and standards are in place? For public/private clouds not in your data centre, also consider the physical security of the cloud provider’s data centres as well as their networks. Data residency again needs to be considered as part of this.
  • Adaptability – How easy is it to extend, add to or grow services as business needs change? For example if I want to change my business processes or connect to new back end or external API’s how easy would it be to do that?
  • Performance – How well suited is my cloud infrastructure to supporting the workloads that will be deployed onto it, particularly as workloads grow?
  • Usability – This will be different depending on who the client is (i.e. business users, developers/architects or IT operations). In all cases however you need to consider ease of use of the software and how well designed the interfaces are. IT is no longer hidden inside your own company; instead your systems of engagement are out there for all the world to see. Effective design of those systems is more important than ever before.
  • Maintainability – More of a concern from an IT operations and developer point of view: how easy is it to manage (and develop) the cloud services?
  • Integration – In a world of hybrid cloud where some workloads and data need to remain in your own data centre (usually systems of record) whilst others need to be deployed in public or private clouds (usually systems of engagement) how those two clouds integrate is crucial.

I mentioned at the beginning of this post that non-functional requirements should actually be considered in terms of the qualities you want from your IT system as well as the constraints you will be operating under. The decision to move to cloud in many ways adds a constraint to what you are doing. You don’t have complete free rein to do whatever you want if you choose an off-premise cloud operated by a vendor; you have to align with the service levels they provide. An added bonus (or complication, depending on how you look at it) is that you can choose from different service levels to match what you want and also change these as and when your requirements change. Probably one of the most important considerations when choosing a cloud provider is that they have the ability to expand with you and don’t lock you into their cloud architecture too much. This is a topic I’ll be looking at in a future post.

Consideration of non-functional requirements does not go away in the world of cloud. Cloud providers have very different capabilities, and some will be more relevant to you than others. This, coupled with the fact that you also need to be architecting for on-premise as well as off-premise clouds, actually makes some of the architecture decisions that need to be made more, not less, difficult. It seems the advent of cloud computing is not about to make us architects redundant just yet.

For a more detailed discussion of non-functional requirements and cloud computing see this article on IBM’s developerWorks site.

A Cloudy Conversation with My Mum

Traditionally (and I’m being careful not to over-generalise here) the parents of the Baby Boomer generation are not as tech savvy as the Boomers themselves (age 50 – 60), Gen X’ers (35 – 49) and certainly Millennials (21 – 34). This is the generation that grew up with “the wireless”, corded telephones (with a rotary dial) and black and white televisions with diminutive screens. Technology however is intruding more and more into their lives as ‘webs’, ‘tablets’ and ‘clouds’ encroach into what they read and hear.

IT, like any profession, is guilty of creating its own language, supposedly to help those in the know understand each other in a shorthand form, but often at the expense of confusing the hell out of those on the outside. As hinted at above, IT is worse than most other professions because, rather than create new words, it seems particularly good at hijacking existing ones and then changing their meaning completely!

‘The Cloud’ is one of the more recent terms to jump from the mainstream into IT and is now making its way back into the mainstream with its new meaning. This being the case, and given my recent new job*, I thought the following imaginary conversation between myself and my mum (a Boomer parent) might be fun to envisage. Here’s how it might start…

Cloud Architect and Mum

Here’s how it might carry on…

Me: “Ha, ha very funny mum but seriously, that is what I’m doing now”.

Mum: “Alright then dear what does a ‘Cloud Architect’ do?”

Me: “Well ‘cloud computing’ is what people are talking about now for how they use computers and can get access to programs. Rather than companies having to buy lots of expensive computers for their business they can get what they need, when they need it from the cloud. It’s meant to be cheaper and more flexible.”

Mum: “Hmmm, but why is it called ‘the cloud’ and I still don’t understand what you are doing with it?”

Me: “Not sure where the name came from to be honest mum, I guess it’s because the computers are now out there and all around us, just like clouds are”. At this point I look out of the window and see a clear blue sky without a cloud in sight but quickly carry on. “People compare it with how you get your electricity and water – you just flick a switch or turn on the tap and it’s there, ready and waiting for when you want to use it.”

Mum: “Yes I need to talk to you about my electricity, I had a nice man on the phone the other day telling me I was probably paying too much for that, now where did I put that bill I was going to show you…”

Me: “Don’t worry mum, I can check that on the Internet, I can find out if there are any better deals for you.”

Mum: “So will you do that using one of these clouds?”

Me: “Well the company that I contact to do the check for you might well be using computers and programs that are in the cloud, yes. It would mean they don’t have to buy and maintain lots of expensive computers themselves but let someone else deal with that.”

Mum: “Well it all sounds a bit complicated to me dear and anyway, you still haven’t told me what you are doing now?”

Me: “Oh yes. Well I’m supposed to be helping people work out how they can make use of cloud computing and helping them move the computers they might have in their own offices today to make use of ones IBM have in the cloud. It’s meant to help them save money and do things a bit quicker.”

Mum: “I don’t know why everyone is in such a rush these days – people should slow down a bit, walk not run everywhere.”

Me: “Yes, you’re probably right about that mum but anyway have a look at this. It’s a video some of my colleagues from IBM made and it explains what cloud computing is.”

Mum: “Alright dear, but it won’t be on long will it – I want to watch Countdown in a minute.”

*IBM has gone through another of its tectonic shifts of late, creating a number of new business units as well as job roles, including that of ‘Cloud Architect’.

Government as a Platform

The UK government, under the auspices of Francis Maude and his Cabinet Office colleagues, has instigated a fundamental rethink of how government does IT following the arrival of the coalition in May 2010. You can find a brief summary here of what has happened since then (and why).

One of the approaches that the Cabinet Office favours is the idea of services built on a shared core, otherwise known as Government as a Platform (GaaP). In the government’s own words:

A platform provides essential technology infrastructure, including core applications that demonstrate the potential of the platform. Other organisations and developers can use the platform to innovate and build upon. The core platform provider enforces “rules of the road” (such as the open technical standards and processes to be used) to ensure consistency, and that applications based on the platform will work well together.

The UK government sees the adoption of platform based services as a way of breaking down the silos that have existed in governments pretty much since the dawn of computing, as well as loosening the stranglehold it thinks the large IT vendors have on its IT departments. This is a picture from the Government Digital Service (GDS), part of the Cabinet Office, that shows how providing a platform layer, above the existing legacy (and siloed) applications, can help move towards GaaP.

In a paper on GaaP, Tim O’Reilly sets out a number of lessons learnt from previous (successful) platforms which are worth summarising here:

  1. Platforms must be built on open standards. Open standards foster innovation as they let anyone play more easily on the platform. “When the barriers to entry to a market are low, entrepreneurs are free to invent the future. When barriers are high, innovation moves elsewhere.”
  2. Don’t abuse your power as the provider of the platform. Platform providers must not abuse their privileged position or market power otherwise the platform will decline (usually because the platform provider has begun to compete with its developer ecosystem).
  3. Build a simple system and let it evolve. As John Gall wrote: “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true. A complex system designed from scratch never works and cannot be made to work. You have to start over beginning with a working simple system.”
  4. Design for participation. Participatory systems are often remarkably simple—they have to be, or they just don’t work. But when a system is designed from the ground up to consist of components developed by independent developers (in a government context, read countries, federal agencies, states, cities, private sector entities), magic happens.
  5. Learn from your hackers. Developers may use APIs in unexpected ways. This is a good thing. If you see signs of uses that you didn’t consider, respond quickly, adapting the APIs to those new uses rather than trying to block them.
  6. Harness implicit participation. On platforms like Facebook and Twitter people give away their information for free (or more precisely to use those platforms for free). They are implicitly involved therefore in the development (and funding) of those platforms. Mining and linking datasets is where the real value of platforms can be obtained. Governments should provide open government data to enable innovative private sector participants to improve their products and services.
  7. Lower the barriers to experimentation. Platforms must be designed from the outset not as a fixed set of specifications, but as being open-ended  to allow for extensibility and revision by the marketplace. Platform thinking is an antidote to the complete specifications that currently dominate the government approach not only to IT but to programs of all kinds.
  8. Lead by example. A great platform provider does things that are ahead of the curve and that take time for the market to catch up to. It’s essential to prime the pump by showing what can be done.

In IBM, and elsewhere, we have been talking for a while about so-called disruptive business platforms (DBPs). A DBP has four actors associated with it:

  • Provider – Develops and provides the core platform. Providers need to ensure the platform exposes interfaces (that Complementors can use) and also ensure standards are defined that allow the platform to grow in a controlled way.
  • Complementor – Supplements the platform with new features, services and products that increase the value of the platform to End Users (and draw more of them in to use the platform).
  • End User – As well as performing the obvious ‘using the platform’ role, End Users also drive demand that Complementors help fulfil. There are also likely to be more End Users if there are more Complementors providing new features. A well architected platform also allows End Users to interact with each other.
  • Supplier – Usually enters into a contract with the core platform provider to provide a known product, service or technology. Probably not innovating in the same way as a Complementor would.
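
To make these roles a little more concrete, here is a minimal Python sketch of a platform and its actors. It is purely illustrative: the class names, the GDS example and the ‘parking permit’ service are assumptions invented for this sketch, not taken from IBM’s DBP material or any government specification.

```python
from dataclasses import dataclass, field

# Illustrative only: a toy model of the four DBP actors described above.
# All names here (Platform, Service, the GDS example) are invented for the sketch.

@dataclass
class Service:
    name: str
    provided_by: str            # the Complementor (or Supplier) offering it

@dataclass
class Platform:
    provider: str                                         # owns the core and the "rules of the road"
    standards: list[str] = field(default_factory=list)    # open interfaces Complementors build to
    services: list[Service] = field(default_factory=list)
    end_users: set[str] = field(default_factory=set)

    def add_complementor_service(self, complementor: str, service_name: str) -> None:
        """Complementors extend the platform, increasing its value to End Users."""
        self.services.append(Service(service_name, provided_by=complementor))

    def register_user(self, user: str) -> None:
        """More Complementor services tend to draw more End Users in."""
        self.end_users.add(user)

# Usage: the Provider sets the rules, a Complementor adds value, End Users arrive.
gov = Platform(provider="GDS", standards=["open identity API", "open payments API"])
gov.add_complementor_service("a local council", "parking permit renewal")
gov.register_user("citizen@example.org")
print(len(gov.services), len(gov.end_users))   # -> 1 1
```

The sketch makes the same point as the list above: the provider owns the core and the standards, while most of the value arrives through the complementors and the end users they attract.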

Walled Garden at Chartwell – Winston Churchill’s Home

We can see platform architectures as the ideal balance between the two political extremes: those who want to see a fully stripped back government that privatises all of its services, and those who want central government to provide and manage all of those services itself. Platforms, if managed properly, provide the ideal ‘walled garden’ approach often attributed to the Apple iTunes and App Store way of doing business. Apple did not build all of the apps out there on the App Store. Instead they provided the platform on which others could provide the apps and create a diverse and thriving “app economy”.

It’s early days, and it remains to be seen whether this can work in a government context. What’s key is applying some of the principles suggested by Tim O’Reilly above so that the platform provider can enforce the rules that others must comply with. There also, of course, need to be the right business models in place: ones that encourage people to invest in the platform in the first place and that allow new start-ups to grow and thrive.

Wardley Maps

A Wardley map (invented by Simon Wardley who works for the Leading Edge Forum, a global research and thought leadership community within CSC) is a model which helps companies understand and communicate their business/IT value chains.

The basic premise of value chain mapping is that pretty much every product and service can be viewed in terms of a lifecycle which starts from an early genesis stage and proceeds through to eventually being standardised and becoming a commodity.

From a system perspective – when the system is made up of a number of loosely coupled components which have one or more dependencies – it is interesting and informative to show where those components are in terms of their individual lifecycle or evolution. Some components will be new and leading edge and therefore in the genesis stage, whilst other components will be more mature and therefore commoditised.

At the same time, some components will be of higher value in that they are closer to what the customer actually sees and interacts with, whereas others will be ‘hidden’: part of the infrastructure that a customer does not see but that is nonetheless important because it is the ‘plumbing’ which makes the system actually work.

A Wardley map is a neat way of visualising these two aspects of a system (i.e. their ‘value’ and their ‘evolutionary stage’). An example Wardley map is shown below. It comes from Simon Wardley’s blog, Bits or pieces?, in particular this blog post.

Wardley Map 2

The above map is actually for the proposed High Speed 2 (HS2) rail system which will run from London to Birmingham. Mapping components according to their value and their stage of evolution allows a number of useful questions to be asked which might help avoid future project issues (if the map is produced early enough). For example:

  1. Are we making good and proper use of commoditised components and procuring or outsourcing them in the right way?
  2. Where components are new or first of a kind, have we put into place the right development techniques to build them?
  3. Where a component has lots of dependencies (i.e. lines going in and out), have we put into place the right risk management techniques to ensure that component is delivered in time and does not delay the whole project?
  4. Are the user needs properly identified and are we devoting enough time and energy to building what could be the important differentiating components for the company?

Wardley has captured an extensive list of the advantages of building value chain maps which can be found here. He also captures a simple and straightforward process for creating them which can be found here. Finally, a more detailed account of value chain maps can be found in the workbook The Future is More Predictable Than You Think, written by Simon Wardley and David Moschella.

The power of Wardley maps seems to be that, although they are relatively simple to produce, they convey a lot of useful information. Once created they allow ‘what if’ questions to be asked by moving components around: for example, what would happen if we built this component from scratch rather than trying to use an existing product – would it give us any business advantage?
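
Purely as an illustration of those two axes and the ‘what if’ questions (the component names, positions and dependencies below are invented for this sketch, not taken from Wardley’s actual HS2 map), here is a small Python sketch of a map held as data:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a Wardley map as data. Each component has a position
# on the two axes described above (visibility to the user, and evolution from
# genesis towards commodity) plus its dependencies. All names and numbers are
# invented for this example.

EVOLUTION = ["genesis", "custom built", "product", "commodity"]

@dataclass
class Component:
    name: str
    visibility: float                     # 0 = invisible plumbing, 1 = what the user sees
    evolution: str                        # one of EVOLUTION
    depends_on: list[str] = field(default_factory=list)

def risky_components(components: list[Component], threshold: int = 3) -> list[str]:
    """Question 3 above: flag components with many dependencies in or out."""
    inbound = {c.name: 0 for c in components}
    for c in components:
        for dep in c.depends_on:
            inbound[dep] = inbound.get(dep, 0) + 1
    return [c.name for c in components
            if len(c.depends_on) + inbound[c.name] >= threshold]

def what_if_move(component: Component, new_stage: str) -> str:
    """A 'what if' question: what changes if we treat this component differently?"""
    old, component.evolution = component.evolution, new_stage
    return f"{component.name}: {old} -> {new_stage} (revisit build vs buy vs outsource)"

ticketing   = Component("ticketing", visibility=0.9, evolution="product",
                        depends_on=["payments", "timetabling"])
payments    = Component("payments", visibility=0.4, evolution="commodity")
timetabling = Component("timetabling", visibility=0.5, evolution="custom built",
                        depends_on=["payments"])

print(risky_components([ticketing, payments, timetabling], threshold=2))
print(what_if_move(timetabling, "product"))
```

Moving a component along the evolution axis in a model like this is simply the programmatic equivalent of peeling a post-it note off the whiteboard and sticking it somewhere else, which, as noted below, is what Wardley actually recommends doing.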

Finally, Wardley suggests that post-it notes and whiteboards are the best tools for building a map. The act of creating the map therefore becomes a collaborative process and encourages discussion and debate early on. As Wardley says:

With a map, it becomes possible to learn how to play the game, what techniques work and what failed – the map itself is a vehicle for learning.

Complexity is Simple

I was taken with this cartoon and the comments put up by Hugh Macleod last week over at his gapingvoid.com blog, so I hope he doesn’t mind me reproducing it here.

Complexity is Simple (c) Hugh Macleod 2014

Complex isn’t complicated. Complex is just that, complex.

Think about an airplane taking off and landing reliably day after day. Thousands of little processes happening all in sync. Each is simple. Each adds to the complexity of the whole.

Complicated is the other thing, the thing you don’t want. Complicated is difficult. Complicated is separating your business into silos, and then none of those silos talking to each other.

At companies with a toxic culture, even what should be simple can end up complicated. That’s when you know you’ve really got problems…

I like this because it resonates perfectly well with a blog post I put up almost four years ago now called Complex Systems versus Complicated Systems, where I make the point that “whilst complicated systems may be complex (and exhibit emergent properties) it does not follow that complex systems have to be complicated“. A good architecture avoids complicated systems by building them out of lots of simple components whose interactions can certainly create a complex system, but not one that needs to be overly complicated.