Architects Don’t Code

WikiWikiWeb is one of the first wiki experiences I, and I suspect many people of a certain age, had. WikiWikiWeb was created by Ward Cunningham for the Portland Pattern Repository, a fantastic source of informal guidance and advice by experts on how to build software. It contains a wealth of patterns (and antipatterns) on pretty much any software topic known to man, and a good few that are fast disappearing into the mists of time (TurboPascal anyone?). For a set of patterns related to the topics I cover in this blog, go to the search page, type ‘architect’ into the search field and browse through some of the 169 (as of this date) topics found. I was doing just this the other day and came across the ArchitectsDontCode pattern (or possibly antipattern). The problem statement for this pattern is as follows:

The System Architect responsible for designing your system hasn’t written a line of code in two years. But they’ve produced quite a lot of ISO9001-compliant documentation and are quite proud of it.

The impact of this is given as:

A project where the code reflects a design that the SystemArchitect never thought of because the one he came up with was fundamentally flawed and the developers couldn’t explain why it was flawed since the SystemArchitect never codes and is uninterested in implementation details.

Hmmm, pretty damning for System Architects. Just for the record such a person is defined here as being:

[System Architect] – A person with the leading vision, the overall comprehension of how the hardware, software, and network fit together.

The meaning of job titles can of course vary massively from one organisation to another. What matters is the role itself and what the person in that role actually does. It is often the case that any role with ‘architect’ in the title is much denigrated by developers (especially, in my experience, on agile projects), who see such people as an overhead who contribute nothing to a project but reams of documents, or worse UML models, that no one reads.

Sometimes software developers, by which I mean people who actually write code for a living, can take a somewhat parochial view of the world of software. In the picture below their world is often constrained to the middle Application layer; that is to say, they are developing application software, maybe using two or three programming languages, with a fairly clear boundary and set of requirements (or at least requirements that can be agreed fairly easily through appropriate techniques). Such software may of course run into tens of thousands of lines of code and have several tens of developers working on it. There therefore needs to be someone who maintains an overall vision of what this application should do. Whether that is someone with the title of Application Architect, Lead Programmer or Chief Designer does not really matter; it is the fact that they look after the overall integrity of the application that matters. Such a person on a small team may indeed do some of the coding, or at least be very familiar with the current version of whatever programming language is being used.

In the business world of bespoke applications, as opposed to ‘shrink-wrapped’ applications, things are a bit more complicated. New applications need to communicate with legacy software and often require middleware to aid that communication. Information will exist in a multitude of databases and may need some form of extract, transform and load (ETL) and master data management (MDM) tools to get access to and use that information, as well as analytics tools to make sense of it. Finally there will be business processes that exist, or need to be built, which will coordinate and orchestrate activities across a whole series of new and legacy applications as well as manual processes. All of these require software or tooling of some sort and similarly need someone to maintain overall integrity. This I see as being the domain, or area of concern, of the Software Architect. Does such a person still code on the project? Well maybe, but on typical projects I see it is unlikely such a person has much time for this activity. That’s not to say, however, that she doesn’t need some level of (current) knowledge of how all the parts fit together and what they do. No mean task on a large business system.

Finally all this software (business processes, data, applications and middleware) has to be deployed onto actual hardware (computers, networks and storage). Whilst the choice and selection of such hardware may fall to another specialist role (sometimes referred to as an Infrastructure or Technical Architect), there is another level of overall system integrity that needs to be maintained. Such a role is often called the System Architect or maybe Chief Architect. At this level it is possible that the background of such a person has never involved coding to any great degree, so such a person is unlikely to write any code on a project, and quite rightly so! This is often not just a technical role that worries about systems development but also a people role that worries about satisfying the numerous stakeholders that such large projects have.

Where you choose to sit in the above layered model and what role you take will of course depend on your experience and personal interests. All roles are important and each must work together if systems, which depend on so many moving parts, are to be delivered on time and on budget.

How to Avoid the Teflon Architecture Syndrome

So you’ve sent all of your budding architects on TOGAF (or similar) training, hired in some expensive consultants in sharp suits to tell you how to “do architecture” and bought a pile of expensive software tools (no doubt from multiple vendors) for creating all those architecture models you’ve been told about, but for some reason you’re still not getting any better productivity or building the more reliable systems you were expecting from all this investment. You’re still building siloed systems that don’t inter-work, you’re still misinterpreting stakeholder requests and every system you build seems to be “one-of-a-kind”, you’re not “leveraging reuse” and SOA seems to be last year’s acronym you never quite got the hang of. So what went wrong? Why isn’t architecture “sticking” and why does it seem you have given yourselves a liberal coating of Teflon?

The plain fact of the matter is that all this investment will not make one jot of difference if you don’t have a framework in place that allows the architects in your organisation to work together in a consistent and repeatable fashion. If it is to be effective, the architecture organisation needs careful and regular sustenance. So, welcome to my architecture sustenance framework*.

Here’s how it works, taking each of the above containers one at a time:

  • Common Language: Architects (like any professionals) need to speak the same, common language. This needs to be the foundation for everything they do. Common languages can come from standards bodies (UML and SPEM are from the OMG; IEEE 1471-2000 is the architecture description standard from the IEEE) or may be ones your organisation has developed and captured in its own glossary.
  • Community: Communities are where people come together to exchange ideas, share knowledge and create intellectual capital (IC) that can be shared more broadly in the organisation. Setting up an architecture Community of Practice (CoP) where your thought leaders can share ideas is vital to making architecture stick and become pervasive. Beware the Ivory Tower Antipattern however.
  • Tools: If communities are to be effective they need to use tools that allow them to create and share information (enterprise social networking tools, sometimes referred to as Enterprise 2.0). They also need tools that allow them to create and maintain delivery processes and manage intellectual capital of all types.
  • Processes: Anything but a completely trivial system will need some level of system development lifecycle (SDLC) that enables its creation and brings a level of ceremony that the project team should follow. An SDLC brings order to the chaos that would otherwise ensue if people working on the project do not know what role they are meant to perform, what work products they are meant to create or what tasks they are meant to do to create those work products. Ideally processes are defined by a community that understands not only the rules of system development but also what will, and what will not, work in the organisation. Processes should be published in a method repository so that everyone can easily access them.
  • Guidance: Guidance is what actually enables people to do their jobs. Whereas a process shows what to create and when, guidance shows how to create that content. Guidance can take many forms but some of the most common are examples (how did someone else do it), templates (show me what I need to do so I don’t start with a blank sheet every time), guidelines (provide me with a step-by-step guide on how to perform my task) and tool mentors (how can I make use of a tool to perform this task and create my work product). Guidance should be published in the same (method) repository as the process so it is easy to jump between what I do and how I do it.
  • Projects: A project is the vehicle for actually getting work done. Projects follow a process and produce work products (using guidance) to deliver a system. Projects (i.e. the work products they produce) are also published in a repository, though ideally this will be separate from the generic method repository. Projects are “instances” of a delivery process which is configured in a particular way for that project. The project repository stores the artefacts from the project, which serve as examples for others to use as they see fit.
  • Project and Method Repositories: The places where the SDLC and the output of multiple projects are kept. These should be well publicized as the places people know to go to find out what to do as well as what others have done on other projects.

All of the above elements really do need to be in place to enable architecture (and architects) to grow and flourish in an organisation. Whilst these alone are not a guarantee of success, without them your chances of creating an effective architecture team are greatly reduced.

*This framework was developed with my colleague at IBM, Ian Turton.

Sketching with the UML

In his book UML Distilled – A Brief Guide to the Standard Object Modeling Language, Martin Fowler describes three ways he sees UML being used:

  • UML as sketch: Developers use UML to communicate some aspect of a system. These sketches can be used in both forward (i.e. devising new systems) and reverse (i.e. understanding existing systems) engineering. Sketches are used to selectively talk about those parts of the system of interest; not all of the system is sketched. Sketches are drawn using lightweight drawing tools (e.g. whiteboards and marker pens) in sessions lasting anything from a few minutes to a few hours. The notation and adherence to standards is non-strict. Sketching works best in a collaborative environment. Sketches are explorative.
  • UML as a blueprint: In forward engineering the idea is that developers create a blueprint that can be taken and built from (possibly with some more detailed design first). Blueprints are about completeness. The design/architecture should be sufficiently complete that all of the key design decisions have been made, leaving as little to chance as possible. In reverse engineering, blueprints aim to convey detailed information about the system, sometimes graphically. Blueprints require more sophisticated modelling tools rather than drawing tools. Blueprints are definitive.
  • UML as a programming language: Here developers draw UML diagrams that are compiled down to executable code. This mode requires the most sophisticated tooling of all. The idea of forward and reverse engineering does not exist because the model is the code. Model Driven Architecture (MDA) fits this space.

In practice there is a spectrum of uses of UML which I’ve shown below in Figure 1. In my experience very few organisations are up at level 10 and I would say most are in the range 3 – 5. I would classify these as being those folk who use UML tools to capture key parts of the system (either in a forward or reverse engineering way) and export these pictures into documents which are then reviewed and signed-off.

Figure 1

An interesting addition to the ‘UML as sketch’ concept is that at least two vendors that I know of (IBM and Sparx) are offering ‘sketching’ capabilities within their modeling tools. In Rational Software Architect the elements in a sketch include shapes such as rectangles, circles, cylinders, stick figures that represent people, and text labels. Check out this YouTube video for a demo. Unlike other types of models, sketches have only a few types of elements and only one type of relationship between elements. Also, sketches do not contain multiple diagrams; each sketch is an independent diagram. You can however move from a sketch to a more formal UML diagram, or create a sketch out of an existing UML diagram, allowing you to work with a diagram in a more abstract way.

Below in Figure 2 is an example of a sketch for my example Hotel Management System, which I’ve used a few times now to illustrate some architectural concepts, drawn using the current version of Rational Software Architect. I guess you might call this an Architecture Overview, used to show the main system actors and architectural elements.

Figure 2

I guess the ability to sketch with modeling tools could be a bit of a double-edged sword. On the one hand it means sketches are at least captured in a formal modeling environment, which means they can be kept in one centralised repository and can be maintained more effectively. It also means they can potentially be turned into more formal diagrams, thus providing a relatively automated way of moving along the scale shown in Figure 1. The downside might be that sketching is as far as people go, and they never bother to provide anything more formal. I guess only time will tell whether this kind of capability gains much traction amongst developers and architects alike. For my part I would like to see sketching in this way as a formal part of a process which encourages architects and developers to create models using this approach, get them roughly right and then turn them into a more formal and detailed model.

Applying Architectural Tactics

The use of architectural tactics, as proposed by the Software Engineering Institute, provides a systematic way of dealing with a system’s non-functional requirements (sometimes referred to as the system’s quality attributes, or just qualities). These can be both runtime qualities such as performance, availability and security as well as non-runtime qualities such as maintainability, portability and so on. In my experience, dealing with both functional and non-functional requirements, as well as capturing them using a suitable modeling tool, is something that is not always handled very methodically. Here’s an approach that tries to enforce some architectural rigour using the Unified Modeling Language (UML) and any UML-compliant modeling tool.

Architecturally, systems can be decomposed from an enterprise or system-wide view (i.e. covering people, processes, data and IT systems), to an IT system view, to a component view and finally to a sub-component view, as shown going clockwise in Figure 1. These diagrams show how an example hotel management system (something I’ve used before to illustrate some architectural principles) might eventually be decomposed into components and sub-components.

Figure 1: System Decomposition

This decomposition typically happens by considering what functionality needs to be associated with each of the system elements at different levels of decomposition. So, as shown in Figure 1 above, first we associate ‘large-grained’ functionality (e.g. we need a hotel system) at the system level and gradually break this down to finer and finer grained levels until we have attributed all functionality across all components (e.g. we need a user interface component that handles the customer management aspects of the system).

Crucially, from the point of view of deployment of components, we need to have decomposed the system to at least the sub-component level in Figure 1 so that we have a clear idea of the type of each component (i.e. does it handle user input, manage data, etc.) and know how they collaborate with each other in satisfying use cases. There are a number of patterns which can be adopted for doing this. For example the model-view-controller pattern, as shown in Figure 2, is a way of ascribing functionality to components in a standard way, using rules for how these components collaborate. This pattern has been used for the sub-component view of Figure 1.

Figure 2: Model-View-Controller Pattern
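To make the collaboration rules of the pattern slightly more concrete, here is a minimal, purely illustrative Java sketch (the class and method names are my own and are not taken from the hotel system model): the controller receives a user action, updates the model, and asks the view to render from the model.

    // Minimal sketch of the model-view-controller collaboration rules.
    // All names here (CustomerModel, CustomerView, etc.) are illustrative only.
    import java.util.ArrayList;
    import java.util.List;

    class CustomerModel {                       // 'model' - owns the data
        private final List<String> customers = new ArrayList<>();
        void addCustomer(String name) { customers.add(name); }
        List<String> customers() { return List.copyOf(customers); }
    }

    class CustomerView {                        // 'view' - renders the model, never changes it
        void render(CustomerModel model) {
            model.customers().forEach(c -> System.out.println("Customer: " + c));
        }
    }

    class CustomerController {                  // 'controller' - handles input, updates the model
        private final CustomerModel model;
        private final CustomerView view;
        CustomerController(CustomerModel model, CustomerView view) {
            this.model = model; this.view = view;
        }
        void checkIn(String name) {             // a user action arriving at the controller
            model.addCustomer(name);            // controller changes the model...
            view.render(model);                 // ...and asks the view to re-render
        }
    }

    class MvcDemo {
        public static void main(String[] args) {
            CustomerController controller = new CustomerController(new CustomerModel(), new CustomerView());
            controller.checkIn("A. Guest");     // prints "Customer: A. Guest"
        }
    }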

So far we have shown how to decompose a system based on functional requirements and thinking about which components will realise those requirements. What about non-functional requirements though? Table 1 shows how non-functional requirements can be decomposed and assigned to architectural elements as they are identified. Initially non-functional requirements are stated at the whole-system level, but as we decompose into finer-grained architectural elements (AKA components) we can begin to think about how those elements also support particular non-functional requirements. In this way non-functional requirements get decomposed and associated with each level of system functionality. Non-functional requirements would ideally be assigned as attributes to each relevant component (preferably inside our chosen UML modelling tool) so they do not get lost or forgotten.

Table 1
System Element: Hotel System (i.e. including all actors and IT systems).
Non-Functional Requirement: The hotel system must allow customers to check in 24 hours a day, 365 days a year. (Note: this is typically the level of precision at which non-functional requirements are initially stated; further analysis is usually needed to provide measurable values.)

System Element: Hotel Management System (i.e. the hotel IT system).
Non-Functional Requirement: The hotel management system must allow the front-desk clerk to check in a customer 24 hours a day, 365 days a year with a 99.99% availability value.

System Element: Customer Manager (i.e. a system element within the hotel’s IT system).
Non-Functional Requirement: The customer manager system element (component) must allow customer details to be created, read or updated (but not deleted) 24 hours a day, 365 days a year with a 99.99% availability value.

System Element: Customer Manager Interface (i.e. the user interface that belongs to the Customer Manager system element).
Non-Functional Requirement: The customer manager interface must allow customer details to be created, read or updated (but not deleted) 24 hours a day, 365 days a year with a 99.99% availability value.
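To illustrate the point about keeping non-functional requirements attached to the elements that must satisfy them, here is a small, purely illustrative Java sketch (the types and field names are my own invention, not how any particular UML tool stores attributes) mirroring the decomposition shown in Table 1.

    // Illustrative only: each element carries its own NFRs and its child elements,
    // mirroring the decomposition shown in Table 1.
    import java.util.List;

    record ArchElement(String name, List<String> nonFunctionalRequirements, List<ArchElement> children) {}

    class Table1Example {
        public static void main(String[] args) {
            ArchElement customerManagerInterface = new ArchElement(
                "Customer Manager Interface",
                List.of("Create/read/update customer details 24x365 with 99.99% availability"),
                List.of());
            ArchElement customerManager = new ArchElement(
                "Customer Manager",
                List.of("Create/read/update customer details 24x365 with 99.99% availability"),
                List.of(customerManagerInterface));
            ArchElement hotelManagementSystem = new ArchElement(
                "Hotel Management System",
                List.of("Front-desk check-in 24x365 with 99.99% availability"),
                List.of(customerManager));
            System.out.println(hotelManagementSystem);   // the requirement travels with the element
        }
    }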

Once it is understood what non-functional requirement each component needs to support we can apply the approach of architectural tactics proposed by the Software Engineering Institute (SEI) to determine how to handle those non-functional requirements.

An architectural tactic represents “codified knowledge” on how to satisfy non-functional requirements by applying one or more patterns or reasoning frameworks (for example queuing or scheduling theory) to the architecture. Tactics show how (the parameters of) a non-functional requirement (e.g. the required response time or availability) can be addressed through architectural decisions to achieve the desired capability.

In the example we are focusing on in Table 1 we need some tactics that allow the desired quality attribute of 99.99% availability (which corresponds to a downtime of 52 min, 34 sec per year; see the short calculation after the list below) to be achieved by the customer manager interface. A detailed set of availability tactics can be found here, but for the purposes of this example availability tactics can be categorized according to whether they address fault detection, recovery, or prevention. Here are some potential tactics for these:

  • Applying good software engineering practices for fault prevention, such as code inspections, usability testing and so on, to the design and implementation of the interface.
  • Deploying components on highly-available platforms which employ fault detection and recovery approaches such as system monitoring, active failover etc.
  • Developing a backup and recovery approach that allows the platform running the user interface to be replaced within the target availability times.
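As promised above, here is the arithmetic behind the downtime figure, expressed as a small illustrative Java snippet (the class name is hypothetical):

    // Convert an availability target into an annual downtime budget.
    // 99.99% over a 365-day year gives roughly 52 minutes 34 seconds of allowed downtime.
    class DowntimeBudget {
        public static void main(String[] args) {
            double availability = 0.9999;                    // the target from Table 1
            double minutesPerYear = 365 * 24 * 60;           // 525,600 minutes
            double downtimeMinutes = (1 - availability) * minutesPerYear;
            long minutes = (long) downtimeMinutes;           // 52 minutes
            long seconds = Math.round((downtimeMinutes - minutes) * 60); // ~34 seconds
            System.out.printf("Allowed downtime: %d min %d sec per year%n", minutes, seconds);
        }
    }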

As this example shows, not all non-functional requirements can be realised suitably by a component alone; sometimes full realisation can only be achieved when that component is placed (deployed) onto a suitable computer platform. Once we know what non-functional requirements need to be realised by which components, we can then think about how to package these components together to be deployed onto an appropriate computer platform which supports those non-functional requirements (for example a platform that will support 99.99% availability and so on). Figure 3 shows how this deployment can be modelled in UML adopting the Hot Standby Load Balancer pattern.

Figure 3: Deployment View

Here we have taken one component, the ‘Customer Manager’, and shown how it would be deployed with other components (a ‘Room Manager’ and a ‘Reservation Manager’) that support the same non-functional requirements onto two application server nodes. A third UML element, an artefact, packages together like components via a UML «manifest» relationship. It is the artefact that actually gets placed onto the nodes. An artefact is a standard UML element that “embodies or manifests a number of model elements. The artefact owns the manifestations, each representing the utilization of a packageable element”.
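It is worth spelling out why deploying onto two nodes behind a load balancer helps with the availability target. Assuming the nodes fail independently and failover is effectively instantaneous (both simplifying assumptions, and the 99% single-node figure below is purely an example), the availability of the pair is roughly as the following sketch calculates:

    // Rough arithmetic behind deploying onto two nodes behind a load balancer.
    // Assumes independent failures and perfect failover, which real systems only approximate.
    class PairedAvailability {
        public static void main(String[] args) {
            double singleNode = 0.99;                             // e.g. each node alone is 99% available
            double pair = 1 - Math.pow(1 - singleNode, 2);        // an outage needs both nodes down
            System.out.printf("Pair availability: %.4f%%%n", pair * 100);  // prints 99.9900%
        }
    }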

So far all of this has been done at a logical level; that is, there is no mention of technology. However, moving from a logical level to a physical (technology-dependent) level is a relatively simple step. The packaging notion of an artefact can equally be used for packaging physical components; for example, in this case the three components shown in Figure 3 above could be Enterprise Java components or .NET components.

This is a simple example to illustrate three main points:

  1. Architecting a system based on functional and non-functional requirements.
  2. Use of a standard notation (i.e. UML) and modelling tool.
  3. Adoption of tactics and patterns to show how a system’s qualities can be achieved.

None of it is rocket science, but it is something you don’t see done much.

Software Development’s Best Kept Secret

A few people have asked what I meant in my previous entry when I said we should be “killing off the endless debates of agile versus waterfall.” Don’t get me wrong, I’m a big fan of doing development in as efficient a way as possible; after all, why would you want to be doing things in a ‘non-agile’ way! However I think that the agile versus waterfall debate really does miss the point. If you have ever worked on anything but the most trivial of software development projects you will quickly realise that there is no such thing as a ‘one size fits all’ software delivery lifecycle (SDLC) process. Each project is different and each brings its own challenges in terms of the best way to specify, develop, deliver and run it. Which brings me to the topic of this entry, the snappily titled Software and Systems Process Engineering Metamodel or ‘SPEM’ (but not SSPEM).

SPEM is a standard owned by the Object Management Group (OMG), the body that also owns the Unified Modeling Language (UML), the Systems Modeling Language (SysML) and a number of other open standards. Essentially SPEM gives you the language (the metamodel) for defining software and system processes in a consistent and repeatable way. SPEM also allows vendors to build tools that automate the way processes are defined and delivered. Just as vendors have built system and software modeling tools based around UML, so too can they build delivery process modeling tools around SPEM.

So what exactly does SPEM define and why should you be interested in it? For me there are two reasons why you should look at adopting SPEM on your next project.

  1. SPEM separates out what you create (i.e. the content) from how you create it (i.e. the process) whilst at the same time providing instructions for how to do these two things (i.e. guidance).
  2. SPEM (or at least tools that implement SPEM) allows you to create a customised process by varying what you create and when you create it.

Here’s a diagram to explain the first of these.

SPEM Method Framework
The SPEM Method Framework represents a consistent and repeatable approach to accomplishing a set of objectives based on a collection of well-defined techniques and best practices. The framework consists of three parts:
  • Content: represents the primary reusable building blocks of the method that exist outside of any predefined lifecycle. These are the work products that are created as a result of roles performing tasks.
  • Process: assembles method content into a sequence or workflow (represented by a work breakdown structure) used to organise the project and develop a solution. Process includes the phases that make up an end-to-end SDLC, the activities that phases are broken down into as well as reusable chunks of process referred to as ‘capability patterns’.
  • Guidance: is the ‘glue’ which supports content development and process execution. It describes techniques and best-practice for developing content or ‘executing’ a process.

As well as giving us the ‘language’ for building our own processes, SPEM also defines the rules for building those processes. For example, phases consist of other phases or activities, activities group tasks, tasks take work products as input and output other work products, and so on.
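To make those rules a little more tangible, here is a deliberately simplified Java sketch of the structure just described. It is not the normative SPEM 2.0 metamodel (which uses different class names and is far richer), and the element names in the example are also just illustrative:

    // Simplified illustration of the composition rules described above; not the actual SPEM 2.0 metamodel.
    import java.util.List;

    record WorkProduct(String name) {}
    record Role(String name) {}
    record Task(String name, Role performer, List<WorkProduct> inputs, List<WorkProduct> outputs) {}
    record Activity(String name, List<Task> tasks) {}                              // activities group tasks
    record Phase(String name, List<Phase> subPhases, List<Activity> activities) {} // phases nest phases or activities
    record DeliveryProcess(String name, List<Phase> phases) {}                     // an end-to-end SDLC

    class SpemSketch {
        public static void main(String[] args) {
            WorkProduct vision = new WorkProduct("Vision");
            WorkProduct useCaseModel = new WorkProduct("Use Case Model");
            Role analyst = new Role("Analyst");
            Task outlineRequirements = new Task("Outline requirements", analyst,
                    List.of(vision), List.of(useCaseModel));        // tasks consume and produce work products
            Activity defineScope = new Activity("Define scope", List.of(outlineRequirements));
            Phase inception = new Phase("Inception", List.of(), List.of(defineScope));
            DeliveryProcess process = new DeliveryProcess("My customised process", List.of(inception));
            System.out.println(process.name() + " has " + process.phases().size() + " phase(s)");
        }
    }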

This is all well and good, you might say, but I don’t want to have to laboriously build a whole process every time I want to run a project. This is where the second advantage of using SPEM comes in. A number of vendors (IBM and Sparx to name two) have built tools that not only automate the process of building a process but also contain one or more ‘ready-rolled’ processes to get you started. You can either use those ‘out of the box’, extend them by adding your own content, or start from scratch (not recommended for novices). What’s more, the Eclipse Foundation has developed an open source tool, called the Eclipse Process Framework (EPF), that not only gives you a tool for building processes but also comes with a number of existing processes, including OpenUP (an open version of the Rational Unified Process) as well as Scrum and DSDM.

If you download and install EPF together with the appropriate method libraries you can use these as the basis for creating your own processes. Here’s what EPF looks like when you open the OpenUP SDLC.

EPF and OpenUP

The above view shows the browsing perspective of EPF; however, there is also an authoring perspective which allows you not only to reconfigure a process to suit your own project but also to add and remove content (i.e. roles, tasks and work products). Once you have made your changes you can republish the new process (as HTML) and anyone with a browser can then view the process together with all of its work products and, most crucially, the associated guidance (i.e. examples, templates, guidelines etc.) that allows you to use the process in an effective way.

This is, I believe, the true power of using a tool like EPF (or IBM’s Rational Method Composer which comes preloaded with the Rational Unified Process). You can take an existing SDLC (one you have created or one you have obtained from elsewhere) and customise it to meet the needs of your project. The amount of agility and number of iterations etc that you want to run will depend on the intricacies of your project and not what some method guru tells you that you should be using!

By the way for an excellent introduction and overview of EPF see here and here. The Eclipse web site also contains a wealth of information on EPF. You can also download the complete SPEM 2 specification from the OMG web site here.

Architecture Drill Down in the UML

Solution Architects need to create models of the systems they are building for a number of reasons:

  • Models help to visualise the component parts, their relationships and how they will interact.
  • Models help stakeholders understand how the system will work.
  • Models, defined at the right level of detail, enable the implementers of the system to build the component parts in relative isolation provided the interfaces between the parts are well defined.

These models need to show different amounts of detail depending on who is looking at them and what sort of information you expect to get from them. Grady Booch says that “a good model excludes those elements that are not relevant to the given level of abstraction”. Every system can be described using different views and different models. Each model should be “a semantically closed abstraction of the system” (that is, complete at whatever level it is drawn). Ideally models will be both structural, emphasizing the organization of the system, and behavioral, emphasizing the dynamic aspects of the system.

To support different views and allow models to be created at different levels of abstraction I use a number of different diagrams, created using the Unified Modeling Language (UML) in an appropriate modeling tool (e.g. Rational Software Architect or Sparx Enterprise Architect). Using a process of “architecture drill-down” I can get both high level views as well as detailed views that are relevant to the level of abstraction I want to see. Here are the views and UML diagrams I create.

  • Enterprise View (created with a UML Package diagram). This sets the context of the whole enterprise and shows actors (users and other external systems) who interact with the enterprise.
  • System View (Package diagram). This shows the context of an individual system within the enterprise and shows internal workers and other internal systems.
  • Subsystem View (Package diagram). This shows the breakdown of one of the internal systems into subsystems and the dependencies between them.
  • Component View (Component diagrams and Sequence diagrams). This shows the relationships between components within the subsystems, both static dependency type relationships (through the UML Component diagram) as well as interactions between components (through the UML Sequence diagram).
  • Component Composition View (Composite Structure diagram). This shows the internal structure of a component and the interfaces it provides.

Note that a good tool will link all these together and ideally allow them to be published as HTML, allowing people without the tool to use them and also navigate through the different levels. Illustrative examples of the first three of the diagrams mentioned above are shown below. These give increasing levels of detail for a hypothetical hotel system.

Enterprise View
System View
Subsystem View

In the actual tool, Sparx Enterprise Architect in this case, each of these diagrams is linked, so when I click on the package in the first it opens up the second, and so on. When published as HTML this “drill-down” is maintained as hyperlinks, allowing for easy navigation and review of the architecture.

10 Things I (Should Have) Learned in (IT) Architecture School

Inspired by this book, which I discovered in the Tate Modern book shop this week, I don’t (yet) have 101 things I can claim I should have learned in IT Architecture School, but these would certainly be my 10:

  1. The best architectures are full of patterns. This comes from Grady Booch. Whilst there is an increasing need to be innovative in the architectures we create, we also need to learn from what has gone before. Basing architectures on well-tried and tested patterns is one way of doing this.
  2. Projects that develop IT systems rarely fail for technical reasons. In this report the reasons for IT project failures are cited, and practically all of them are down to human (communication) failures rather than real technical challenges. Learning point: effective IT architects need to have soft (people) skills as well as hard (technical) skills. See my thoughts on this here.
  3. The best architecture documentation contains multiple viewpoints. There is no single viewpoint that adequately describes an architecture. Canny architects know this and use viewpoint frameworks to organise and categorise these various viewpoints. Here’s a paper that some IBM colleagues and I wrote a while ago describing one such viewpoint framework. You can also find out much more about this in the book I wrote with Peter Eeles last year.
  4. All architecture is design but not all design is architecture. Also from Grady. This is a tricky one and alludes to the thorny issue of “what is architecture” and “what is design”. The point is that the best practice of design (separation of concerns, design by contract, identification of clear component responsibilities etc.) is also the practice of good architecture; however, architecture’s focus is on the significant elements that drive the overall shape of the system under development. For more on this see here.
  5. A project without a system context diagram is doomed to fail. Quite simply the system context bounds the system (or systems) under development and says what is in scope and what is out. If you don’t do this early you will spend endless hours later on arguing about this. Draw a system context early, get it agreed and print it out at least A2 size and pin it in highly visible places. See here for more discussion on this.
  6. Complex systems may be complicated but complicated systems are not necessarily complex. For more discussion on this topic see my blog entry here.
  7. Use architectural blueprints for building systems but use architectural drawings for communicating about systems. A blueprint is a formal specification of what is to be. This is best created using a formal modeling language such as UML or Archimate. As well as this we also need to be able to communicate our architectures to people who are not (or are at best only semi-) IT-literate (often the people who hold the purse). Such communications are better done using drawings, not created using formal modeling tools but done with drawing tools. It’s worth knowing the difference and when to use each.
  8. Make the process fit the project, not the other way around. I’m all for having a ‘proper’ software delivery life-cycle (SDLC) but the first thing I do when deploying one on a project is customise it to my own purposes. In software development, as in gentleman’s suits, there is no “one size fits all”. Just as you might think you can pick up a suit at Marks and Spencers that fits perfectly, you can’t; and you also cannot take an off-the-shelf SDLC that perfectly fits your project. Make sure you customise it so it does fit.
  9. Success causes more problems than failure. This comes from Clay Shirky’s new book Cognitive Surplus. See this link at TED for Clay’s presentation on this topic. You should also check this out to see why organisations learn more from failure than success. The point here is that you can analyse a problem to death and not move forward until you think you have covered every base, but you will always find some problem or another you didn’t expect. Although you might (initially) have to address more problems by not doing too much up-front analysis, in the long run you are probably going to be better off. Shipping early and benefitting from real user experience will inevitably mean you have more problems, but you will learn more from these than trying to build the ‘perfect’ solution and running the risk of never shipping anything.
  10. Knowing how to present an architecture is as important as knowing how to create one. Although this is last, it’s probably the most important lesson you will learn. Producing good presentations that describe an architecture and are targeted appropriately at stakeholders is probably as important as the architecture itself. For more on this see here.