A fire hose of waste: full article

Dec 22, 2011

UK Government IT delivery is broken and the current supplier model doesn’t work

By Andy de Vale & Kelvin Prescott

Having worked in central government ICT for some years, and observed what appeared to be endemic waste flowing out of ICT projects in a range of Departments and Agencies, I started challenging people with the statement above.

Two years on, I have put this idea of fundamental failings in ICT procurement and delivery to Government colleagues, delivery team members, leaders, directors, auditors, cross-government architecture teams, CTOs and CIOs.

As yet, no-one has disagreed. People’s response has been one of resignation and acceptance. Indeed, many have supported the notion with their own examples or complaints at the hopelessness of the situation. One CIO likened the current system to “standing in front of a fire hose of waste” commenting that it took all his effort not to get “knocked off his feet”.

We have become used to seeing headlines of significant Public Sector ICT and procurement failure in the news, resulting in hundreds of millions or billions of pounds of waste. It would be understandable to take the view that these massive projects, where huge volumes of taxpayers’ money are spent for a minimal return, represent the lion’s share of waste in the sector.

Unfortunately, I believe the reality is somewhat worse than that. For a range of reasons, explored in this article, an approach to ICT procurement and delivery has evolved in the Public Sector that is spectacular in its ability to generate and lock in financial waste, while at the same time driving people to behave in ways that compound the problem. In addition to the massive failures reported, waste and ICT failure are routine in small, medium and large government ICT projects.

Below are some examples of the kind of waste that colleagues and I have observed over the past few years. Bear in mind that these examples, and many more like them, are drawn from the experience of only a handful of ICT professionals:
•    Change costs within existing contracts and projects that are consistently (across departments) between double and ten times what seasoned public sector ICT professionals consider reasonable.
•    Shocking costs locked in to contracts. In one example, any software release, regardless of the complexity of the change (i.e. from a contractual perspective it could be as little as a single line of code), costs the department in question £750,000.
•    Aggregated redundant solutions. This is where software or applications are no longer appropriate or needed, but are left in place simply due to the career-limiting nature of pointing this kind of redundancy out. Sometimes at a cost of millions per year, and often causing re-work and additional cost due to their presence.
•    Supplier requests to staff to double quotes for services; suppliers halving quotes when they hear competition is going to be involved.
•    Diversion of funds, where money is diverted into ‘pet projects’ of civil servants rather than used for the original programme outcomes, or where project sponsors are convinced by suppliers of the necessity of expensive but unnecessary solutions.
•    Zero-value projects, where no significant value is delivered to the taxpayer or to client departments, with costs often running to many millions.
•    Project failures themselves, with cost and time over-runs, and functional reduction.

These kinds of failures are realised against the backdrop of a delivery bureaucracy and governance designed to assure value for money and protect against risk. That’s not to say that there aren’t significant successes in government ICT. Thanks to dedicated and capable individuals and teams there are, but these appear to be a minority, achieved despite the system rather than as a natural product of it.

Nothing New
Of course, this is nothing new. The research exposing systematic failure in ICT delivery in general and the public sector in particular is both extensive and compelling.

The Standish Group published a report in 2000 that categorised success rates of ICT projects in the US by sector. The retail sector came out best with a 59 per cent success rate, followed by the finance sector at 32 per cent, manufacturing at 27 per cent, and finally government with an 18 per cent success rate.

The same study was repeated in 2004 with similar results. Similarly, the British Computer Society and Royal Academy of Engineering found in 2004 that 84 per cent of public sector ICT projects resulted in some form of failure. One study of ICT in the UK public sector estimated that 20 per cent of expenditure was wasted, while a further 30 to 40 per cent led to no perceivable benefits.

The experience I’ve had, where ICT professionals are resigned to the unsatisfactory status quo but view the issue as too complex to resolve, echoes the research of Mahaney and Lederer (1999): ‘because this problem has endured for 3 decades, many IS professionals have accepted failure as inevitable’.

More recently there have been high-profile calls for change, reflecting the ongoing nature of the problems evidenced above. The System Error report, published by the Institute for Government in March of this year, characterised the situation as follows: “Despite costing approximately £16bn per year, government IT seems locked in a vicious circle: struggling to get the basics right and falling further and further behind … Most attempts to solve the problems with government IT have treated the symptoms rather than resolved the underlying system-wide problems … Most government IT therefore remains trapped in an outdated model”.

The Public Administration Select Committee report on Government IT, published in July this year, was damning of the situation, describing it as a “recipe for rip-offs” and calling for a new approach: “current arrangements have led to a perverse situation in which governments have wasted an obscene amount of public money. Benchmarking studies have demonstrated that government pays substantially more for IT when compared to commercial rates. The Government needs to break out of this relationship.”

In ‘Dangerous Enthusiasms’ (2006), Gauld and Goldfinch conclude from a range of research that ‘While the exact numbers are uncertain, and depend to some extent on how success is measured, something like 20 to 30 per cent of developments are total failures, with projects abandoned. Around 30 to 60 per cent are partial failures, involving time and cost overruns and/or other problems.’

This would suggest a conservative estimate of public money wasted due to ICT failure well in excess of £30bn over the last 10 years. A staggering amount. Some believe it is significantly more than that: an investigation by the Independent newspaper in January reported that the cost of the 10 most notorious IT failures alone (£26bn) is equivalent to more than half of the budget for Britain’s schools last year. This only considers the most dramatic failings, rather than the many unreported failures.

This is not a “maybe we can improve things a little” problem. This is a systemic failing, an order-of-magnitude, multi-billion pound problem in UK Government that exists right now. As John Seddon has put it: “It seems that the protagonists play to everyone’s ‘no-brainer’ attitude with promises of jam tomorrow and by the time things have gone wrong, explanation, rationalisations and blame replace cash as outcomes.”

While UK central government is taking steps toward reform in response to recent reports, there is still significant risk that these actions will fail to remedy the situation, as previous reforms have.

PROCUREMENT CONSIDERATIONS

One of the areas repeatedly raised as fundamental in discussing ICT failures is the role of ICT procurement and contracts. Looking at these brings to mind a range of reasons that many consider contributory to ICT failure:

Poorly defined and managed scope. Public sector organisations are very complex, and there are usually multiple initiatives going on in parallel at any one time. The scope of a procurement is usually fixed at the outset of the competition, based on the needs and structure of the customer at the time.
However, the scope and requirement will always change (not just in IT projects, but also in other complex areas from military aircraft to hospitals).

Because of the consensus-based approach to decision making used in public sector organisations, changes to the scope can lag months or years behind the associated changes to the contract or procurement. As a result, contracts are signed that deliver services that the users just don’t need any more, or which don’t fit with the rest of the organisation.

A sedimentary approach to contracts. What this means is that public bodies use model contracts for different types of services. When something goes wrong with a particular high profile project, one of the first things that is done is to add further clauses and protections into the model contracts.

These protections seek to allocate more of the risk in a project onto the supplier, and to mitigate the failures of management that occurred in the customer organisation. Over time, these changes act like sediment, building up and adding more and more cost into supplier prices (suppliers add a premium to their charges to cover the additional risks that have been added to the contract), and incentivising them to be less and less flexible and innovative (since if they do anything that is not in the letter of the contract, they expose themselves to the risk of challenge).

It is rare indeed for a public customer to say “if I took out all of these contractual provisions, how much would you take off the price?”. That’s not to say that these additions are effective: “no contract, even if it runs for thousands of pages, can hope to see and control all aspects of ISD (Collins and Bicknell 1997; Dale and Goldfinch 2002). The complex and uncertain causes of failure – if and when failure does occur – and the often remarkably complex contracts that have been developed in (a usually vain) attempt to control the development and provide sanctions, mean that in the face of failure, litigation can be costly and the results uncertain” (Dangerous Enthusiasms 2006).

Reluctance to cancel projects when it becomes apparent that they are no longer feasible. The FireControl project is a case study of this issue: it was apparent (and widely reported) that it was undeliverable from virtually the first day after it was signed, a view shared by a number of the supplier personnel involved. However, it was only actually cancelled after the money had been spent, rather than when it became clear that it was a waste of that money.

Public sector procurement teams tend to be specialists in process (e.g. compliance with regulations and policies) rather than results. They are incentivised to award contracts only when the procedural requirements have been met, irrespective of whether the consequences of delay are far greater than the cost and risk of non-compliance.
Compare and contrast the performance metrics that would be used in a private sector procurement team with those of a public sector procurement team. Compliance is treated as far more important than issues like flexibility, timeliness and delivery of improvements in performance or reductions in cost.

FOUR HORIZONS OF WASTE
People often confuse symptoms and causes. The symptoms may be prolonged procurement processes, inflexible contracts, excessive charging by suppliers; but that doesn’t mean that the cause is poor procurement, or poor procurement practice, or that procurement is at the root of ICT failure in government. You have to step back and look at the wider picture, the way in which the business cases and projects are articulated and managed, and the way in which civil servants are measured and rewarded, to find the root causes of most major procurement disasters, and ICT failure in general. In the current approach, any ICT project has broadly four critical areas or ‘horizons’ where there is significant opportunity to lock in or avoid waste.

Pre-Sponsorship Horizon
Pre-Sponsorship refers to the way senior people approve or sponsor a project. Until very recently (with the introduction of the Major Projects Authority as part of the Efficiency and Reform Group) this was entirely through the medium of promotion and positive testing.

It is clear to me, both from personal experience and the various reports cited in this article, that there is currently a catastrophic lack of evidence in government ICT. This means that any decisions made with regard to IT transformation, or indeed any service delivery, are effectively being made blind.

This leads projects to be approved based on limited or inaccurate information, and key spending decisions to be made on the basis of opinion-based persuasion; the FireControl project cited above is a prime example. This makes it very difficult for leaders to make appropriate decisions.

Regardless of how effective a team is put in place, or how spectacularly well a project is delivered and managed, if the mandate is fundamentally flawed and doesn’t lead to value then the entire project, and the money spent on it, is effectively wasted.

Pre-Build Horizon
This is what happens between senior sponsorship of the project or programme and the start of the ICT build. It is usually at this point that the project/programme delivery organisation, comprising a range of people, is put in place to articulate in some detail what the project is all about.
Usually, this new project organisation as a whole has only a vague understanding of why the project is being commissioned, and is almost completely divorced from the service organisation that will be using the system once the project is completed. Very often there is no understanding at this point of what success is.

From this context we task them with determining the appropriate solution for the delivery of the project. The critical issue here is that a solution, or range of solutions, can be put forward and supported in a comprehensive business case that does not serve the original mandate (although this isn’t necessarily appreciated at this point). These solutions will then be measured as the success criteria for the project. This leads to a situation where a project can finally deliver and be seen as a well-executed project delivering value for money, yet fundamentally fail to deliver any benefit for anyone.

This is a huge cause of hidden waste in the public sector: projects declared a success or partial success deliver against a solution, but not against the value envisaged for the operational service or taxpayer segment that merited the release of funds in the first instance. Clearly, if the solution decided on for a project is divorced from the original value to be delivered, then any further money spent on it is effectively wasted.

Build Horizon
Build refers to what happens from the point that people believe they know what needs to be created from an ICT perspective, that is to say hardware is purchased, software is bought and applications commissioned.

The majority of advice on how to be more successful in ICT projects is focused on this area. Unfortunately, in many cases the damage has already been done in the Pre-Sponsorship and Pre-Build horizons; that is to say, the original vision of value has been lost and build activities are effectively waste, building a low- or no-value solution. This gives rise to some referring to such improvements as “doing the wrong things righter”, or in the case of Agile development, “doing the wrong things faster” (John Seddon).

While I agree with this statement, there is a great deal of room for improvement in the build horizon, with a resultant huge saving attached. Beneath the cover of many suppliers are huge amounts of over-specialisation, over-management, poor technical practice and out-of-date approaches to ICT delivery that can result in staggering overcharges that are accepted by government as the status quo. This is, in many cases, the direct result of government creating a system that encourages a particular form of engagement from large suppliers.

Following are two (of many) short anecdotal examples:

I have witnessed people moving from software engineering positions in small, best practice companies to large companies with primarily a public sector focus and being given 5-10 times as long to complete comparable work.

I have seen a supplier manufacture a project (rather than minutely change the scope of an existing one) over a two-year period and attach a £1.25m cost, utilising a 15-person delivery team. This should have involved no more than a few days’ effort plus testing and deployment.

Operational Horizon

This is Post-Launch: when the system is operating as a live service.

This is when, in most cases using the current delivery approach, the rubber hits the road and the value of the ICT delivery is realised (or otherwise) by the users of the system, whether it be the taxpayer directly or through an operational service organisation.

Unfortunately, this is also shortly before the project delivery team is disbanded, often able to claim success (based on time, cost and quality of delivery of a solution), regardless of the value actually delivered to the taxpayer.

A key effect of this approach (with notable exceptions) is that there is little interest in evidencing the success of a project beyond launch. If a senior project sponsor has invested millions, waited months or years for delivery, and celebrated success, how keen do you think people are to investigate whether the project has delivered real value?

The cost of making changes to a system after launch skyrockets, to around 100 times what it would have been during the project, and yet the operational life of a system can be many, many times that of the project that created it. This isn’t good news for the people using the system if it doesn’t do what it needs to.

Often in government ICT, if a journey of a thousand miles starts with a single step, we spend our money on the first step, launch, and leave the rest to the operational service.

OPPORTUNITIES FOR IMPROVEMENT
For each of the areas of waste outlined above there are drivers that are key to the ongoing failure of ICT projects and programmes. Without addressing these, any significant efforts at reform will find it very difficult to succeed.

A systematic and evidence based approach

We should put evidence at the heart of how we develop services.

In addition to suitable empirical evidence, equally valuable is evidence from the people who have the most relevant experience, i.e. people who are currently doing the work, or closely related work. Many of the spectacular ICT failures to date have been initiated against the backdrop of strong opposition from people who would have had to deal with the resultant systems on a daily, operational basis.

In particular we should:

•    Ensure that new programmes and projects are evidence-driven, and develop a culture that doesn’t allow sponsorship of initiatives to occur without independent evaluation against the evidence.
•    Establish ongoing and current evidence of the spend and value flows in government as a system. Huge amounts of waste are attributable to making decisions without appropriate information. It is important to note that the availability of this evidence in itself puts government in a powerful position of knowing where to focus in terms of reducing waste and increasing the value delivered through its services. Decision making then becomes less an activity of big one-off reviews and ideas, and more a natural response to the actions invited by the available data.
•    Take into account the evidence regarding large systems development; there is substantial research demonstrating the folly of large-scale IT developments: “The success rate varied dramatically by total project budget: at less than US$750,000 the success rate was 55 per cent; with budgets over $10 million, no project was successful” (SIMPL/NZIER 2000). “There is little evidence that consultants, IT companies, and public agencies, or many practitioners and academics, have learned one of the key lessons of IS failure – that large and ambitious projects should be treated with great caution or avoided altogether.” (Dangerous Enthusiasms 2006)
•    Understand project and systems delivery economics in terms of flow, rather than scale. The capability to rapidly demonstrate value through the delivery of small changes is essential to building the right thing. Most current government environments are crippled by the lack of this capability and the huge cost of pushing changes through environments – it can take months, or in some cases over a year, before people can determine whether what is being built in any way relates to what was requested. This lack of capability is often covered up by large IT suppliers claiming economies of scale from large batch sizes, a position which is not backed up by research or evidence (which small batch sizes and economies of flow are).
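The economics of flow versus scale in the last point can be made concrete with a toy model (all numbers here are illustrative assumptions, not drawn from any real programme): a project builds twelve units of functionality at one unit per month, and each unit returns one “value point” per month once it is live.

```python
# Toy model of batch-size economics (illustrative assumptions only):
# 12 units of functionality, built at one unit per month; each unit
# returns one value point per month from the moment it goes live.

def value_over_horizon(release_months, horizon):
    """Total value points accrued by `horizon`, given each unit's go-live month."""
    return sum(max(0, horizon - m) for m in release_months)

HORIZON = 24  # months over which value is counted

big_batch = value_over_horizon([12] * 12, HORIZON)         # one release at month 12
small_batches = value_over_horizon(range(1, 13), HORIZON)  # release each unit as built

print(big_batch, small_batches)  # 144 vs 210
```

Even in this crude sketch, releasing in small batches yields roughly 46 per cent more value over the same horizon, purely because value starts flowing earlier; the gap widens further once the feedback from early releases is used to correct what gets built next.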

Measure success based on taxpayer value
We should measure the success of a project on the value delivered to the taxpayer.
Our current approach to evaluating success is far too solution-centric. It is equivalent to deciding that since I’m cold, I need to build a central heating system, then evaluating success based on the development of that system. All seems reasonable until you consider that I was standing in a field – we can quickly anchor on evaluating success against a solution that delivers no value, and lose sight of the original focus of the project.

By the time many projects are delivered (in some cases before they start), the original vision of value to the taxpayer is long forgotten, and senior sponsors, told what has been delivered, have no real way of knowing whether the project has been a success.

•    Ensure that we measure success in terms of delivered value to the taxpayer.
•    Ensure that projects are established and governed on the basis of taxpayer value and concise, outcome-based metrics. Specifically I would advocate, as suggested by Tom Gilb, that no project should be sponsored until there is a one-page, measurable articulation of who is going to benefit from a system and what value they are going to get out of it.
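As a sketch of what such a one-page articulation might look like in data form (the field names, example figures and gating rule below are my own hypothetical illustration, not Gilb’s notation):

```python
from dataclasses import dataclass

# Hypothetical sketch of a one-page, measurable value specification;
# field names and figures are illustrative only.
@dataclass
class ValueSpec:
    beneficiary: str   # who is going to benefit from the system
    value: str         # what value they are going to get out of it
    metric: str        # how that value will be measured
    baseline: float    # the measured level today
    target: float      # the level that counts as success

spec = ValueSpec(
    beneficiary="Citizens renewing a licence online",
    value="Faster end-to-end renewal",
    metric="Median renewal time (minutes)",
    baseline=45.0,
    target=10.0,
)

def sponsorable(s: ValueSpec) -> bool:
    # A crude gate: no sponsorship without a named beneficiary, a metric,
    # and a target that measurably improves on today's baseline.
    return bool(s.beneficiary and s.metric) and s.target < s.baseline

print(sponsorable(spec))  # True
```

The point is not the particular fields but the discipline: if any entry cannot be filled in measurably, the project has no articulated value and should not proceed.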

So many of the negative and non-productive behaviours in ICT delivery have their root in this simple fact: people don’t clearly understand why they are building a system, and, very importantly, neither do their seniors, who are measuring their success on the time and money taken to deliver whatever they have been told to deliver.

This results in a hierarchy of people focused on delivering something, rather than asking questions about how to realise value, and very little incentive to put your head (or career) above the parapet to question it.

You don’t know what you’re doing – involve the people who do

Previously I have been known to describe ICT projects as “the wrong people, at the wrong time, building the wrong things.”
The notion that a project organisation can exist in isolation from the people who will ultimately benefit from the system, and can divine the appropriate solution from such a position of ignorance at the start of a project, has repeatedly been researched and evidenced as fallacy.

•    Involve the right people: people who know most about the system – users and front-line service workers, people who are part of the “business”. These people shouldn’t be involved as token additions to the programme, but be at the core of determining what is appropriate to build.
•    Involve the right people: that is to say, fewer people. ICT projects centre on the ability to share a common understanding of often complex concepts and situations, yet the current approach we often take is to have as many people as possible communicating across multiple organisational, physical and professional boundaries. This doesn’t help, as the evidence around large projects confirms.
•    Involve IT at the right time: when there is sufficient evidence of what is needed. This may require operational prototyping (that is, building a small operational service with people rather than IT to understand the problem) or other prototyping and improvement to arrive at a starting point for the build.
•    Involve IT at the right time: on an ongoing basis, the first release of an ICT solution is the start of a live service, not the end of a project – projects should provision ongoing change to operational services and not end at the initial go-live.
•    Build the right things: use agile, iterative approaches, confirming on an ongoing basis with the people who will need it that what is being built is useful.

CONCLUSION

Research has consistently reported, over decades, that the approach to delivering ICT in the public sector is wasteful and ineffective on a staggering scale. The current work being undertaken to address the issue has far wider support from UK Central Government than at any other time, and many initiatives are doing very good work to try and deliver more for less, do things better, and resolve some of the underlying issues.

However, very little of the work being suggested tackles any of the fundamental issues head on – the issues that make successes in Government ICT a minority. This runs the risk of things improving just a little. Scotland has a huge opportunity to lead the way, and to dramatically demonstrate a transformation of ICT delivery capability through its public services and ICT reform commitments:
•    put evidence at the heart of what we do, in terms of choosing and running projects.
•    measure success in terms of value delivered rather than solutions implemented.
•    open the market to a new way of working, rather than rely completely on current partners  and delivery models.
•    stop running projects around the features that IT can provide us with. Focus on how to best create and support exceptional services for people who use them, then find out where IT fits in.

Attribution
Andy de Vale is a founding member of the Agile Delivery Network, a not-for-profit that advocates agile and lean approaches to government ICT delivery and provides a delivery capability through a network of SME suppliers. Andy currently works as a Principal Consultant with Emergn.
Kelvin Prescott of Newbury Consulting, is a public and private sector procurement and outsourcing authority. Kelvin is also a community member of the Agile Delivery Network.

With thanks for discussion and ideas on the above: all the Agile Delivery Network members, particularly Paul Wilson from EdgeCase and Matt Wynne, John Jones from Thumbswood; John Seddon, Tom Gilb, Phil Black from Emergn, Gus Power and Simon Baker from Energized Work, and Jerrett Myers from the IfG.
