By Susan Chadwick
Sometimes it feels as if there is a never-ending storm of publicity surrounding major public sector ICT projects that have not delivered, and precious little comment on those projects that do deliver on time, to quality and under budget.
Sadly this is not a new phenomenon. Failure and cancellation appear to occur, more often than not, late in the project lifecycle and usually after significant spend.
What has to change so that this does not continue to happen? And if things have changed, why does it still happen?
The best headlines are reserved for the really big ICT projects that have not delivered as the cost of failure is so significant. Clearly, however, all ICT projects should be expected to have strong governance, risk management and evaluation measures set against quality gates throughout the project lifecycle. This applies regardless of project or delivery organisation size.
Ultimately the public sector organisation is the project owner and has responsibility to apply governance and manage risk on an ongoing basis, using objective measures and metrics established at project initiation. This responsibility stands regardless of the experience, size or reputation of the organisation being commissioned to deliver the system.
Whilst it is always difficult to raise a ‘red card’ on a project, it is indefensible not to challenge when delays and major issues become apparent: without robust intervention it will not ‘turn out all right in the end’. It is equally unacceptable for no controls and measures to be in place to alert the project owner when an issue arises or the project starts to slip.
Public Sector ICT projects are not a walk in the park. They are complex and demanding and there will always be challenges and issues. If I am at a project board meeting for a major project and key risks and issues are raised and discussed openly I feel much more comfortable than if I just get reports that ‘all is fine’.
The key measure is how and when challenges and problems are identified, and whether they are addressed effectively. Some elements to consider are:
Clear definition of requirements – the majority of software errors can be traced back to poorly defined requirements, so a small investment in requirements testing can yield significant benefit;
Defined acceptance criteria which clearly establish on what basis deliveries will be approved and signed off so everyone knows what is expected at each stage;
Clear governance and risk management on behalf of the public sector organisation, with robust measures in the form of objective quality gates which are applied regularly throughout the project lifecycle. This includes a clear definition of what will happen if a quality gate is not met. This enables the risk management process to take on the mantle of governance, rather than relying on individuals having to raise their hands and call foul;
A clear focus on the critical role of whole lifecycle testing, from requirements and other static testing through to system testing and user acceptance.
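The objective quality gates described above can be made concrete as simple pass/fail checks against metrics agreed at project initiation, with a defined consequence when a gate is not met. The sketch below is purely illustrative: the metric names and thresholds are hypothetical assumptions, not prescribed values, and in practice they would be agreed between the project owner and supplier at initiation.

```python
# Illustrative quality-gate check. Metric names and thresholds are
# hypothetical examples; real values would be agreed at project initiation.

GATE_CRITERIA = {
    "requirements_signed_off_pct": 100,  # all requirements approved
    "open_severity1_defects_max": 0,     # no critical defects open
    "test_pass_rate_pct_min": 95,        # minimum pass rate at this stage
}

def evaluate_gate(metrics: dict) -> tuple[bool, list[str]]:
    """Return (passed, reasons): an objective result that triggers a
    pre-agreed action, rather than relying on individuals to call foul."""
    failures = []
    if metrics.get("requirements_signed_off_pct", 0) < GATE_CRITERIA["requirements_signed_off_pct"]:
        failures.append("requirements not fully signed off")
    if metrics.get("open_severity1_defects", 0) > GATE_CRITERIA["open_severity1_defects_max"]:
        failures.append("severity-1 defects still open")
    if metrics.get("test_pass_rate_pct", 0) < GATE_CRITERIA["test_pass_rate_pct_min"]:
        failures.append("test pass rate below threshold")
    return (not failures, failures)

# A gate evaluated at the end of system testing, for example:
passed, reasons = evaluate_gate({
    "requirements_signed_off_pct": 100,
    "open_severity1_defects": 2,
    "test_pass_rate_pct": 97,
})
# Here the gate fails because severity-1 defects remain open, so the
# pre-agreed escalation applies regardless of who is in the room.
```

The point of encoding the criteria this way is that the outcome is objective: the risk management process, not an individual's willingness to raise a hand, determines whether the project proceeds past the gate.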
In terms of these elements another key factor is independence. Testing in holistic terms must be an intrinsic part of the project lifecycle, running throughout that lifecycle. It is not an element that comes in at the end of the process only, but can provide assurance at all stages. Yet in many projects testing continues to be carried out by the organisation which is also delivering the software development. This does not seem to be the most robust form of risk management and creates a conflict of interest: you should not be marking your own work (albeit this may have meant I would have awarded myself a pass the first time I sat my driving test!).
Independent testing is a critical element of the project lifecycle, supported by early engagement so that it is part of the overall construction process and has a positive impact on project progress. Testing is not just about finding defects during final stages. When initiated early it supports quality assurance and can drive costs down and quality up by finding defects from the outset and raising confidence. This approach should be leveraged by the public sector to optimise outcomes, manage risk and deliver independent assurance.
In the worst case of failure, litigation might be the ‘solution of final resort’, but it is really no solution at all: it costs everyone, can drag on for years and still delivers no working system. I have been involved in a discussion where the view was that independent testing was not required because if the supplier did not deliver an acceptable solution ‘they would just sue them’. This is wrong on so many levels.
Public sector organisations should be ensuring a more robust risk management profile by keeping testing independent, so that there is a third party who can validate and, if necessary, challenge progress and quality. Successful projects must combine collaboration, governance and assurance in the right measures to ensure positive outcomes.