Capability Statement
Computer Software Litigation

Litigation over failed software construction and implementation projects presents interwoven technical and legal issues that can be both arcane and complex - and therefore costly and time-consuming to unravel. This tends to be true whatever the financial size of the claims and counterclaims, the facts and circumstances of the contract between the parties, or the conduct of the subsequent software development.

CASTELL Consulting, an independent professional IT consultancy practice founded in 1978, has been involved in a wide variety of such computer software litigation. The practice has been instructed as expert witness (in, for example, the UK, Europe, the Arabian Gulf, Australasia and the USA) in many legal actions concerning major software development contracts which have been terminated, with the software rejected amidst allegations of incomplete or inadequate delivery, software errors, shifting user specifications, poor project management, delays and cost over-runs. This work is routinely undertaken on behalf of both Plaintiffs and Defendants, software customers and suppliers, in the High Court (or equivalent) and in Arbitration, Mediation or other forms of ADR.

Over years of examining varied software development disputes as appointed expert, CASTELL has developed a range of rigorous analytical techniques for assessing and reading the 'technical entrails' of failed, stalled, delayed or otherwise troublesome software development projects. These techniques, founded on sound software engineering principles, are objective and impartial, favouring neither customer nor supplier, software user nor software developer. This objectively justifiable and unbiased approach is particularly important where software projects involve a contractually uncertain mixture of 'customised' software packages and 'bespoke' software construction - which these days is increasingly the case.

Software 'quality'

Arguably the most important issue on which the independent computer expert is asked to give a view in software development or implementation cases is: what was the quality of the delivered software, and was it fit for purpose? This raises the question: just what is meant by 'software quality'? The ready answer from the experienced IT expert is that questions concerning 'software quality' essentially come down to: does the delivered software meet its stated requirements?
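
In concrete terms, that question can be reduced to a requirements-coverage exercise: mapping each stated requirement to documented evidence of whether the delivered software meets it. A minimal sketch of such a mapping (the requirement identifiers and outcomes below are purely hypothetical):

    # Sketch of a requirements-coverage view of 'software quality'.
    # Requirement IDs and outcomes are hypothetical, for illustration only.

    # Each stated requirement mapped to its acceptance-test outcome.
    requirements = {
        "REQ-001 Record sale":         "pass",
        "REQ-002 Authorise card":      "pass",
        "REQ-003 Reverse transaction": "fail",
        "REQ-004 End-of-day report":   "not tested",
    }

    met = [r for r, outcome in requirements.items() if outcome == "pass"]
    unmet = [r for r, outcome in requirements.items() if outcome != "pass"]

    print(f"Requirements met: {len(met)} of {len(requirements)}")
    for r in unmet:
        print(f"  outstanding: {r} ({requirements[r]})")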

Thus the 'technico-legal litigation tension' common to all such software cases may be succinctly put as 'fitness for purpose vs statement of requirements'. However, this tension manifests itself in different ways, and with a different emphasis, in each specific case.

For example, in a case concerning an in-store EFTPOS system for a major national retailer, the crucial issue we addressed was whether or not the software supplier was likely to have fixed the many outstanding errors and to have had the system ready to roll out in time for the pre-Christmas sales rush. What was the objective technical evidence of the software house's 'bug find and fix' performance? Were the bugs escalating, or was the software converging onto a stable, performant system? Or were the constant changes in customer specification - as alleged by the supplier - to blame for the delays and for the software's inability to pass a critical acceptance test?
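
Questions of this kind can be answered objectively from the project's own defect records. Below is a minimal sketch of the sort of defect-trend analysis involved (the weekly counts and the simple linear-trend test are purely illustrative, not drawn from the actual case): a shrinking open-defect backlog suggests convergence onto a stable system; a growing one suggests escalation.

    # Sketch of a defect-convergence check on weekly bug counts.
    # All data and thresholds are hypothetical, for illustration only.

    weekly_found = [34, 41, 38, 45, 47, 44, 49, 46]  # new bugs reported each week
    weekly_fixed = [20, 25, 30, 33, 35, 36, 38, 37]  # bugs closed each week

    open_backlog = []
    backlog = 0
    for found, fixed in zip(weekly_found, weekly_fixed):
        backlog += found - fixed
        open_backlog.append(backlog)

    # Fit a straight line to the open-backlog series: a negative slope
    # suggests convergence towards a stable system; a positive slope
    # suggests the bug count is escalating.
    n = len(open_backlog)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(open_backlog) / n
    slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, open_backlog))
    slope_den = sum((x - mean_x) ** 2 for x in xs)
    slope = slope_num / slope_den

    print(f"Open defects by week: {open_backlog}")
    print(f"Backlog trend: {slope:+.1f} defects/week "
          f"({'converging' if slope < 0 else 'escalating'})")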

In a case concerning a large University Consortium, we focused on the apparent inability of the software developer to present a main module of the software system in a state capable of passing formal Repeat Acceptance Tests, with a number of faults appearing at each attempt at 'final' testing (even though three earlier main modules had been successfully developed and accepted). How serious were these faults, and were earlier faults, thought to have been fixed, constantly re-appearing? Was the customer justified in terminating the contract on the grounds of a 'reasonable opinion' that the software supplier would not resolve all the alleged faults in a 'timely and satisfactory manner'? Was the supplier's substantial counter-claim for 'software extras' valid, and could that explain the inability of the software to converge onto an 'acceptable' system?
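
Fault-recurrence questions of this kind also lend themselves to straightforward, objective analysis of the test records. A minimal sketch (the fault identifiers and test runs below are hypothetical) of checking whether faults logged - and supposedly fixed - at earlier acceptance-test attempts keep re-appearing at later ones:

    # Sketch of a fault-recurrence check across repeat acceptance tests.
    # Fault identifiers and test runs are hypothetical, for illustration only.

    # Faults logged at each formal acceptance-test attempt, by fault ID.
    test_runs = {
        "AT-1": {"F012", "F019", "F023", "F031"},
        "AT-2": {"F019", "F040", "F044"},          # F019 re-appears
        "AT-3": {"F023", "F044", "F051", "F052"},  # F023 and F044 re-appear
    }

    seen_before = set()
    for run, faults in test_runs.items():
        recurring = faults & seen_before   # faults already logged at an earlier attempt
        fresh = faults - seen_before       # faults logged for the first time
        print(f"{run}: {len(fresh)} new fault(s), "
              f"{len(recurring)} recurring: {sorted(recurring) or '-'}")
        seen_before |= faults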

In another case - that of a real-time computer-aided mobilising system for a large ambulance brigade - our attention was on the response times of the software in what was clearly a life-or-death application. How well were the desired response, availability, reliability and recovery targets for the software contractually defined, and what was the evidence of the system's actual performance under varying load conditions?
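
Where such targets are contractually defined, the system's logged performance can be measured against them directly. A minimal sketch (the response timings and the 95th-percentile target below are assumed for illustration, not taken from the case):

    # Sketch of a response-time check against a contractual target.
    # Timings and target are hypothetical, for illustration only.

    # Call-handling response times (seconds) sampled under peak load.
    samples = [0.8, 1.1, 0.9, 2.4, 1.3, 0.7, 3.9, 1.0, 1.6, 0.9,
               2.2, 1.2, 4.5, 1.1, 0.8, 1.9, 1.4, 1.0, 2.8, 1.2]

    target_seconds = 2.0      # assumed contractual response target
    target_percentile = 95    # proportion of calls that must meet it

    # Crude nearest-rank percentile estimate.
    ordered = sorted(samples)
    rank = max(0, int(len(ordered) * target_percentile / 100) - 1)
    p95 = ordered[rank]
    met = sum(1 for t in samples if t <= target_seconds) / len(samples)

    print(f"95th percentile response: {p95:.1f}s (target {target_seconds}s)")
    print(f"Calls within target: {met:.0%}")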