

Computer System Validation - GCP

Posted by: Balall Naeem, Posted on: 20 April 2017 - Categories: Compliance matters, Good clinical practice

Hello and welcome to the latest MHRA Inspectorate blog post. My name is Balall Naeem, GCP Inspector, and you may already have read my previous posts on Reference Safety Information (RSI).


This post focuses on Computer System Validation (CSV) and is a combination of a case study seen at a single organisation and some of the common findings GCP Inspectors have seen across a number of recent inspections. CSV is an important part of the development and use of computer systems within clinical trials, and it applies not just to specialist eSystem vendors, but also to Clinical Trials Units (CTUs) and Clinical Research Organisations (CROs) offering randomisation and Interactive Response Technology services, to specialist analytical software developers, and to sponsor organisations developing their own software solutions. Whether you are using a product that has been validated by a vendor, ensuring a vendor’s product is validated and fit for use, validating your own product or validating a trial-specific configuration/build, this post is intended to provide some guidance on the type of validation activities you should be considering.

Case Study


So what’s the expected starting point when a GCP Inspector comes to your door and asks about CSV? In this case the starting point was the organisation’s latest major upgrade: essentially a new build, bringing the product up to date with the latest programming technology, that had been released into production for their customers. I’m not going to go through the whole product development life cycle, and I appreciate that with Agile and Sprint development methodologies processes are changing, but at some point you will have a finalised specification document. It may have been a fluid document that evolved during the “build”, but at some point you have said: that’s it, this is the final specification. Make sure you capture and document it. At this particular organisation this was not the case; only draft versions of both the user requirements and the functional specifications were provided. So at the starting point of attempting to assess whether the system was in a validated state, it was not possible to identify all the user requirements and functionality included in the build, and therefore not possible to confirm that these had been validated.
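To make this concrete, here is a minimal sketch (in Python) of the kind of requirement record that makes the final scope auditable. The field names, identifier scheme and values are illustrative assumptions rather than any prescribed format; the point is simply that the finalised version, approval status and sign-off are captured somewhere retrievable.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Requirement:
        """One user requirement, captured so it can later be confirmed
        exactly what the final build was validated against."""
        req_id: str         # e.g. "UR-042" -- illustrative identifier scheme
        description: str
        spec_version: str   # the finalised specification it belongs to
        status: str         # "Draft" or "Approved" -- only "Approved" counts
        approved_by: str
        approved_date: str  # ISO date of the formal sign-off

    # An approved, versioned entry -- contrast with the draft-only documents
    # described above, which left the validated scope unverifiable.
    ur_042 = Requirement(
        req_id="UR-042",
        description="System prevents randomisation of ineligible subjects",
        spec_version="FS v3.0 (final)",
        status="Approved",
        approved_by="J. Smith (QA)",
        approved_date="2017-01-16",
    )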

Yes, as I alluded to earlier, the specification documents may be in a state of flux, being updated as part of staggered Sprint builds whose content is affected by resourcing, timelines and evolving user requirements, but at some stage the specification will be finalised and it should be documented. There will have been a process for deciding what does and doesn’t get included in each Sprint, and both the process and the decisions made from following it need to be documented formally. It is recommended that decisions are not held in informal meeting minutes, from which it is not clear what has and hasn’t been included and why, but in a formalised, signed-off document. At this organisation this was not the case: it was not possible to clearly identify which aspects of functionality were covered in which Sprint, and there was no formal approval process before the developers started work.

So at this stage we had no finalised requirements or specification, and we had to work with what documentation was available. Four pieces of functionality that had been deemed “Business Critical” in the draft functional specification were selected for review, by requesting the evidence of their development and testing. Two of the items selected were categorised as both “Business Critical” and “Low Effort”, suggesting that even if the scope changed they would have been implemented, having been identified as high value for customers and low cost to the business. The organisation had verbally described a robust testing process: white box testing by the developers, testing the application at the level of the source code, followed by black box testing of the final functionality using formalised test scripts, performed by independent developers acting as end users who were not involved in the project and had no knowledge of the development. Evidence of this testing was therefore expected, in the form of formally approved and completed test scripts populated with the outcomes of the testing. Of the four items selected, test scripts could only be found for one of them, and that test was recorded as a failure.
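As an illustration of what “formally approved and completed test scripts” might look like as a record, here is a hedged sketch. The field names and the is_evidenced helper are invented for illustration; the principle is that approval, execution and outcome are all captured and attributable.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TestScript:
        """A formalised test script, populated with its outcome once run."""
        test_id: str
        requirement_id: str          # links back to the requirement under test
        test_type: str               # "white box" or "black box"
        steps: List[str]
        expected_result: str
        approved_by: Optional[str] = None    # sign-off before execution
        executed_by: Optional[str] = None    # independent tester for black box
        executed_date: Optional[str] = None
        actual_result: Optional[str] = None
        passed: Optional[bool] = None        # None means never executed

    def is_evidenced(script: TestScript) -> bool:
        """Does this record provide the evidence an inspector would expect:
        approved before execution, attributable, with a recorded outcome?"""
        return None not in (script.approved_by, script.executed_by,
                            script.executed_date, script.actual_result,
                            script.passed)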

Following the failure there was no documented evidence to demonstrate that the issue had been rectified. This could have been demonstrated in a number of ways, such as:

  • A follow-up test script showing the failed test had been passed;
  • Documented evidence showing that the error had been corrected and a re-test was not required;
  • Documentation stating that the piece of functionality had been removed from the product.

So where are we now? We now have the possibility of there being untested, and therefore unvalidated, functionality in the live system, but before this could be established there were a number of questions that required answering.

  1. Was the functionality actually included in the final build that was released into production? Without the final functional design documentation it was not possible to say. There were two “change of scope” documents relating to the project, but they only stated that the scope had changed and provided no information on what the changes actually were. For example, if business-critical functionality that could be added at low effort had been removed from the scope of the project, we would expect to see the justification for that decision documented in some way. In this case it was not evident even in meeting minutes.
  2. If the functionality was built and included in the release, did it go through any sort of testing? It may have been; we know at least one of the selected items was tested at least once. Was it poor testing practice or poor documentation practice? Unfortunately, in both cases the conclusion has to be the same: there was not enough evidence to demonstrate the system was in a validated state.
  3. But there is a validation report, so surely that means the system is validated? Not necessarily. A validation report is always good to see, but what does it relate to? In this instance the validation report referenced versions of the functional specification that were not available, so it was not possible to determine whether all the specified functionality had actually been captured in the validation report; and even if it had been, without the executed test scripts there was no evidence of its validation. (The thread running through all three questions is traceability from requirements to executed tests; a minimal sketch of that check follows this list.)
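Here is that traceability sketch: given the requirement IDs from the final specification and the executed test records, flag every requirement with no passing test against it. The identifiers and record shapes are invented for illustration, not taken from any real system.

    def untraced_requirements(requirements, executed_tests):
        """Return requirement IDs with no passing, executed test against them.

        `requirements` is an iterable of requirement IDs from the final
        specification; `executed_tests` maps test IDs to
        (requirement_id, passed) tuples.
        """
        covered = {req_id for req_id, passed in executed_tests.values() if passed}
        return sorted(set(requirements) - covered)

    # Mirrors the case study: four business-critical items selected,
    # evidence found for only one script, and that one recorded as a failure.
    reqs = ["UR-010", "UR-011", "UR-012", "UR-013"]
    tests = {"TS-101": ("UR-010", False)}   # the sole script found -- a fail
    print(untraced_requirements(reqs, tests))
    # -> ['UR-010', 'UR-011', 'UR-012', 'UR-013'], i.e. nothing demonstrably validated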

So we have been unable to identify what was supposed to be built, what was actually built, what testing was done, and what the validation report was issued against. As a result, a critical finding had to be issued, as the system could not be confirmed to be in a validated state. The critical finding was given for a number of reasons: firstly, because of the impact a lack of validation can have on both the trial data and the trial subjects; secondly, because functionality used to control the quality of the data, such as edit checks to identify potentially erroneous data at an early stage or eligibility criteria controls to prevent ineligible patients being randomised, may not have been functioning as required.
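To illustrate what is at stake, here is a deliberately simple sketch of the kind of eligibility edit check referred to above. The criteria are invented and not taken from any real protocol; the point is that if logic like this goes untested, ineligible subjects could be randomised without anyone noticing.

    def eligibility_edit_check(subject: dict) -> list:
        """Return a list of eligibility failures; an empty list means eligible.
        The criteria are illustrative only."""
        failures = []
        if subject.get("age") is None:
            failures.append("age not collected -- the check cannot fire")
        elif not 18 <= subject["age"] <= 65:
            failures.append(f"age {subject['age']} outside the 18-65 window")
        if subject.get("informed_consent") is not True:
            failures.append("informed consent not recorded")
        return failures

    def can_randomise(subject: dict) -> bool:
        return not eligibility_edit_check(subject)

    assert not can_randomise({"age": 72, "informed_consent": True})
    assert can_randomise({"age": 40, "informed_consent": True})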

But it wasn’t just this release validation that raised concerns. There were also examples of poor validation practices in the release of bug fixes and patches. One selected bug fix had two rounds of white box testing, as the developer decided to make changes to it following the first round. There was no documentation showing what these changes were or whether they had been approved; the only documentation available stated that some things had been added. The subsequent black box testing was performed using scripts that had been approved before the developer made the changes, so the approved test scripts may not have captured any impact of those changes that would have needed additional testing. It could be that no changes were needed to the black box test scripts, but there was no evidence of any review having taken place to confirm this, so again that bug fix had been released in a potentially unvalidated state.

What Can I Do?


So what does all this mean if you are a potential user of software that was validated outside of your organisation? The first thing is to carry out a due diligence assessment of the vendor prior to use; this could be done remotely or may require an on-site visit. Ask whether the current product release is formally validated and what evidence can be supplied to demonstrate that it is. You may only receive a formal statement saying yes or no, in which case you need to decide whether this is sufficient, or whether you are comfortable using such an eSystem without this evidence. We would not recommend using a system for which you are unable to obtain sufficient evidence to demonstrate it is validated, but if you have to, you must carry out an effective risk assessment and consider what mitigating action you can take.

If you are able to obtain validation documentation here are some suggestions on what you do next as a minimum:

  • If you receive a validation report, check it and make sure it corresponds to the version of the software you are using. If it details the system’s functionality, make sure all the functionality you are using is covered in the report.
  • If you receive a validation pack, does it show the system to be successfully validated, i.e. has all the functionality you intend to use been tested and passed? Is it evident who the tester was, and have they signed and dated everything correctly? Is it evident how test failures have been rectified? Is there anything that might cause you concern, such as a missing follow-up test after a failure or undecipherable testing?
  • Are the dates sequential? Was all testing completed before the product was released? Were all the specification requirements and test scripts agreed and signed off before the build had been completed? Was the validation report issued prior to release? (A small sketch of this kind of date check follows this list.)
  • If you have concerns can you address them? Are you able to self-validate or mitigate them in another way?
  • Do a formalised risk assessment, document your findings and record any mitigating action you are going to take.
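As flagged above, here is a small sketch of the date-sequence check. The milestone names are an assumed shape for a validation pack’s records, chosen to mirror the questions in the checklist rather than any standard format.

    from datetime import date

    def date_sequence_concerns(pack: dict) -> list:
        """Flag out-of-order milestone dates in a validation pack.
        `pack` maps milestone names to dates; missing milestones are skipped."""
        order = ["spec_signed_off", "scripts_approved", "testing_completed",
                 "validation_report_issued", "product_released"]
        concerns = []
        for earlier, later in zip(order, order[1:]):
            if earlier in pack and later in pack and pack[earlier] > pack[later]:
                concerns.append(f"{later} ({pack[later]}) predates "
                                f"{earlier} ({pack[earlier]})")
        return concerns

    pack = {
        "spec_signed_off": date(2017, 1, 16),
        "scripts_approved": date(2017, 1, 20),
        "testing_completed": date(2017, 3, 3),
        "validation_report_issued": date(2017, 3, 6),
        "product_released": date(2017, 2, 28),  # released before the report was issued
    }
    for concern in date_sequence_concerns(pack):
        print("CONCERN:", concern)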

Users Matter

Remember, system validation does not stop with the system’s development; there are also the users to think about. What do I mean by this? You can have a very reliable and fully validated system, but if the users are not able to use it correctly there are likely to be user-generated errors that could lead to non-compliance. An example would be a user performing a study-specific configuration of an eCRF who is not aware that certain fields need to be flagged as mandatory and are not automatically categorised as such. This could result in data not being collected, or in edit checks relating to subject eligibility not being effective because the data point needed to fire the edit check has not been collected (a sketch of this kind of configuration check follows the list below). Common findings relating to the user aspect of validation include:

  • The product being released to the customer before the training material (i.e. user guide) has been developed and released.
  • Users being given access to the system with no training.
  • Users being given inappropriate (higher level) access such as the ability to make data changes.
  • User material not being reviewed or updated following the release of a new version with new functionality.
  • Users not being notified of system updates that included changes to functionality.
  • Internal processes and SOPs not being followed, with the result that the formal review and approval of key documents such as validation plans, test scripts and reports are not completed.
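Returning to the eCRF example above, here is a sketch of a pre-go-live configuration check, under assumed record shapes invented for illustration. It catches the case where a field that an edit check depends on was never flagged as mandatory, so the check could silently never fire.

    def configuration_concerns(fields: dict, edit_checks: dict) -> list:
        """`fields` maps field name -> {"mandatory": bool};
        `edit_checks` maps check name -> list of field names it reads."""
        concerns = []
        for check, needed in edit_checks.items():
            for field in needed:
                if field not in fields:
                    concerns.append(f"{check}: field '{field}' is not on the eCRF")
                elif not fields[field]["mandatory"]:
                    concerns.append(f"{check}: '{field}' is optional, so the "
                                    "check may never receive its data")
        return concerns

    fields = {"age": {"mandatory": False},   # should have been mandatory
              "consent": {"mandatory": True}}
    checks = {"eligibility_age_check": ["age"],
              "consent_check": ["consent"]}
    print(configuration_concerns(fields, checks))
    # -> ["eligibility_age_check: 'age' is optional, so the check may never receive its data"]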

Ideally you should aim to obtain as much of this information as possible before committing to or using an eSystem in your clinical trial, regardless of whether it has been developed in-house or developed or supplied by a vendor. If you don’t fully understand the capabilities and limitations of your chosen system, you may find yourself forced to use a product that is not really fit for purpose and to implement multiple time-consuming workarounds, because the financial investment you have made may mean that an immediate upgrade is not an option.

Contracts

Remember, the vendor may have produced the software, but you’re the one using it in your clinical trials, and ultimate responsibility rests with the sponsor. So if you have simply assumed a system or piece of software is validated and this causes data integrity or patient safety issues, this remains your responsibility. Bearing this in mind, your contract with the vendor becomes essential: if they are not contracted to do something, there is a high probability they won’t do it. Here are a few key points you, or your legal and finance departments if they are responsible for contracting, should be aware of:

  • eSystem vendors may have expert knowledge of IT systems, and sometimes of data protection legislation where applicable, but not necessarily of GCP requirements.
  • The contract should require the vendor to work to GCP. If it doesn’t then it increases the risk of them not doing so and not retaining sufficient documentation to reconstruct essential trial activities.
  • The contract should allow the sponsor access to, or ensure the retention of, essential non-trial-specific documentation such as software/system validation documents, vendor SOPs, training records, and issue logs and resolutions in the helpdesk/IT ticket system.
  • The contract should require the vendor to report serious breaches either to the sponsor or the relevant regulatory authorities.
  • The contract needs to be clear with regard to sub-contracting by the vendor: specifically, which tasks can and cannot be sub-contracted, and how the sponsor will maintain oversight.

Useful Links

Here are some links to relevant legislation and to further guidance on contracts and CSV in GCP.

UK legislation

EMA Reflection Paper

ICH E6 Addendum

IWG Questions and Answers on Computer System Contracts


Don’t miss the next post, sign up to be notified by email when a new post is published on the Inspectorate blog.

Access our guidance on good practice for information on the inspection process and staying compliant.
