The U.S. Food and Drug Administration (FDA) has a new approach to computerized system validation (CSV) called computer system assurance (CSA). Without a draft guidance issued, are we entering an era of regulation by presentation and publication? As a result, does CSA risk becoming “complete stupidity assured?”
Computerized system validation (CSV) has an uninspiring reputation for being a slow, no-value-added activity that only wastes time and delays the implementation of new software. Is that an accurate portrayal?
As somebody who has been involved with CSV for over 35 years, I would say it depends. Here are two CSV examples: one in which using CSV is sublime and another in which it is ridiculous:
A one-size-fits-all validation approach lacks the flexibility to tailor each validation to intended use and condemns any regulated laboratory to mountains of paper. You can see why CSV gets a bad reputation: instead of common sense being applied to generate business benefit from the system, an inflexible approach, coupled with the ultraconservative nature of the pharmaceutical industry, consigns CSV to being an old-fashioned, outdated process. Instead, you should use an accurate assessment of the supplier's development and testing, the application software category, and the impact of the records created by the system to focus the CSV effort where it is most needed. Flexibility is the name of the game.
CDRH: The Case for Quality
Approximately 10 years ago, the FDA's Center for Devices and Radiological Health (CDRH) started the "Case for Quality" initiative, aimed at reviewing the problems medical device companies had with regulatory compliance. By 2015, CSV had been identified as one of the problem areas for the following reasons:
As a result, the FDA set up a joint CDRH and industry team to develop a new validation approach for computerized systems used in the medical device industry, called computer system assurance (CSA), with the aim of following the least burdensome approach.
Least Burdensome Approach
One CDRH guidance for industry is the General Principles of Software Validation, issued in 2002. In section 2.3, it states:
We believe we should consider the least burdensome approach in all areas of medical device regulation. This guidance reflects our careful review of the relevant scientific and legal requirements and what we believe is the least burdensome way for you to comply with those requirements (2).
This section concludes with an invitation: if a company knows of a better way to validate software, talk to the Agency. The guidance goes further in section 4.8 on validation coverage, which is quoted verbatim:
Validation coverage should be based on the software’s complexity and safety risk, not on firm size or resource constraints. The selection of validation activities, tasks, and work items should be commensurate with the complexity of the software design and the risk associated with the use of the software for the specified intended use. For lower risk devices, only baseline validation activities may be conducted. As the risk increases additional validation activities should be added to cover the additional risk. Validation documentation should be sufficient to demonstrate that all software validation plans and procedures have been completed successfully (2).
The key takeaway here is to focus on intended use, risk, and the nature of the software used. The 20-year-old FDA guidance sends a clear message that, paraphrased, says: do not kill yourself with compliance. For too long, the pharmaceutical industry has failed to evolve from a risk-averse to a risk-managed industry.
FDA Centers: CDRH and CDER
The FDA is divided into several centers. There are two centers that are discussed at length in this column:
Software that is used to control or operate a medical device is already validated when you purchase it. This contrasts radically with software used in the pharmaceutical industry, which is not validated when you buy it (although many suppliers would like you to believe it is); the laboratory must undertake CSV to demonstrate that the software is fit for its intended use, based on business needs and the process being automated. Bear in mind that CSA is aimed primarily at medical devices, not the pharmaceutical industry.
CSA Principles and Pilot Projects
The joint industry–CDRH team developed the principles of CSA as:
Pilot projects were used to verify and refine the CSA approach. Since 2017, there have been a number of presentations and publications from both FDA staff members and members of the various pilot projects. So far, so good.
However, despite a draft guidance for industry for CSA being on CDRH’s list of documents to be issued since 2018, nothing has appeared from the Agency, which is a problem.
Waiting for Godot?
This is the regulatory equivalent of "Waiting for Godot," in which the two main characters are on stage for over two hours spouting rubbish while Godot never turns up.
It is interesting to contrast the differences in regulatory approach between the USA and Europe. Since 2011, the European Union (EU) has updated eight of the nine chapters of EU GMP Part I, as well as several Annexes, such as Annexes 11 and 15. Indeed, Chapter 4 and Annex 11 are being revised again to reinforce data integrity principles. In contrast, the U.S. GMP regulations (21 CFR 211), published in 1978, have been updated only once, in 2008, with the addition of one clause impacting manufacturing: 211.68(c) (4).
It is my opinion that the FDA, specifically CDRH, is inept and unprofessional in failing to issue a draft guidance on CSA.
Instead of updating regulations, the FDA issues advice as either Level 1 guidance for industry documents or Level 2 question-and-answer sections published on the FDA website. Let us focus on Level 1 guidances, which are usually issued as a draft for industry comment and, after a prolonged reflection, as a final version. Relatively fast-track examples of guidance issuance are:
It could be argued that guidance documents are regulation by the back door, but all have the phrase "contains nonbinding recommendations" emblazoned on each page, which means the content could be difficult to enforce.
The Genie’s Out of the Bottle
In the absence of a draft guidance for industry, there are presentations from CDRH officials and industry members of the pilot programs, as well as published articles, white papers, and industry guidances. We are putting the industry interpretation cart before the regulatory horse. Typically, the reverse is true: the FDA issues a draft guidance for industry comment, presentations and publications follow, and industry implements after citations appear in Warning Letters. Not this time; the genie is already out of the bottle. Houston, we have a bigger problem.
Regulation by Presentation and Publication
Regulations and Level 1 regulatory guidance documents must go through due process: a draft is issued for industry comment, revised where appropriate, and followed by the final version. For regulations, the final version published in the Federal Register contains a précis of industry comments together with the Agency's review and response, which either rejects or acts upon them. You can see this for 21 CFR 11 in the March 1997 Federal Register: the regulation is three pages and the preamble comments are 35 pages (11).
However, with CSA, the situation is different. The FDA lists the guidance documents that it will issue each year, and one for CSA has been on the list for at least three years. COVID-19 is not an excuse for inaction by the Agency: the guidance was promised before the pandemic, and working from home should still enable a guidance to be issued. Instead, presentations and publications outlining how to undertake CSA are being thrown out like garbage. However, it is important to note that:
Without a draft guidance for industry, we cannot see whether the FDA's aims for CSA are being filtered, enhanced, or subverted. In other words, there is a concern over the regulatory integrity of the perspectives being presented. As a regulated industry, we cannot change direction based solely on rumor: we need a draft guidance. But...
Do We Need CSA?
With all of the issues surrounding CSA, an important question emerges: do we need it at all? Do the regulations and guidance now in place for the pharmaceutical industry already give us the flexibility to do what is purported to be in the CSA guidance? In my view, the answer is yes, and I'll explain why in the following sections. Of course, this is my interpretation, and because there is no guidance for industry, there may be gaps in my discussion.
Regulatory Flexibility
Below are two quotes from “General Principles of Software Validation”. Section 2.1 simply states:
This document is based on generally recognized software validation principles and, therefore, can be applied to any software (2).
The guidance scope outlined in section 2 notes:
This guidance recommends an integration of software life cycle management and risk management activities. Based on the intended use and the safety risk associated with the software to be developed, the software developer should determine the specific approach, the combination of techniques to be used, and the level of effort to be applied (2).
A flexible risk-based CSV approach for all software is mirrored in EU GMP Annexes 11 and 15. Clause 1 of Annex 11 focuses on risk management:
Risk management should be applied throughout the lifecycle of the computerized system taking into account patient safety, data integrity and product quality. As part of a risk management system, decisions on the extent of validation and data integrity controls should be based on a justified and documented risk assessment of the computerized system (12).
The regulation explicitly states that the extent of validation and data integrity controls should be based on the risk posed by a system to a product and patient. A product for a laboratory can be interpreted as the data for a submission or the medicinal product for patients. Clause 1 implicitly means that a one-size-fits-all validation approach is inappropriate. It is important to fit the validation to the system, not the other way around.
A system-level risk assessment for analytical instruments and systems was published in this column in 2013 by Chris Burgess and myself (13). This approach can be used to classify each instrument or system into the updated USP <1058> Group B and C subtypes (14) to determine the extent of qualification and validation required. Next, we have Annex 15 on Qualification and Validation, clause 2.5:
Qualification documents may be combined together, where appropriate, e.g. installation qualification (IQ) and operational qualification (OQ) (15).
This makes sense: when an engineer installs and qualifies your next spectrometer, you could have a single document that combines the installation qualification (IQ) and operational qualification (OQ) activities. A single document for pre-execution review and post-execution approval is appealing. This approach is mirrored in USP <1058>, which allows, where appropriate, qualification activities and associated documentation to be combined (for example, IQ and OQ) (14). But why stop there?
An Integrated Validation Document
Remember the UV spectrometer discussed in the introduction, where all the system did was measure absorbance at a few wavelengths? Why not take Annex 15 clause 2.5 to its logical conclusion and combine all validation elements into one document? The document should include:
This sounds like a long list, but with the focus on intended use requirements only, this can be a relatively short document. Control of the process would be via an SOP or validation master plan. I have practiced and published such an approach for systems based on GAMP software category 3 and simple category 4, even when the data generated were used in batch release or submitted to regulatory agencies (16). The key is documented risk management, as required by clause 1 of Annex 11 (12).
Do I Need a Risk Assessment?
I know what you are thinking: General Principles of Software Validation (2) recommends risk assessments to focus the work, and EU GMP Annex 11 says that risk management should be applied throughout the lifecycle of the computerized system (12). However, this does not mean you must always perform the qualitative failure mode effects analysis (FMEA) described in GAMP 5 (17).
Let me give you an example of the stupidity of a risk assessment for remediation of a data integrity audit finding: all users sharing the same user account. Sharp intake of breath: no attribution of action! What happened next? The laboratory assessed the risk and impact of the shared user account with an FMEA risk assessment, scoring each element numerically. At the end of the assessment, a single number is produced and compared against a scale to determine if the issue is critical, major, minor, or low. Unsurprisingly, the number indicates that this is a critical issue (the same as the auditor's finding!), and only now is a remedial action triggered. Guess what the resolution is? Yep, give each user their own account. Least burdensome approach? I don't think so.
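To make the arithmetic concrete, here is a minimal sketch of the kind of numeric FMEA scoring described above. The 1-10 scales and the classification thresholds are illustrative assumptions for this sketch, not values taken from GAMP 5 or any regulation; the point is that the inputs force the output.

```python
# Illustrative FMEA scoring for the shared-user-account finding.
# Scales and thresholds are hypothetical assumptions, not from any guidance.

def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
    """Classic FMEA arithmetic: RPN = severity x occurrence x detection."""
    return severity * occurrence * detection

def classify(rpn: int) -> str:
    """Map an RPN onto an assumed criticality scale."""
    if rpn >= 200:
        return "critical"
    if rpn >= 100:
        return "major"
    if rpn >= 40:
        return "minor"
    return "low"

# Shared user account: no attribution of action. Severity is maximal,
# the failure occurs on every use, and the record cannot show who acted.
rpn = risk_priority_number(severity=10, occurrence=10, detection=8)
print(rpn, classify(rpn))  # 800, critical: the outcome was never in doubt
```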
How stupid is this? It is death by compliance. Once the issue is identified in the audit, you know you have a critical vulnerability. You are out of compliance. Fix it. Don't perform a risk assessment when you know what the only possible outcome will be. Just fix it. Annex 11 does not require a risk assessment; it requires that risk management be applied. The audit has identified the risk; the remediation is to give each user their own account, which is faster, easier, and compliant with GMP. As Audny Stenbråten, a retired Norwegian GMP inspector, stated, "Using risk assessment does not provide an excuse to be out of compliance." There is an interesting article by O'Donnell and others entitled "The Concept of Formality in Quality Risk Management" that is recommended for further reading on this topic (18).
Rather than just applying the single risk assessment methodology described in the GAMP Guide (17), there are methodologies that can be applied to implement a scalable risk assessment approach to both application software and IT infrastructure (19).
Leveraging the Supplier’s Development
Advocates of CSA mention trusted software suppliers. Let's go back to 2008 and the publication of GAMP 5, often cited by the FDA, which discusses leveraging supplier involvement in sections 2.1.5, 7, and 8.3, plus Appendix M2 for supplier assessments (17). There are comments about leveraging supplier testing into your validation. To leverage supplier testing and reduce your validation effort, you must do more than just send out a questionnaire for the supplier to fill in and QA to stick in a filing cabinet or document management system. This process requires a proactive assessment that reviews the procedures and practices of software development for software category 4 applications, such as:
The greater the investment in understanding the supplier's quality management system and software development, the more you can rely on supplier decisions and processes. This type of assessment is not suitable for a questionnaire; it requires either an on-site or a remote audit, which will take at least a day to perform. You are looking for a robust software development process. Identify two or three requirements and trace them through the supplier's development process: how extensive is the work, and does it give you confidence in the supplier? As part of the evaluation, include questions the supplier must answer about collaboration and about sharing information on instrument and software issues and updates. You want a supplier you can trust. This assessment must be documented in a report, as it is the foundation on which you leverage the supplier's development into your validation project to reduce the amount of work.
This information can be used as follows:
A small investment in time here can reduce the amount and extent of user acceptance testing of any system. This means there is no need for CSA, as the regulations and industry guidance have been suggesting such an approach for over 10 years.
Undocumented Testing
One of the purported CSA approaches is undocumented testing, but without the draft FDA guidance, care needs to be taken. I would caution any regulated pharmaceutical laboratory against saying that it did undocumented testing, especially in today's data integrity environment. Remember that software controlling a medical device is validated under 21 CFR 820 and cannot be configured, so one interpretation is that undocumented testing can be conducted during beta testing, with the aim of finding errors, rather than in formal release testing.
How could this be applied to software used in pharmaceutical laboratories?
One area is prototyping to learn how to configure and use an application. Provided this phase is described in the validation plan, undocumented prototyping is acceptable, with the deliverable being an application configuration specification containing the agreed software settings.
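As a hypothetical illustration, the configuration specification could be as simple as a structured record of the agreed settings captured at the end of prototyping. The setting names and values below are invented examples, not parameters from any real application.

```python
# Hypothetical extract of an application configuration specification:
# the agreed settings captured at the end of the prototyping phase.
# All names and values are illustrative, not from any real product.
config_spec = {
    "audit_trail": {"enabled": True, "review_frequency": "per batch"},
    "password_policy": {"min_length": 12, "expiry_days": 90},
    "user_roles": ["analyst", "reviewer", "administrator"],
    "electronic_signatures": {"enabled": True, "meaning_prompts": True},
    "data_storage": {"location": "secure network share", "local_save": False},
}

# The specification is reviewed and approved, then used as the basis for
# configuring the production system and for verification testing.
for section, settings in config_spec.items():
    print(section, settings)
```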
Critical Thinking
Rather than a test-everything-regardless approach, critical thinking should focus on demonstrating the intended use of the system and the associated compliance functions that support product development or release.
Testing Assumptions, Exclusions, and Limitations
In a 1970 report for the U.S. military, Barry Boehm explained that it is impossible to test software exhaustively (21). We see this in the everyday use of computers: security updates, patches, minor versions, quick fixes, or whatever name is applied to them to fix bugs. Our focus here, however, is on testing efficiency: how to concentrate on what is important to demonstrate the intended use of a system. The key to reducing the testing effort, in addition to leveraging supplier development, is to document the assumptions, exclusions, and limitations of your test approach. Just because you have a requirement does not mean that you must test it blindly: you need to think objectively.
For example, if an application has 100 different access privileges and you want five different user roles, this results in 500 different combinations. Hopefully, you won't test all of them, but how many will you test, and how will you justify your approach? This is the role of the documented assumptions, exclusions, and limitations of your test approach, which record the rationale for what you test, how, and to what extent, and how you leverage supplier development. If you are going to exclude specific user requirements from testing, state why (20). A sketch of this selection logic follows below.
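As an illustration of how documented assumptions shrink the test matrix, the following sketch enumerates role-privilege combinations and keeps only those that a hypothetical risk rationale says matter. The privilege and role names are invented for this example.

```python
from itertools import product

# Hypothetical privileges and roles; a real application might have ~100
# privileges, giving 100 x 5 = 500 combinations if tested exhaustively.
privileges = ["create_method", "modify_method", "delete_result",
              "view_result", "print_report", "archive_data"]
roles = ["analyst", "reviewer", "lab_manager", "administrator", "read_only"]

all_combinations = list(product(roles, privileges))

# Documented assumption (illustrative): only privileges that can alter
# records are tested per role; view/print privileges are verified once.
record_altering = {"create_method", "modify_method", "delete_result",
                   "archive_data"}
selected = [(r, p) for r, p in all_combinations if p in record_altering]

print(f"exhaustive: {len(all_combinations)} tests, "
      f"risk-based: {len(selected)} tests")
```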
The other side of the coin is including additional requirements in the system's user requirements specification (URS) just in case they might be used in the future. If tested, these requirements result in extra work for zero value if they are never used. For example, if a system is used for quantitative analysis, do you validate all calibration modes or just the ones that you use now? A better way is to focus on current requirements in the initial validation. Other software features can be evaluated later, but not used for regulated work; if you want to use them, raise a change request, include the requirements in an updated URS, and verify that they work as expected. For standalone spectrometer systems, it is unlikely that you will have the luxury of a separate test instance, so this is the only practical way of adding new functionality to a validated system.
Test Instructions
The bane of CSV's existence is test documentation: at what level of detail will you document? Will you be using trained users, or will you drag someone off the street to test your software? If the former, you can reduce the detail required compared with the latter. Don't treat testers as naïve people by giving them mind-numbingly detailed instructions; testers are educated and trained, so treat them like adults.
Table I compares test instructions for risk-averse and trained users. With the risk-averse instructions shown in the left-hand column, each instruction needs to be documented with observed results, dated, and initialed; if you are really unlucky, you'll have a screenshot to take at each step. In contrast, a better way is to give a trained user the simpler instruction shown in the right-hand column of Table I. A trained user will know how to execute this instruction consistently. Note that the quality of test instructions depends on both the test writer's and the tester's knowledge of the software: the more training and experience with the system, the easier it is to write simpler instructions and to execute them.
Instead of dating and initialing each test step, allow the tester and reviewer to sign and date the bottom of each page, just as you would for a laboratory notebook. Furthermore, some test instructions simply navigate the tester to a different function of the application; why do such instructions need expected and observed results?
Can you go further in reducing test documentation? Absolutely, you could, but without the draft guidance available, why would you dare?
Screenshot at Dawn
Screenshots are the bane of CSV: they are overused and, in most cases, have zero value. Taking a screenshot to document every step in a test is indicative of an overcautious and risk-averse approach to computer validation and an absolute waste of the resources required to execute, collate, review, and retain them. Used sparingly, a screenshot can add value by documenting a transient message on the screen where there is no other way of recording it. A GAMP Good Practice Guide on testing emphasizes the point: only take screenshots when there is value added by doing so.
However, if a transient message on the screen also results in an audit trail entry, why take a screenshot? Use the audit trail entry to document the activity automatically. In this way, you save time by testing analytical functions and simultaneously verifying audit trail functionality, which increases testing elegance as well as reducing the time to test. A sketch of this check appears below.
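As an illustration only: if an application can export its audit trail to a readable file, a tester or reviewer can confirm that the expected entry was written instead of capturing a screenshot. The export layout below is a hypothetical CSV format, not any vendor's actual output.

```python
import csv
import io

# Hypothetical audit trail export with columns: timestamp, user, action,
# details. Real systems differ; this layout is an assumption for the sketch.
SAMPLE_EXPORT = """timestamp,user,action,details
2024-01-15 10:02:11,jsmith,result_modified,peak reintegrated
2024-01-15 10:03:05,jsmith,result_saved,sequence 42
"""

def audit_entry_exists(export_text: str, user: str, action: str) -> bool:
    """Return True if the audit trail export records the expected action."""
    for row in csv.DictReader(io.StringIO(export_text)):
        if row["user"] == user and row["action"] == action:
            return True
    return False

# During review: confirm the transient on-screen event left a durable
# record, so no screenshot is needed as evidence.
assert audit_entry_exists(SAMPLE_EXPORT, "jsmith", "result_modified")
print("Audit trail entry found; no screenshot needed.")
```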
An alternative approach to documenting your testing is to use screen videos that record everything being done. If properly described in the test plan and outline test instructions, screen videos are a perfect way to document the evidence, and reviewers can randomly select passages to review (22).
Automation of Testing
Test automation is excellent, but it comes with caveats. One of the best uses for test automation is in software development, such as regression testing to see if existing functions in new software builds still work as expected. Test automation is fast, and if a new build fails regression testing, no manual testing is conducted until the error is fixed. Manual testing tends to focus on the new functions added in the release under development.
Using automated testing in validating laboratory systems is best focused on networked applications, as most automated test tools, such as HP Application Lifecycle Management (ALM), are networked. However, test tools come with the same problems as manual testing: you need to know the application software you are testing and decide the level of test step detail. Fewster and Graham note that, when an automated test tool is used for the first time, writing the test suite takes longer than manual testing, and it takes over 10 executions to achieve a return on investment in the tool (23). The advantages of an automated test tool are automated attribution of action via log-on credentials, contemporaneous execution, and the ability to capture and integrate electronic documented evidence (including the dreaded screenshots) easily, quickly, and automatically. This makes review quicker and easier. The break-even arithmetic is sketched below.
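Fewster and Graham's observation can be reduced to simple break-even arithmetic; the hour figures below are invented purely for illustration.

```python
# Break-even arithmetic for test automation (illustrative numbers only).
# Automation pays back once cumulative manual cost exceeds the setup
# cost plus the (smaller) per-run automated cost.
manual_hours_per_run = 8.0       # effort to execute the suite manually
automation_setup_hours = 100.0   # writing and debugging the test suite
automated_hours_per_run = 0.5    # supervising an automated run

def break_even_runs() -> int:
    """Smallest number of executions where automation is cheaper overall."""
    runs = 0
    while runs * manual_hours_per_run <= (
        automation_setup_hours + runs * automated_hours_per_run
    ):
        runs += 1
    return runs

# With these assumed figures, automation pays back after 14 executions,
# consistent with the "over 10 executions" rule of thumb cited above.
print(break_even_runs(), "executions to recoup the automation investment")
```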
Reality will bite when you consider automated user acceptance testing on standalone systems, which is more problematic. How will you load the test tool onto a spectrometer system? How will you manage the documented evidence and store it safely after the validation? Don't tell me you'll use a USB stick (24)!
Summary
Although the FDA has developed CSA, its failure to issue a draft guidance on the subject has led to regulation by presentation and publication, which is not an appropriate regulatory process. However, is CSA even needed? There is sufficient flexibility in current regulations and industry guidance that the pharmaceutical industry has simply not taken advantage of. If the pharmaceutical industry read the regulations and understood CSV as a business benefit and investment protection rather than a regulatory overhead, the CSV process would be simpler and easier. However, is FDA incompetence meant to keep consultants gainfully employed?
Acknowledgments
I would like to acknowledge Chris Burgess, Mark Newton, Yves Samson, Siegfried Schmitt, and Paul Smith for helpful comments and advice during the writing of this column.
References
R.D. McDowall is the director of R.D. McDowall Limited and the editor of the “Questions of Quality” column for LCGC Europe, Spectroscopy’s sister magazine. Direct correspondence to: SpectroscopyEdit@MMHGroup.com