The second part of this column series on traceability matrices looks at some practical ways of tracing user requirements throughout the life cycle documents.
In the first part of this column series on traceability matrices (1), we reviewed a system development life cycle model and then discussed the regulatory requirements and expectations as well as the business benefits of this document. We also looked at the terminology used and some of the principles of tracing requirements to other documents and activities in the life cycle.
R.D. McDowall
Two points need to be reiterated from Part I for the discussion in this part. Requirements can be either tested or verified throughout the life cycle; testable and verifiable were defined in Part I (1) as follows:
Testable: The degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met. This is typically undertaken in the operational qualification (OQ) or performance qualification (PQ) phases of the life cycle.
Verifiable: The degree to which a requirement can be fulfilled by implementation (in the installation qualification phase), software configuration, writing standard operating procedures (SOPs), user training, vendor audit, calibration, or documentation.
This is important. In contrast to the GAMP 5 guide (2), which only discusses verification in the testing phases of a life cycle model, I prefer to split traceability into testing and verification because these are different activities, and separating them makes the process easier and simpler to understand. When I talk about testing, I am referring to tasks undertaken by the user (during the PQ) and by the vendor (during the OQ). In contrast, requirements that are verified are scattered throughout all the remaining stages of a life cycle, not just the testing ones.
To illustrate the principles of traceability between the requirements in the user requirements specification (URS) and testing and verification in later stages of a system life cycle, let's look at Figure 1. The URS and the requirements that it contains are shown in the left-hand column, and across the top of the figure, in a single row, are the documents that could be written in a validation project; the figure caption lists what each of the abbreviations means. Before we start our traceability journey, a note on naming: the document names are the ones I would typically use. Your organization might name them differently, but that is not a problem; the figure illustrates the principles of traceability in more detail than in Part I, and you will need to interpret this approach using your own terminology.
Figure 1
I will discuss the principles of traceability using most of the nine requirements listed in the URS in Figure 1. After the principles have been discussed, you can apply them to the remaining ones.
Requirement R1 is for the PC hardware upon which the spectrometer software will run and that the IT department will purchase. This is expanded in more detail into the make, model, memory, disk size, and configuration required to support the application and store the acquired data. It is traced first to item T1 in a document called a technical specification; from there it is verified during the installation qualification (IQ) of the computer system. This answers the question: was what we specified actually delivered and installed, and if not, has the difference been documented?
R2 describes a requirement that needs an SOP written for the users to follow, and when this document is available the requirement will be verified. Collate all the user requirements that trace to procedures, check them against existing SOPs, and update as appropriate. Where no SOP exists, one must be written.
R3 leads to a function in the application software that will be configured; the setting is documented in section 3.2 of the configuration specification and tested during the PQ (shown as T in the figure). One can go further than just listing T under the PQ, but the depth of detail to use is discussed in a later section of this column.
R4 is a requirement that will be tested adequately during the vendor's OQ execution, and therefore it will not be tested again during the PQ; this saves resources and effort.
R6 illustrates an interesting point: a single URS requirement might break down into two or more functional requirements. Here, one of them links to a configuration setting of the application software, which is then tested, and the other is tested directly from its description in the functional specification.
Some of the other requirements can be verified by calibration of the instrument carried out in the laboratory, an audit of the vendor, or a service-level agreement with IT. Remember, computer validation is only one mechanism of control; others are calibration, maintenance, and qualification.
Figure 1 provides the big picture of traceability: requirements are traced from the URS throughout the whole of the life cycle. However, this figure only goes halfway; note that the arrows point only one way, from the URS to the place in the life cycle where each requirement is either tested or verified. Ideally, traceability should be a two-way process: forward from the URS to other documents, as shown in the figure, but also backward from those documents to the URS.
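To make the two-way idea concrete, here is a minimal sketch in Python: the forward trace is authored as a simple mapping from requirement to life cycle deliverable, and the backward trace is derived from it by inversion. The identifiers are modeled on Figure 1 but are illustrative rather than the figure's actual entries.

```python
# A minimal sketch of two-way traceability: the forward map is authored;
# the backward map is derived by inversion. Identifiers are illustrative.
from collections import defaultdict

# Forward traceability: URS requirement -> life cycle deliverables
forward = {
    "R1": ["IQ"],            # hardware verified at installation
    "R2": ["SOP"],           # verified by a written procedure
    "R3": ["CS 3.2", "PQ"],  # configured, then tested in the PQ
    "R4": ["OQ"],            # tested in the vendor's OQ
}

# Backward traceability: deliverable -> URS requirements it covers
backward = defaultdict(list)
for requirement, deliverables in forward.items():
    for deliverable in deliverables:
        backward[deliverable].append(requirement)

print(dict(backward))
# {'IQ': ['R1'], 'SOP': ['R2'], 'CS 3.2': ['R3'], 'PQ': ['R3'], 'OQ': ['R4']}
```

Deriving the backward map from the forward one, rather than maintaining both by hand, means the two directions can never drift apart.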
Because this column is about the practicalities of traceability, I'd like to give some examples of how to develop a traceability matrix. For reasons of space I cannot cover all the options, so I'll look at some of the main ones.
Traceability Within a Single Document
The simplest traceability matrix can be found in the single integrated validation document (3,4). Here, a single document is written for the validation of a system and the traceability is used within the document to link the intended use requirements to where the requirement is tested or verified. This simplified approach can be used for systems that require validation but consist generally of nonconfigurable software (GAMP Category 3) (2). A detailed discussion and examples of the integrated validation document can be found in reference 4.
In the example presented in Table I, a simplified traceability matrix has been incorporated into the user requirements section of the integrated validation document. The user requirements are presented in tables of three columns: the first contains the unique number of the requirement (in this case generated automatically by Word); the second contains the requirement itself; and the third indicates how the requirement will be tested or verified (the traceability matrix). The example given is not from spectrometer software but from an application that controls battery-powered data loggers and analyzes the data they generate; the loggers can be used to monitor temperature in environmental storage or in packages shipped from one location to another. The section of requirements shown covers the data loggers used by the system.
Table I: An example of intended use requirements with traceability (4)
There are two traceability references shown in Table I.
The first is C, against requirements 13–15 inclusive; this traces to the calibration certificate provided by the vendor with each batch of data loggers.
The remaining requirements will be tested in the test procedure contained in section 8.2 of the document, which covers the shipping of material from a primary to secondary manufacturing site as outlined in the next section of this article.
As shown in Figure 1, traceability can also be to other phases of the life cycle, such as writing standard operating procedures or installing components; these are not illustrated in this example.
Traceability from the Functional Specification to the URS
The next example concerns an NIR spectrometer used for the identification of delivered material within a pharmaceutical warehouse, where the requirements contained in the URS are traced into a functional specification. To ensure that all requirements are broken down from the URS into the functional specification (FS), the numbers of each are listed in the two left-hand columns (Table II).
Table II: Functional specification for an NIR spectrometer used for compound identification
Note that three user requirements (5.1.1, 5.2.4, and 5.7.6) have been decomposed into eight functional requirements in the FS. This is typical as requirements are broken down further, and it is also depicted in Figure 1, where there is a one-to-many relationship as a requirement moves from the URS to the FS. Alternatively, you can avoid the need to write an FS altogether if your URS is sufficiently detailed.
Because this is a more complex system, Word auto-numbering is not used and each number must be typed into the appropriate column. In fact, Word auto-numbering should only be used with the integrated validation document, because there the traceability is linked directly to the requirement number. In all other cases the link is indirect, and auto-numbering will rapidly destroy traceability when a new requirement is inserted in a table: all subsequent requirement numbers shift, but the trace references pointing at them do not.
Linking the two specification documents as shown in the two left-hand columns has three main benefits. First, traceability is developed as the life cycle proceeds, so it is kept current, or nearly so. Second, there is a linkage between each URS requirement and the corresponding breakdown in the next document, allowing any changes to one document to be linked to the other. Third, it provides a simple mechanism to check that all user requirements have been captured, ensuring the completeness of the following document; this check is easy to automate, as the sketch below shows.
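As a sketch, assume the two left-hand columns of the FS have been exported as (URS number, FS number) pairs. Only the three URS numbers named above come from Table II; the remaining numbers are invented for illustration.

```python
# Sketch: flag URS requirements that never appear in the FS trace column.
# Only 5.1.1, 5.2.4, and 5.7.6 come from the text; the rest are invented.
urs_requirements = {"5.1.1", "5.2.4", "5.7.6", "5.8.1"}

# (URS number, FS number) pairs from the two left-hand columns of the FS
trace_pairs = [
    ("5.1.1", "6.1.1"), ("5.1.1", "6.1.2"),
    ("5.2.4", "6.2.1"), ("5.2.4", "6.2.2"), ("5.2.4", "6.2.3"),
    ("5.7.6", "6.3.1"), ("5.7.6", "6.3.2"), ("5.7.6", "6.3.3"),
]

traced = {urs for urs, _ in trace_pairs}
missing = sorted(urs_requirements - traced)
if missing:
    print("URS requirements not captured in the FS:", missing)
# URS requirements not captured in the FS: ['5.8.1']
```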
Traceability Matrix Combined with Functional Risk Assessment
Functional risk assessment (FRA) is a simpler risk analysis methodology that was developed specifically for the validation of commercially available software (5–7), and it can be developed further into a traceability matrix. The process is as follows:
The input to the process is a user requirements specification where each requirement is uniquely numbered and prioritized. All URS requirements are prioritized as either mandatory (M) or desirable (D). The mandatory assignment means that the requirement must be present for the system to operate; if "desirable" is assigned, then the requirement need not be present for operability of the system. This is shown in the first three columns of Table III.
Table III: Functional risk assessment linked with a traceability matrix for commercial software
The next stage in the process is to carry out a risk assessment of each function to determine whether it is business or regulatory risk critical (C) or not (N). This risk analysis methodology uses the tables from the URS with two additional columns (columns 4 and 5) added to them, as shown in Table III. In the fourth column from the left, each requirement has been assessed as either critical or noncritical. For a requirement to be assessed as critical, one or both of the following criteria must be met. First, the requirement's functionality poses a regulatory risk that must be managed; the basic question is: will there be a regulatory citation if nothing is done? For example, requirements covering security and access control, data acquisition, data storage, calculation and transformation of data, use of electronic signatures, and integrity of data come under the banner of critical regulatory risk (as well as good science). Second, a requirement can be critical for business reasons, for example, the performance of the system or its availability.
The FRA approach is based upon plotting the prioritized user requirements and the regulatory or business criticality together to produce the Boston Grid shown in Figure 2. Requirements that are both mandatory and critical are the highest risk (the combination of the prioritization and the business–regulatory risk). For most commercial spectrometry systems, requirements fall into either the high- or the low-risk categories. There will be a few requirements in the mandatory and noncritical quadrant of the grid, but few, if any, in the desirable but critical quadrant. This is logical: if a requirement were only desirable, why would it be critical, and vice versa? If requirements do fall in this last quadrant, it might indicate that the initial prioritization or the risk analysis was wrong, and either should be reassessed. Under the FRA, only the software requirements classified as high in the grid (mandatory and critical) are considered further in the validation of a system; no other combination (that is, low) is considered further.
Figure 2
Once the risk analysis has been completed, the advantage of the FRA approach is that a traceability matrix can be included in the same document. This is achieved by adding a fifth column that highlights where each requirement will be tested or verified in the remainder of the validation. Some of the possibilities are illustrated in the legend to Table III and in the table itself.
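Pulling the FRA together, the sketch below shows the logic of Table III in a few lines of Python: each requirement carries its priority (M/D) and criticality (C/N), the Boston Grid combination yields the risk class, and a final column records where a requirement is traced. The requirement numbers and trace references are invented for illustration.

```python
# Sketch of the FRA logic from Table III: priority (M/D) and criticality
# (C/N) combine into the Boston Grid risk class; mandatory + critical is
# "high" and is taken further in the validation. Entries are invented.
requirements = [
    # (number, priority, criticality, where tested or verified)
    ("9.1.1", "M", "C", "PQ test script TS-1"),  # access control
    ("9.1.2", "M", "N", "-"),
    ("9.2.3", "M", "C", "PQ test script TS-2"),  # audit trail
    ("9.4.7", "D", "N", "-"),
]

def risk_class(priority: str, criticality: str) -> str:
    """Mandatory and critical is high risk; every other combination is low."""
    return "high" if (priority, criticality) == ("M", "C") else "low"

for number, priority, criticality, trace in requirements:
    print(f"{number}  {risk_class(priority, criticality):4}  {trace}")
```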
How Detailed Should PQ Traceability Be?
When we trace a requirement to the user acceptance testing or performance qualification phase of a life cycle, we have a number of options for the depth of traceability. However, before we get into the detail, I'd just like to explain my terminology for PQ test documentation, to help you interpret and adapt this discussion to the names used in your organization. Typically, when I conduct PQ testing, an overview of the testing is documented in a test plan, which links to the test scripts that undertake the actual PQ testing.
A test script contains the testing for a specific area of a system (for example, security and access control, audit trail, or library functions for identification of chemicals). We will use security and access control as an example of a test script for further discussion in this section.
Each test script contains one or more test procedures, each focusing on a specific topic within the script's area. So, for our security and access control test script, we could have test procedures for the correct password and user identity combination, for the access privileges of each user type, and for account locking.
Within each test procedure are the individual test steps that will be used to test the functions and requirements of the system.
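This hierarchy (test plan, test script, test procedure, test step) is easy to picture as a nested structure, and the Trace 2–4 options discussed next simply record a requirement against successively deeper levels of it. Here is a minimal sketch using the security and access control example; the procedure and step texts are mine, not taken from a real script.

```python
# Sketch of the PQ test documentation hierarchy: a script covers one area
# of the system, contains procedures, and each procedure contains steps.
from dataclasses import dataclass, field

@dataclass
class TestProcedure:
    name: str
    steps: list[str] = field(default_factory=list)  # individual instructions

@dataclass
class TestScript:
    area: str
    procedures: list[TestProcedure] = field(default_factory=list)

security_script = TestScript(
    area="Security and access control",
    procedures=[
        TestProcedure("Password and user identity combinations",
                      ["Log on with a valid identity and password",
                       "Attempt to log on with an invalid password"]),
        TestProcedure("Access privileges for each user type",
                      ["Confirm an analyst cannot change the configuration"]),
        TestProcedure("Account locking",
                      ["Exceed the permitted number of failed logons"]),
    ],
)
```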
In essence, there are four options to discuss and debate (shown in Figure 1). Which one to choose will depend on how much time and resources you have at your disposal, but the level must be realistic, achievable, and maintainable. The options are:
Trace 1: To the performance qualification phase as a whole and the requirements would be listed in the PQ test plan.
This is the easiest option of the four: it just links the requirement to a phase as a whole. However, it is not very helpful. What happens if an auditor or inspector asks where a given requirement was tested? It is not a nice experience trying to find out under pressure. Let's be realistic and throw this option in the bin.
Trace 2: To an individual test script.
This is a better option. Depending on the complexity and risk of the system, there could be between 5 and 20 PQ test scripts; indicating the test script in which a requirement is tested narrows the search considerably, but you would still need to sort through the whole script to find where the requirement is tested. Good, but perhaps not good enough?
Trace 3: To an individual test procedure within a test script.
Within each test script there could be between one and seven test procedures, so tracing to a specific test procedure reduces the work needed to find where a requirement is tested. This is relatively easy to undertake, especially if you are using a spreadsheet to sort and manage requirements. It is my preferred option because it balances business and regulatory benefit: if asked to trace a requirement in an audit or inspection, you should be able to do so relatively easily from the 10–50 test steps that typically constitute a test procedure. This rationale is based on the manual Word and Excel tools that we currently use in our system validations.
Trace 4: To an individual test step or instruction within a test procedure.
This last option is the most specific: a requirement is linked to individual test steps, so it is easy to demonstrate exactly where it is tested. It is a feasible option for traceability; however, we also have to balance the cost and effort needed to establish and then maintain this level of detail. Personally, I believe that if you are doing this manually, it is too much work and will increase overall validation costs for little benefit, unless this level of traceability can be achieved by automation.
So those are the available options for the depth of traceability. All you have to do is decide which one you want and implement it.
There is a degree of black art involved in computer validation, and it is most apparent in the generation of the traceability matrix together with the linkage of this to the rest of the life cycle documentation and activities for each system. Note that this section does not apply to the integrated validation document because the traceability matrix is linked already to each requirement individually. If the validation team has conducted a functional risk assessment and traceability matrix as described earlier, you will have a document with between 250 and 750 requirements that have been assessed individually for priority and risk. They will be linked to different stages of the life cycle and multiple test scripts in the PQ phase of testing as well as sorted into the various traceability classes. The requirements allocated to a specific test script then need to be sorted and organized further into separate test procedures and then arranged into a logical process flow, reflecting the laboratory working practices, which will enable each test procedure to be written.
Enter, stage left, the main black art software application: Excel. This is the software I love to hate, not because it does a poor job (far from it), but because it is so easy to use and widely available that most spectroscopists prefer to use it for calculations rather than read the manual for their instrument software and use that instead. Anyway, back to Excel and its role in the traceability saga: the requirements tables from the risk assessment and traceability matrix can be copied into an uncontrolled spreadsheet. Note the word uncontrolled. This spreadsheet is used to help manage the project, but it is not a validation deliverable as such; that role belongs to the traceability matrix and, with some refinement, to the PQ test plan.
The requirements are then sorted into similar groups (for example, IQ, OQ, SOP, and individual test scripts). Once allocated to a specific group, requirements can be sorted into a logical order, as shown in the next two figures.
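This grouping step is mechanical; as a sketch, it can equally be done in a few lines of Python instead of by cutting and pasting, if you prefer scripting. The rows below are invented for illustration.

```python
# Sketch: group requirements by trace target, as the uncontrolled
# spreadsheet sort does. The rows are invented, not real project data.
from collections import defaultdict

rows = [
    ("6.1.2", "Instrument use is recorded in the equipment log", "SOP"),
    ("6.2.1", "A unique user identity and password are required", "TS-1"),
    ("6.3.5", "Supplied components are installed and documented", "IQ"),
    ("6.2.4", "Accounts lock after repeated failed logons",       "TS-1"),
]

groups = defaultdict(list)
for number, text, trace in rows:
    groups[trace].append((number, text))

for trace in sorted(groups):                # one block per group
    print(trace)
    for number, text in sorted(groups[trace]):
        print("   ", number, text)
```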
Figure 3 shows a case study from a liquid chromatography–mass spectrometry (LC–MS) validation. Here, some of the user requirements that were traced to SOPs are shown after they have been organized, by cutting and pasting within the spreadsheet, into a logical order for the user manual and the equipment log SOPs. In this case, the user manual SOP has not yet been written, and the traced requirements provide an order and structure for the new document. The equipment log SOP already exists but will be reviewed to see whether any changes need to be made. This illustrates the proactive nature of traceability: the requirements tracing to SOPs are organized, and the SOPs that need to be written or reviewed are identified, long before the system goes live. So rather than scrabbling around after the testing is completed, the procedures are available when the user acceptance testing or PQ is undertaken.
Figure 3
The second example concerns a test script for security and access control. Here, some of the requirements for a single test script have been organized into two individual test procedures, as shown in Figure 4. This approach corresponds to the Trace 3 option in Figure 1, where a requirement is traced to a test procedure within a test script. Figure 4 also shows the next stage of test script design, where the requirements in column B are used to draft the outline of a test procedure (column F) that will form the basis of the test execution instructions in the actual test script.
Figure 4
Life can be a bummer: the validation team puts great effort into writing and refining the user requirements, but when you come to the risk assessment, or even to designing the test scripts, one or more requirements turn out not to be testable, or you don't understand what they mean. Don't worry; this is a normal occurrence in validation and happens even to experienced validation personnel. The result is that at least the URS and the traceability matrix will need to be updated, with new versions authorized and released during the validation. Depending upon how far the validation has progressed when these problems are identified, considerably more documents could be impacted. When using an essentially manual process based upon Word documents, this update is laborious, and unless you are very aware of what needs to be changed, it is also error-prone. Is there a better way? Yes: validation management systems.
Validation Management Systems
There are commercial applications that can automate the computer validation process and make the whole job much easier compared with a manual validation. The application database holds the current versions of an organization's validation templates, and when a specific project is created (for example, the validation of a spectrometry system), the appropriate document templates are copied to it. All documents created during the validation, including the requirements, are managed electronically, so an author can write a document and reviewers can comment on it electronically. After incorporation of these comments, the document is approved and released electronically. Traceability can be automated from the URS onward, and if a requirement changes, the system automatically indicates which links are affected. This is the way forward in computer validation.
Figure 5 shows a screen shot from a validation management tool that illustrates how a traceability matrix can be created by the system (8). The user requirements are traced to different validation deliverables in the predefined life cycle; this is far easier when done automatically, with no copying into Excel and Word documents.
Figure 5
You'll remember from previous discussions in this column that computer validation is a journey, not an event. Therefore, once you start the process, you can't stop: validation, and as a consequence traceability, is an ongoing process throughout the life cycle, including upgrades to the system. The URS, traceability matrix, and PQ test suite are all living documents; they will change as you upgrade the software and use new functions within it, and as you capture changes in the way you use the system as analytical demands evolve over time.
Rather than thinking of this as a burden, consider it a business benefit, akin to an investment of intellectual input and time. The traceability matrix is a key document to help you manage those changes because you can see the ramifications in the other documents and gain an appreciation of the overall impact of an upgrade or change. For example, a changed requirement can be followed through the matrix to every document it touches, as the short sketch below illustrates.
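As a minimal sketch, assuming the matrix has been captured as a mapping from each requirement to the documents that test or verify it (the entries below are invented), an impact query is a single loop:

```python
# Sketch: impact analysis from the traceability matrix. Given the
# requirements touched by an upgrade, list every deliverable to review.
# The matrix entries are invented for illustration.
trace_matrix = {
    "R3": ["Configuration specification 3.2", "PQ test procedure TP-2"],
    "R6": ["Functional specification 6.4", "PQ test procedure TP-5"],
    "R7": ["Equipment log SOP"],
}

changed_by_upgrade = ["R3", "R7"]

impacted = sorted({doc for req in changed_by_upgrade
                       for doc in trace_matrix.get(req, [])})
for doc in impacted:
    print("Review:", doc)
```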
In short, you know that the validation is complete, and you can face audits and inspections more easily than without a matrix because you'll be able to answer questions and find the relevant documents faster.
R.D. McDowall is principal of McDowall Consulting and director of R.D. McDowall Limited, and "Questions of Quality" column editor for LCGC Europe, Spectroscopy's sister magazine. Address correspondence to him at 73 Murray Avenue, Bromley, Kent, BR1 3DJ, UK.
(1) R.D. McDowall, Spectroscopy 23(11), 22–27 (2008).
(2) Good Automated Manufacturing Practice Guide, Version 5 (International Society for Pharmaceutical Engineering, Tampa, Florida, 2008).
(3) R.D. McDowall, Pharmaceutical Regulatory Guidance Book, 24–30, July 2006.
(4) R.D. McDowall, Quality Assurance J., submitted for publication.
(5) R.D. McDowall, Quality Assurance J. 9, 196–227 (2005).
(6) R.D. McDowall, Spectroscopy 21(7), 20–26 (2006).
(7) R.D. McDowall, Validation of Chromatography Data Systems: Generating Business Benefits and Meeting Regulatory Requirements (Royal Society of Chemistry, Cambridge, 2005).
(8) QA Valid., Clarmon Corporation, www.clarmon.com