EP18-A
Vol. 22 No. 28
Replaces EP18-P
Vol. 19 No. 24
Quality Management for Unit-Use Testing; Approved Guideline
This guideline recommends a quality management system for unit-use devices that will aid in the
identification, understanding, and management of sources of error (potential failure modes) and help to
ensure correct results. It is intended for those involved in supervising the quality management of
laboratory testing, and it addresses issues from specimen collection through the reporting of results.
A guideline for global application developed through the NCCLS consensus process.
NCCLS... Serving the World’s Medical Science Community Through Voluntary Consensus
NCCLS is an international, interdisciplinary, nonprofit, standards-developing, and educational organization that promotes the development and use of voluntary consensus standards and guidelines within the healthcare community. It is recognized worldwide for the application of its unique consensus process in the development of standards and guidelines for patient testing and related healthcare issues. NCCLS is based on the principle that consensus is an effective and cost-effective way to improve patient testing and healthcare services.

In addition to developing and promoting the use of voluntary consensus standards and guidelines, NCCLS provides an open and unbiased forum to address critical issues affecting the quality of patient testing and health care.

PUBLICATIONS

An NCCLS document is published as a standard, guideline, or committee report.

Standard A document developed through the consensus process that clearly identifies specific, essential requirements for materials, methods, or practices for use in an unmodified form. A standard may, in addition, contain discretionary elements, which are clearly identified.

Guideline A document developed through the consensus process describing criteria for a general operating practice, procedure, or material for voluntary use. A guideline may be used as written or modified by the user to fit specific needs.

Report A document that has not been subjected to consensus review and is released by the Board of Directors.

CONSENSUS PROCESS

The NCCLS voluntary consensus process is a protocol establishing formal criteria for:

• the authorization of a project
• the development and open review of documents
• the revision of documents in response to comments by users
• the acceptance of a document as a consensus standard or guideline.

Most NCCLS documents are subject to two levels of consensus—“proposed” and “approved.” Depending on the need for field evaluation or data collection, documents may also be made available for review at an intermediate (i.e., “tentative”) consensus level.

Proposed An NCCLS consensus document undergoes the first stage of review by the healthcare community as a proposed standard or guideline. The document should receive a wide and thorough technical review, including an overall review of its scope, approach, and utility, and a line-by-line review of its technical and editorial content.

Tentative A tentative standard or guideline is made available for review and comment only when a recommended method has a well-defined need for a field evaluation or when a recommended protocol requires that specific data be collected. It should be reviewed to ensure its utility.

Approved An approved standard or guideline has achieved consensus within the healthcare community. It should be reviewed to assess the utility of the final document, to ensure attainment of consensus (i.e., that comments on earlier versions have been satisfactorily addressed), and to identify the need for additional consensus documents.

NCCLS standards and guidelines represent a consensus opinion on good practices and reflect the substantial agreement by materially affected, competent, and interested parties obtained by following NCCLS’s established consensus procedures. Provisions in NCCLS standards and guidelines may be more or less stringent than applicable regulations. Consequently, conformance to this voluntary consensus document does not relieve the user of responsibility for compliance with applicable regulations.

COMMENTS

The comments of users are essential to the consensus process. Anyone may submit a comment, and all comments are addressed, according to the consensus process, by the NCCLS committee that wrote the document. All comments, including those that result in a change to the document when published at the next consensus level and those that do not result in a change, are responded to by the committee in an appendix to the document. Readers are strongly encouraged to comment in any form and at any time on any NCCLS document. Address comments to the NCCLS Executive Offices, 940 West Valley Road, Suite 1400, Wayne, PA 19087, USA.

VOLUNTEER PARTICIPATION

Healthcare professionals in all specialties are urged to volunteer for participation in NCCLS projects. Please contact the NCCLS Executive Offices for additional information on committee participation.
Volume 22 EP18-A
NCCLS. Quality Management for Unit-Use Testing; Approved Guideline. NCCLS document EP18-A
(ISBN 1-56238-481-3). NCCLS, 940 West Valley Road, Suite 1400, Wayne, Pennsylvania 19087-1898
USA, 2002.
The NCCLS consensus process, which is the mechanism for moving a document through two or more
levels of review by the healthcare community, is an ongoing process. Users should expect revised
editions of any given document. Because rapid changes in technology may affect the procedures,
methods, and protocols in a standard or guideline, users should replace outdated editions with the
current editions of NCCLS documents. Current editions are listed in the NCCLS Catalog, which is
distributed to member organizations, and to nonmembers on request. If your organization is not a
member and would like to become one, or to request a copy of the NCCLS Catalog, contact the
NCCLS Executive Offices. Telephone: 610.688.0100; Fax: 610.688.0700; E-Mail: [email protected];
Website: www.nccls.org
Number 28 NCCLS
EP18-A
ISBN 1-56238-481-3
ISSN 0273-3099
Quality Management for Unit-Use Testing; Approved Guideline
Volume 22 Number 28
David L. Phillips, Chairholder
Paula J. Santrach, M.D., Vice-Chairholder
Anne Belanger, M.T.(ASCP), M.A.
Veronica Calvin
Cecelia S. Hinkel, M.T.(ASCP)
Wendell R. O’Neal, Ph.D.
James O. Westgard, Ph.D.
Ronald J. Whitley, Ph.D.
This publication is protected by copyright. No part of it may be reproduced, stored in a retrieval system,
transmitted, or made available in any form or by any means (electronic, mechanical, photocopying,
recording, or otherwise) without prior written permission from NCCLS, except as stated below.
NCCLS hereby grants permission to reproduce limited portions of this publication for use in laboratory
procedure manuals at a single site, for interlibrary loan, or for use in educational programs provided that
multiple copies of such reproduction shall include the following notice, be distributed without charge,
and, in no event, contain more than 20% of the document’s text.
Permission to reproduce or otherwise use the text of this document to an extent that exceeds the
exemptions granted here or under the Copyright Law must be obtained from NCCLS by written request.
To request such permission, address inquiries to the Executive Director, NCCLS, 940 West Valley Road,
Suite 1400, Wayne, Pennsylvania 19087-1898, USA.
Suggested Citation
(NCCLS. Quality Management for Unit-Use Testing; Approved Guideline. NCCLS document EP18-A
[ISBN 1-56238-481-3]. NCCLS, 940 West Valley Road, Suite 1400, Wayne, Pennsylvania 19087-1898
USA, 2002.)
Proposed Guideline
November 1999
Approved Guideline
December 2002
ISBN 1-56238-481-3
ISSN 0273-3099
Committee Membership
Wendell R. O’Neal, Ph.D. The Whisk Group & Hlth. Alliance of Greater
Cincinnati
Cincinnati, Ohio
Advisors
Ellis Jacobs, Ph.D., DABCC, FACB The Mount Sinai Hospital-NYU Medical Center
New York, New York
Active Membership
(as of 1 October 2002)
[Alphabetical three-column roster of NCCLS active member institutions, from Akershus Central Hospital and AFA (Norway) through York Hospital (PA), comprising hospitals, laboratories, and health agencies worldwide.]
Donna M. Meyer, Ph.D., President, CHRISTUS Health
Susan Blonshine, RRT, RPFT, FAARC, TechEd
Tadashi Kawai, M.D., Ph.D., International Clinical Pathology Center
Contents

Abstract
Committee Membership
Foreword
1 Introduction
2 Scope
3 Definitions
Foreword
Unit-use testing has existed for many years. Conventional methods and lyophilized or aqueous materials
were generally used for quality control and quality assurance. Because these materials were readily
available and generally accepted as capable of ensuring trueness and precision, they became part of the
quality assurance program for early unit-use test systems such as urine dipsticks.
The concepts of quality control over the last half-century have developed in two primary directions. The
first is the more familiar, in which a continuous process that generates measurements is monitored to
determine whether the process is stable or is headed out of control. The concepts of statistical quality
control were applied to the clinical laboratory with the introduction of Levey-Jennings charts, with many
subsequent statistical and interpretation enhancements developed to provide additional capabilities of
process control to the clinical laboratory measurement process. Similar quality control practices are also
used by manufacturers to release lots of reagents, including unit-use reagents. This quality control
regimen guards against continuous processes that drift or become unstable, generating trends or increased
imprecision.
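As a non-normative illustration of this process-control approach, the short sketch below classifies control results against classic Levey-Jennings limits. The mean, standard deviation, and control values are hypothetical; in practice these come from an established QC history, and many refinements of the basic 2-SD/3-SD rules exist.

```python
# Non-normative sketch: classifying QC results against Levey-Jennings limits.
# The 2-SD warning and 3-SD rejection thresholds are the classic single-rule
# limits; the mean and SD are assumed to come from an established QC history.

def levey_jennings_flags(results, mean, sd):
    """Flag each QC result relative to the established mean and SD."""
    flags = []
    for x in results:
        deviation = abs(x - mean) / sd  # distance from the mean, in SDs
        if deviation > 3:
            flags.append("reject")
        elif deviation > 2:
            flags.append("warning")
        else:
            flags.append("in-control")
    return flags

# Hypothetical control values against a mean of 100 and an SD of 2:
print(levey_jennings_flags([99.1, 103.0, 104.8, 107.2], mean=100.0, sd=2.0))
# → ['in-control', 'in-control', 'warning', 'reject']
```

A run on the hypothetical values flags the third result as a warning and the fourth as a rejection, the pattern a trending or drifting process would produce.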
The second area of quality control is acceptance sampling, where a “lot” of individual items is sampled to
determine that an acceptable level of performance has been obtained. Continuous variable measurement,
as used in process control, uses quantitative measurements which have standard deviations and means.
Acceptance sampling (in its simplest and most common applications) classifies items in two discrete
categories: defective and valid. Use of acceptance sampling protects against failures that appear to occur
randomly. These failures can occur from a continuous process that has no detectable mean shift and in
some cases no detectable increased imprecision, e.g., they can occur in conventional diagnostic analyzers
that exhibit acceptable, conventional quality control.
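As a non-normative illustration of acceptance sampling, the sketch below computes the probability that a lot is accepted under a simple single-sampling plan: draw n devices from the lot and accept when at most c sampled devices are defective. The plan parameters and defect rates are hypothetical, not recommendations.

```python
# Non-normative sketch: operating characteristic of a single-sampling
# acceptance plan. Sample n devices from a lot; accept the lot when at most
# c sampled devices are defective. Binomial model; all numbers hypothetical.
from math import comb

def accept_probability(n, c, defect_rate):
    """P(accept) = P(at most c defectives among n sampled items)."""
    return sum(comb(n, k) * defect_rate**k * (1 - defect_rate)**(n - k)
               for k in range(c + 1))

# A plan that samples 20 units and accepts only on zero defectives:
for rate in (0.01, 0.05, 0.10):
    print(f"true defect rate {rate:.0%}: P(accept) = {accept_probability(20, 0, rate):.2f}")
```

Even this strict hypothetical plan accepts a lot with a 5% defect rate roughly a third of the time, which shows why large samples are needed to detect low defect rates and why full acceptance sampling is impractical for clinical laboratories.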
In the clinical laboratory, only the first of these two general areas has found wide application, whereas
acceptance sampling is sometimes used by manufacturers in release criteria for reagent lots. With the
introduction of unit-use devices for clinical sample testing, it is necessary to incorporate the concepts of
the second type of quality control. The assumptions and implications of each approach are different, and
it is now necessary to combine both approaches for many of the new in vitro devices now in the
marketplace. Two varieties of systems are currently in use for quality control of the unit-use device. One
system consists of self-contained unit-use disposable devices; the other is a combination of a unit-use
disposable device (test strip, cassette, disk, card, etc.) and a reader (reflectance meter, fluorescence,
spectrophotometry device, etc).
No conventional quality control (QC) method and material can completely control any test system. For
some devices, the quality control used in clinical laboratories to detect process changes may be less
relevant for unit-use systems, assuming that the manufacturer has carried out conventional quality control
during manufacturing. This is because the additional “process” that takes place in a conventional
diagnostic analyzer at a clinical laboratory has already occurred for a unit-use system in the
manufacturing environment, rather than the clinical laboratory. Acceptance sampling, while impractical
for clinical laboratories, is also carried out by manufacturers when appropriate.
Conventional quality assurance and quality control methods in and of themselves do not assure quality. A
one-size-fits-all or prescribed quality control testing protocol such as “two levels per day of use” may not
be appropriate for all testing systems. The diversity among regulatory requirements, accreditation
practices, and user needs coupled with the financial aspects of this QC method led to the formation of the
NCCLS Subcommittee on Unit-Use Testing.
It is the subcommittee’s intent to provide a comprehensive and flexible guideline that will enable users,
manufacturers, and regulators to identify potential sources of errors in unit-use test systems and
implement processes to manage these errors using new quality management models.
The subcommittee has limited the discussions within this document to unit-use test systems. While it is
the committee’s expectation that the guideline will be used primarily to address the issues around point-
of-care (POC) devices that utilize single-use disposables, EP18 should not be considered as exclusive to
unit-use systems. However, as these concepts are further refined with actual experience, an additional,
perhaps broader-based guideline could be undertaken to address multiuse systems and include all aspects
of statistical process control and error reduction.
Key Words
Quality assurance, quality control, quality management, quality system, unit-use system
A Note on Terminology
NCCLS, as a global leader in standardization and harmonization, is firmly committed to achieving global
harmonization wherever possible. Harmonization is a process of recognizing, understanding, and
explaining differences while taking steps to achieve worldwide uniformity. NCCLS recognizes that
medical conventions in the global metrological community have evolved differently in the United States,
Europe, and elsewhere; that these differences are reflected in NCCLS, ISO, and CEN documents; and that
legally required use of terms, regional usage, and different consensus timelines are all obstacles to
harmonization. In light of this, NCCLS recognizes that harmonization of terms facilitates the global
application of standards and is an area of immediate attention. Implementation of this policy must be an
evolutionary and educational process that begins with new projects and revisions of existing documents.
In the context of this guideline, it is necessary to point out that several terms are used differently in the
USA and other countries, notably those in Europe.
In order to align the usage of terms to ISO, the term "trueness" is used in this document when referring to
the closeness of the agreement between the average value from a large series of measurements and an
accepted reference value. The term "accuracy," in its metrological sense, refers to the closeness of the
agreement between the result of a (single) measurement and a true value of a measurand, thus comprising
both random and systematic effects.
[Matrix relating the Quality System Essentials (Organization; Personnel; Equipment; Inventory; Process Control; Documents & Records; Information Management; Occurrence Management; Assessment; Process Improvement; Service & Satisfaction; Facilities & Safety) to the testing process.]
Adapted from NCCLS document HS1—A Quality System Model for Health Care.
1 Introduction
Unit-use testing presents unique challenges to manufacturers, users, regulators, and accrediting agencies
in terms of quality control and quality assurance. Conventional schemes of quality control, with strictly
defined materials and frequency, are not always applicable to unit-use test systems due to the very nature
of these devices. Furthermore, quality assurance and oversight take on new dimensions with the
utilization of many of these test systems outside traditional laboratory test settings, and with test
performance by a variety of healthcare personnel.
Even though the committee considered the use of all unit-use (point-of-care) test systems in this
guideline, the primary focus is the use of these unit-use systems within professional settings, i.e.,
hospitals, physician offices, etc. and not for patient self-testing or in-home testing. It is in the
professional settings that the healthcare professional has assumed the responsibility of ensuring
the quality of the testing system. Moreover, these testing sites are subject to regular and routine
inspections or surveys by various accrediting agencies. Therefore, some guidance as to how to deal
with various test system errors is important. It is no less important in self-testing situations, but it is the
patient, along with his/her physician, who is responsible for the quality of the testing system. Further, no
organization requires or monitors the patient's compliance with any quality system. However,
as technology becomes more advanced by making test systems simpler to operate for the layperson, some
portions of this guideline may become appropriate for review and use by the individual consumer.
The subcommittee based its approach on the following guiding concepts:

• Unit-use devices are extremely diverse in their technology, design, and function. Every unit-use test
system is subject to certain preanalytical, analytical, and postanalytical errors. The relative
importance and likelihood of these errors varies with the device, the specimen, the user, and the
environment. In addition, a high level of variability exists in terms of skill and knowledge level
among the end users of the unit-use device as opposed to the user in the hospital or commercial
laboratory. While it is evident that all in vitro diagnostic (IVD) devices are subject to these issues,
this document focuses strictly on unit-use test devices and may be expanded in future versions.
• A single quality control/quality assurance regimen cannot be developed to cover all unit-use test
systems (as well as most, if not all IVD systems) and detect all possible errors.
• The principles of traditional, statistical quality control need to be customized and adapted for the unit-
use test system. It is impractical to consume large numbers of unit-use systems needed to detect the
low rate of defects found in properly designed, manufactured, shipped, and stored unit-use systems.
A multitier approach to quality control and quality assurance has been proposed within this document.
This approach provides the user with the means to inspect goods upon arrival through the use of
limited acceptance sampling to detect variables such as shipping conditions, lot changes, and new
operators. It also allows for further quality assurance testing when device results deviate from
established QC ranges, and it allows for an assessment of operator competency. Periodic
quality control also serves as an indicator of operator competency.
• Quality control/assurance programs may evolve with increasing experience with the unit-use test
system. These programs should focus on errors which may occur relatively frequently and/or have
the potential for significant clinical impact.
The simpler an in vitro device is to use, the more demanding the design requirements for
robustness of the analytic process and the stability of the system.
Based upon the assessment of the guiding concepts outlined above, the subcommittee based this guideline
on a systems approach to quality management.1 The phases of the testing process are defined, and the
potential sources of error within each phase are identified.
A generic “sources of error” matrix is presented and suggestions for practical management/monitoring are
described. The expectation is that a manufacturer will evaluate this list of potential failure modes during
the design and development of each new product and identify those that are relevant. Failure mode,
effects, and criticality analysis (FMECA)2 and hazard analyses should consider whether each of the listed
failure modes is relevant to the device under design. The device’s design should lessen to the extent
possible any resulting hazards that present an unacceptable risk to patients, users, or other individuals.
Any remaining failure modes shall be clearly and unambiguously disclosed in the product
labeling/instructions for use. Clinical users can develop a comprehensive, yet individualized, quality
management program based on the unit-use test system and the specific setting in which it will be
utilized. Regulatory and accrediting agencies can use both the generic and customized matrices to assess
the appropriateness of these programs.
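As a non-normative illustration of how a failure mode, effects, and criticality analysis might prioritize the entries of such a matrix, the sketch below orders failure modes by a risk priority number (severity × occurrence × detectability), a common FMECA-style heuristic. The failure modes and scores are invented for illustration; actual FMECA scales and acceptance criteria are defined by the manufacturer.

```python
# Non-normative sketch: ranking hypothetical failure modes by a risk priority
# number (RPN = severity x occurrence x detectability), a common FMECA-style
# heuristic. All failure modes and scores below are invented examples.

def rpn(mode):
    """Risk priority number for one failure mode (1-10 scales assumed)."""
    return mode["severity"] * mode["occurrence"] * mode["detectability"]

def rank_failure_modes(modes):
    """Order failure modes from highest to lowest risk priority number."""
    return sorted(modes, key=rpn, reverse=True)

modes = [
    {"name": "short sample volume",       "severity": 7, "occurrence": 5, "detectability": 3},
    {"name": "expired reagent strip",     "severity": 8, "occurrence": 3, "detectability": 6},
    {"name": "contaminated meter optics", "severity": 6, "occurrence": 2, "detectability": 7},
]

for mode in rank_failure_modes(modes):
    print(f'{mode["name"]}: RPN {rpn(mode)}')
```

The ranking directs mitigation effort (design changes, labeling, user QC checks) toward the failure modes that combine high severity with high likelihood and poor detectability.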
The key to the success of this approach is cooperation and open exchange of information among these
groups. In this way, high-quality patient care can be delivered through the competent use of accurate and
reliable unit-use test systems.
2 Scope
The intent of the subcommittee is to develop a guideline for establishing a quality management system for
unit-use test systems that is practical to implement; applicable to various devices and settings; and
scientifically based so that “sources of error” are identified, understood, and managed. This system will
aid device manufacturers and users in assuring correct results.
For the purposes of this guideline, unit-use test systems have the following characteristics:

• The container in which the test is performed is always discarded after each test.
• Reagents, calibrators, and wash solutions are typically segregated for a single test; there is no interaction of reagents, calibrators, and wash solutions from test to test.
The scope of the guideline comprises testing components, locations, and users. These include:

• Testing Components
  - Specimen collection
  - Sample presentation
  - Instrument/reagents
  - Result/readout/raw data
  - Preliminary review
  - Integration into the patient record
This guideline applies to unit-use test systems utilized by healthcare providers in any setting.
3 Definitions (a)
Error, n - A test result for which the difference between the measured value and the true value is larger than laboratory-specified or manufacturer-specified tolerances; that is, a result that could lead to inappropriate patient management; NOTES: a) This definition is a combination of VIM 3.10 “measurement error” and VIM 5.21 “maximum permissible error”; b) In this document, the term “error” is used broadly to include all potential failure modes. This includes measurement error (the difference between the test result and the true value), which may or may not exceed specified tolerances; it also includes operator mistakes, instrument failures or defects, and environmental conditions that can create “errors” as defined above. Where possible, the document states the exact type of error, but does not do so where the meaning is clear and the exact term would be unnecessarily wordy or awkward.
Failure mode, effect, and criticality analysis (FMECA), n – A systematic review of a system or product involving three phases: identifying potential failures, assessing the impact of each failure on total system/product performance, and assessing the criticality of each failure; NOTES: a) The analysis also includes a review of the steps taken to guard against the failure or to mitigate its effect; b) The procedure is sometimes referred to as a “bottom-up” analysis; c) If criticality or severity is not part of the analysis, the term FMEA is used.
Fault tree analysis (FTA), n – A systematic review of a system or product to identify sources of potential failure; particularly useful in safety and reliability analyses; NOTES: a) First, a list of potential failure modes is developed. For each, an analysis is conducted to (i) determine the primary causes; (ii) identify the secondary causes behind the primary causes; and (iii) identify possibilities to mitigate the primary and secondary causes; b) The procedure is sometimes referred to as a “top-down” analysis; c) The causes for a top-level event are enumerated through a series of Boolean logic gates.
Hazard analysis, n - A fault tree analysis used in medical devices, whereby the top-level event is related
to patient safety, operator safety, or an environmental hazard.
Quality assurance, n - Planned and systematic activities to provide adequate confidence that
requirements for quality will be met.
Quality control, n - Operational techniques and activities that are used to fulfill requirements for quality.
Quality management, n - All activities of the overall management function that determine the quality
policy, objectives and responsibilities, and implement them by means such as quality planning, quality
control, quality assurance, and quality improvement within the quality system.
Quality system, n - The organizational structure, resources, processes, and procedures needed to
implement quality management.
Source of error, n – A component of the measurement method, device, or operator practice that creates
risk for patients, users, or other individuals.
Source of error matrix, n – A generic FMECA diagram prepared for unit-use medical devices.
Trueness, n - The closeness of agreement between the average value obtained from a large series of test
results and an accepted reference value.
(a) Some of these definitions are found in NCCLS document NRSCL8—Terminology and Definitions for Use in NCCLS Documents. For complete definitions and detailed source information, please refer to the most current edition of that document.
An NCCLS global consensus guideline. ©NCCLS. All rights reserved. 3
Number 28 NCCLS
Unit-use system, n – Testing system where reagents, calibrators, and wash solutions are typically
segregated as one test, without interaction of reagents, calibrators, and wash solutions from test to test,
and the container where the test is performed is always discarded after each test.
The “sources of error” matrix can be used as a starting point. The manufacturer’s responsibility is to
design the system to eliminate or minimize sources of error as much as possible, then to disclose those
that remain. Additional sources of error that are not on the matrix may be identified. Analyte-specific, as
well as system-specific, sources of error should be included. Once the applicable factors have been
identified, the manufacturer should develop recommendations for managing these sources of error with
consideration given to the nature of the error’s impact, the device capabilities, any operator requirements,
and the type and frequency of applicable quality monitoring. The risk analysis, which may include items listed in Appendix A, should be reviewed, and those risks not mitigated by the manufacturer should be disclosed in the information supplied by the manufacturer. Specific details on quality control, such as the level and/or frequency of testing, should also be provided in the information supplied by the manufacturer.
The user is responsible for developing a quality management system that is specific to the testing system and the setting in which each device is used. For the test system to perform within its intended use, performance characteristics, and limitations, the user must follow the manufacturer’s directions. The user bears responsibility for establishing performance characteristics if deviating from the manufacturer’s instructions.
A quality assurance program details how to identify and manage possible sources of error associated with clinical testing. The user is responsible for developing a documented quality assurance program appropriate for each testing system. The sources of error matrix may be used as a checklist or tool to help define a facility’s quality assurance (QA) program and to identify potential failure modes so that they can be addressed by the manufacturer or by the user.
The user should carefully review the manufacturer’s instructions for use and identify applicable failure modes that the laboratory’s QA program must address. A completed “sources of error” matrix will define all possible sources of error associated with a particular system and how to monitor, detect, and manage (minimize/eliminate) identified sources of error. A separate “sources of error” matrix should be completed for each type of unit-use device utilized by each facility. The suggested steps for completing the matrix are:
(1) The user should review the manufacturer’s instructions for use and identify any sources of error that
the laboratory must control. If the customer needs additional information and recommendations, they
should contact the manufacturer.
(2) Compare the manufacturer’s summary of failure modes to the sources of error matrix (Appendix A)
information to determine if the manufacturer’s information is compatible with the analytical/clinical
needs and test setting. Add omitted and additional possible sources of error as they are determined to
be relevant for the use and setting.
(3) Complete all matrix columns for all identified sources of error. Identify where additional quality control measures are necessary and how these sources should be managed. Obtain supporting data as needed from the manufacturer. Which quality monitors to use, and at what frequency to implement them, is determined by factors specific to the facility. Such factors may include: regulatory requirements; laboratory director specifications; device sensitivity/specificity; the device’s past performance record; competency level of testing operators; reporting mechanisms; and frequency of device utilization.
(4) Revise, add, delete and/or create QA programs, standard operating procedures, training protocols, and
other facility policies as necessary based on the information derived from the completed sources of
error matrix.
(5) Implement all applicable quality monitoring (see Section 5.2.6) at the frequencies specified in the
“sources of error” matrix.
(6) Periodically review and evaluate the quality management system (QMS) to ensure that sources of error are identified and managed at an acceptable rate. Adjust quality monitors and monitoring frequencies as needed to improve outcomes.
The “sources of error” matrix (see Appendix A) is a table listing potential failure modes in the preanalytical, analytical, and postanalytical phases of unit-use device testing that can cause erroneous results. The chart may be completed with information from the manufacturer and user describing the relevance of each applicable source of error. Some items may not apply to a particular test type or format.
The purpose of the “sources of error” matrix is to aid the manufacturer and user in considering and
identifying possible sources of error applicable to a particular unit-use test system. Once a source of error
is identified, its relevance can be assessed to determine how and at what frequency it will be monitored.
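For illustration, one row of such a matrix can be represented as a simple record. This is a minimal sketch assuming the column set shown in Appendix A; the field names and the example values are hypothetical, not prescribed by the guideline.

```python
from dataclasses import dataclass

@dataclass
class SourceOfErrorEntry:
    """One row of a "sources of error" matrix (columns as in Appendix A)."""
    potential_source: str                 # e.g., "Outdated Reagents"
    applicable: bool                      # Applicable Y/N?
    nature_of_impact: str                 # how the result is perturbed
    training_procedure_requirements: str  # Training/Laboratory Procedure Requirements
    quality_monitoring: str               # Applicable Quality Monitoring
    monitoring_frequency: str             # Frequency of Monitoring

# Hypothetical example row
row = SourceOfErrorEntry(
    potential_source="Outdated reagents",
    applicable=True,
    nature_of_impact="Potentially biased results",
    training_procedure_requirements="Check expiration date before each test",
    quality_monitoring="Liquid QC",
    monitoring_frequency="Each new lot",
)
print(row.applicable)  # True
```

A collection of such records, one per identified failure mode, gives the user a machine-checkable version of the completed matrix.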
The “Potential Sources of Error” column groups entries into six categories, each corresponding to a different phase of the testing process. This grouping provides finer discrimination than the traditional classification of preanalytical, analytical, and postanalytical errors.
Appendix B illustrates a sample "sources of error matrix" for a typical unit-use test system. Typically, a
specific source of error matrix is a more abbreviated and focused evaluation of the specific system than
that which is demonstrated in the appendix.
5.2.1.1 Specimen Collection

Specimen collection applies to possible errors occurring during patient preparation, sample collection, transport, and storage prior to measurement. This includes inappropriate sample selection (e.g., wrong sample type or presence of known interferents).
5.2.1.2 Sample Presentation

Sample presentation applies to possible errors occurring during specimen preparation (e.g., during dilution) and during mixing with reagents or introduction into the unit-use device.
5.2.1.3 Instrument/Reagents
Instrument/reagent applies to possible errors occurring during measurement, due to problems with
instrument, reagent, or user procedure (e.g., outdated reagent or electromagnetic interference).
5.2.1.4 Results/Readout/Raw Data

Results/readout/raw data applies to potential errors occurring at the conclusion of the measurement phase (e.g., incorrect instrument mode setting or misinterpretation of a visual result by the user).
5.2.1.5 Preliminary Review

Preliminary review applies to potential errors occurring after measurement is complete, while judging the validity of the measurement process and results (e.g., failure to recognize an alert value, an instrument diagnostic/malfunction warning, or physiologically impossible results).
5.2.1.6 Integration into the Patient Record

Integration into the patient record applies to potential errors occurring during sample result storage and transfer to patient medical records (e.g., transcription mistakes).
The “Applicable Y/N?” column indicates whether the identified source of error applies to the unit-use device system.
The “Nature of Impact” column describes how the result is perturbed or impacted by the source of error. In the case of a multianalyte test, a pattern may emerge and should be described. (For example, does air contamination of a blood gas sample cause low PCO2? Does it cause elevated pH? Does it cause biased PO2?)
This is a description of how and when the instrument or device prevents or detects the error. (For
example, does the device include visual indicators of reagent viability as a means of prevention? Does
the device include low-battery alarms as a means of detection?)
The “Training/Laboratory Procedure Requirements” column describes what the user must do in developing or modifying laboratory procedures and training requirements, beyond the manufacturer’s instructions for use, to address error detection and elimination. Use of this information will promote training that ensures the safe, effective handling and operation of the unit-use test system. It includes training in all aspects of the measurement, ranging from specimen collection and handling to integration of results into the patient record.
The “Applicable Quality Monitoring” column describes the quality monitoring and assessment appropriate to minimize and/or detect errors that have not been prevented by device design. This includes quality assurance procedures to measure and monitor control results, monitor proficiency testing (internal and external), review records, and assess personnel for competency and the need for retraining.
The user is responsible for completing the “Frequency of Monitoring” column to ensure that the source of error is monitored at a frequency that optimizes error detection. The user should consider the nature of the impact of the error and the cost of detection. Additional information on the detection and impact of an error can be obtained by using a failure mode, effects, and criticality analysis (FMECA). The manufacturer may make recommendations in this column, but it is the responsibility of the user to make the final determination in accordance with all relevant regulations.
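As one way to rank failure modes for monitoring priority, an FMECA commonly combines severity, occurrence, and detection scores into a risk priority number. The 1-10 scales and the example scores below are illustrative assumptions, not values given by this guideline.

```python
def risk_priority_number(severity: int, occurrence: int, detection: int) -> int:
    """Rank a failure mode: a higher RPN suggests higher monitoring priority.

    Each factor is scored on a 1-10 scale (an illustrative convention);
    'detection' is scored high when the error is hard to detect.
    """
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("scores must be on a 1-10 scale")
    return severity * occurrence * detection

# Hypothetical failure mode: outdated reagent, occurs rarely, hard to detect
print(risk_priority_number(severity=8, occurrence=3, detection=7))  # 168
```

Failure modes with the highest numbers would then be candidates for more frequent quality monitoring.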
Each unit-use test should have a written procedure that covers all aspects of the testing cycle. This procedure should be written in language that is familiar to the intended users and should be readily available to users when testing is performed. (Please refer to the most current version of NCCLS document GP2—Clinical Laboratory Technical Procedure Manuals for additional information.) The procedure should include the elements applicable to the specific unit-use test.
In general, sources of error that are detected by the operator, dependent on proper technique, and/or
managed by training should be contained in the procedure. A system should exist to ensure that
procedures are current and that procedural changes are made in a controlled fashion.
Operators performing unit-use tests should have training in the systems involved or have worked under
the supervision of an experienced laboratorian until they have satisfactorily demonstrated proficiency for
each procedure. The degree of training depends upon both the background of the individual who will be
performing the testing and the analytical systems being employed. When selecting a system, the level of training required to implement a new method or instrument (e.g., due to the complexity of the system or the degree of technique dependence) should be considered.
Training should cover the following subjects. The significance of each topic depends upon the personnel
and the test system being used.
Training may be available from the manufacturer. The use of manufacturer-provided training is
recommended. Site-specific needs and procedures should be considered and the training supplemented to
address them. Some form of competency assessment should be included in order to determine the
effectiveness of training.
Evaluating the competency of all testing personnel and ensuring the staff’s continuing competency to
perform tests and report tests promptly, accurately, and proficiently are essential components of a quality
testing system. Individuals must demonstrate competency in performing the procedure, and evidence of
this competency must be documented. Evaluation of the competency of the staff may include, among
other procedures, the following:
• direct observation of routine patient test performance, including patient preparation (if applicable),
specimen handling, specimen processing, and testing;
• review of intermediate test results or worksheets, QC records, proficiency testing results, and
preventive maintenance records;
• assessment of test performance through testing of previously analyzed specimens, internal blind
testing samples, or external proficiency testing samples;
• evaluation and documentation of the performance of persons responsible for testing, with provision of such documentation to the testing personnel manager.
If a source of operator procedure error has been identified that is not detected by the system, periodic
liquid control testing should be included in the evaluation of user competency. The recommended QC
scheme/procedure is indicated below.
• Frequent operators (those performing the tests at least once per week) would perform traditional liquid (i.e., not electronic) quality control at least once per week.

• Operators who perform the tests less frequently (less than once per week) would perform quality control on every day of testing.

These recommendations serve as a starting point for quality control testing frequency and may be modified by each institution based on its data and experience. As quality control testing intervals lengthen, reagent stability should be considered.

• Users should follow the manufacturer’s recommendation for periodic liquid QC; if the manufacturer does not provide frequency information, the default frequency should be no longer than 1/10th of the labeled stability of the product.

• If secondary storage conditions apply, QC should be run at the manufacturer’s recommended interval or approximately midway through the secondary storage interval.

• For unit-use devices with reagent stability greater than one year, this recommendation means (in practical terms) that QC testing should be performed no less than approximately once every month.
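The default-frequency rule above can be sketched as a small helper. The roughly-monthly cap for long-stability reagents reflects the practical reading in the last bullet; the 31-day cap value itself is an assumption for illustration.

```python
def default_qc_interval_days(labeled_stability_days: int, cap_days: int = 31) -> int:
    """Default liquid QC interval when the manufacturer gives no frequency:
    1/10th of the labeled stability, capped at roughly one month for
    reagents stable longer than a year (the cap value is an assumption)."""
    if labeled_stability_days <= 0:
        raise ValueError("labeled stability must be positive")
    return min(labeled_stability_days // 10, cap_days)

print(default_qc_interval_days(540))  # 31 (18-month stability -> roughly monthly)
print(default_qc_interval_days(90))   # 9
```

Each institution would still adjust this starting point based on its own data and experience, as the text notes.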
Testing personnel should be assessed for competency at least annually. Sources of operator error that
have a critical impact on the test result should be included in each assessment. Competency testing
should occur more frequently if individuals are having difficulty with test performance.
The goal of process control is to verify that all system components are performing as specified by the
manufacturer and at a quality level acceptable to the user. System components include the operator, the
instrument, the reagents, the sample and the environment. Various forms of controls test different parts of
the process. (For additional procedures for test validation, refer to the most recent version of GP29—
Assessment of Laboratory Tests When Proficiency Testing is Not Available.)
At a minimum, process controls should be performed as specified by the manufacturer. Users may
implement additional controls. The types selected should check the components most vulnerable to
failure. Periodically, material should be used that verifies all system components at one time under usual
testing conditions. The composition and frequency of such testing should be defined by considering the
following characteristics:
When there is a change in the test system (e.g., a new lot of reagents, a change in the environment, or a
new test operator), appropriate quality control testing should be performed to show that the change is
acceptable. Sufficient replicate testing must be done to ensure that a problem caused by the change will be detected. The more precise an assay, the fewer replicates are necessary to detect a problem.
The laboratory director determines the maximum acceptable shift in the results (the effect). For example,
if the assay is so precise that the standard deviation or coefficient of variation is only one-third of the
effect, then only triplicate measurements need to be made for acceptance testing. (Please refer to the most
current version of NCCLS document EP7—Interference Testing in Clinical Chemistry.) This testing
should be done with two levels of control material or consistent with manufacturer’s recommendations.
Patient samples may be used for acceptance testing, particularly if the test method is subject to a matrix
effect.
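The triplicate example can be generalized: with n replicates, the standard error of the mean shrinks as SD/sqrt(n). The sketch below assumes, purely for illustration, a detection criterion that the maximum acceptable shift should be at least five standard errors of the replicate mean, a hypothetical threshold chosen so that the SD-equals-one-third-of-the-effect case in the text yields triplicates.

```python
import math

def replicates_needed(sd: float, effect: float, k: float = 5.0, n_max: int = 20) -> int:
    """Smallest n such that effect >= k * (sd / sqrt(n)).

    k = 5 is a hypothetical detection criterion, chosen so that
    sd = effect / 3 (the example in the text) yields n = 3.
    """
    for n in range(1, n_max + 1):
        if effect >= k * sd / math.sqrt(n):
            return n
    raise ValueError("assay too imprecise for practical acceptance testing")

# SD equal to one-third of the maximum acceptable shift -> triplicates
print(replicates_needed(sd=1.0, effect=3.0))  # 3
```

The laboratory director's chosen effect size and the assay's actual precision determine the replicate count in practice.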
Some form of ongoing quality control should be performed periodically with the goals of assessing
system stability and operator competency (see Section 6.2). This ongoing quality control may involve
testing of control material, testing of split patient samples, and/or testing of external proficiency samples.
The trueness of unit-use devices is initially established by recovery and interference studies, and by
comparison to a method that is traceable to a recognized standard or to another trueness basis. These tests
are performed by the manufacturer as a part of design control and government submission processes.
Periodic comparison studies ensure that systematic errors do not gradually increase and go undetected by
conventional quality control systems. In a split-sample study, clinical specimens are collected, split into
aliquots, and analyzed using both methods. If possible, specimens should be fresh, cover the analytical
range of interest, and represent a variety of medical conditions. A split-sample study may be employed
when stable control materials are not available, or as a supplemental procedure when the source of a
measurement error cannot be identified from available control data. The frequency of split sampling
should be established by each institution.
6.3.4.1 Electronic QC
Electronic quality control (EQC) devices are test simulators that monitor and/or report on the function of
the test system. Some EQC devices provide numerical results as a simulated test. Others provide a
“pass/fail” based on the performance of the device being monitored.
When a device is equipped with electronic QC, the manufacturer should identify the parts of the device that are tested by the EQC. The user should use this information to determine any additional errors in the overall testing process that still need to be tested for, and add them to the QC scheme. When the test system and the alternate QC are separate components, the guidance above applies; if the components are integrated, follow the instructions in the second paragraph above.
Single-unit devices that are self-contained (e.g., pregnancy tests) require no maintenance by the tester. Single-unit devices or cartridges used in combination with other devices or readers, such as reflectance meters, should be maintained according to the manufacturer’s procedures. Examples of preventive maintenance may include, but are not limited to, the following:
• periodic cleaning;
• frequent pipet checks;
• part replacements (e.g., electrical, mechanical); and
• calibration.
When preventive maintenance is performed, it must be clearly documented in the system records.
All types of unit-use testing should be enrolled in a proficiency testing program if one is available. Alternatively, split patient samples may be used in a similar fashion. When such specimens are treated like routine patient samples, proficiency samples can provide an overall assessment of the testing process.
Delta checks consist of a comparison of the patient’s current test result to the patient’s last result, looking
for a significant difference. What defines a significant difference depends on the analyte and the
precision of the method and is determined by the staff at each facility. If a significant difference is
detected, the result is then correlated to the patient’s current clinical condition. A significant difference in
a test result in a clinically stable patient may indicate a problem with the measurement.
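A delta check reduces to a single comparison against a facility-defined limit. The analyte and the limit below are hypothetical examples; as the text notes, each facility determines its own limits based on the analyte and the precision of the method.

```python
def delta_check_flag(current: float, previous: float, limit: float) -> bool:
    """Flag the result when |current - previous| exceeds the facility's limit."""
    if limit <= 0:
        raise ValueError("limit must be positive")
    return abs(current - previous) > limit

# Hypothetical potassium results (mmol/L) with a 1.0 mmol/L delta limit
print(delta_check_flag(current=5.6, previous=4.1, limit=1.0))  # True
```

A flagged result would then be correlated with the patient's current clinical condition before any conclusion is drawn about the measurement.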
Environmental monitoring encompasses all conditions surrounding the use of a device/method that ultimately determine test performance. It is essential to recognize, monitor, and establish limits for the environmental conditions associated with a testing device/method. A list of the more obvious environmental monitoring topics is in the “sources of error” matrix.
A device/method manufacturer has a responsibility to identify environmental factors that could potentially impact test performance under normal and usual operating conditions.
The user has a responsibility to identify environmental factors that may impact test performance, but may
not be identified by the manufacturer. When such factors are identified, the user must determine the
limits and frequencies at which to monitor identified factors to ensure optimal device/method
performance.
Users are responsible for adhering to any and all applicable regulatory requirements associated with a
particular device/method. Regulatory requirements may include environmental factors that must be
monitored at specified frequencies and within certain limits. In addition, users have a responsibility to
provide quality feedback to manufacturers to enable them to correct design deficiencies and support
continuous product development.
Adverse events are required to be reported to regulatory agencies in some countries. Users should also
report them to the manufacturer, along with other problems with product quality such as defective
devices, inaccurate or unreadable product labeling, packaging or product mix-up, or stability problems,
etc. Manufacturers are obligated under quality system standards to investigate all complaints, take corrective and preventive action where appropriate, and improve product design.
6.5 Auditing
The purpose of periodic auditing is to search for concealed or not immediately apparent problems in the testing cycle that need improvement or corrective action. Most often, this quality monitoring method is used for record review, such as review of QC records and records of test results. Auditing may be particularly helpful in assessing test-reporting mechanisms to see whether test results are actually being recorded in the patient’s medical record. An audit may cover all aspects of the testing cycle or may focus on one particular portion. It can reveal whether a problem exists, how frequently it occurs, and why it occurs. Audits may be performed on a regular, scheduled basis or may be initiated in response to a reported problem. Prior to the audit, a threshold for acceptable performance should be determined. If the audit yields findings that fall below the threshold, quality improvement or corrective action should be undertaken. Solutions should ultimately be assessed for effectiveness in improving performance.
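The threshold comparison described above is straightforward to express. The 95% threshold and the audit counts in the example are hypothetical, since each facility sets its own threshold before the audit.

```python
def audit_needs_action(conforming: int, total: int, threshold: float = 0.95) -> bool:
    """True when the audited conformance rate falls below the acceptance
    threshold, signalling that quality improvement or corrective action
    should be undertaken."""
    if total <= 0 or conforming < 0 or conforming > total:
        raise ValueError("invalid audit counts")
    return conforming / total < threshold

# Hypothetical audit: 46 of 50 audited results were found in the patient record
print(audit_needs_action(46, 50))  # True (92% < 95%)
```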
References

1. ANSI/ASQC Q9004-1-1994. Quality Management and Quality System Elements - Guidelines.

2. MIL-STD-1629A. Procedures for Performing a Failure Mode, Effects, and Criticality Analysis. Philadelphia, PA: Document Automation and Production Service; 1980.

3. ISO 8402:1994. Quality Management and Quality Assurance - Vocabulary. Geneva, Switzerland: International Organization for Standardization; 1994.

4. Motschman TL, Santrach PJ, Moore SB. Error/incident management and its practical application. In: Duckett JB, Woods LL, Santrach PJ, eds. Quality in Action. Bethesda, MD: American Association of Blood Banks; 1996.

5. ISO 13485:1996. Quality Systems - Medical Devices - Particular Requirements for the Application of ISO 9001. Geneva, Switzerland: International Organization for Standardization; 1996.
Additional Reference
Appendix A. Example of a “System-Specific Sources of Error” Matrix
Potential Sources of Error | Applicable Y/N? | Nature of Impact | Training/Laboratory Procedure Requirements | Applicable Quality Monitoring | Frequency of Monitoring
1.6.1 Hematocrit Too High or Too Low
1.6.2 Oxygen Too Low or Too Unstable
1.6.3 Medications Interfere with Method
1.6.4 Lipemia
1.6.5 Dilute Urine
1.6.6 Dehydration/Hemodilution
1.6.7 Shock
1.7 Improper Patient Preparation
2 Sample Presentation
2.1 Incorrect Procedure/Technique
2.1.1 Contamination
2.2 Incorrect Sample Presented
2.2.1 Sample Type
2.2.2 Failure to Appropriately Dilute Sample
2.2.3 Failure to Remove Excess Particulate Matter
2.2.4 Incorrect Sample Temperature
2.2.5 Improper Handling of Stored Specimens
2.3 Long Delay from Collection to Analysis
2.4 Sample Inadequately Mixed
2.5 Sample Inadequately Mixed with Reagents
2.6 Inappropriate Amount of Sample Presented
2.6.1 Insufficient Volume
2.6.2 Excessive Volume
2.7 Introduction of Air Bubbles
2.8 Incorrect Patient Identification Information Entered into Instrument
3 Instrument/Reagents
3.1 Adverse Environmental Conditions
3.1.1 Temperature
3.1.2 Humidity
3.1.3 Shock/Vibration
3.1.4 Static Electricity
3.1.5 Radio Frequency Interference/Electromagnetic Interference
3.1.6 Light Intensity
3.1.7 Barometric Pressure/Altitude
3.1.8 Inadequate Warm-Up Time
3.1.9 Low Power
3.2 Outdated Reagents
3.3 Improper Reagent Shipment
3.4 Improper Reagent Storage
3.5 Incorrectly Prepared Reagents
3.6 Incorrect Use of Reagents
3.7 Reagent Contamination
3.8 Deterioration of Reagent Lots Over Time
3.9 Lot-to-Lot Variability
3.10 Sample-Related Reagent Failure
3.16 Poor Precision
3.17 Poor Trueness/Correlation with Laboratory Method
3.17.1 Bias
3.17.2 Interferences
3.18 Incorrect Analysis Mode
3.18.1 Controls vs. Patient Samples
3.18.2 Incorrect Analyte Selected
3.18.3 Incorrectly Programming Parameters
3.19 Sample Carryover
3.20 Instrument Error
3.21 Instrument Failure
3.21.1 Software Computation
3.21.2 Drift Between Calibration and Analysis
3.21.3 Loss of Calibration
3.21.4 Electronic Instability
3.21.5 Readout Device Error
3.21.6 Loss/Corruption of Data
3.22 Instrument/Reagent Performance Not Verified Prior to Use
3.22.1 Initial Instrument Implementation
3.22.2 Instrument Repair/Maintenance
Equipment Used
3.26 Complicated Procedure
3.27 Incorrect Technique
4 Results/Readout/Raw Data
4.1 Visual Misinterpretation
4.1.1 Color
4.1.2 Number
4.2 Incorrect Setting for Units of Measure
4.3 Incorrect Mode Setting
4.3.1 Neonatal vs. Whole Blood vs. Plasma
4.3.2 Control vs. Patient Sample
4.3.3 Incorrect Programming
4.4 Accidental Loss of Data
Appendix B. (Continued)

Sample Presentation
• Incorrect sample preparation (mistake in mixing with pretreatment solution; incorrect preparation of control samples). Detection: controls out of range; visual appearance of assay zones.
• Incorrect introduction of sample to device (dropwise vs. bolus; inadequate distribution on membrane). Detection: controls out of range; visual appearance of assay zones.

Reagents
• Incorrect use of reagents (mixing different lots of reagents). Detection: controls out of range; appearance of membrane.

Testing Environment

Performance/Technique

Instrument (Reader)
• Incorrect instrument use (opening door too soon; trauma to reader). Detection: controls out of range.
NCCLS consensus procedures include an appeals process that is described in detail in Section 9 of
the Administrative Procedures. For further information contact the Executive Offices or visit our
website at www.nccls.org.
Foreword
1. At the end of the Foreword, the committee asks whether EP18 should be broadened in scope to cover
process control issues for in vitro devices in general. While the ultimate goals for quality management
are the same for unit-use and multiuse systems, we believe that the seminal concept in EP18 is that
the unique characteristics of unit-use systems require a different approach to quality management,
including a broadened use of acceptance testing and a more limited role for traditional liquid quality
control. At least until these concepts are digested, accepted, and possibly developed more fully, EP18
should remain focused on quality management for unit-use systems. Possibly a new, upper-level or
broad-based guideline could be developed to include all aspects of statistical process control and error
reduction.
• The reviewer’s comments are acknowledged, but the subcommittee is not entirely in agreement.
The committee has incorporated the following changes to the cited paragraph:
“The subcommittee has limited the discussions within this document to unit-use test systems.
While it is the committee’s expectation that the guideline will be used primarily to address the
issues around point-of-care (POC) devices that utilize single-use disposables, EP18 should not
be considered as exclusive to unit-use systems. However, as these concepts are further refined
with actual experience, an additional, perhaps broader-based guideline could be undertaken to
address multiuse systems and include all aspects of statistical process control and error
reduction.”
Introduction
2. The presentation of the concept of acceptance sampling is unclear, because the introduction appears
to be referring only to acceptance sampling for attributes while Section 6.3.1 appears to be referring
only to acceptance sampling for variables. The difference between the two is paramount, since
acceptance sampling for attributes occurring at low frequency (such as point defects in
manufacturing) is highly impractical for end users and, therefore, must be the responsibility of the
manufacturer, while acceptance sampling for variables may be quite practical for end users, as
described in Section 6.3.1.
• The subcommittee agrees with this distinction between the two kinds of acceptance sampling.
The following modification to the third bullet in the Introduction has been made:
“It is impractical to consume large numbers of unit-use systems needed to detect the low rate of
defects found in properly designed, manufactured, shipped, and stored unit-use systems. A
multitier approach to quality control and quality assurance has been proposed within this
document. This approach provides the user with the means to inspect goods upon arrival
through the use of limited acceptance sampling to detect variables such as shipping conditions, lot
changes, and new operators.”
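The multitier approach described in the revised bullet can be illustrated with a brief sketch of user-side acceptance sampling for variables: a small number of units from a newly received lot are run against a control material, and the observed bias and imprecision are compared with acceptance limits. The function name, sample values, and limits below are hypothetical illustrations, not values taken from this guideline.

```python
# Hypothetical sketch of user-side acceptance sampling for variables.
# A few units from a new lot are tested with a control material; the lot
# is accepted if observed bias and SD fall within illustrative limits.
from statistics import mean, stdev

def accept_lot(results, target, bias_limit, sd_limit):
    """Return True if the sampled results fall within the acceptance limits."""
    observed_bias = abs(mean(results) - target)
    observed_sd = stdev(results)
    return observed_bias <= bias_limit and observed_sd <= sd_limit

# Example: a control with a 100 mg/dL assigned value, five sampled units
new_lot = [98.5, 101.2, 99.8, 100.4, 97.9]
print(accept_lot(new_lot, target=100.0, bias_limit=5.0, sd_limit=4.0))
```

A lot shifted by shipping stress (for example, a mean near 120 mg/dL against the same 100 mg/dL target) would fail the bias check and prompt follow-up before patient testing.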
3. In the Introduction (page 1, last paragraph) the sources of error matrix is described as a “partial list of
potential failure modes” while in Section 5.1 it is described as a “comprehensive list.” Both adjectives
could be dropped, since the list is further qualified in both places.
• The subcommittee agrees with the comment. The suggested revisions have been incorporated.
Section 4.1
4. Manufacturers do embed most, if not all, information needed to create an error matrix in manuals,
package inserts, and end-user training. This information should be “clearly and unambiguously
disclosed in the product labeling/instructions for use.” Unfortunately, the competitive nature of our
business makes us, the manufacturers, reluctant to point out a source of error that might appear to be a
weakness in our systems. Somehow we have to get a consensus agreement to include an error matrix
in our labeling, and EP18 is a very positive start.
• The subcommittee agrees with the comment and believes that manufacturers will see the value
of “truth in labeling” and be forthcoming with information for the device operators that will
allow them to focus on error identification and reduction/elimination.
Section 4.2
Section 5.1
6. Change “unexpected results” to “erroneous results,” since unexpected results may indeed reflect a
patient’s true condition.
• The subcommittee agrees with the comment. The suggested revision has been incorporated.
Section 5.2.4
7. Device Capabilities: This paragraph does not include more sophisticated capabilities. An additional
example could be added, such as: Does the device have on-line quality checks for adverse operating
conditions, operator errors, and reagents and analyzers that are performing outside of specifications
with clearly displayed descriptions and resolutions to the detected errors?
• The examples used were selected to simply illustrate the definition of device capabilities; hence,
sophisticated device examples are not included. Many other examples could be used but are not
felt to be necessary for clarification.
Section 5.2.7
8. Frequency of Monitoring: After the first sentence add a sentence to convey the concept that the nature
of the impact of error and the cost of detection should be considered. If the nature of the impact is
minor, an attempt to detect the error 100% of the time would probably be inefficient.
• The subcommittee agrees with the comment. The following text has been incorporated:
“The user should consider the nature of the impact of the error and the cost of detection.
Additional information on determining the detection and impact of an error can be obtained
by using a failure mode, effects, and criticality analysis (FMECA).”
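One common FMECA convention, not prescribed by this guideline, ranks each failure mode by a risk priority number (RPN): the product of severity, occurrence, and detectability scores. The sketch below uses illustrative scores for a few entries from the sources-of-error matrix; the scoring scale and values are assumptions for demonstration only.

```python
# Minimal FMECA-style sketch: each potential source of error is scored for
# severity, occurrence, and detectability (1 = best, 10 = worst), and the
# risk priority number (RPN) is their product. Scores are illustrative.
failure_modes = {
    "Sample carryover":         {"severity": 7, "occurrence": 3, "detectability": 5},
    "Loss of calibration":      {"severity": 8, "occurrence": 2, "detectability": 4},
    "Visual misinterpretation": {"severity": 6, "occurrence": 4, "detectability": 7},
}

def rpn(scores):
    """Risk priority number: severity x occurrence x detectability."""
    return scores["severity"] * scores["occurrence"] * scores["detectability"]

# Rank failure modes so monitoring effort goes to the highest-risk items first
for name, scores in sorted(failure_modes.items(), key=lambda kv: rpn(kv[1]), reverse=True):
    print(f"{name}: RPN = {rpn(scores)}")
```

Ranking by RPN is one way a laboratory might decide which errors warrant more frequent monitoring and which can tolerate less.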
Section 6.2
9. We have difficulty with the emphasis put on quality control sample testing as the best way to assess
the competency of end users. One reason for this is that our analyzer incorporates real-time quality
checks that detect operator errors and incorporates real-time operator error reports. We are very much
in agreement that operator competency must be assessed periodically, but we disagree that the best
way to accomplish this is by using traditional quality control schemes. We would appreciate the
committee’s consideration of the following changes to Section 6.2.
a. Place the paragraph beginning “Evaluation of the competency of all testing personnel and ensuring
the staff’s continuing competency…” before the paragraph beginning “Frequent operators (those
performing the tests at least once per week)…”
• The subcommittee agrees with the comment. The suggested revision has been incorporated.
b. In the above paragraph, change the wording of the sentence, “The procedures for evaluation of the
competency of the staff should include, but are not limited to, the following:” to “Evaluation of the
competency of the staff could include, among other procedures, the following:”
• The subcommittee agrees with the comment. The suggested revision has been incorporated.
c. Add before the paragraph beginning “Frequent operators (those performing the tests at least once per
week)…” the following sentence, “If a source of operator procedure error has been identified that is
not detected by the system, periodic liquid control testing should be included in the evaluation of user
competency.”
(Even for a system susceptible to operator error, the use of liquid controls is problematic. Blood gas
controls require very different handling from patient samples, and they do not assess preanalytical
error which for unit-use tests is the more likely source of error.)
• The subcommittee agrees with the comment. The suggested revision has been incorporated.
d. Remove from the above paragraph on operator competency the information about reagent stability
beginning with the sentence, “As quality control testing intervals lengthen, reagent stability should be
considered.” Place this information in Section 6.3.2 and title this section “Traditional, Liquid Quality
Control.” Then delete the reference to operator competence in this section, since it is covered in
Section 6.2. If this change is made, references to split patient samples and external proficiency
samples should also be deleted from this section, since they are covered in Sections 6.3.3 and 6.3.6.
• The subcommittee appreciates the comments. However, the subcommittee believes the current
format is clear.
e. 1/10th may be too frequent if the user has established that the system/test unit is very stable. I don’t
like the inclusion of a specific frequency interval. Performing QC frequently increases costs but
usually does not improve the quality of the final test result.
• The text has been modified to suggest that users follow the manufacturer’s recommendation for
periodic liquid QC with a default frequency of no longer than 1/10th the labeled stability of a
product if the manufacturer does not provide frequency information. If secondary storage
Section 6.3
10. We have no essential problem with the recommendation that the quality control testing interval
should be no longer than 1/10th the stability stated by the manufacturer. However, the rationale for this
recommendation appears to be no more absolute, in a statistical sense, than the 24-hour limit imposed
on traditional QC programs and could eventually be outdated by ongoing improvements in systems,
reagents, and error detection software. While gaining a new consensus for an NCCLS
recommendation is relatively straightforward, changing a recommendation once it is incorporated into
a CLIA standard is not. To avoid this, the recommendation could be to follow the manufacturer’s
recommendation for periodic liquid QC with a default frequency of no longer than 1/10th the labeled
stability of a product if the manufacturer does not provide frequency information. It is the
manufacturer’s responsibility to ensure that the recommended frequency is suitable, based on the
stability and quality system of the device, to assure the reliability of results.
In addition, we would not want to see the 1/10th QC limit applied to the two-week room temperature
shelf life. Again, we appreciate the fact that a guideline must be generic enough to cover all systems
in use now and in the near future and that it cannot address individual systems. However, since to be
most effective the quality system must be tailored to the characteristics of the individual system, the
manufacturer’s recommendations should override generic recommendations.
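As a worked illustration of the default-frequency logic discussed above, the manufacturer's recommendation takes precedence, and the 1/10th-of-labeled-stability rule applies only when no recommendation exists. The function name and stability values below are hypothetical.

```python
# Sketch of the default liquid QC frequency discussed above: follow the
# manufacturer's recommended interval when one is given; otherwise, test QC
# at an interval no longer than 1/10th of the labeled product stability.
# Stability values here are hypothetical examples.
def default_qc_interval_days(labeled_stability_days, manufacturer_interval_days=None):
    """Return the QC testing interval in days, preferring the manufacturer's value."""
    if manufacturer_interval_days is not None:
        return manufacturer_interval_days
    return labeled_stability_days / 10

print(default_qc_interval_days(90))      # no manufacturer guidance: 90-day stability / 10
print(default_qc_interval_days(90, 30))  # manufacturer recommends a 30-day interval
```

This mirrors the commenters' point: a fixed 1/10th interval is a fallback, not a statistically derived optimum, and a manufacturer's system-specific recommendation should override it.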
11. For consistency, the terms “dual split patient samples” and “known patient samples” should be
changed to “split patient samples.”
• The subcommittee agrees with the comment. The suggested revision has been incorporated.
Section 6.3.4
12. Add paragraph 6.3.4.2, Automated Real-time (or On-line) Quality Checks, to address the extent to
which a system’s ability to detect environmental, operator, unit-use device, and analyzer errors
influences the frequency at which liquid control samples need to be tested. A recommendation could
be made to validate the manufacturer’s claims to detect these errors. Validation could be a period
during which liquid control samples are tested on a frequent basis with results examined for shifts,
drifts, or errors not detected by the system. Validation could also include specific challenges of the
detection software, such as underfilling a cartridge or using an analyzer outside its specified operating
conditions.
This paragraph would complement and expand upon paragraph 5.2.4, Device Capabilities, under
Contents of Matrix, which does not address the capabilities of more sophisticated devices.
• The subcommittee believes the underlying concern has been appropriately addressed, as
NCCLS document EP18—Quality Management for Unit-Use Testing provides users with a
reasonable approach for monitoring ongoing performance of unit-use devices. Additional
validation of specific manufacturer’s claims is beyond the scope of the document.
General
1. Our concerns revolve around the information that is requested for the risk tables. It is our position that
much of the requested information is proprietary in nature. For each of our products, we complete a
risk assessment, as required by QSR. Identified risks are mitigated by various methods. Residual risks
are carefully considered for acceptability. It is our responsibility to determine what level of residual
risk is acceptable to our company as liability. We believe that, if a user follows our instructions for
use as written, then any risks associated with the use of the product are minimal. Users must assess
their own levels of risk if they deviate from our instructions. Although we agree in principle that users
may wish to understand areas where residual risk could be present, we believe that this kind of
information would be inappropriately exploited by marketing representatives, thus leading to gross
levels of misunderstanding by the users. We support ISO 15198 "Clinical Laboratory Medicine—
Validation of manufacturer's recommendations for user quality control." This document also leads the
user through an assessment of a product, considering aspects such as pre-examination errors,
examination errors such as sample handling, QC, calibration, maintenance, stability, and
postexamination errors. The document helps the user develop a validation plan and protocol. This
document is appropriate, since it considers products from the user's perspective, rather than the
manufacturer's. Since both ISO 15198 and NCCLS EP18-A address similar aspects, we support the use of
the ISO document.
• It is not the intent of this guideline to either compel or suggest that the manufacturer disclose
proprietary information. This is left to the discretion of the individual company. The suggestion
here is that the manufacturer inform the user/operator of any potential error, and provide
information as to how the manufacturer has mitigated the risk of the error. In some instances,
the user/operator may not be satisfied with the information contained in the package insert or
other information generally available from the manufacturer. Thus, the user/operator may
request additional information to help determine if additional testing or monitoring is necessary
locally. It is then left up to the manufacturer whether or not to provide the requested
information. As is stated above “Users must assess their own levels of risk if they deviate from
our instructions.” Indeed, users must assess their own level of risk even when not deviating
from the manufacturer’s instructions and some users/operators may request additional
information on the test system to assess this level.
2. We disagree with this document for a number of reasons, including the intrusiveness into company
confidential data. The document, as proposed, offers little value to most users but creates an onerous,
and in our opinion, an unnecessary amount of data to be provided by IVD manufacturers. We firmly
believe this information proposed to be shared with users, is redundant to extensive prior regulatory
findings. The subcommittee should also be aware of widespread rejection of this document by other
IVD manufacturers. We recommend this document be sent back to the subcommittee and that the
document be rewritten with input from users, regulatory, and industry representatives.
• See response to Comment 1. The NCCLS process is especially suited for developing guidelines
because of its ability to mobilize specific expertise and its adherence to balanced participation.
Experts from NCCLS's core constituencies (i.e., government, professions, and industry) were
included as members of the subcommittee. It is the belief of the subcommittee that, based on the
number and contents of the comments received to date, there is not a "widespread rejection of
this document by other IVD manufacturers."
Section 4.1
3. The introduction of the document states the following, “Any remaining failure modes shall be clearly
and unambiguously disclosed in the product labeling/instructions for use.” We support this idea.
However, it is problematic for a manufacturer to supply the entire Appendix A as indicated in the
document. This information is provided to government agencies that request it for a product
submission and approval process. If risk is determined to offset the benefit of putting the product on
the market, the product is not approved for sale. This information would be too lengthy to include in
product information for the majority of devices. The risk analysis is prepared by manufacturers as a
requirement of the quality system and is open to review by the regulatory agencies that require it. It is
important to focus information for use on details that are important to the user, not provide extraneous
information that is not helpful. For this reason, it is suggested that details as to the errors that are not
mitigated by the design of the product or the manufacturing process be made available in the
information supplied by the manufacturer with suggested recommendations. Also, disclosure should
include special warnings or precautions of which the user should be made aware.
Therefore, we recommend deleting the sentence that reads, "This information should then be
summarized for the user (see Table 1)" and replacing it with the following sentence, “The risk
analysis, which may include the items listed in Appendix A, should be analyzed and those risks not
mitigated by the manufacturer shall be disclosed in the information supplied by the manufacturer.
Specific details on QC as to the level and/or frequency of testing should be provided in the
information supplied by the manufacturer.”
4. For the same reasons stated above in Comment 2, we recommend changing the final row in Table 1 to
read as follows, "Provide information and recommendations in product labeling information supplied
by the manufacturer. Manufacturers are encouraged to disclose significant sources of error and
recommend methods of control following this (EP18) guideline."
Section 4.2
5. The user should not expect that the manufacturer be able to provide a complete risk analysis. These
documents are very lengthy and are too extensive to publish and keep current to all users. The
essential safety information, however, is disclosed in the information supplied by the manufacturer as
a requirement of most government agencies for approval to market the product. This safety
information is diligently reviewed by government agencies before allowing the product on the
market. It is important that the user be provided the recommendations for user QC that then covers
the errors that are not mitigated in other ways. The precautions and warnings section of current
product information supplies this information. This allows the user to complete the review of
Appendix A and put the appropriate quality assurance system in place to mitigate the possible sources
of errors disclosed in the information for use.
Therefore, we recommend deleting the sentence that reads, “The sources of error matrix may be used
as a tool to help define a facility’s quality assurance (QA) program” and replacing it with the
following, “The manufacturer’s recommendations on appropriate QC provide a basis for the user to
define a facility’s quality assurance program.”
• The current language has been maintained. The user’s quality assurance program should not
be based solely on manufacturer’s QC recommendations. Users may also need to consider other
sources of error not mitigated by product design or QC, particularly preanalytical and
postanalytical issues that may be institution specific. The manufacturer chooses how much of
the matrix to reveal; this guideline does not imply complete disclosure of all items in the matrix.
The user may also need to know more than just the QC recommendations, as they may not
cover all preanalytical and postanalytical concerns.
6. If the manufacturer is following this document, the information regarding error reduction will be
provided in the information for use. Therefore, we recommend deleting the sentence that reads, “If
the manufacturer has not adequately described potential failure modes in its labeling, or
recommended ways to control them, then the customer should contact the manufacturer and ask for
the information.”
• The subcommittee believes this to be an implied conclusion. However, the text has been revised
to read, "If the customer needs additional information and recommendations, they should
contact the manufacturer."
7. All safety information that the manufacturers supply is furnished in the information that is supplied
with the product. It is important to the manufacturer that every customer receives this important
information. The manufacturer cannot set up a system pertaining to safety of the device, which
provides certain information to some users, while others do not receive the information. Therefore,
we recommend deleting the sentence that reads, “Obtain supporting data as needed from the
manufacturer.”
• Some users are more diligent in managing their quality systems than others. Therefore, some
will require the information while others won’t. The subcommittee believes there is no
additional requirement or implied obligation on the part of the manufacturer by inclusion of
this sentence.
Section 5.2.1
8. More clarity is needed between the use of Appendix A versus Appendix B. Is Appendix B something
that would be used by manufacturer to provide the information in the information for use, or the error
matrix that the user would complete?
Section 6.1
9. It is not always a requirement that a separate SOP be developed for certain tests. After the first sentence,
we recommend adding the following statement, “This may be in the form of the manufacturer’s
instructions for use.” Addition of the recommended statement allows for this provision.
• Use of the manufacturer’s instructions for use does not take into account local procedures
regarding preanalytical and postanalytical aspects of testing. The subcommittee does not
encourage the use of inserts (i.e., manufacturer's instructions for use) only, as the scope of this
document extends beyond the analytical phase. Additionally, such a recommendation is
contrary to most accreditation requirements. Therefore, the text has been maintained.
Section 6.2
10. Some systems do not require formal training by a laboratorian or the manufacturer. Sample devices
are tested by the manufacturer before being marketed to assure the intended user can operate the
system by reading the instructions provided with the device. Other devices are more complicated and
require formal training. The suggested change allows for a continuum of complexity of devices,
providing the user with necessary training, but does not restrict new technology.
Therefore, we recommend changing the first sentence of the section to read, “Operators performing
simple unit-use tests may have trained themselves using the information supplied with the product,
(for example, in the United States, waived tests). For more complex tests, formal training may be
required for operators until they have satisfactorily demonstrated proficiency for each procedure.”
• The subcommittee disagrees with the recommended text change since it ignores pre- and
postanalytical aspects of testing at the local site. Additionally, if there is more than one
individual performing the same test, there needs to be some check on consistency of
performance of all aspects of testing, not just quality control. The phrase “formal training” has
been changed to “training,” to provide more flexibility for the user.
11. Some systems are simple and easy to use and should not have the added burden put on the user when
there is no benefit to having “formal training.” Testing is also done by the manufacturer to allow the
information supplied to be the only training that is necessary. This testing is documented in the
government clearance application submission and the product is approved/cleared with this intended
purpose. Therefore, we recommend adding the following as a bullet in the third paragraph: “reading
the information supplied by the manufacturer may be sufficient for simple, easy to use devices.”
• Manufacturer’s information may not take into consideration the preanalytical and
postanalytical aspects of the test that may be unique to that institution or testing site. Therefore,
the text has been maintained.
12. Change the first sentence in the fifth paragraph to read, “Individuals must demonstrate competency in
performing the procedure, and evidence of this competency must be documented, if required by
regulatory requirements.”
• This guideline is designed to aid in the development and promotion of a quality system
approach to testing (not necessarily just to meet regulatory requirements). Therefore, the text
has been maintained.
• Frequent operators (those performing the test at least once per week)…
• Those operators who perform the test less frequently…
• Users should follow the manufacturer’s recommendations for periodic….
• If secondary storage conditions occur…
• Unit use devices have reagent stability of greater than one year…"
• The subcommittee agrees with the commenter and has incorporated this change. Also, the first
sentence has been modified to read, “The recommended QC scheme/procedure is indicated
below.”
Section 6.3.1
14. This acceptance testing seems only to apply to quantitative assays. It is not necessary for a user to
carry out this extensive testing for unit-use devices, since the manufacturer's specifications for lot
release are set to test for this type of change. If the product being manufactured does not meet
specification, and therefore does not meet the claims stated in the information supplied by the
manufacturer, then that lot is rejected and does not ship to the end user.
Therefore, we recommend modifying the first sentence to read, "When there is a change in a
quantitative test system (e.g., a shift in standardization, a new environment that has not previously
been validated by the manufacturer, or a new test operator)…"
• The text has been maintained. This section addresses any type of testing (i.e., quantitative or
qualitative—although, a potential problem is perhaps easier to detect in a quantitative test).
This section deals with acceptance testing after a device has passed the manufacturer’s quality
assurance testing process and is designed to help determine when a product may have been
stressed beyond its limits during shipment to the end user.
15. The manufacturer is required to set specifications for release of product lots as a requirement of a
quality system. This testing allows for product that will meet published claims throughout the
expiration date to be approved for shipping to users. All other product that falls outside these
specifications is rejected and not allowed to ship to the end user. It is redundant and expensive to
have the user retest product to this extent. The user should understand what the risk is in receiving
product and do testing that is in alignment with the quality goals.
We recommend deleting the sentence that reads, “For example, if the assay is so precise that the
standard deviation or coefficient of variation is only one-third…NCCLS document EP7-Interference
Testing in Clinical Chemistry)," and modifying the subsequent sentence to read, “This testing should
be done consistent with manufacturer’s recommendations.”
• The manufacturer does not necessarily supply recommendations regarding acceptance testing.
Therefore, the sentence has been maintained.
Section 6.3.3
16. It is important to make it clear to the user that it is not expected or anticipated that these tests would
be repeated in each laboratory or at each site performing the test. We recommend modifying the first
sentence to read, "The trueness of unit-use devices is initially established by recovery and
interference studies and by comparison to a method that is traceable to a recognized standard or to
another trueness basis. These tests are performed by the manufacturer as part of design control and
government submission processes."
Section 6.3.4.1
17. Delete the sentences that read, "EQC devices specifically monitor the instrument only, since the
disposable portion of the test system is a single-use item and cannot be run simultaneously with the
EQC device. In these situations, an additional (non-electronic) quality control material should be
tested at specific intervals to test the device and disposable portion together." These sentences as
written could limit future technology advances in electronic QC. These statements should be removed
in order to avoid this unintended restriction on advances in technology.
Then, add the following sentence after the first paragraph, “When a device is equipped with electronic
QC, the manufacturer should explain the parts of the device which are tested by the EQC and the user
should take this information and evaluate what additional errors need to be tested for the entire testing
process and add these to the QC schemes.”
It could be added, “When appropriate, if the test system and alternate QC are separate components, the
above method should prevail. If the components are integrated, then follow the second paragraph
instructions.”
AST4-A Blood Glucose Testing in Settings Without Laboratory Support; Approved Guideline
(1999). This document provides recommendations for personnel performing blood glucose
testing at sites outside the traditional clinical laboratory, addressing test performance,
quality control, personnel training, and administrative responsibilities.
C30-A Ancillary (Bedside) Blood Glucose Testing in Acute and Chronic Care Facilities;
Approved Guideline (1994). This document offers guidelines for performance of
bedside blood glucose testing with emphasis on quality control, training, and
administrative responsibility.
GP21-A Training Verification for Laboratory Personnel; Approved Guideline (1995). This
document provides background and recommends an infrastructure for developing a
training verification program that meets quality/regulatory objectives.
NRSCL8-A Terminology and Definitions for Use in NCCLS Documents; Approved Standard
(1998). This document provides standard definitions for use in NCCLS standards and
guidelines, and for submitting candidate reference methods and materials to the National
Reference System for the Clinical Laboratory (NRSCL).
* Proposed- and tentative-level documents are being advanced through the NCCLS consensus process; therefore, readers should refer to the most recent editions.