Health Information Technology Evaluation Handbook
From Meaningful Use to Meaningful Outcome
By
Vitaly Herasevich and Brian W. Pickering
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher
cannot assume responsibility for the validity of all materials or the consequences of their use. The
authors and publishers have attempted to trace the copyright holders of all material reproduced in
this publication and apologize to copyright holders if permission to publish in this form has not
been obtained. If any copyright material has not been acknowledged please write and let us know
so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known
or hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.
copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC),
222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that
provides licenses and registration for a variety of users. For organizations that have been granted a
photocopy license by the CCC, a separate system of payment has been arranged.
Contents
Foreword............................................................................... xi
Preface.................................................................................xiii
Authors .............................................................................xxiii
1 The Foundation and Pragmatics of HIT Evaluation....1
1.1 Need for Evaluation..................................................... 1
Historical Essay........................................................... 3
1.2 HIT: Why Should We Worry about It?........................ 4
Historical Essay........................................................... 7
Definitions................................................................... 7
History of Technology Assessment............................ 9
Medical or Health Technology Assessment............... 9
Health Information Technology Assessment............10
1.3 Regulatory Framework in the United States..............11
Food and Drug Administration.................................12
Agency for Healthcare Research and Quality...........13
1.4 Fundamental Steps Required for Meaningful
HIT Evaluation ................................................................14
Suggested Reading.....................................................17
References...................................................................17
2 Structure and Design of Evaluation Studies...........19
2.1 Review of Study Methodologies and Approaches
That Can Be Used in Health IT Evaluations ....................19
Define the Health IT (Application, System) to
Be Studied........................................................20
9 Usability Evaluation..............................................163
9.1 Evaluation of Efficiency............................................164
9.2 Effectiveness and Evaluation of Errors....................165
9.3 Evaluating Consistency of Experience (User
Satisfaction) ....................................................................166
9.4 Electronic Medical Record Usability Principles.......168
9.5 Usability and the EHR Evaluation Process.............. 170
A Note on Evaluating the Real-World Usability
of HIT............................................................. 171
9.6 Usability Testing Approaches................................... 173
9.7 Specific Usability Testing Methods.......................... 175
Cognitive Walk-Through.......................................... 175
Key Features and Output .............................. 176
9.8 Procedure.................................................................. 176
Phase 1: Defining the Users of the System............. 176
Phase 2: Defining the Task(s) for the Walk-
Through......................................................... 176
Phase 3: Walking through the Actions and
Critiquing Critical Information......................177
Phase 4: Summarization of the Walk-Through
Results............................................................177
Phase 5: Recommendations to Designers...............177
Keystroke-Level Model............................................. 178
Foreword
Preface
At its best, medicine is all about the patient. Since the time
of Hippocrates, the human connection between patient and
healthcare provider has been central to the provision of high-
quality care. Recently, a third party— the electronic health
record— has been inserted into the middle of that relationship.
Governments and clinical providers are investing billions of
dollars in health information technologies (HIT) in the expec-
tation that this will translate into healthier patients experienc-
ing better care at lower cost. The scale of adoption, driven
by a combination of marketplace incentives and penalties, is
breathtaking.
In the initial push to roll out HIT, the view that patients
would benefit and costs be contained was widely advanced.
The argument for adoption usually went something like this:
See the handwritten prescription for patient medications, and
note the illegibility of the drug name, drug dose, and lack of
frequency or route of administration information. Next, see
the electronic form, a legible, precise, complete electronic
focus can drift away from the needs of the patient and clinical
practice. To paraphrase Hippocrates, “ It is far more important
to know the context in which health information technology
will be used than to know the specifics of the technology.”
Understanding that context and the problems to be solved
requires tapping into the wisdom of our frontline providers
and patients.
A systematic approach to the evaluation of technology in
healthcare is needed if we are to reliably discriminate between
useful innovation and clever marketing. This book is an
attempt to provide guidance to any individual or organization
wishing to take control of the conversation and to objectively
evaluate a technology on their own terms.
We wrote this book with an emphasis on a clinically ori-
ented, data-driven approach to the evaluation of HIT solutions.
This will allow the reader to set the evaluation agenda and
help avoid situations in which the needs of the patient or clini-
cal practice play a secondary role to the needs identified by
technical experts and engineers.
Classroom Potential
This book is in part inspired by a class the authors run each
year as part of a Master's in Clinical Research program at
Mayo Clinic and by their development of a new curriculum for
one of the first clinical informatics fellowship programs they
directed at their institution. Readers can use the book itself to
develop their own postgraduate HIT evaluation classes.
Acknowledgments
We are deeply indebted to our patients and our clinical, infor-
mation technology, and academic colleagues. While we have
built technology on the shoulders of all who came before
us, we would especially like to acknowledge Tiong Ing and
Troy Neumann, who were instrumental in transforming our
thoughts into concrete and beautiful architecture— without
them, this book would not have been possible. To all our
clinical colleagues in critical care who have been patient with
our experiments and have provided inspiration, support, and
startlingly good feedback, we would like to extend a special
thank you.
Individuals who went beyond the call and who represent
the very best that a patient-centered organization such as ours
can offer include
Chapter 1
The Foundation and Pragmatics of HIT Evaluation
Historical Essay
The tobacco smoke enema was a medical procedure that was
widely used by Western medical practitioners at the end of
the eighteenth century as a treatment for drowsiness, respira-
tory failure, headaches, stomach cramps, colds, and other
conditions.
In 1774, two London-based physicians, William Hawes
and Thomas Cogan, formed The Institution for Affording
Immediate Relief to Persons Apparently Dead From Drowning.
Their practice quickly spread, reaching its peak in the early
nineteenth century.
The tobacco smoke enema procedure declined after 1811,
when English scientist Ben Brodie discovered nicotine’s toxic-
ity to the cardiac system using an animal model.4
As technologies and science progress, a tremendous
amount of information technology (IT) is being added every
year to the HIT space. Recent government incentives com-
bined with technological progress have given additional rea-
sons for clinicians to implement and adopt HIT.
As with medical technologies in the 1970s, however, we have
little evidence regarding the specific risks and benefits of HIT.
We are in a situation much like the beginning of the nine-
teenth century, when a single statement, “our software can
save lives,” is believed and trusted. New medications cannot
be introduced to the market without rigorous clinical evalu-
ation (even outside regulatory bodies). While commercial
IT products undergo technical validation and testing before
being delivered to customers, unfortunately, this testing has
nothing to do with clinically oriented metrics such as mortal-
ity, complications, medical errors, length of hospitalization,
and so on.
Very often, the terms EMR and EHR are used interchange-
ably. The principal difference is the interoperability of an EHR.
The ultimate goal of HIT adoption is a health information
exchange (HIE), a national infrastructure to provide a network
where health information can be exchanged among hospital
and physician offices using EMRs. This structure does not exist
and is still under development.
The resources for successful EMR deployment and utiliza-
tion do exist, but many questions arise when considering the
end users of such technology.
Do EMRs deliver on their promises? What are the intended
and unintended consequences of EMR adoption for healthcare
providers?
In a 2015 article published in Medical Economics,
healthcare providers shared opinions about
whether EMRs have delivered on promises such as increased
efficiency, better time management, and faster charting. The
vast majority of clinicians expressed disappointment with
EMRs.5
Physicians are key stakeholders in the adoption of EMR
technologies. Their satisfaction and support of their hospitals’
EMR efforts are critical to ensuring EMRs become a positive
influence on the hospital environment. Figure 1.1 presents
common problems leading to EMR dissatisfaction, such as
information overload, lack of information, and time spent on
data entry. Because physicians are key stakeholders in the successful
deployment and utilization of EMRs, their feedback is essential.
Healthcare administrators should continuously evaluate the
benefits of EMRs for physicians and their patients regardless of
the level of EMR adoption in their particular organizations.
Furthermore, decision makers need a more comprehensive
approach to set HIT priorities and obtain maximum benefits
from limited resources, and they must be able to do this with-
out compromising the ethical and social values underpinning
their health systems.6
Historical Essay
In the late 1960s, Dr. Larry Weed introduced the concept of
the problem‑oriented medical record instead of recording
diagnoses and their treatment. Multiple parallel efforts then
started to develop EMRs. In the mid-1960s, Lockheed cre-
ated an EMR system that later became Eclipsys (now called
Allscripts). This system included computerized physician order
entry (CPOE). In 1968, the computer‑stored ambulatory record
(COSTAR) was developed by Massachusetts General Hospital
and Harvard. This system had a modular design separated into
accounting and clinical elements. COSTAR was created using
the MUMPS programming language, which 40 years later dom-
inates modern EMR suites and financial applications. In 1972,
the Regenstrief Institute developed their system, which fea-
tured an integration of electronic clinical data from their labo-
ratories and pharmacies. In the 1980s, systems moved from
inpatient to outpatient settings. This change was made based
on the need for simple billing functions to justify the return
on investment for EMRs. Over time, those functions became
more dominant than clinical functions. Through the 1990s and
2000s, technical platforms and core systems did not change
much. This left clinical functionality far behind the current
requirements for safety, efficacy, effectiveness, and usability.
Definitions
Technology is a broad concept that deals with the use and
knowledge of tools and crafts and how such use affects peo-
ple’s ability to control and adapt to the social and physical
environment.
Technology assessment is a scientific, interactive, and com-
municative process that aims to contribute to the forma-
tion of public and political opinion on societal aspects of
science and technology. Technology assessment was (and
is) an extremely broad field.
Figure: Health IT assessment draws on clinical evaluation and epidemiology and on biomedical statistics.
Suggested Reading
Scales DC, Laupacis A. Health technology assessment in criti-
cal care. Intensive Care Med. 2007;33(12):2183–2191. PMID:
17952404.
References
1. HIMSS Analytics. EMR effectiveness: The positive benefit
electronic medical record; 2014. http://www.healthcareitnews.
com/directory/himss-analytics.
2. Thompson G, O’Horo JC, Pickering BW, Herasevich V. Impact
of the electronic medical record on mortality, length of stay,
and cost in the hospital and ICU: A systematic review and
meta-analysis. Crit Care Med. 2015;43(6):1276–1282.
3. Jonsson E, Banta D. Treatments that fail to prove their worth:
Interview by Judy Jones. BMJ. 1999;319(7220):1293.
4. Lawrence G. Tobacco smoke enemas. Lancet.
2002;359(9315):1442.
5. Terry K. EHRs’ broken promise: What must be done to win
back your trust. Medical Economics. 2015. http://medicaleco-
nomics.modernmedicine.com/medical-economics/news/ehrs-
broken-promise. Accessed November 12, 2015.
6. Hutton J, McGrath C, Frybourg J-M, Tremblay M, Bramley-
Harker E, Henshall C. Framework for describing and classifying
decision-making systems using technology assessment to deter-
mine the reimbursement of health technologies (fourth hurdle
systems). Int J Technol Assess Health Care. 2006;22(1):10–18.
7. Banta D. What is technology assessment? Int J Technol Assess
Health Care. 2009;25(Suppl 1):7–9.
8. Kubasek NK, Silverman GS. Environmental Law. 6th edn. Upper
Saddle River, NJ: Prentice Hall; 2007. http://www.amazon.com/
gp/product/0136142168?keywords=9780136142164&qid=1450059
677&ref_=sr_1_2&sr=8-2. Accessed December 14, 2015.
9. Jonsson E. Development of health technology assessment in
Europe: A personal perspective. Int J Technol Assess Health
Care. 2002;18(2):171–183. http://www.ncbi.nlm.nih.gov/
pubmed/12053417.
10. Shekelle PG, Morton SC, Keeler EB. Costs and Benefits of
Health Information Technology. Evidence Report/Technology
Assessment No. 132. (Prepared by the Southern California
Evidence-based Practice Center under Contract No. 290-02-
0003.) AHRQ Publication No. 06-E006.
11. Sorenson C, Drummond M, Kanavos P. Ensuring Value
for Money in Health Care, The Role of Health Technology
Assessment in the European Union. Copenhagen: WHO
Regional Office Europe; 2008.
12. Battista RN, Hodge MJ. The evolving paradigm of health
technology assessment: Reflections for the millennium.
CMAJ. 1999;160(10):1464–1467. http://www.ncbi.nlm.nih.gov/
pubmed/10352637.
13. Guyatt GH, Tugwell PX, Feeny DH, Haynes RB, Drummond
M. A framework for clinical evaluation of diagnostic technolo-
gies. CMAJ. 1986;134(6):587–594. http://www.ncbi.nlm.nih.gov/
pubmed/3512062.
Chapter 2
Structure and Design of Evaluation Studies
Figure: Four levels of technology evaluation.
1. Technology safety: evaluate the technology itself and any action that arises from its use.
2. Efficacy: measure what the technology is supposed to do under ideal conditions.
3. Effectiveness: measure what the technology is supposed to do under average conditions.
4. Efficiency (cost): determine the resources needed to provide the technology and achieve a return on investment.
Cross-sectional study: exposure and outcome determined at the same time, in the same group.
Case series and case reports: no comparison group.
Expert opinion: review or expert opinion article.
Figure: Study designs used in health IT evaluation include randomized controlled trials, pre-post studies, cohort studies, case-control studies, descriptive studies, interviews, and time and motion studies. In a parallel-group design, an intervention (I) group and a control (C) group are followed concurrently over time.
Table 2.1 Summary of Study Characteristics for Major Types of Evaluation Studies
Suggested Reading
Article with practical advice for better protocol: Guyatt G. Preparing
a research protocol to improve chances for success. J Clin
Epidemiol. 2006 Sep;59(9):893–899. PMID: 16895810.
Chrzanowski RS, Paccaud F. Study design for technology assess-
ment: Critical issues. Health Policy. 1988;9(3):285–296. http://
www.ncbi.nlm.nih.gov/pubmed/10302542.
Clarke K, O’Moore R, Smeets R, et al. A methodology for evalu-
ation of knowledge-based systems in medicine. Artif Intell
Med. 1994;6(2):107–121. http://www.ncbi.nlm.nih.gov/
pubmed/8049752.
Cusack CM, Byrne C, Hook JM, McGowan J, Poon EG, Zafar A.
Health information technology evaluation toolkit: 2009 update.
(Prepared for the AHRQ National Resource Center for Health
Information Technology under Contract No. 290-04-0016.)
AHRQ Publication No. 09-0083-EF. Rockville, MD: Agency for
Healthcare Research and Quality; June 2009.
Friedman CP, Wyatt JC. Evaluation Methods in Medical Informatics.
Computers and Medicine. New York: Springer, 1997.
Goodman CS. HTA 101: Introduction to Health Technology
Assessment. Bethesda, MD: National Library of Medicine (US);
2014. Direct PDF link (218pp): https://www.nlm.nih.gov/nichsr/
hta101/HTA_101_FINAL_7-23-14.pdf.
References
1. Thompson G, O’Horo JC, Pickering BW, Herasevich V. Impact
of the electronic medical record on mortality, length of stay,
and cost in the hospital and ICU: A systematic review and
metaanalysis. Crit Care Med. 2015;43(6):1276–1282.
2. Kashiouris M, O’Horo JC, Pickering BW, Herasevich V.
Diagnostic performance of electronic syndromic surveillance
systems in acute care: A systematic review. Appl Clin Inform.
2013;4:212–224.
3. Hussey MA, Hughes JP. Design and analysis of stepped
wedge cluster randomized trials. Contemp Clin Trials.
2007;28(2):182–191.
4. Ammenwerth E, Gräber S, Herrmann G, Bürkle T, König J.
Evaluation of health information systems: Problems and chal-
lenges. Int J Med Inform. 2003;71(2–3):125–135. http://www.ncbi.
nlm.nih.gov/pubmed/14519405.
Chapter 3
Study Design and Measurements Fundamentals
Figure: A spurious correlation. U.S. spending on science, space, and technology correlates with suicides by hanging, strangulation, and suffocation (correlation 99.79%, r = 0.99789126) over 1999 to 2009. Data sources: U.S. Office of Management and Budget and Centers for Disease Control and Prevention.
Figure: The effect reported to knowledge users is shaped by publication bias, funding bias, measurement bias, selection bias, confounding, and random error.
Validity
As the evaluator proceeds with measurement selection,
the important question will be validity: how well does the
measurement represent the true measurement of interest?
Understanding validity helps in choosing the appropriate
methodological designs and interpreting the results.
Validity could be defined in several ways. The following are
the main definitions:
Figure: Measurement error comprises systematic error (bias), which determines accuracy, and random error, which determines precision.
Bias
The relationship between intervention and outcome could
be affected by a number of factors, and the two most
important are bias and confounding. Bias threatens the
internal validity of the study and deviates the results from
the truth. All observational studies have some degree of
bias. There are different classifications, and the most com-
prehensive includes 35 types.1 However, there are three
major categories: selection bias, information bias, and
confounding.
Selection bias, in broad terms, could be described as
the difference between studied groups. For example, consider users
and non-users of wearable technology that tracks exercise.
They could have different socioeconomic status and differ-
ent motivations to exercise. There may be a difference in
age since younger people tend to use technologies more.
When we compare the groups of users who use this wear-
able technology and those who don’t, the difference in
clinically meaningful outcome (rate of myocardial infarc-
tion [MI]) will be significant. The wrong, biased conclusion
will be that the use of wearable devices can prevent MI.
That may be partly true in this case, but the size of the
effect will be grossly overestimated because selection bias
was not taken into account in the statistical analysis. Other common types
of selection bias include referral, participation, prevalence-
incidence, admission rate, non-response, and volunteer
bias.
Information (or measurement) bias results from the incor-
rect determination of exposure and/or outcome. Overall, to
minimize information bias, better measurement techniques
and tools should be used. The common types of information
bias include recall, exposure suspicion, diagnostic suspicion,
and others.
A separate artifact is the Hawthorne effect. This is a ten-
dency of people to perform better when they know that
Confounding
A confounding factor is a variable that correlates, directly or indirectly,
with both the intervention and the outcome. Knowing confounding
factors in a specific evaluation project will allow controlling
them. One of the control methods is statistical adjustment
for confounding variables. Figure 3.5 explains the relation-
ship between baseline state, outcome, intervention, and
confounders.
Figure 3.6 explains the principles of dealing with systematic
and random errors as the researcher and user of the evalua-
tion report.
Measurement Variables
Measurements are an essential part of evaluation. One type
of variable can be more informative than others. There
Figure 3.5: A confounder is associated with both the intervention and the outcome.
Figure 3.6: Knowledge users address systematic error by critically appraising the study design, and address random error by evaluating the confidence interval and p-value.
Figure: Types of measurement variables: categorical (nominal, i.e., unordered, and ordinal, i.e., ordered or ranked), continuous, and censored (data with fixed limits).
Intermediate Outcome
Intermediate outcome is often a biological marker for the con-
dition of interest. It is often used to shorten the follow-up period
needed to evaluate the impact of the study intervention.
An example would be measuring the blood glucose
level rather than measuring the effectiveness of the diabetes
treatment as a clinical outcome. Reducing the time of the
study has its own advantages.
Composite Outcome
Composite outcome is a combination of clinical events that
could be extrapolated to clinical outcome. Often, it is a
combination of individual elements or a score.
Composite outcome increases the power of the study as it
occurs more frequently. However, it is sometimes difficult to
interpret, replicate in other studies, and compare.
Patient-Reported Outcomes
Patient-reported outcome (PRO) includes any outcome that is
based on information provided by patients or by people who
Suggested Reading
American College of Emergency Physicians. Quality of care
and the outcomes management movement. http://
www.acep.org/Clinical---Practice-Management/.
Quality-of-Care-and-the-Outcomes-Management-Movement/
Grimes DA, Schulz KF. Bias and causal associations in observational
research. Lancet. 2002;359(9302):248–252. PMID: 11812579.
Whiting P, Rutjes AW, Reitsma JB, Glas AS, Bossuyt PM, Kleijnen J.
Sources of variation and bias in studies of diagnostic accuracy:
A systematic review. Ann Intern Med. 2004;140(3):189–202.
PMID: 14757617.
Wilson IB, Cleary PD. Linking clinical variables with health-related
quality of life. A conceptual model of patient outcomes. JAMA.
1995;273(1):59–65. PMID: 7996652.
Wunsch H, Linde-Zwirble WT, Angus DC. Methods to adjust for
bias and confounding in critical care health services research
involving observational data. J Crit Care. 2006;21(1):1–7. PMID:
16616616.
References
1. Sackett DL. Bias in analytic research. J Chronic Dis. 1979;32(1–
2):51–63. http://www.ncbi.nlm.nih.gov/pubmed/447779.
Chapter 4
Analyzing the Results of Evaluation
Data Preparation
All data, regardless of the source, require some degree of
attention before analysis. The first step is to check data for
completeness, homogeneity, accuracy, and inconsistency. The
second step is tabulation and classification. Data need to be
organized in a format that is understandable by statistical soft-
ware, then coded based on their type.
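As a minimal illustration of this first-pass checking and coding, the following Python sketch uses pandas; the file name and column names are hypothetical assumptions, not part of any particular system:

```python
import pandas as pd

df = pd.read_csv("evaluation_export.csv")   # hypothetical export file

# Step 1: check completeness, accuracy, and inconsistency
print(df.isna().sum())                      # missing values per column
print(df.describe(include="all"))           # ranges, counts, obvious outliers

# Step 2: tabulation and classification; code variables by type
df["sex"] = df["sex"].astype("category")                           # nominal
df["asa_class"] = pd.Categorical(df["asa_class"], ordered=True)    # ordinal
df["los_days"] = pd.to_numeric(df["los_days"], errors="coerce")    # continuous
```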
Data Distribution
Normal distribution (or “Gaussian” distribution) is based on
the theory that repeated measurements with the same instru-
ment tend to be the same. When all multiple measurements
Central Tendency
Mean (average): the arithmetic mean, calculated as the sum of all observations divided by the number of observations. Good for mathematical manipulation, but affected by outliers (extreme values).
Median: the middle number, with an equal number of observations above and below. Not sensitive to extreme values, but not very good for mathematical manipulation.

Dispersion (Variability)
Standard deviation (SD): the average difference of individual values from the sample mean. Good for mathematical manipulation, but affected by non-parametric distributions and inappropriate for skewed data.
Quartile: data points that divide the data set into four equal groups. Useful for an understandable description of data variation, but not very good for mathematical manipulation.
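A short NumPy sketch of these summary measures, on hypothetical values, shows how a single outlier shifts the mean and SD while leaving the median and quartiles nearly unchanged:

```python
import numpy as np

x = np.array([2.1, 2.4, 2.2, 2.8, 3.0, 2.5, 9.5])   # hypothetical values; 9.5 is an outlier

print("mean:", x.mean())                             # pulled upward by the outlier
print("median:", np.median(x))                       # robust to the outlier
print("SD:", x.std(ddof=1))                          # sample standard deviation
print("quartiles:", np.percentile(x, [25, 50, 75]))
```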
Figure: In skewed distributions the mean is pulled away from the median, whereas in a normal distribution they coincide; about 95% of normally distributed values fall within ±1.96 SD of the mean (z-scores from −2 to +2).
Confidence Intervals
A confidence interval (CI) is another parameter to describe
variability for means and proportions. A 95% CI means that
there is 95% confidence that the true population mean lies
between those values. It has more practical and clinical impli-
cations. Decision making is based not on significance of differ-
ence, but on the size of the effect. The CI limits give an idea of
how clinically important the assessed effect is.
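For illustration, a 95% CI for a sample mean can be obtained from the mean and its standard error using the t-distribution. A minimal Python sketch, with hypothetical length-of-stay values:

```python
import numpy as np
from scipy import stats

los_days = np.array([3.2, 4.1, 2.8, 5.0, 3.7, 4.4, 2.9, 3.5, 4.8, 3.1])  # hypothetical data

mean = los_days.mean()
sem = stats.sem(los_days)   # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(los_days) - 1, loc=mean, scale=sem)

print(f"mean = {mean:.2f} days, 95% CI {ci_low:.2f} to {ci_high:.2f}")
```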
Figure: A box plot displays the median, the interquartile range (the middle 50% of the data), the 95% CI, and the minimum and maximum values.
p-Value
A p-value helps determine the statistical significance of results
and could be any number between 0 and 1. A p-value of
≤0.05 (typical level) indicates strong evidence against the null
hypothesis. This allows the rejection of the null hypothesis,
and supports an alternative hypothesis. The p-value is calcu-
lated with statistical software when statistical tests for testing
hypotheses are applied. The most-used level of p < 0.05 is
considered as statistically significant, and p < 0.001 as highly
statistically significant (as it has a less than one in a thousand
chance of being wrong).
Hypothesis Testing
Experiments with the evaluation project should usually start
with research hypotheses. This is a statement of a problem
Non-Parametric Tests
There are two groups of statistical tests for hypothesis test-
ing: parametric and non-parametric. Parametric tests should
be used on samples that are normally distributed. If a
sample is skewed (or the sample size is small), non-parametric
tests (rank methods) should be used.
Non‑parametric tests usually rank data and make no assump-
tion about distribution, thus resulting in a less powerful test
and a decrease in the tested effect.
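As a sketch of how such tests are applied in practice, the following uses SciPy with hypothetical documentation times for two independent groups of clinicians; a normality check guides the choice between the parametric test and its rank-based alternative:

```python
import numpy as np
from scipy import stats

group_paper = np.array([12.1, 10.4, 15.2, 11.8, 13.0, 14.6, 12.7, 11.1])  # minutes, hypothetical
group_ehr = np.array([9.8, 10.1, 11.5, 9.2, 12.0, 10.7, 9.9, 11.3])

# Check normality of each sample before choosing a test
print(stats.shapiro(group_paper), stats.shapiro(group_ehr))

# Parametric: unpaired t-test for two independent, normally distributed groups
print(stats.ttest_ind(group_paper, group_ehr))

# Non-parametric alternative: Mann–Whitney U test for skewed or small samples
print(stats.mannwhitneyu(group_paper, group_ehr))
```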
Table 4.2 Appropriate Statistical Methods for Hypothesis Testing with One Dependent Variable

One group
  Parametric: one-sample t-test (numerical); Z-test for proportions (categorical)
  Non-parametric: Wilcoxon signed rank test (numerical); sign test (categorical)
Two groups, independent
  Parametric: unpaired t-test (numerical); chi-squared test (categorical)
  Non-parametric: Mann–Whitney U-test (Wilcoxon rank-sum test) (numerical); Fisher's exact test (categorical)
Two groups, paired
  Parametric: paired Student's t-test (numerical); McNemar's test (categorical)
  Non-parametric: Wilcoxon signed rank test (numerical)
Three or more groups, independent
  Parametric: one-way ANOVA (numerical); chi-squared test (categorical)
  Non-parametric: Kruskal–Wallis test (numerical)
Three or more groups, paired
  Parametric: repeated measures ANOVA (numerical)
  Non-parametric: Friedman's test (numerical)
Analytics Methods
Identifying Relationship: Correlation
Purpose: Correlation is used to measure the degree of asso-
ciation between two variables.
Methods: Pearson’s correlation coefficient calculated for
normally distributed data. Non-parametric equivalent is
Spearman’s correlation.
Interpretation: The test reports r (correlation coefficient). r
ranges from −1 to 1. Zero means that there is no linear cor-
relation between the variables; 1 is a perfect positive correla-
tion; −1 is a perfect negative correlation.
Limitations: Correlation cannot be calculated if there is more
than one observation from one individual; it is also misleading
when outliers are present, when the relationship is not linear,
or when the data comprise subgroups.
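A short SciPy sketch, with hypothetical data on alert volume versus alert override rate, illustrates both coefficients:

```python
import numpy as np
from scipy import stats

alerts_per_day = np.array([5, 8, 12, 20, 25, 30, 41, 55])                   # hypothetical
override_rate = np.array([0.10, 0.15, 0.22, 0.35, 0.40, 0.52, 0.61, 0.70])

r, p = stats.pearsonr(alerts_per_day, override_rate)        # assumes normality and a linear relation
rho, p_s = stats.spearmanr(alerts_per_day, override_rate)   # rank-based, non-parametric

print(f"Pearson r = {r:.2f} (p = {p:.4f}); Spearman rho = {rho:.2f} (p = {p_s:.4f})")
```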
Regression
For many non-statisticians, the terms correlation and regres-
sion are synonymous, and refer vaguely to a mental image of a
scatter graph with dots sprinkled messily along a diagonal line
sprouting from the intercept of the axes. The term regression
refers to a mathematical equation that allows one variable (the
target variable) to be predicted from another (the independent
variable). Regression, then, implies a direction of influence,
although it does not prove causality.4
Purpose: To identify and quantify the relationship between
predictor (independent) variables and outcome (dependent)
variables. Predictor variables can also be called risk factors:
exposure, intervention, or treatment and outcome variables as
outcome events (disease). As a result, regression allows us to
build a prediction model.
Linear regression is a technique where only one continuous
predictor variable is considered.
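A minimal sketch with statsmodels, using hypothetical variable names, fits a simple linear regression with one continuous predictor and, for a binary outcome, a logistic regression of the kind covered in the statistics literature cited in this chapter:

```python
import numpy as np
import statsmodels.api as sm

age = np.array([34, 45, 52, 61, 67, 70, 74, 80])            # hypothetical predictor
los = np.array([2.1, 2.5, 3.0, 3.8, 4.1, 4.6, 5.2, 6.0])    # hypothetical continuous outcome

# Linear regression: one continuous predictor, one continuous outcome
linear_model = sm.OLS(los, sm.add_constant(age)).fit()
print(linear_model.params)             # intercept and slope

# Logistic regression: the same predictor against a binary outcome (e.g., readmission)
readmitted = np.array([0, 0, 0, 1, 0, 1, 1, 1])
logit_model = sm.Logit(readmitted, sm.add_constant(age)).fit()
print(np.exp(logit_model.params))      # odds ratios
```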
Figure: The 2 × 2 table for a diagnostic test or alert (the new test) against the reference standard: true positives and false positives in the test-positive row, false negatives and true negatives in the test-negative row.
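From the four cells of this table the usual diagnostic performance metrics follow directly; a minimal sketch with hypothetical counts:

```python
# Hypothetical counts from comparing an alert against a reference standard
tp, fp, fn, tn = 45, 15, 5, 135

sensitivity = tp / (tp + fn)    # true positive rate
specificity = tn / (tn + fp)    # true negative rate
ppv = tp / (tp + fp)            # positive predictive value
npv = tn / (tn + fn)            # negative predictive value

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```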
Assessing Agreements
Kappa is used to assess reproducibility or inter-rater reliability.
One example would be the agreement between two observers
on the same measurements.7 Kappa is used on categorical variables,
and values greater than 0.75 are considered excellent
agreement. A more advanced statistical method is intraclass
correlation coefficient (ICC), which is applicable for assessing
inter-rater reliability, with two or more raters.
The Bland–Altman plot is a method of comparing new
measurement techniques with an established one to see
whether they agree sufficiently for the new to replace the old.
This is a graphical method where the differences between the
two continuous variables are plotted against their averages.8
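A brief Python sketch of both ideas, with hypothetical ratings and measurements: Cohen's kappa via scikit-learn, and the Bland–Altman bias and limits of agreement computed directly:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Agreement between two raters on a categorical judgment (hypothetical)
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print("kappa:", cohen_kappa_score(rater_a, rater_b))

# Bland–Altman summary for an old and a new measurement method (hypothetical)
method_old = np.array([98.0, 110.0, 121.0, 135.0, 142.0, 150.0])
method_new = np.array([101.0, 108.0, 125.0, 131.0, 145.0, 148.0])
diff = method_new - method_old
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)    # 95% limits of agreement around the bias
print(f"bias={bias:.1f}, limits of agreement=({bias - loa:.1f}, {bias + loa:.1f})")
```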
Outcome Measurements
Selecting statistical outcome measurements involves choosing
metrics that are important for showing improvement
in patient outcomes. The association between specific inter-
vention and clinical outcome is described by other metrics,
rather than by statistical inference.9 The process and defini-
tion linked to evidence-based medicine (EBM) have identified
Multiple Comparisons
Often, when we have data, we would like to do many com-
parisons to find statistically significant relationships. The Type
I error increases dramatically, which leads to spurious conclu-
sions. The analysis should include only the limited number
of tests needed to answer the study hypothesis. This is becoming
more obvious with data-mining exercises to find something
significant. The p-value must be adjusted by using Bonferroni
correction, or multivariate methods such as ANOVA should be
used.
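For example, with ten exploratory tests the Bonferroni-adjusted threshold is 0.05/10 = 0.005; a minimal sketch with hypothetical p-values:

```python
import numpy as np

p_values = np.array([0.001, 0.008, 0.012, 0.030, 0.049, 0.20, 0.35, 0.41, 0.60, 0.72])
alpha = 0.05
adjusted_alpha = alpha / len(p_values)   # Bonferroni correction: 0.005 for ten tests

significant = p_values < adjusted_alpha
print("adjusted alpha:", adjusted_alpha)
print("significant after correction:", int(significant.sum()), "of", len(p_values))
```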
Subgroup Analysis
Another type of analysis that could lead to the wrong conclu-
sions if not done correctly is subgroup analysis. Subgroup
analyses split data into subgroups and make comparisons
78 ◾ Health Information Technology Evaluation Handbook
Suggested Reading
Four articles “Basic statistics for clinicians” in Canadian Medical
Association Journal. PMIDs: 7804919, 7820798, 7828099,
7859197.
References
1. Salkind NJ. Statistics for People Who (Think They) Hate
Statistics. Los Angeles, CA: SAGE, 2014.
2. Gardner MJ, Altman DG. Confidence intervals rather than P
values: Estimation rather than hypothesis testing. Br Med J (Clin
Res Ed). 1986;292(6522):746–750. http://www.ncbi.nlm.nih.gov/
pubmed/3082422.
3. Nayak BK, Hazra A. How to choose the right statistical test?
Indian J Ophthalmol. 59(2):85–86.
4. Greenhalgh T. How to read a paper. Statistics for the non-
statistician. II: “Significant” relations and their pitfalls. BMJ.
1997;315(7105):422–425. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2127270&tool=pmcentrez&rendertype=abstract. Accessed September 15, 2011.
5. Bewick V, Cheek L, Ball J. Statistics review 14: Logistic regres-
sion. Crit Care. 2005;9(1):112–118.
6. Bewick V, Cheek L, Ball J. Statistics review 7: Correlation and
regression. Crit Care. 2003;7(6):451–459.
Chapter 5
Proposing and Communicating the Results of Evaluation Studies
1. Project proposal
2. Detailed protocol
Table 5.1 Types of Study Reports and Their Target Audiences: report types (abstract, scientific peer-reviewed article, newsletter article, evaluation report, poster, website publication, and guideline/standard) are matched to their target audiences (patients and family, clinicians, IT and security, administrators/purchasers, the scientific community, policy-makers, and the general audience/society).
Figure: Example poster layout: results explained as text and diagrams; methodology and measurements; a diagram or figure providing more detail about the methodology or technology used; additional references; contact information; and a funding and conflict of interest statement.
Figure: Choosing a data display: the relationship between two continuous variables is shown with a chart (scatterplot); agreement between diagnostic methods without a gold standard is shown with a Bland–Altman plot.
Suggested Reading
Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and
accurate reporting of studies of diagnostic accuracy: The
STARD Initiative. Ann Intern Med . 2003 Jan 7;138(1):40– 44.
http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&d
b=PubMed&dopt=Citation&list_uids=12513043.
References
1. A proposal for more informative abstracts of clinical articles.
Ad Hoc Working Group for Critical Appraisal of the Medical
Literature. Ann Intern Med . 1987;106(4):598– 604. http://www.
ncbi.nlm.nih.gov/pubmed/3826959.
2. Begg C, Cho M, Eastwood S, et al. Improving the quality of
reporting of randomized controlled trials. The CONSORT state-
ment. JAMA . 1996;276(8):637– 639. http://www.ncbi.nlm.nih.gov/
pubmed/8773637.
3. Talmon J, Ammenwerth E, Brender J, de Keizer N, Nykänen P,
Rigby M. STARE-HI: Statement on reporting of evaluation stud-
ies in health informatics. Int J Med Inform . 2009;78(1):1– 9.
4. Baker TB, Gustafson DH, Shaw B, et al. Relevance of
CONSORT reporting criteria for research on eHealth interven-
tions. Patient Educ Couns . 2010;81 Suppl: S77– S86.
5. Husereau D, Drummond M, Petrou S, et al. Consolidated
Health Economic Evaluation Reporting Standards (CHEERS)
statement. Eur J Heal Econ . 2013;14(3):367– 372.
6. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC,
Vandenbroucke JP. The Strengthening the Reporting of
Observational Studies in Epidemiology (STROBE) statement:
Guidelines for reporting observational studies. Ann Intern Med .
2007;147(4):344– 349.
7. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete
and accurate reporting of studies of diagnostic accuracy: The
STARD Initiative. Ann Intern Med . 2003;138(1):40– 44.
Chapter 6
Safety Evaluation
Figure: The spectrum from non-regulated to regulated HIT.
◾ System uptime
◾ System response and start time
◾ Incorrect reports of diagnostic or laboratory tests
Figure 6.3 Evaluation study designs to address the clinical safety of HIT: "active" (laboratory-based) and "passive" (post-implementation) designs.
Table 6.1 Safety Problems, Factors, Manifestations, and Potential Evaluation Studies Designed to Address Them

System malfunction
  Factor: hardware defects and software bugs. Manifestations: failure to alert; missing or incorrect data. Evaluation studies: technical evaluation; diagnostic performance study (comparison with the original system as gold standard).
  Factor: inappropriate access to the system. Manifestation: system misuse. Evaluation study: penetration test.
  Factor: system availability. Manifestations: scheduled system maintenance; unexpected outages. Evaluation study: technical evaluation.
  Factor: weak infrastructure. Evaluation study: technical evaluation.
Incorrect use
  Factor: poor workflow design. Manifestations: workflow selection issues; workarounds. Evaluation studies: usability studies; surveys.
  Factor: visual design flaw. Manifestations: alert fatigue; information overload. Evaluation studies: cognitive testing; usability studies; surveys.
  Factor: inadequate training. Manifestation: errors in usage. Evaluation study: simulation studies.
Interaction with system
  Factor: functionality gap. Manifestation: limited functionality. Evaluation study: usability studies.
  Factor: user resistance. Manifestation: unreliable usage and transfer of data. Evaluation studies: surveys; observational study.
Figure 6.4 Proactive hazard control: errors in HIT design or implementation and "unforced" HIT-use errors give rise to HIT-related hazards; hazards that are identified and resolved do not result in adverse effects. (From Walker JM et al. Health IT Hazard Manager Beta-Test: Final Report.)
6. Patient identification
7. Computerized provider order entry with decision support
8. Test result reporting and follow-up
9. Clinician communication
Figure: Randomized crossover design for a usability study: subject pairs (e.g., Physician A and Physician B) are randomized to the order in which they use each user interface (e.g., the current user interface and a new one), and workload is assessed with NASA-TLX after each session.
6.4 Summary
Currently, there are no regulatory requirements for evaluating HIT
system safety, even if systems are directly used in patient care
Suggested Reading
An oversight framework for assuring patient safety in health
information technology. http://bipartisanpolicy.org/library/
oversight-framework-assuring-patient-safety-health-information-
technology/.
Committee on Patient Safety and Health Information Technology,
and Board on Health Care Services. 2012. Health IT and
Patient Safety: Building Safer Systems for Better Care .
Washington, DC: National Academies Press. ISBN: 978-
0309221122. (Free ebook: http://www.nap.edu/catalog/13269/
health-it-and-patient-safety-building-safer-systems-for-better.)
Harrington L, Kennerly D, Johnson C. Safety issues related to the
electronic medical record (EMR): Synthesis of the literature
from the last decade, 2000– 2009. J Healthc Manag . 56(1):31– 43;
PMID: 21323026.
HealthIT.gov safety resource. http://www.healthit.gov/
policy-researchers-implementers/health-it-and-safety.
Health IT Safety Program: Progress on Health IT Patient Safety
Action and Surveillance Plan. https://www.healthit.gov/sites/
default/files/ONC_HIT_SafetyProgramReport_9-9-14_.pdf.
Walker JM, Carayon P, Leveson N, et al. EHR safety: The way for-
ward to safe and effective systems. J Am Med Inform Assoc .
15(3):272– 277. PMID: 18308981.
References
1. Bates DW, Teich JM, Lee J, et al. The impact of computerized
physician order entry on medication error prevention. J Am Med Inform Assoc. 1999;6(4):313–321.
Chapter 7
Cost Evaluation
Figure: Inputs to a cost evaluation include regulations, preferences, cost, and alternative solutions.
1. Explain a perspective
2. Include options for comparison
3. Define a time frame
4. Establish costs
5. Define and calculate impact on outcomes
6. Compare costs and outcomes for each option
Cost-Benefit Analysis
CBA is an economic evaluation in which all costs and conse-
quences of a program are expressed in the same units, usually
money. It is used to determine allocative efficiency, such as
comparing costs and benefits across programs serving differ-
ent patient groups.
Cost measurement: Monetary units.
Outcome measurement: Monetary unit attributed to health
effects. Benefit–cost ratio.
Limitations: This type of study relies on hypothetical estimation
of costs, which limits how well it reflects real-life economic
benefits. Qualitative variables such
Cost-Effectiveness Analysis
CEA is an economic evaluation in which the costs and conse-
quences of alternative interventions are expressed in cost per
Cost-Minimization Analysis
CMA is an economic evaluation in which the consequences
of competing interventions are the same and in which only
inputs (i.e., costs) are taken into consideration. The aim of
CMA is to determine the least costly alternative therapy to
achieve the same outcome.
Outcome measurement varies but is equal among
alternatives.
Cost measurement: Monetary unit.
Cost minimization can only be used to compare two sys-
tems that could be comparable in end effect on the patient
outcome. This method is very useful for comparing an old HIT
system with a new system assumed to have the same perfor-
mance or impact. There is often no reliable evidence between
two products to demonstrate their equivalence, and if equiva-
lence cannot be demonstrated, CMA is inappropriate.
An example of CMA in HIT is the article “ Cost-Effectiveness
of Telemonitoring for High-Risk Pregnant Women.” 10 In
this retrospective study, telemonitoring of high-risk preg-
nant women was compared to hospitalization for monitor-
ing. ICD9CM codes were used to identify patients’ cohort for
study, and possible scenarios and costs were estimated based
on hospital length-of-stay (LOS) associated cost. For the tele-
monitoring group, the associated operational cost was esti-
mated, and using those assumptions and numbers, potential
cost reduction was calculated. Such a study has limitations, as
Return on Investment
The traditional financial definition of ROI is simply earnings
divided by investment.
The definition of earnings and cost over time in a big
HIT implementation project, however, is not straightforward.
Various tangible/intangible costs and direct/indirect benefits
must be considered during analysis.
Cost measurement: Monetary unit.
Outcome measurement: ROI calculation = (estimated lifetime
benefit − estimated lifetime costs)/estimated lifetime costs.
Benefits measurements: Not always in monetary units;
sometimes distant and fairly indirect (e.g., staff time saved or
potential savings from decreased errors).
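As a worked illustration of the formula above, using hypothetical five-year figures rather than data from any actual project:

```python
lifetime_costs = 1_200_000    # hypothetical: licenses, hardware, training, support over 5 years
lifetime_benefit = 1_650_000  # hypothetical: staff time saved, avoided errors, reduced transcription

roi = (lifetime_benefit - lifetime_costs) / lifetime_costs
print(f"ROI = {roi:.1%}")     # 37.5% over the project lifetime
```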
When the cost of a big HIT implementation project is esti-
mated, it usually breaks into the following categories:
Suggested Reading
Bassi J, Lau F. Measuring value for money: A scoping review on
economic evaluation of health information systems. J Am Med
Inform Assoc . 2014;20(4):792– 801. PMID: 23416247. Review of
summarized publications on HIT economic evaluation in terms
of the types of economic analysis, covered areas, and others.
Drummond MF, Sculpher MJ, Torrance GW, O’ Brien BJ, Stoddart
GL. Methods for the Economic Evaluation of Health Care
Programmes . 3rd edn. Oxford: Oxford University Press, 2005.
ISBN: 0198529457.
ROI research in healthcare: The value factor in returns on health
IT investments. http://apps.himss.org/transformation/docs/
ResearchReport1.pdf.
Swensen SJ, Dilling JA, McCarty PM, Bolton JW, Harper CM. The
business case for health-care quality improvement. J Patient
Saf . 2013;9(1):44– 52. PMID: 23429226.
Tan-Torres Edejer T, Baltussen R, Adam T. et al. (eds) Making
Choices in Health: WHO Guide to Cost-Effectiveness Analysis.
http://www.who.int/choice/publications/p_2003_generalised_
cea.pdf.
U.S. National Library of Medicine. Health economics information
resources: A self-study course. https://www.nlm.nih.gov/nichsr/
edu/healthecon/index.html.
Wang T, Biedermann S. Running the numbers on an EHR: Applying
cost-benefit analysis in EHR adoption. http://bok.ahima.org/
doc?oid=101607.
References
1. McLaughlin N, Ong MK, Tabbush V, Hagigi F, Martin NA.
Contemporary health care economics: An overview. Neurosurg
Focus . 2014;37(5):E2. doi:10.3171/2014.8.FOCUS14455. PMID:
25363430.
Chapter 8
Efficacy and Effectiveness Evaluation
Figure: Efficacy is assessed under "laboratory" conditions, whereas effectiveness is assessed "live," under conditions of routine use.
◾ Better diagnosis
◾ Better therapy
◾ Better communication (clinician to clinician or clinician to patient)
◾ Better knowledge management (clinician or patient)
◾ Less time spent on the care process (clinicians or patients)
◾ Less expensive care
◾ Safer care
Steps 2 and 3 would be a perfect fit for the initial small pilot
studies in a laboratory setting (efficacy study) and live
implementation.
Figure: Example clinical workflow: patient scheduling, patient registration, patient arrival, perform patient evaluation, make clinical decision, develop treatment plan, request consultations, order meds, labs, and imaging, conduct procedures, and document the visit/service.
Suggested Reading
45 CFR Parts 160 and 164 Modifications to the HIPAA Privacy,
Security, Enforcement, and Breach Notification Rules Under the
Health Information Technology for Economic and Clinical Health
Act and the Genetic Information Nondiscrimination Act. https://
www.gpo.gov/fdsys/pkg/FR-2013-01-25/pdf/2013-01073.pdf.
Altman DG, Royston P. What do we mean by validating a prognos-
tic model? Stat Med . 2000;19(4):453– 473. PMID: 10694730.
Kozma CM, Reeder CE, Schultz RM. Economic, clinical, and human-
istic outcomes: A planning model for pharmacoeconomic
research. Clin Ther . 1993;15(6):1121– 1132. The Economic,
Clinical, and Humanistic Outcomes (ECHO) model depicts the
value of a pharmaceutical product or service as a combination
of traditional clinical-based outcomes with more contemporary
measures of economic efficiency and quality.
Reassessing your security practices in a health IT environment:
A guide for small health care practices. http://s3.amazonaws.
com/rdcms-himss/files/production/public/HIMSSorg/Content/
files/Code%20165%20HHS%20Reassessing%20Security%20
Practices%20in%20a%20Health%20IT%20Environment.pdf.
Scales DC, Laupacis A. Health technology assessment in criti-
cal care. Intensive Care Med. 2007;33(12):2183– 2191. PMID:
17952404.
References
1. Roundtable on Value & Science-Driven Health Care; Institute
of Medicine. Core Measurement Needs for Better Care, Better
Health, and Lower Costs: Counting What Counts: Workshop
Summary . Washington, DC: National Academies Press; 2013
Aug 30. 1, Introd.
2. Chin JP, Diehl VA, Norman KL. Development of an instrument
measuring user satisfaction of the human– computer inter-
face. In ACM CHI’88 Proceedings , Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems (pp.
213– 218). New York: ACM, 1988.
3. Davis, F. Perceived usefulness, perceived ease of use, and
user acceptance of information technology. MIS Quarterly .
1989;13(3):319– 340.
4. Hatcher M. Information systems’ approaches and designs and
facility information: Survey of acute care hospitals in the United
States. J Med Syst . 1998;22(6):389– 396.
5. Herasevich V, Ellsworth MA, Hebl JR, Brown MJ, Pickering BW.
Information needs for the OR and PACU electronic medical
record. Appl Clin Inform . 2014;5(3):630– 641.
6. Kilickaya O, Schmickl C, Ahmed A, et al. Customized reference
ranges for laboratory values decrease false positive alerts in
intensive care unit patients. PLoS One . 2014;9(9):e107930.
7. Adams WG, Mann AM, Bauchner H. Use of an electronic medi-
cal record improves the quality of urban pediatric primary
care. Pediatrics . 2003;111(3):626– 632. http://www.ncbi.nlm.nih.
gov/pubmed/12612247.
Chapter 9
Usability Evaluation
1. Use error root causes : These are aspects of the user inter-
face design that induce use errors when interacting with
the system.
2. Risk parameters : These are attributes regarding particu-
lar use errors (i.e., their severity, frequency, ability to be
detected, and complexity).
3. Evaluative indicators: These are indications that users
are having problems with the system, and are identified
through direct observations of the system in use in situ
through interviews or user surveys.
4. Adverse events : These are descriptions of the use error
outcome and the standard classification of patient harm.
9.3 Evaluating Consistency of
Experience (User Satisfaction)
User satisfaction is usually the first thing that people think in
relation to “usability.” Satisfaction in the context of usability
refers to the subjective satisfaction a user may have with a pro-
cess or outcome. Satisfaction is highly subjective, but routine
questionnaires can provide a good insight into users’ problems
or issues with a system.
User satisfaction is one of the most underrated measure-
ment modalities in healthcare. Generally, healthcare workers
1. Clinician burnout
2. Patient satisfaction
Figure: Usability evaluation process: starting from a prototype product, an evaluation plan is executed, followed by data analysis and interpretation of results; usability requirements are then judged met, partially met, or not met. Identified user interface design issues feed design iteration, critical use risks define critical safety test scenarios, remaining user interface issues are tracked, and the production product follows once requirements are met.
Cognitive Walk-Through
Cognitive walk-through involves one or a group of evaluators
observing or interviewing a subject as he or she performs a
9.8 Procedure
Phase 1: Defining the Users of the System
Who will be the users of the system? This should include spe-
cific background experience or technical knowledge that
could influence users as they attempt to deal with a new
interface. The users’ knowledge of both the task and the
interface should be considered. An example user description is
“Macintosh users who have worked with MacPaint.”
◾ Will the users try to achieve the right effect? For example,
their task is to print a document, but the first thing they
have to do is select a printer. Will they know that they
should select a printer?
◾ Will the user notice that the correct action is available?
This relates to the visibility and understandability of
actions in the interface.
◾◾ Will the user associate the correct action with the effect
to be achieved? Users often use the “label-following”
strategy, which leads them to select an action if its label
matches the task description.
◾◾ If the correct action is performed, will the user see that progress is being made toward the solution of the task? This is to check the system feedback after the user executes the action.
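The four questions above lend themselves to a simple per-step record. The sketch below, using a hypothetical “print a document” task, is one way an evaluator might log answers during a walk-through; the structure and field names are assumptions rather than a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class WalkthroughStep:
    """Answers to the four cognitive walk-through questions for one task step."""
    action: str                          # the correct action at this step
    tries_right_effect: bool             # will the user try to achieve the right effect?
    notices_action_available: bool       # will the user notice the correct action is available?
    associates_action_with_effect: bool  # will the user associate the action with the effect?
    sees_progress_feedback: bool         # will the user see progress after performing it?
    notes: str = ""

    @property
    def problem(self) -> bool:
        """Flag the step if any of the four questions is answered 'no'."""
        return not (self.tries_right_effect and self.notices_action_available
                    and self.associates_action_with_effect and self.sees_progress_feedback)

# Hypothetical walk-through of a "print a document" task:
steps = [
    WalkthroughStep("Select a printer", False, True, True, True,
                    notes="users do not expect printer selection to come first"),
    WalkthroughStep("Click the Print button", True, True, True, True),
]
for step in steps:
    status = "PROBLEM" if step.problem else "ok"
    print(f"{step.action}: {status}" + (f" ({step.notes})" if step.notes else ""))
```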
Keystroke-Level Model
The keystroke-level model (KLM) is a simplified version of GOMS (goals, operators, methods, and selection rules). It was proposed by Card and Moran (1980) as a method for predicting user performance.3
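To illustrate how KLM produces a time prediction, the sketch below sums commonly cited approximate operator times (keystroke, pointing, homing, mental preparation) over a sequence of actions. The operator values and the example sequence are assumptions for illustration, not figures from this chapter.

```python
# Commonly cited approximate KLM operator times, in seconds (values vary by source).
OPERATOR_SECONDS = {
    "K": 0.28,  # press a key or button (average typist)
    "P": 1.10,  # point to a target on the screen with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def klm_estimate(sequence: str, response_time: float = 0.0) -> float:
    """Predict task time as the sum of operator times plus any system response time."""
    return sum(OPERATOR_SECONDS[op] for op in sequence) + response_time

# Hypothetical task: mentally prepare, point to a field, home to the keyboard, type six characters.
print(round(klm_estimate("MPH" + "K" * 6), 2))  # -> 4.53 seconds
```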
Heuristic Evaluation
Heuristic evaluation is a usability inspection method in which the system is evaluated against well-tested design principles such as visibility of system status, user control and freedom, consistency and standards, and flexibility and efficiency of use. The methodology was developed by Jakob Nielsen4 (Nielsen, J., “Enhancing the Explanatory Power of Usability Heuristics,” CHI’94 Conference Proceedings [1994]) and was later modified by Zhang.5
Reporting
A list of identified problems is produced, which may be prioritized by severity and/or safety criticality. As summative output, the evaluation can also report the number of problems found, the estimated proportion of problems found relative to the theoretical total, and the estimated number of new problems that would be found by adding a specified number of additional experts to the evaluation.
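The last two estimates are commonly derived from a problem-discovery model in which the proportion of problems found by n independent evaluators grows as 1 - (1 - λ)^n. The sketch below applies that formula; the per-evaluator detection rate λ = 0.31 is a frequently quoted typical value and is used here purely as an assumption.

```python
def proportion_found(n_evaluators: int, detection_rate: float = 0.31) -> float:
    """Estimated share of all usability problems found by n independent evaluators."""
    return 1 - (1 - detection_rate) ** n_evaluators

def estimated_total_problems(problems_found: int, n_evaluators: int,
                             detection_rate: float = 0.31) -> float:
    """Back-estimate the theoretical total of problems from those found so far."""
    return problems_found / proportion_found(n_evaluators, detection_rate)

# Hypothetical heuristic evaluation: 3 experts found 18 distinct problems.
total = estimated_total_problems(18, 3)
new_with_two_more = proportion_found(5) * total - 18
print(round(total))              # ~27 problems estimated in total
print(round(new_with_two_more))  # ~5 new problems expected from adding 2 more experts
```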
The following are examples of heuristic evaluation checklists:
ftp://ftp.cs.uregina.ca/pub/class/305/lab2/example-he.html: Heuristic Evaluation - A System Checklist by Deniese Pierotti, Xerox Corporation.
https://wiki.library.oregonstate.edu/confluence/download/attachments/17959/Heuristic+Evaluation+Checklist.pdf: Heuristic Evaluation - A System Checklist, Usability Analysis & Design, WebCriteria, 2002.
9.9 Conclusions
Healthcare delivery requires that human and technological actors work in harmony to produce the desired health outcomes and avoid patient harm. Usability evaluation is important if we are to understand and anticipate the potential unintended consequences of introducing technology into clinical practice.
Suggested Reading
(NISTIR 7804) Technical evaluation, testing, and validation of the usability of electronic health records (2012). http://www.nist.gov/healthcare/usability/upload/EUP_WERB_Version_2_23_12-Final-2.pdf.
(NISTIR 7804-1) Technical evaluation, testing, and validation of the usability of electronic health records: Empirically based use cases for validating safety-enhanced usability and guidelines for standardization literature (2015). http://nvlpubs.nist.gov/nistpubs/ir/2015/NIST.IR.7804-1.pdf.
Electronic health record usability: Interface design considerations. Prepared for AHRQ. http://healthit.ahrq.gov/sites/default/files/docs/citation/09-10-0091-2-EF.pdf.
HIMSS EHR Usability Task Force (2010). Selecting an EHR for your practice: Evaluating usability. http://www.himss.org/selecting-ehr-your-practice-evaluating-usability-himss.
Usability.gov is the leading resource for user experience (UX) best practices and guidelines, serving practitioners and students in the government and private sectors. The site provides overviews of the user-centered design process and various UX disciplines, and covers related methodology and tools for making digital content more usable and useful. http://www.usability.gov/.
References
1. (NISTIR 7804) Technical evaluation, testing and validation of the usability of electronic health records. https://www.nist.gov/publications/nistir-7804-technical-evaluation-testing-and-validation-usability-electronic-health. 2012.
Chapter 10
Case Studies
SWIFT Implementation
The clinical practice had identified the target for quality improvement, and thus buy-in from clinical nursing and medical leadership was high. Education materials were targeted at the bedside providers, with nursing materials designed in close consultation with nursing quality improvement coaches and physician materials designed by the physician study investigators. This material was delivered in a series of small group and individual provider discussions. The SWIFT score was automatically calculated once per day at 6:45 am, before morning rounds.
Results
The impact of the SWIFT score on clinician discharge planning is illustrated as follows. The first observation to note is the low rate of utilization of the SWIFT score. Of the 1938 discharges observed during the 1-year post-implementation period, only 356 patients (18.4%) had a SWIFT score discussed during morning rounds. Interestingly, where that discussion did occur, it altered the decision-making process in some way in 30% of patients.
Given the low uptake of the score, the impact on patient outcomes was predictably less than expected. The 24-hour and 7-day readmission rates in the pre- and post-implementation phases were not significantly different, and resource utilization did not differ significantly between the groups. This held true for the whole group and for a severity-of-illness (APACHE)-matched subgroup.
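A before-after comparison of readmission rates of this kind usually reduces to a test of two proportions. The sketch below shows one common way to run such a test; the counts are invented placeholders, not data from this study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: readmissions vs. non-readmissions, pre- and post-implementation.
table = np.array([
    [120, 1815],  # pre-implementation:  readmitted, not readmitted
    [118, 1820],  # post-implementation: readmitted, not readmitted
])

chi2, p_value, dof, expected = chi2_contingency(table)
pre_rate = table[0, 0] / table[0].sum()
post_rate = table[1, 0] / table[1].sum()
print(f"Pre: {pre_rate:.1%}, Post: {post_rate:.1%}, p = {p_value:.2f}")
```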
Case Discussion
Having been closely involved in this study, we found the low rate of utilization and the limited impact on patient outcomes somewhat surprising. However, a careful review of the case study reveals a number of obvious and potentially avoidable failure points.
SWIFT Score
The SWIFT score performed reasonably well and was developed on a database of patients highly representative of the study population. As such, it was expected to be a reasonable predictor of the risk of readmission to the ICU in this study. However, the intention of this study was to reduce readmission rates through the implementation of the tool at the point of care. For this to be effective, the SWIFT score must provide the clinician with actionable information that leads to an intervention which reduces the risk of readmission. When we look at the patient features that contribute most to the SWIFT score prediction algorithm, we immediately see a problem: none of these features is easily modifiable, and some of them are absolutely not modifiable. What action is a clinician to take when informed that a patient who has spent a long time in the ICU and has come from a nursing home has a high risk of readmission to the ICU? Most will quickly discard the information and, very soon after, stop using the tool.
By calibrating the SWIFT score to a prediction of the risk of readmission, we presented information that was not actionable at the bedside. A more successful approach may have been to calibrate the SWIFT score to identify modifiable clinical features that, if acted upon, would reduce the risk of readmission. This is an important distinction, and one that we learned through this study to incorporate into our subsequent development efforts.
Study Design
The study was designed as a before-after study in a single center. This is a weak design for establishing both causality and generalizability. It is, however, an efficient and simple study design when resources are limited or when you are gathering preliminary safety and efficacy data for a broader study.
Implementation
When designing tools to be used in clinical practice, resources must be committed to implementation up front. Part of the implementation effort requires that the developers clearly establish the stakeholders’ requirements. In this case, we established the clinical leadership requirements but failed to understand the bedside providers’ requirements. This led to a tool that successfully identified a high-risk readmission cohort but failed to make that information actionable at the bedside. A second key function of an implementation effort is to provide education on the use of the tool while establishing buy-in. In this case, our nursing quality improvement coaches took the lead and did a good job of engaging and educating nursing leadership and bedside nursing staff. The physician team leaders, that is, the authors, did not do an equally good job with the bedside physicians. The implementation effort for this group was less consistent. It is possible that a more concerted effort could have increased the adoption rate of the SWIFT score.
Results
The primary and secondary outcomes of this study were clearly defined up front, and both observational data and patient outcome data were gathered. This multimodal approach is very useful when one is trying to understand the contribution a tool such as the SWIFT score makes to a patient-centered outcome. In this case, we hypothesized that the availability of the SWIFT score would result in discussion at the bedside, which would alter clinical decision making and reduce readmission rates. Even if patient readmission rates had plummeted, without data on adoption and impact on clinical decision making it would not be possible to determine whether the SWIFT score contributed in any way to the observed patient outcome.
Direct field observation is a wonderful tool, but the cost of an observational study can be substantial. In the case of the
Testing
The introduction of information technology (IT) into clinical practice brings with it risk. Critically ill patients are very vulnerable to delays in care or diagnosis. The clinical members of the design team were acutely aware of that risk and determined early in the human-centered design process that testing of safety and efficacy was going to be a key determinant of success. The main steps in the testing and validation process included a technical review focused on the stability of data interfaces and validation of data integrity; an assessment of the impact on providers, including cognitive load, efficiency, team safety, clinical performance, and usability; an assessment of the impact on processes of care; and, finally, an assessment of the impact on patients.7,8
AWARE Implementation
The results from these evaluations were positive and AWARE
was put into production in the clinical environment in July
Figure 10.3 Number of software sessions per month (top). Number of unique clinical users per month (bottom).
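Adoption metrics like those in Figure 10.3 can be derived directly from application access logs. The sketch below shows one way to compute sessions per month and unique clinical users per month with pandas; the log format, column names, and values are assumptions for illustration.

```python
import pandas as pd

# Hypothetical access log: one row per software session.
log = pd.DataFrame({
    "user_id": ["a12", "b34", "a12", "c56", "b34", "a12"],
    "start_time": pd.to_datetime([
        "2016-07-02 08:10", "2016-07-15 09:00", "2016-08-01 07:45",
        "2016-08-03 10:30", "2016-08-20 11:15", "2016-09-05 08:05",
    ]),
})

log["month"] = log["start_time"].dt.to_period("M")
sessions_per_month = log.groupby("month").size()
unique_users_per_month = log.groupby("month")["user_id"].nunique()

print(sessions_per_month)       # sessions per month, e.g., 2016-07: 2, 2016-08: 3, 2016-09: 1
print(unique_users_per_month)   # unique users per month, e.g., 2016-07: 2, 2016-08: 3, 2016-09: 1
```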
10.3 Summary
References
1. Gajic O, Malinchoc M, Comfere TB, et al. The stability and workload index for transfer score predicts unplanned intensive care unit patient readmission: Initial development and validation. Crit Care Med. 2008;36(3):676–682.
2. Herasevich V, Pickering BW, Dong Y, Peters SG, Gajic O. Informatics infrastructure for syndrome surveillance, decision support, reporting, and modeling of critical illness. Mayo Clin Proc. 2010;85(3):247–254.
3. Chandra S, Agarwal D, Hanson A, et al. The use of an electronic medical record based automatic calculation tool to quantify risk of unplanned readmission to the intensive care unit: A validation study. J Crit Care. 2011;6(6):634.e9–634.e15.