Scoping reviews, a type of knowledge synthesis, follow a systematic approach to map evidence on a topic and identify main concepts, theories, sources, and knowledge gaps. Although more scoping reviews are being done, their methodological and reporting quality need improvement. This document presents the PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews) checklist and explanation. The checklist was developed by a 24-member expert panel and 2 research leads following published guidance from the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network. The final checklist contains 20 essential reporting items and 2 optional items. The authors provide a rationale and an example of good reporting for each item. The intent of the PRISMA-ScR is to help readers (including researchers, publishers, commissioners, policymakers, health care providers, guideline developers, and patients or consumers) develop a greater understanding of relevant terminology, core concepts, and key items to report for scoping reviews.

Ann Intern Med. 2018;169:467-473. doi:10.7326/M18-0850
For author affiliations, see end of text.
This article was published at Annals.org on 4 September 2018.
Protocol, Advisory Board, and Expert Panel
Our protocol was drafted by the research team and revised as necessary by the advisory board before being listed as a reporting guideline on the EQUATOR (17) and PRISMA (18) Web sites. The research team included 2 leads (A.C.T. and S.E.S.) and 2 research coordinators (E.L. and W.Z.), none of whom participated in the scoring exercises, and a 4-member advisory board (K.K.O., H.C., D.L., and D.M.) with extensive experience doing scoping reviews or developing reporting guidelines. We aimed to form an expert panel of approximately 30 members that would be representative of different geography and stakeholder types and research experiences, including persons with experience in the conduct, dissemination, or uptake of scoping reviews.

Survey Development and Round 1 of Delphi
The initial step in developing the Delphi survey via Qualtrics (an online survey platform) (19) involved identifying potential modifications to the original 27-item PRISMA checklist. The modifications were based on a research program carried out by members of the advisory board to better understand scoping review practices (1, 3, 20) and included a broader research question and literature search strategy, optional risk-of-bias assessment and consultation exercise (whereby relevant stakeholders contribute to the work, as described by Arksey and O'Malley [6]), and a qualitative analysis. For round 1 of scoring, we prepared a draft of the PRISMA-ScR (Supplement, available at Annals.org) and asked expert panel members to rate their agreement with each of the proposed reporting items using a 7-point Likert scale (1 = "entirely disagree," 2 = "mostly disagree," 3 = "somewhat disagree," 4 = "neutral," 5 = "somewhat agree," 6 = "mostly agree," and 7 = "entirely agree"). Each survey item included an optional text box where respondents could provide comments. The research team calibrated the survey for content and clarity before administering it and sent biweekly reminders to optimize participation.

Survey Analysis
To be conservative, a threshold of 85% agreement was established a priori for each of the reporting items to indicate consensus among the expert panel. This rule required that at least 85% of the panel mostly or entirely agreed (values of 6 or 7 on the Likert scale) with the inclusion of the item in the PRISMA-ScR. If agreement was less than 85%, the item was considered discrepant. This standard was used for all 3 rounds of scoring to inform the final checklist. For ease and consistency with how the survey questions were worded, we did not include a provision for agreement on exclusion (that is, 85% of answers corresponding to values of 1 or 2 on the Likert scale). All comments were summarized to help explain the scorings and identify any issues. For the analysis, the results were stratified by group (in-person meeting vs. online, hereafter e-Delphi) because discrepant items could differ between groups.
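To make the consensus rule concrete, here is a minimal sketch in Python (the function names and ratings are illustrative assumptions, not study materials): it computes per-item agreement as the proportion of panelists scoring 6 or 7 and flags discrepant items separately for each group.

```python
# Minimal sketch of the a priori consensus rule: an item reaches consensus
# in a group when at least 85% of that group's panelists rate it 6 ("mostly
# agree") or 7 ("entirely agree"). All data below are hypothetical.

AGREEMENT_THRESHOLD = 0.85

def agreement(scores):
    """Fraction of panelists rating an item 6 or 7 on the 7-point scale."""
    return sum(1 for s in scores if s >= 6) / len(scores)

def discrepant_items(ratings_by_group):
    """Stratified by group: items whose agreement falls below the threshold."""
    return {
        group: [item for item, scores in items.items()
                if agreement(scores) < AGREEMENT_THRESHOLD]
        for group, items in ratings_by_group.items()
    }

# Hypothetical round 1 ratings (1-7) for two items in each group.
round1 = {
    "in_person": {"item_3": [6, 7, 6, 5, 7, 6, 7, 6, 6, 7, 6, 7, 6, 7, 6, 4],
                  "item_15": [3, 4, 6, 7, 5, 6, 2, 6, 7, 4, 6, 5, 6, 7, 3, 6]},
    "e_delphi": {"item_3": [7, 6, 6, 6, 7, 5, 6, 7, 6, 6, 7, 6, 4, 5, 7],
                 "item_15": [2, 5, 6, 3, 6, 7, 4, 6, 5, 6, 2, 6, 7, 5, 6]},
}
print(discrepant_items(round1))
# -> {'in_person': ['item_15'], 'e_delphi': ['item_3', 'item_15']}
```

The sketch flags items per group, which mirrors why the analysis was stratified: an item can meet the threshold in one group but not the other.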
In-Person Group (Round 2 of Delphi)
The Chatham House rule (21) was established at the beginning of the meeting, whereby participants were free to use information that was shared but were not permitted to reveal the identity or affiliation of the speaker. Expert panel members were given their individual results; the overall group distribution, median, and interquartile range; a summary of the JBI methodological guidance (4); and preliminary feedback from the e-Delphi group. These data were used to generate and inform the discussion about each discrepant item from round 1. Two researchers (A.C.T. and S.E.S.) facilitated the discussion using a modified nominal group technique (22) to reach consensus. Panel members were subsequently asked to rescore the discrepant items using sli.do (23), a live audience-response system, in a format that resembled the round 1 survey. For items that failed to meet the threshold for consensus, working groups were assembled. The meeting was audio-recorded and transcribed using TranscribeMe (24), and 3 note-takers independently documented the main discussion points. The transcript was annotated to complement a master summary of the discussion points, which was compiled using the 3 note-takers' files.

E-Delphi Group (Round 2 of Delphi)
Those who could not attend the in-person meeting participated via an online discussion exercise using Conceptboard (25), a visual collaboration platform that allows users to provide feedback on "whiteboards" in real time. The discrepant items from round 1 were presented as a single whiteboard, and questions (for example, "After reviewing your survey results with respect to this item, please share why you rated this item the way you did") were assigned to participants as tasks to facilitate the discussion. E-Delphi panel members received the same materials as in-person participants and were encouraged to respond to others' comments and interact through a chat feature. The second round of scoring was done in Qualtrics using a format similar to that of round 1. A summary of the Conceptboard discussion, as well as the annotated meeting transcript and master summary document, was shared so that participants could learn about the perspectives of the in-person group before rescoring.

Working Groups and Round 3 of Delphi
To enable panel-wide dialogue and refine the checklist items before the final round of scoring, working groups were created and collaborated by teleconference and e-mail. Their task was to discuss the discrepant items in terms of the key issues and considerations (relating to both concepts and wording) that had been raised in earlier stages across both groups. To harmonize the data from the 2 groups, a third round of scoring was administered using Qualtrics (19). In this step, suggested modifications (in terms of both concepts and wording) from all previous stages were incorporated into the items that had failed to reach consensus in the first 2 rounds across both groups, and the full panel scored this updated list.
[Figure: flow diagram of the modified Delphi process (image not recovered). Legible box text: "Expert panel members participated in working groups to refine certain checklist items before final round of ranking"; "Round 3 Delphi: Expert panel with 24 members completed final online survey to rate agreement with the updated 16 discrepant items (using a 7-point Likert scale)"; "13 items (2, 4, 7, 11, 12, 14, 16, 18–21, and 23–24) had 85% agreement, 2 items (15 and 22) were removed, and 1 item (10) was modified for inclusion"; "Final: 20 required items and 2 optional items (1, 3, 5, 6, 8, 9, 17, and 25–27, plus 11)". PRISMA = Preferred Reporting Items for Systematic reviews and Meta-Analyses; PRISMA-ScR = PRISMA extension for Scoping Reviews.]
Interactive Workshop (Testing)
A workshop led by the lead investigator (A.C.T.) and facilitated by members of the advisory board and expert panel (S.E.S., C.M.G., C.G., T.H., M.T.M., and M.D.J.P.) was held as part of the Global Evidence Summit in Cape Town, South Africa, in September 2017. Participants (researchers, scientists, policymakers, managers, and students) tested the checklist by applying the PRISMA-ScR to a scoping review on a health-related topic (26).

Role of the Funding Source
This work was supported by a grant from the Canadian Institutes of Health Research. The funding source had no role in designing the study; collecting, analyzing, or interpreting the data; writing the manuscript; or deciding to submit it for publication.

RESULTS
Expert Panel
A total of 37 persons were invited to participate, of whom 31 completed round 1 and 24 completed all 3 rounds of scoring. The Figure presents results of the modified Delphi, including the number of items that met agreement at each stage.

Round 1 of Delphi
For the in-person group, which involved 16 participants, 9 of 27 items reached agreement. For the discrepant items, agreement ranged from 56% for item 15 (risk of bias) to 81% for items 3 (rationale), 16 (additional analyses), 20 (results of individual sources), and 23 (additional analyses). For the e-Delphi group, which involved 15 participants, 8 of 27 items met the 85% agreement threshold. For the discrepant items, agreement ranged from 40% for item 12 (risk of bias) to 80% for items 3 (rationale), 25 (limitations), and 26 (conclusions).

In-Person Meeting and Round 2 of Delphi
The 16 panel members who attended the in-person meeting in Toronto on 29 November 2016 were largely from North America, although a few were from Australia, Lebanon, or the United Kingdom. Of the 18
discrepant items from round 1, the panel decided to rescore 11 of the items after facilitated discussion. All, except item 7 (information sources), reached the 85% threshold for agreement in the rescoring exercise. For the remaining 7 of the 18 discrepant items, the group believed that notable changes were required, which formed the basis of action by the working groups.

E-Delphi Discussion and Round 2 of Delphi
Fifteen panel members were invited to participate in the online discussion exercise, from countries that included Canada, the United Kingdom, Switzerland, Norway, and South Africa. Of these, 50% (7 of 14 panelists) participated in at least 1 discussion on Conceptboard (25) and 1 dropped out. Eleven participants completed the second scoring exercise of the 19 discrepant items, whereby 5 items reached 85% agreement.

Working Groups and Round 3 of Delphi
The 6 working groups (with 1 call per group) ranged in size from 3 to 8 participants, with an average of 5 per group. Round 3 of the Delphi did not include the 11 items that reached consensus during round 1 or 2 across both the in-person and e-Delphi groups. The survey focused on the remaining 16 items that failed to reach consensus across both groups to ensure that one group's decisions did not take precedence over the other's.

A total of 27 persons were invited to participate in round 3 of the Delphi—16 from the in-person group and 11 from the e-Delphi group. Overall, 24 of 27 participants completed the final round of scoring, and 3 withdrew (2 from the in-person group and 1 from the e-Delphi group). Two of the 16 applicable items—10 (data collection process) and 15 (risk of bias across studies)—failed to meet the 85% agreement threshold. Item 15 was subsequently removed from the checklist (along with its companion item, 22), whereas item 10 was retained but revised to exclude the optional consultation exercise described by Arksey and O'Malley (6) and Levac and colleagues (7), which was the source of the disagreement. Participants decided that the consultation exercise could be considered a knowledge translation activity, which could be done for any type of knowledge synthesis.

Interactive Workshop (Testing)
A total of 30 participants attended an interactive workshop at the Global Evidence Summit in September 2017 in Cape Town, South Africa, where minor revisions were suggested for wording of the items.

PRISMA-ScR Checklist
The Table presents the final checklist, which includes 20 items plus 2 optional items. It consists of 10 items that reached agreement in rounds 1 and 2 (items 1, 3, 5, 6, 8, 9, 17, 25, 26, and 27) and 9 that were agreed on in round 3 (items 2, 4, 7, 11, 14, 18, 20, 21, and 24), along with 1 item (item 10) that was modified for inclusion after the final round. Five items from the original PRISMA checklist were deemed not relevant: item 13 (summary measures), which reached agreement as not applicable for scoping reviews in round 1, and the following 4 items: item 15 (risk of bias across studies) and item 22 (risk of bias across study results, companion to item 15), which were excluded after round 3, along with item 16 (additional analyses) and item 23 (additional analyses results, companion to item 16), which reached agreement as not applicable for scoping reviews in round 3. The Figure illustrates this process. In addition, because scoping reviews can include many types of evidence (such as documents, blogs, Web sites, studies, interviews, and opinions) and do not examine the risk of bias of included sources, items 12 (risk of bias in individual studies) and 19 (risk of bias within study results) from the original PRISMA are treated as optional in the PRISMA-ScR.
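As a quick cross-check of this accounting (the set names are ours; the item numbers are taken directly from the text above), the 20 required items, 2 optional items, and 5 dropped items together cover the original 27 PRISMA items:

```python
# Tally of the item dispositions reported above; a bookkeeping sketch, not
# study code. Item numbers refer to the original 27-item PRISMA checklist.
agreed_rounds_1_2 = {1, 3, 5, 6, 8, 9, 17, 25, 26, 27}  # 10 items
agreed_round_3 = {2, 4, 7, 11, 14, 18, 20, 21, 24}      # 9 items
modified_after_round_3 = {10}                           # 1 item
optional = {12, 19}             # critical appraisal items, optional
removed = {13, 15, 16, 22, 23}  # not applicable/excluded

required = agreed_rounds_1_2 | agreed_round_3 | modified_after_round_3
assert len(required) == 20                       # the 20 checklist items
assert len(required | optional) == 22            # plus the 2 optional items
assert len(required | optional | removed) == 27  # all original PRISMA items
```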
PRISMA-ScR Explanation and Elaboration
The Appendix (available at Annals.org) elaborates on the PRISMA-ScR checklist items. It defines each item and gives examples of good reporting from existing scoping reviews to provide authors with additional guidance on how to use the PRISMA-ScR.

DISCUSSION
The PRISMA-ScR is intended to provide guidance on the reporting of scoping reviews. To develop this PRISMA extension, the original PRISMA statement was adapted and the following revisions were made: 5 items were removed (because they were deemed not relevant to scoping reviews), 2 items were deemed optional, and the wording was modified for all items. This reporting guideline is consistent with the JBI guidance for scoping reviews, which highlights the importance of methodological rigor in the conduct of scoping reviews. It is hoped that the PRISMA-ScR will improve the reporting of scoping reviews and increase their relevance for decision making, and that adherence to the guideline will be evaluated in the future; such evaluation will be critical to measuring its impact.

The PRISMA-ScR will be housed on the EQUATOR Network's Web site and the Knowledge Translation Program Web site of St. Michael's Hospital (27). To promote its uptake, a 1-minute YouTube video will be created outlining how to operationalize each item, webinars for organizations that do scoping reviews will be offered, and a 1-page tip sheet for each item will be created. In the future, an automated e-mail system may be considered, whereby authors would be sent the PRISMA-ScR upon registering their scoping review, as well as an online tool similar to Penelope, which checks manuscripts for completeness and provides feedback to authors (28). The PRISMA-ScR will be shared widely within our networks, including the Alliance for Health Policy and Systems Research, the World Health Organization (29), and the Global Evidence Synthesis Initiative (30). Finally, ongoing feedback and suggestions to improve uptake of the PRISMA-ScR will be collected via an online form on the Web site for the Knowledge Translation Program of St. Michael's Hospital (27).
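To illustrate how such an automated completeness check might work (a hypothetical sketch, not the Penelope tool or any published implementation; the keyword lists are invented), a tool could scan a manuscript for evidence of each required item:

```python
# Hypothetical sketch of an automated PRISMA-ScR completeness check: flag
# checklist items whose indicative keywords never appear in a manuscript.

REQUIRED_ITEM_KEYWORDS = {
    5: ["protocol", "registration"],   # Protocol and registration
    8: ["search strategy"],            # Search
    17: ["flow diagram", "screened"],  # Selection of sources of evidence
}

def missing_items(manuscript_text):
    """Return checklist item numbers with no keyword match in the text."""
    text = manuscript_text.lower()
    return [item for item, keywords in REQUIRED_ITEM_KEYWORDS.items()
            if not any(keyword in text for keyword in keywords)]

print(missing_items("We registered a protocol and present a flow diagram."))
# -> [8]: the manuscript never mentions a search strategy
```

A real tool would need far richer matching than keywords, but the feedback loop (scan, flag missing items, report to authors) is the idea the Discussion describes.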
Table. PRISMA-ScR Checklist

Title
Title (item 1): Identify the report as a scoping review.

Abstract
Structured summary (item 2): Provide a structured summary that includes (as applicable) background, objectives, eligibility criteria, sources of evidence, charting methods, results, and conclusions that relate to the review questions and objectives.

Introduction
Rationale (item 3): Describe the rationale for the review in the context of what is already known. Explain why the review questions/objectives lend themselves to a scoping review approach.
Objectives (item 4): Provide an explicit statement of the questions and objectives being addressed with reference to their key elements (e.g., population or participants, concepts, and context) or other relevant key elements used to conceptualize the review questions and/or objectives.

Methods
Protocol and registration (item 5): Indicate whether a review protocol exists; state if and where it can be accessed (e.g., a Web address); and if available, provide registration information, including the registration number.
Eligibility criteria (item 6): Specify characteristics of the sources of evidence used as eligibility criteria (e.g., years considered, language, and publication status), and provide a rationale.
Information sources* (item 7): Describe all information sources in the search (e.g., databases with dates of coverage and contact with authors to identify additional sources), as well as the date the most recent search was executed.
Search (item 8): Present the full electronic search strategy for at least 1 database, including any limits used, such that it could be repeated.
Selection of sources of evidence† (item 9): State the process for selecting sources of evidence (i.e., screening and eligibility) included in the scoping review.
Data charting process‡ (item 10): Describe the methods of charting data from the included sources of evidence (e.g., calibrated forms or forms that have been tested by the team before their use, and whether data charting was done independently or in duplicate) and any processes for obtaining and confirming data from investigators.
Data items (item 11): List and define all variables for which data were sought and any assumptions and simplifications made.
Critical appraisal of individual sources of evidence§ (item 12): If done, provide a rationale for conducting a critical appraisal of included sources of evidence; describe the methods used and how this information was used in any data synthesis (if appropriate).
Summary measures (item 13): Not applicable for scoping reviews.
Synthesis of results (item 14): Describe the methods of handling and summarizing the data that were charted.
Risk of bias across studies (item 15): Not applicable for scoping reviews.
Additional analyses (item 16): Not applicable for scoping reviews.

Results
Selection of sources of evidence (item 17): Give numbers of sources of evidence screened, assessed for eligibility, and included in the review, with reasons for exclusions at each stage, ideally using a flow diagram.
Characteristics of sources of evidence (item 18): For each source of evidence, present characteristics for which data were charted and provide the citations.
Critical appraisal within sources of evidence (item 19): If done, present data on critical appraisal of included sources of evidence (see item 12).
Results of individual sources of evidence (item 20): For each included source of evidence, present the relevant data that were charted that relate to the review questions and objectives.
Synthesis of results (item 21): Summarize and/or present the charting results as they relate to the review questions and objectives.
Risk of bias across studies (item 22): Not applicable for scoping reviews.
Additional analyses (item 23): Not applicable for scoping reviews.

Discussion
Summary of evidence (item 24): Summarize the main results (including an overview of concepts, themes, and types of evidence available), link to the review questions and objectives, and consider the relevance to key groups.
Limitations (item 25): Discuss the limitations of the scoping review process.
Conclusions (item 26): Provide a general interpretation of the results with respect to the review questions and objectives, as well as potential implications and/or next steps.
Funding (item 27): Describe sources of funding for the included sources of evidence, as well as sources of funding for the scoping review. Describe the role of the funders of the scoping review.

JBI = Joanna Briggs Institute; PRISMA-ScR = Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews.
* Where sources of evidence (see second footnote) are compiled from, such as bibliographic databases, social media platforms, and Web sites.
† A more inclusive/heterogeneous term used to account for the different types of evidence or data sources (e.g., quantitative and/or qualitative research, expert opinion, and policy documents) that may be eligible in a scoping review as opposed to only studies. This is not to be confused with information sources (see first footnote).
‡ The frameworks by Arksey and O'Malley (6) and Levac and colleagues (7) and the JBI guidance (4, 5) refer to the process of data extraction in a scoping review as data charting.
§ The process of systematically examining research evidence to assess its validity, results, and relevance before using it to inform a decision. This term is used for items 12 and 19 instead of "risk of bias" (which is more applicable to systematic reviews of interventions) to include and acknowledge the various sources of evidence that may be used in a scoping review (e.g., quantitative and/or qualitative research, expert opinion, and policy documents).
From St. Michael's Hospital and University of Toronto, Toronto, Ontario, Canada (A.C.T., S.E.S.); St. Michael's Hospital, Toronto, Ontario, Canada (E.L., W.Z.); University of Toronto, Toronto, Ontario, Canada (K.K.O., H.C.); Northeastern University, Boston, Massachusetts (D.L.); Ottawa Hospital Research Institute, Ottawa, Ontario, Canada (D.M., C.G.); University of South Australia and University of Adelaide, Adelaide, South Australia, Australia (M.D.J.P.); Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada (T.H.); Canadian Agency for Drugs and Technologies in Health, Ottawa, Ontario, Canada (L.W., T.C.); RAND Corporation, Santa Monica, California (S.H.); American University of Beirut, Beirut, Lebanon (E.A.A.); Agency for Healthcare Research and Quality, Rockville, Maryland (C.C.); University of Ottawa, Ottawa, Ontario, Canada (J.M.); University of York, York, United Kingdom (L.S.); University of Alberta, Edmonton, Alberta, Canada (L.H.); BMJ Open, London, United Kingdom (A.A.); McMaster University, Hamilton, Ontario, Canada (M.G.W.); Norwegian Institute of Public Health, Oslo, Norway, and South African Medical Research Council, Cape Town, South Africa (S.L.); Queen's University, Kingston, Ontario, Canada (C.M.G.); Dalhousie University, Halifax, Nova Scotia, Canada (M.T.M.); World Health Organization, Geneva, Switzerland (E.V.L., Ö.T.); Cochrane, London, United Kingdom (K.S.); and King's College London, London, United Kingdom (J.M.).

Note: Dr. Tricco affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

Acknowledgment: The authors thank Susan Le for supporting the coordination of the project and formatting the manuscript; Anna Lambrinos and Dr. Mai Pham for participating in round 1 of scoring and attending the in-person meeting; Dr. Lisa O'Malley for participating in round 1 of scoring and the e-Delphi round 2 of scoring; Dr. Peter Griffiths and Dr. Charles Shey Wiysonge for participating in round 1 of scoring and providing feedback on Conceptboard; Dr. Jill Manthorpe and Dr. Mary Ann McColl for participating in round 1 of scoring; Assem M. Khamis for assisting with the identification of examples for the Explanation and Elaboration document; and Melissa Chen, Jessica Comilang, and Meghan Storey for providing administrative support for the in-person meeting.

Financial Support: By Knowledge Synthesis grant KRS 144046 from the Canadian Institutes of Health Research. Dr. Tricco is funded by a Tier 2 Canada Research Chair in Knowledge Synthesis. Dr. O'Brien was supported by a Canadian Institutes of Health Research New Investigator Award. Dr. Straus is funded by a Tier 1 Canada Research Chair in Knowledge Translation.

Disclosures: Mr. Aldcroft reports that he is the editor of BMJ Open. Dr. Lewin reports that he is the joint coordinating editor for the Cochrane Effective Practice and Organisation of Care (EPOC) Group. Dr. Straus reports that she is an associate editor of ACP Journal Club. Authors not named here have disclosed no conflicts of interest. Disclosures can also be viewed at www.acponline.org/authors/icmje/ConflictOfInterestForms.do?msNum=M18-0850.

Reproducible Research Statement: Study protocol: Available at the EQUATOR and PRISMA Web sites (www.equator-network.org/library/reporting-guidelines-under-development/#55 and www.prisma-statement.org/Extensions/InDevelopment.aspx). Statistical code: Not applicable. Data set: Available from Dr. Tricco (e-mail, [email protected]).

Corresponding Author: Andrea C. Tricco, PhD, MSc, Knowledge Translation Program, Li Ka Shing Knowledge Institute, St. Michael's Hospital, 209 Victoria Street, East Building, Toronto, Ontario M5B 1W8, Canada; e-mail, [email protected].

Current author addresses and author contributions are available at Annals.org.

References
1. Tricco AC, Lillie E, Zarin W, O'Brien K, Colquhoun H, Kastner M, et al. A scoping review on the conduct and reporting of scoping reviews. BMC Med Res Methodol. 2016;16:15. [PMID: 26857112] doi:10.1186/s12874-016-0116-4
2. Canadian Institutes of Health Research. A guide to knowledge synthesis: a knowledge synthesis chapter. 2010. Accessed at www.cihr-irsc.gc.ca/e/41382.html on 10 January 2018.
3. Colquhoun HL, Levac D, O'Brien KK, Straus S, Tricco AC, Perrier L, et al. Scoping reviews: time for clarity in definition, methods, and reporting. J Clin Epidemiol. 2014;67:1291-4. [PMID: 25034198] doi:10.1016/j.jclinepi.2014.03.013
4. Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015;13:141-6. [PMID: 26134548] doi:10.1097/XEB.0000000000000050
5. Peters MDJ, Godfrey C, McInerney P, Baldini Soares C, Khalil H, Parker D. Scoping reviews. In: Aromataris E, Munn Z, eds. Joanna Briggs Institute Reviewer's Manual. Adelaide, Australia: Joanna Briggs Inst; 2017.
6. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8:19-32.
7. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5:69. [PMID: 20854677] doi:10.1186/1748-5908-5-69
8. Altman DG, Simera I. Using reporting guidelines effectively to ensure good reporting of health research. In: Moher D, Altman D, Schulz K, Simera I, Wager E, eds. Guidelines for Reporting Health Research: A User's Manual. Hoboken, NJ: J Wiley; 2014:32-40.
9. Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7:e1000217. [PMID: 20169112] doi:10.1371/journal.pmed.1000217
10. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151:264-9. [PMID: 19622511] doi:10.7326/0003-4819-151-4-200908180-00135
11. Tricco AC, Zarin W, Ghassemi M, Nincic V, Lillie E, Page MJ, et al. Same family, different species: methodological conduct and quality varies according to purpose for five types of knowledge synthesis. J Clin Epidemiol. 2018;96:133-42. [PMID: 29103958] doi:10.1016/j.jclinepi.2017.10.014
12. McInnes MD, Bossuyt PM. Pitfalls of systematic reviews and meta-analyses in imaging research. Radiology. 2015;277:13-21. [PMID: 26402491] doi:10.1148/radiol.2015142779
13. Macaskill P, Gatsonis C, Deeks JJ, Harbord RM, Takwoingi Y. Analysing and presenting results. In: Deeks JJ, Bossuyt PM, Gatsonis C, eds. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy. Version 1.0. London: The Cochrane Collaboration; 2010. Accessed at https://methods.cochrane.org/sites/methods.cochrane.org.sdt/files/public/uploads/Chapter%2010%20-%20Version%201.0.pdf on 3 August 2018.