This publication is provided for historical reference only and the information may be out of date.
Individual studies were rated as “good,” “fair,” or “poor” as defined below:
For Controlled Trials:
Each criterion was given an assessment of yes, no, or unclear.
- Was the assignment to the treatment groups really random?
  - Adequate approaches to sequence generation:
    - Computer-generated random numbers
    - Random number tables
  - Inferior approaches to sequence generation:
    - Use of alternation, case record numbers, birth dates, or days of the week
    - Randomization reported, but method not stated
  - Not clear or not reported
  - Not randomized
- Was the treatment allocation concealed?
  - Adequate approaches to concealment of randomization:
    - Centralized or pharmacy-controlled randomization (randomization performed without knowledge of patient characteristics)
    - Serially numbered identical containers
    - On-site computer-based system with a randomization sequence that is not readable until allocation
    - Sealed opaque envelopes
  - Inferior approaches to concealment of randomization:
    - Use of alternation, case record numbers, birth dates, or days of the week
    - Open random number lists
    - Serially numbered non-opaque envelopes
  - Not clear or not reported
- Were the groups similar at baseline in terms of prognostic factors?
- Were the eligibility criteria specified?
- Were outcome assessors and/or data analysts blinded to the treatment allocation?
- Was the care provider blinded?
- Was the patient kept unaware of the treatment received?
- Did the article include an intention-to-treat analysis, or provide the data needed to calculate it (i.e., number assigned to each group, number of subjects who finished in each group, and their results)?
- Did the study maintain comparable groups?
- Did the article report attrition, crossovers, adherence, and contamination?
- Is there important differential loss to followup or overall high loss to followup?
For Cohort Studies:
Each criterion was given an assessment of yes, no, or unclear.
- Did the study attempt to enroll all (or a random sample of) patients meeting inclusion criteria (inception cohort)?
- Were the groups comparable at baseline on key prognostic factors (e.g., by restriction or matching)?
- Did the study use accurate methods for ascertaining exposures, potential confounders, and outcomes?
- Were outcome assessors and/or data analysts blinded to treatment?
- Did the article report attrition?
- Did the study perform appropriate statistical analyses on potential confounders?
- Is there important differential loss to followup or overall high loss to followup?
- Were outcomes pre-specified and defined, and ascertained using accurate methods?
For Case-control Studies:
Each criterion was given an assessment of yes, no, or unclear.
- Did the study attempt to enroll all (or a random sample of) cases using pre-defined criteria?
- Were the controls derived from the same population as the cases, and would they have been selected as cases if the outcome was present?
- Were the groups comparable at baseline on key prognostic factors (e.g., by restriction or matching)?
- Did the study report the proportion of cases and controls who met inclusion criteria that were analyzed?
- Did the study use accurate methods for identifying outcomes?
- Did the study use accurate methods for ascertaining exposures and potential confounders?
- Did the study perform appropriate statistical analyses on potential confounders?
For Studies of Diagnostic Accuracy:
Each criterion was given an assessment of yes, no, or unclear.
- Did the study evaluate a representative spectrum of patients?
- Did the study enroll a random or consecutive sample of patients meeting pre-defined criteria?
- Did the study evaluate a credible reference standard?
- Did the study apply the reference standard to all patients, or to a random sample?
- Did the study apply the same reference standard to all patients?
- Was the reference standard interpreted independently from the test under evaluation?
- If a threshold was used, was it pre-specified?
Appendix E References
- Harris RP, Helfand M, Woolf SH, et al. Current methods of the US Preventive Services Task Force: a review of the process. Am J Prev Med. 2001;20:21–35. [PubMed: 11306229]
- Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377–84. [PMC free article: PMC1756728] [PubMed: 9764259]
- Whiting PF, Rutjes AW, Westwood ME, et al. QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies. Ann Intern Med. 2011;155(8):529–36. [PubMed: 22007046]
Publication Details
Publisher
Agency for Healthcare Research and Quality (US), Rockville (MD)
NLM Citation
Chou R, Cottrell EB, Wasson N, et al. Screening for Hepatitis C Virus Infection in Adults [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2012 Nov. (Comparative Effectiveness Reviews, No. 69.) Appendix E, Quality Assessment Methods.