Chou R, Cottrell EB, Wasson N, et al. Screening for Hepatitis C Virus Infection in Adults [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2012 Nov. (Comparative Effectiveness Reviews, No. 69.)

  • This publication is provided for historical reference only and the information may be out of date.

Appendix E. Quality Assessment Methods

Individual studies were rated as “good,” “fair,” or “poor” as defined below:

For Controlled Trials:

Each criterion was given an assessment of yes, no, or unclear.

  1. Was the assignment to the treatment groups really random?
    • Adequate approaches to sequence generation:
      • Computer-generated random numbers
      • Random numbers tables
    • Inferior approaches to sequence generation:
      • Use of alternation, case record numbers, birth dates or week days
    • Randomization reported, but method not stated
    • Not clear or not reported
    • Not randomized
  2. Was the treatment allocation concealed?
    • Adequate approaches to concealment of randomization:
      • Centralized or pharmacy-controlled randomization (randomization performed without knowledge of patient characteristics).
      • Serially-numbered identical containers
      • On-site computer based system with a randomization sequence that is not readable until allocation
      • Sealed opaque envelopes
    • Inferior approaches to concealment of randomization:
      • Use of alternation, case record numbers, birth dates or week days
      • Open random numbers lists
      • Serially numbered non-opaque envelopes
      • Not clear or not reported
  3. Were the groups similar at baseline in terms of prognostic factors?
  4. Were the eligibility criteria specified?
  5. Were outcome assessors and/or data analysts blinded to the treatment allocation?
  6. Was the care provider blinded?
  7. Was the patient kept unaware of the treatment received?
  8. Did the article include an intention-to-treat analysis, or provide the data needed to calculate it (i.e., number assigned to each group, number of subjects who finished in each group, and their results)?
  9. Did the study maintain comparable groups?
  10. Did the article report attrition, crossovers, adherence, and contamination?
  11. Is there important differential loss to followup or overall high loss to followup?

For Cohort Studies:

Each criterion was given an assessment of yes, no, or unclear.

  1. Did the study attempt to enroll all (or a random sample of) patients meeting inclusion criteria (i.e., an inception cohort)?
  2. Were the groups comparable at baseline on key prognostic factors (e.g., by restriction or matching)?
  3. Did the study use accurate methods for ascertaining exposures, potential confounders, and outcomes?
  4. Were outcome assessors and/or data analysts blinded to treatment?
  5. Did the article report attrition?
  6. Did the study perform appropriate statistical analyses on potential confounders?
  7. Is there important differential loss to followup or overall high loss to followup?
  8. Were outcomes pre-specified and defined, and ascertained using accurate methods?

For Case-control Studies:

Each criterion was given an assessment of yes, no, or unclear.

  1. Did the study attempt to enroll all (or a random sample of) cases using pre-defined criteria?
  2. Were the controls derived from the same population as the cases, and would they have been selected as cases if the outcome was present?
  3. Were the groups comparable at baseline on key prognostic factors (e.g., by restriction or matching)?
  4. Did the study report the proportion of cases and controls who met inclusion criteria that were analyzed?
  5. Did the study use accurate methods for identifying outcomes?
  6. Did the study use accurate methods for ascertaining exposures and potential confounders?
  7. Did the study perform appropriate statistical analyses on potential confounders?

For Studies of Diagnostic Accuracy:

Each criterion was given an assessment of yes, no, or unclear.

  1. Did the study evaluate a representative spectrum of patients?
  2. Did the study enroll a random or consecutive sample of patients meeting pre-defined criteria?
  3. Did the study evaluate a credible reference standard?
  4. Did the study apply the reference standard to all patients, or to a random sample?
  5. Did the study apply the same reference standard to all patients?
  6. Was the reference standard interpreted independently from the test under evaluation?
  7. If a threshold was used, was it pre-specified?

Appendix E References

  1. Harris RP, Helfand M, Woolf SH, et al. Current methods of the US Preventive Services Task Force: a review of the process. Am J Prev Med. 2001;20:21–35. [PubMed: 11306229]
  2. Downs SH, Black N. The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health. 1998;52(6):377–84. [PMC free article: PMC1756728] [PubMed: 9764259]
  3. Whiting PF, Rutjes AW, Westwood ME, et al. QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies. Ann Intern Med. 2011;155(8):529–36. [PubMed: 22007046]
