Appendix D. Analyses and Risk of Bias Assessments


Analyses of Comparative Accuracy

Table D-1. Summary of analyses of comparative accuracy.


Quality of Systematic Reviews

Modified AMSTAR Instrument131,132 for Systematic Reviews

The eight items in boldface below were required to be answered “Yes” for a systematic review to be considered high quality; otherwise, the review was rated not high quality.

  1. Was an a priori design or protocol provided?
  2. Was a comprehensive search strategy performed?
     2a. Was this strategy appropriate to address the relevant Key Question of the CER?
  3. Was a list of included and excluded studies provided?
  4. Was the application of inclusion/exclusion criteria unbiased?
     4a. Are the inclusion/exclusion criteria appropriate to address the relevant Key Question of the CER?
  5. Was there duplicate study selection and data extraction?
  6. Were the characteristics of the included studies provided?
  7. Was the individual study quality assessed?
     7a. Was the method of study quality assessment consistent with that recommended by the Methods Guide?
     7b. Was the scientific quality of the individual studies used appropriately in formulating conclusions?
  8. Were the methods used to combine the findings of studies appropriate?
  9. Was the likelihood of publication bias assessed?
  10. Have the authors disclosed conflicts of interest?
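The decision rule above reduces to a simple all-or-nothing check on the eight required items. As a minimal sketch (the function name and input format are illustrative, not from the source; the sketch takes the answers to the eight boldfaced items as input rather than identifying them):

```python
def amstar_rating(required_answers):
    """Rate a systematic review under the modified AMSTAR rule described
    in the text: "High quality" only if all eight required (boldfaced)
    items are answered "Yes"; otherwise "Not high quality".

    required_answers: list of the eight answers, e.g. ["Yes", "No", ...].
    """
    if len(required_answers) != 8:
        raise ValueError("Expected answers for the eight required items")
    if all(answer == "Yes" for answer in required_answers):
        return "High quality"
    return "Not high quality"
```

For example, a review answering “Yes” on seven required items but “No” on one would be rated not high quality under this rule.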

Table D-2. Quality assessments of systematic reviews.


Risk of Bias of Comparative Accuracy Studies

  1. Did the study enroll all, consecutive, or a random sample of patients?
  2. Was the study unaffected by spectrum bias (e.g., patients with known status before the study, or patients selected for being difficult to diagnose/stage)?
  3. Was prior experience with the test (technicians, readers) similar for the two imaging tests being compared in the study?
  4. Were the imaging tests performed within one month of each other (to avoid the possibility that the patient’s true condition changed between tests)?
  5. Was knowledge of the other test complementary (either both tests were read with knowledge of the other results, or neither test was read with knowledge of the other)?
  6. Did the interpreters have the same other information available at the time of interpretation for the two imaging tests (other clinical information, 3rd test results)?
  7. Was each test’s accuracy measured using the same reference standard (or a similar proportion of patients who underwent different reference standards such as clinical follow-up and surgical findings)?
  8. Were readers of both tests of interest blinded to the results of the reference standard (or the reference standard was unknowable until after the tests were read)?
  9. Were the people determining the reference standard unaware of the diagnostic test results?

We defined LOW risk of bias as a study with a YES for all six boldfaced items above (#2 and #4–#8). We defined HIGH risk of bias as a study with a NO (or Not Reported) for these six items. We defined MEDIUM risk of bias as a study that meets neither the LOW nor the HIGH criteria.
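The three-level rating can be expressed as a short classification function. This is a sketch under one reading of the rule (LOW if all six boldfaced items are YES, HIGH if all six are NO or Not Reported, MEDIUM otherwise); the function name and answer encoding are illustrative, not from the source:

```python
def risk_of_bias(answers):
    """Classify a comparative accuracy study's risk of bias from its
    answers to the six boldfaced items (#2 and #4-#8).

    answers: list of six strings, each "Yes", "No", or "NR" (not reported).
    Returns "LOW", "MEDIUM", or "HIGH".
    """
    if len(answers) != 6:
        raise ValueError("Expected answers for items #2 and #4-#8")
    if all(answer == "Yes" for answer in answers):
        return "LOW"
    if all(answer in ("No", "NR") for answer in answers):
        return "HIGH"
    # Mixed answers fall into the middle category.
    return "MEDIUM"
```

A study with a mix of YES and NO answers on the six items would thus be rated MEDIUM.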

Table D-3. Risk of bias assessments of comparative accuracy studies.
