
Butler M, Olson A, Drekonja D, et al. Early Diagnosis, Prevention, and Treatment of Clostridium difficile: Update [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2016 Mar. (Comparative Effectiveness Reviews, No. 172.)

Methods

The methods for this CER update follow the methods suggested in the AHRQ Methods Guide for Effectiveness and Comparative Effectiveness Reviews (available at www.effectivehealthcare.ahrq.gov); certain methods map to the PRISMA checklist.24 All methods and analyses were determined a priori. We recruited a technical expert panel to provide high-level content and methodological feedback on the review protocol. This section summarizes the methods used.

Literature Search Strategy

Our search methods were essentially the same as those used for CER No. 3. We searched Ovid MEDLINE and the Cochrane Central Register of Controlled Trials (CENTRAL) from 2011 to April 2015 to update CER No. 3. The keyword search for 'difficile' is highly specific yet sensitive for C. difficile-related articles. The search algorithm is provided in Appendix B.

We conducted additional grey literature searching to identify relevant completed and ongoing studies. Relevant grey literature resources included trial registries and funded research databases. We searched ClinicalTrials.gov and the International Clinical Trials Registry Platform (ICTRP) for ongoing studies. Scientific information packet (SIP) letters and emails were sent to relevant industry stakeholders to request submission of published and unpublished information on their product(s). Grey literature search results were used to identify studies, outcomes, and analyses not reported in the published literature in order to assess publication and reporting bias and to inform future research needs.

Studies were included in the review based on the PICOTS framework outlined in Table 1 and the study-specific inclusion criteria described in Table 2.

Table 2. Study inclusion criteria.

Study Selection and Data Extraction

We reviewed bibliographic database search results for studies relevant to our PICOTS framework and study-specific criteria. All studies identified as relevant at the title-and-abstract stage by either of two independent investigators underwent full-text screening. Two investigators independently performed full-text screening to determine whether inclusion criteria were met. Differences in screening decisions were resolved by consultation between the investigators and, if necessary, consultation with a third investigator. Appendix C provides a list of articles excluded at full text.

We first assessed the relevance of systematic reviews that met inclusion criteria. If we determined that certain Key Questions or comparisons addressed in a previous systematic review were relevant to our review, we assessed the quality of its methodology using modified AMSTAR criteria.25 When a prior systematic review was assessed as being of sufficient quality, and when it assessed strength of evidence or provided sufficient information for strength of evidence to be assessed, we used the conclusions from that review in place of the de novo process. If additional studies on these comparisons were identified, we updated the systematic review results. We then abstracted data from eligible trials and prospective cohort studies not included in previous systematic reviews that addressed comparisons not sufficiently covered by a previous eligible systematic review. One investigator abstracted the relevant information directly into evidence tables; a second investigator reviewed the evidence tables and verified them for accuracy.

Risk of Bias Assessment of Individual Studies

Risk of bias of eligible studies was assessed by two independent investigators using instruments specific to each study design. For diagnostic studies, we used the QUADAS-2 tool.26 For randomized controlled trials (RCTs), we used questionnaires developed from the Cochrane Risk of Bias tool. We developed an instrument for assessing risk of bias in observational studies based on the RTI Observational Studies Risk of Bias and Precision Item Bank27 (Appendix D). We selected the items most relevant to assessing risk of bias for this topic, including participant selection, attrition, ascertainment, and appropriateness of analytic methods. Study power was assessed under 'other sources of bias' for studies whose data were not eligible for pooling. The overall summary risk of bias for each study was classified as low, moderate, or high based on the collective risk of bias across domains and our confidence that the results were believable given the study's limitations. When the two investigators disagreed, a third party was consulted to reconcile the summary judgment.

Data Synthesis

Evidence and summary tables followed those used for CER No. 3 wherever possible. Information from individual studies reviewed in CER No. 3 was brought forward into this updated report when meta-analysis was performed using that information. Otherwise, the tables show studies identified for the update, and the text notes whether and how the overall results from CER No. 3 were amended.

Where possible, we used data from previous reviews combined with data abstracted from newly identified studies to create new datasets for analysis. We summarized included study characteristics and outcomes in evidence tables. We emphasized patient-centered outcomes in the evidence synthesis. We used tests of statistical difference to assess efficacy and comparative effectiveness and calculated the minimum detectable difference that the data allowed (power (1−β) = 0.80, α = 0.05).
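
As an illustration of this kind of minimum detectable difference calculation, the sketch below uses Python's statsmodels package rather than the report's actual analysis software; the per-arm sample size and baseline event rate are hypothetical placeholders, not values from the review.

```python
import numpy as np
from statsmodels.stats.power import NormalIndPower

# Hypothetical inputs: pooled per-arm sample size and baseline event rate.
n_per_arm = 150     # assumed number of participants per arm
p_control = 0.20    # assumed baseline event rate (e.g., recurrence)

# Solve for the standardized effect size (Cohen's h) detectable
# at 80% power with a two-sided alpha of 0.05.
h = NormalIndPower().solve_power(effect_size=None, nobs1=n_per_arm,
                                 alpha=0.05, power=0.80, ratio=1.0,
                                 alternative='two-sided')

# Invert Cohen's h at this baseline to get an absolute risk difference.
p_treat = np.sin(np.arcsin(np.sqrt(p_control)) + h / 2) ** 2
print(f"Minimum detectable difference: {p_treat - p_control:.3f}")
```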

For diagnostic studies, we examined the reference standards and based contrasts on the type of reference standard and the respective operating characteristics.28,29 We focused on differences in sensitivity and specificity between test categories/methodologies rather than on the sensitivities and specificities of specific tests. The categories were immunoassays for toxin A/B, glutamate dehydrogenase (GDH), polymerase chain reaction (PCR), loop-mediated isothermal amplification (LAMP), and test algorithms. We pooled one-step nucleic acid amplification test (NAAT; PCR or LAMP) studies using random-effects models; diagnostic test algorithm studies that included NAAT tests (likely PCR) were pooled with other test algorithms. Data were analyzed in OpenMetaAnalyst. We calculated sensitivity, specificity, receiver operating characteristic (ROC) curves, and negative and positive likelihood ratios.30 We used random-effects models to pool data when clinically appropriate.
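
The operating characteristics named above follow directly from a study's 2x2 table. The sketch below is a minimal Python illustration of those standard formulas, not the OpenMetaAnalyst implementation; the cell counts are hypothetical.

```python
def operating_characteristics(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)        # true positives among diseased
    spec = tn / (tn + fp)        # true negatives among non-diseased
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

# Hypothetical counts for one NAAT study against toxigenic culture.
sens, spec, lrp, lrn = operating_characteristics(tp=88, fp=12, fn=6, tn=394)
print(f"sens={sens:.3f} spec={spec:.3f} LR+={lrp:.1f} LR-={lrn:.2f}")
```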

For studies that used multiple reference standards, such as culture, toxigenic culture, and cell cytotoxicity neutralization assay (CCNA), we used toxigenic culture as the reference standard. If different reference standards were used for specific subgroups (such as study site) and no single standard was applied across all samples, we used the reference standard that was used in interpreting the index test.
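
This selection rule can be summarized as a small decision function. The sketch below is illustrative only; the record structure and field names are hypothetical and are not taken from the report's data extraction forms.

```python
def choose_reference_standard(study):
    """Apply the reference-standard selection rule to one study record.

    `study` is a hypothetical dict; field names are illustrative."""
    if "toxigenic culture" in study["reference_standards"]:
        # Multiple standards reported: prefer toxigenic culture.
        return "toxigenic culture"
    if study.get("standards_vary_by_subgroup"):
        # No single standard across all samples: fall back to the
        # standard used when interpreting the index test.
        return study["index_interpretation_standard"]
    return study["reference_standards"][0]

# Example: a study reporting culture, toxigenic culture, and CCNA.
study = {"reference_standards": ["culture", "toxigenic culture", "CCNA"]}
print(choose_reference_standard(study))  # -> "toxigenic culture"
```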

For treatment studies, if certain comparisons could be pooled, we conducted meta-analyses using a random-effects model. Data were analyzed in Stata/IC version 12.1. We calculated risk ratios (RRs) and absolute risk differences (RDs) with corresponding 95 percent confidence intervals (CIs) for binary primary outcomes. Weighted mean differences (WMDs) and/or standardized mean differences (SMDs) with corresponding 95 percent CIs were calculated for continuous outcomes. We assessed clinical and methodological heterogeneity and variation in effect size to determine the appropriateness of pooling data.31 We assessed statistical heterogeneity with Cochran's Q test and measured its magnitude with the I2 statistic.
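
As a sketch of the pooling approach described above, the following Python function pools log risk ratios with a DerSimonian-Laird random-effects model and reports Cochran's Q and I2. It is a minimal illustration under stated assumptions, not the Stata analysis used in the report; the trial counts are hypothetical.

```python
import numpy as np

def pool_log_rr(events_t, n_t, events_c, n_c):
    """DerSimonian-Laird random-effects pooling of log risk ratios."""
    log_rr = np.log((events_t / n_t) / (events_c / n_c))
    # Large-sample variance of each study's log RR.
    var = 1/events_t - 1/n_t + 1/events_c - 1/n_c
    w = 1 / var                                  # fixed-effect weights
    fe_mean = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fe_mean) ** 2)      # Cochran's Q
    df = len(log_rr) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_re = 1 / (var + tau2)                      # random-effects weights
    pooled = np.sum(w_re * log_rr) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    ci = np.exp(pooled + np.array([-1.96, 1.96]) * se)
    return np.exp(pooled), ci, q, i2

# Hypothetical counts from three trials (treatment vs. comparator).
rr, ci, q, i2 = pool_log_rr(np.array([12, 8, 20]), np.array([100, 80, 150]),
                            np.array([22, 15, 30]), np.array([100, 80, 150]))
print(f"RR={rr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), Q={q:.2f}, I2={i2:.0f}%")
```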

Strength of Evidence for Major Comparisons and Outcomes

The overall strength of evidence for select outcomes within each comparison was evaluated based on four required domains: (1) study limitations (internal validity); (2) directness (single, direct link between intervention and outcome); (3) consistency (similarity of effect direction and size); and (4) precision (degree of certainty around an estimate).32 A fifth domain, reporting bias, was assessed when strength of evidence based upon the first four domains was moderate or high.32 Based on study design and conduct, risk of bias was rated as low, medium, or high. Consistency was rated as consistent, inconsistent, or unknown/not applicable (e.g., single study). Directness was rated as either direct or indirect. Precision was rated as precise or imprecise. Other factors that may be considered in assessing strength of evidence include dose-response relationship, the presence of confounders, and strength of association. Based on these factors, the overall strength of evidence for each outcome was rated as follows:32

  • High: Very confident that estimate of effect lies close to true effect. Few or no deficiencies in body of evidence, findings believed to be stable.
  • Moderate: Moderately confident that estimate of effect lies close to true effect. Some deficiencies in body of evidence; findings likely to be stable, but some doubt.
  • Low: Limited confidence that estimate of effect lies close to true effect; major or numerous deficiencies in body of evidence. Additional evidence necessary before concluding that findings are stable or that estimate of effect is close to true effect.
  • Insufficient: No evidence, unable to estimate an effect, or no confidence in estimate of effect. No evidence is available or the body of evidence precludes judgment.

Applicability

Applicability of studies was determined according to the PICOTS (population, intervention, comparator, outcome, timing, settings) framework. Study characteristics that may affect applicability include, but are not limited to, the population from which study participants were enrolled, diagnostic assessment processes, narrow eligibility criteria, and patient and intervention characteristics that differ from those described in population studies of C. difficile.33

The applicability of studies of the diagnostic accuracy of tests for C. difficile infection (CDI) may be influenced by the selection of patient samples in the included studies, the degree (if any) to which the demographic and clinical characteristics of each study's patient population are delineated, and how those characteristics compare with a local population. Further, certain diagnostic tests may not be available to all clinicians, depending on local health system factors.
