NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Wang Z, Whiteside S, Sim L, et al. Anxiety in Children [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2017 Aug. (Comparative Effectiveness Reviews, No. 192.)


Methods

We developed an analytic framework to guide the systematic review (Figure 1). We followed the established methodology of systematic reviews as outlined in the Agency for Healthcare Research and Quality (AHRQ) Methods Guide for Comparative Effectiveness Reviews.25 The reporting complies with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.26 The study protocol is registered in the international prospective register of systematic reviews (PROSPERO #: CRD42016046542) and published on the AHRQ Web site.

Figure 1. Analytic framework.

The figure depicts the association between the available treatments for childhood anxiety disorders (pharmacotherapy and psychotherapy) and intermediate and patient-centered outcomes. Disorders include panic disorder, social anxiety disorder, specific phobias, generalized anxiety disorder, and separation anxiety. The figure shows a link between treatments for childhood anxiety and intermediate outcomes, such as child, parent, school, and clinician versions of standardized measures. It then suggests an overarching link between these treatments and patient-centered outcomes, such as remission, symptoms, behavioral problems, academic performance, and quality of life. The figure also links treatments to adverse effects, and indicates several factors that can modify the treatment effect, such as child and family demographics, comorbidities, and other characteristics.

Literature Search Strategy

Search Strategy

We conducted a comprehensive literature search of eight databases, including Ovid MEDLINE® In-Process & Other Non-Indexed Citations, Ovid MEDLINE®, Embase®, PsycINFO®, Cochrane Central Register of Controlled Trials, Ovid Cochrane Database of Systematic Reviews, and SciVerse Scopus, from each database's inception to February 1, 2017. We also searched U.S. Food and Drug Administration (FDA) new drug applications, ClinicalTrials.gov, Health Canada, the Medicines and Healthcare Products Regulatory Agency (MHRA), AHRQ's Horizon Scanning System, conference proceedings, patient advocate group Web sites, and medical society Web sites. Relevant systematic reviews and meta-analyses, as well as reference mining of relevant publications, were used to identify additional existing and new literature. An experienced librarian, with input from the study investigators, developed the search strategy (Appendix B). An independent experienced librarian peer-reviewed the search strategy.

Inclusion and Exclusion Criteria

Eligible studies had to meet all of the following criteria: 1) enrolled children and adolescents between 3 and 18 years of age with a confirmed diagnosis of panic disorder, social anxiety disorder, specific phobia, generalized anxiety disorder, or separation anxiety; 2) evaluated any psychotherapy or pharmacotherapy, alone or combined; and 3) reported outcomes of interest (standardized measures, patient-centered outcomes, or safety outcomes). We included randomized controlled trials (RCTs) and comparative observational studies. Case reports and case series were used to identify additional adverse events (AEs). We did not restrict publication date or study location. The detailed inclusion and exclusion criteria are provided in Appendix C.

Study Selection

Independent reviewers, working in duplicate and in pairs, screened the titles and abstracts of all citations against the inclusion and exclusion criteria. Studies included by either reviewer were retrieved for full-text screening. Independent reviewers, working in pairs, then screened the full-text versions of eligible references (Appendix Figure A.1). Discrepancies between reviewers were resolved through discussion and consensus; if consensus could not be reached, a third reviewer resolved the disagreement.

Data Extraction

At the beginning of data extraction, we developed a standardized data extraction form to extract study characteristics (author, study design, inclusion and exclusion criteria, patient characteristics, interventions, comparisons, outcomes, and related items for assessing study quality and applicability). The standardized form was pilot-tested by all study team members using 10 randomly selected studies. We iteratively continued testing the form until no additional items or unresolved questions existed. A second reviewer verified data extraction. When there was missing information, we contacted the authors.

Assessment of Methodological Risk of Bias of Individual Studies

We evaluated the risk of bias of each included study using predefined criteria. For RCTs, we applied the Cochrane Collaboration's Risk of Bias tool (scored as high, low, or unclear) to assess sequence generation; allocation concealment; participant, personnel, and outcome assessor blinding; attrition bias; incomplete outcome data; selective outcome reporting; and other sources of bias (e.g., imbalance of baseline characteristics, conflict of interest).27 A judgment of overall risk of bias across the various domains was made focusing on random allocation, allocation concealment, and blinding (high risk of bias in any of these domains led to a high overall rating). We did not consider industry funding an automatic indicator of high risk of bias. For observational studies, we selected appropriate items from the Newcastle-Ottawa Scale (rated high, moderate, low, or unclear), focusing on the representativeness of the population, selection of the cohorts, ascertainment of exposure and outcomes, adequacy of follow-up, and possible conflicts of interest.28

Data Synthesis

We summarized key features/characteristics (e.g. study populations, design, intervention, outcomes, and conclusions) of the included studies and presented data qualitatively in evidence tables for each Key Question.

We conducted meta-analyses to quantitatively summarize study findings. The main analyses were based on the effects measured post intervention, though length of followup (less than 6 months versus longer than 6 months) was evaluated in the subgroup analyses. We defined length of followup as the time from the end of treatment to the time of outcome assessment. We used the intention-to-treat (ITT) principle. To facilitate the analyses, we categorized the standardized measures into groups: primary anxiety measure; secondary related measure; function-related outcome; satisfaction with treatment; and social function (Table 4). We defined binary treatment response as 1) loss of the principal anxiety diagnosis, or 2) a Clinical Global Impression–Severity scale (CGI-S) score of 1 or 2; we defined remission as 1) loss of all anxiety diagnoses, or 2) a Clinical Global Impression–Improvement scale (CGI-I) score of 1 or 2. We grouped AEs into symptoms related to abdominal/GI/appetite, behavior change, cold/infection/allergies, headache/dizziness/vision problems, fatigue/somnolence, difficulty sleeping, accidental injury, and suicide/suicidal ideation/self-harm. AEs were deemed serious if they were described as serious by the included studies, or led to discontinuation of treatment, significant morbidity, or mortality.

We calculated relative risks (RRs) and corresponding 95-percent confidence intervals (CIs) for binary outcomes and standardized mean differences (SMDs) with 95-percent CIs for continuous outcomes. For count data (i.e., where a single patient may experience more than one event), we calculated rate ratios instead of RRs. The DerSimonian and Laird random-effects method with the Knapp and Hartung adjustment of the variance was used when the number of studies in a comparison was larger than two (n>2).29 The fixed-effect model based on the Mantel-Haenszel method was used when there were only two studies (n=2). We evaluated heterogeneity between studies using the I2 indicator.
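The random-effects pooling and heterogeneity assessment described above can be sketched as follows. This is a minimal illustration of the DerSimonian-Laird estimator with the Knapp-Hartung variance adjustment and the I2 statistic, not the review's actual Stata analysis; the study effects and variances are made-up values.

```python
import math

def dersimonian_laird_kh(effects, variances):
    """DerSimonian-Laird random-effects pooling with the Knapp-Hartung
    variance adjustment. `effects` are per-study estimates (e.g. log RRs
    or SMDs) and `variances` their within-study variances -- illustrative
    inputs, not data from the review."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q measures dispersion of study effects around the fixed estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    # Knapp-Hartung: replace the usual SE with a t-based variance estimate
    kh_var = sum(wi * (yi - pooled) ** 2
                 for wi, yi in zip(w_star, effects)) / (df * sum(w_star))
    return pooled, math.sqrt(kh_var), tau2, i2
```

With the Knapp-Hartung adjustment, the CI is built from a t distribution with k-1 degrees of freedom rather than the normal, which is why it is preferred when the number of studies is small.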

Table 4. Categories of standardized outcome measures.

To further explore heterogeneity, we planned subgroup analyses stratified by the following a priori defined factors:

  • Age
  • Sex
  • Race/ethnicity
  • Household income
  • Parent education level
  • Family dysfunction/stressor
  • Diagnosis
  • Severity
  • Length of follow-up
  • Treatment sequence
  • Comorbidities
  • Provider
  • Delivery mode
  • Component of psychotherapy
  • Cognitive behavioral therapy intensity
  • Study settings

The statistical difference between subgroups was evaluated using one-way ANOVA tests.
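A one-way ANOVA across subgroup effect estimates can be sketched as below; this is a generic F-statistic computation under the assumption that each subgroup contributes a list of per-study effects, with illustrative numbers rather than data from the review (the resulting F would be compared against an F distribution with the returned degrees of freedom).

```python
def one_way_anova(groups):
    """One-way ANOVA F statistic across subgroups.
    `groups` is a list of lists of per-study effect estimates
    (hypothetical values for illustration)."""
    k = len(groups)                                   # number of subgroups
    n = sum(len(g) for g in groups)                   # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: subgroup means vs. grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: studies vs. their subgroup mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within
```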

We evaluated potential publication bias by assessing funnel plot symmetry and using the Egger linear regression test when the number of studies included in a direct comparison was large (n>=20). A two-tailed p value <0.05 was considered statistically significant. All statistical analyses were conducted using Stata version 14.2 (StataCorp LP, College Station, Texas).
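The Egger test mentioned above regresses each study's standardized effect on its precision; a non-zero intercept suggests funnel-plot asymmetry. A minimal sketch, using made-up effects and standard errors rather than the review's data:

```python
import math

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry: regress the
    standardized effect (effect/SE) on precision (1/SE). The intercept's
    t statistic is compared against a t distribution with n-2 df.
    Inputs are illustrative study effects and standard errors."""
    n = len(effects)
    x = [1.0 / s for s in ses]                 # precision
    y = [e / s for e, s in zip(effects, ses)]  # standardized effect
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)  # residual variance
    se_intercept = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    return intercept, se_intercept
```

When there is no small-study effect, the standardized effects scale roughly linearly with precision and the intercept stays near zero.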

Grading the Strength of Evidence

We graded strength of evidence (SOE) following the Methods Guide on assessing the strength of evidence.25 The ratings were made via a consensus process among team members with expertise in evidence appraisal and guideline methodology. Randomized studies started at an initial level of high, and observational studies started at a level of low. For each comparison and for the critical outcomes, we assessed the following domains for the total body of evidence addressing each outcome (all relevant studies in a particular comparison):

  1. The methodological limitations of the studies (i.e., risk of bias): We lowered SOE one or two levels based on how serious the limitations were in terms of their impact on inference.
  2. Precision: We lowered SOE one or two levels based on the confidence intervals and sample size. If confidence intervals included appreciable benefits and harms (crossing no effect), or the total sample size was lower than 400 (an arbitrary cutoff that corresponds to a standardized small effect of 0.20 with significance of 0.05 and power of 0.80),30 we rated SOE down by one level. When both of these situations were encountered simultaneously, we rated SOE down twice for imprecision and labeled this scenario as “severe imprecision”.
  3. Directness: We lowered SOE one level if the outcomes were surrogate and not patient-important.
  4. Consistency: We lowered SOE one or two levels based on qualitative and statistical measures of heterogeneity (arbitrary cutoff of I-squared value of 60% or more was used as an indication for substantial heterogeneity).
  5. The likelihood of publication bias: We lowered SOE one level if we suspected publication bias based on study reporting or statistical tests for publication bias.

Evidence derived from observational studies could be rated up if we observed a large effect, a dose-response gradient, or if plausible confounding suggested a stronger association.31 When judgments about two domains were borderline (for example, unclear risk of bias and possible publication bias), we opted to rate down once for both domains. Based on this assessment and the initial study design, we assigned an SOE rating of high, moderate, low, or 'insufficient evidence to estimate an effect'. We produced summary-of-evidence tables for each comparison and each outcome, including data source, effect size, and SOE rating, with the rationale for judgments that affected the rating. We did not consider consistency in results across informants (child, parent, and clinician) as a factor in rating SOE because we considered these independent outcomes.

Assessing Applicability

Overall judgments about applicability were made qualitatively using the PICOTS framework. We focused on whether the populations, interventions, and comparisons in existing studies were representative of current practice. We reported any limitations in the applicability of individual studies in evidence tables, and limitations in the applicability of the whole body of evidence in the discussion section. To further enhance applicability, and considering that relative association measures and standardized effects are challenging to apply, we provided: 1) an approach to convert RRs to absolute effects (using baseline risks derived from the current data), and 2) an approach to convert SMDs to measures in the units of scales commonly used in evaluating anxiety disorders in children (using standard deviations of such scales derived from the current data).
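The two conversions described above amount to simple arithmetic and can be sketched as follows; the baseline risk and scale standard deviation below are hypothetical placeholders, not values from the review.

```python
def rr_to_absolute(rr, baseline_risk):
    """Absolute risk difference implied by a relative risk at a given
    baseline (control-group) risk: risk_treated - risk_control."""
    return baseline_risk * rr - baseline_risk

def smd_to_scale_units(smd, scale_sd):
    """Re-express a standardized mean difference in the units of a
    familiar anxiety scale by multiplying by that scale's standard
    deviation (a hypothetical SD here)."""
    return smd * scale_sd

# Example: RR of 1.5 at a 20% baseline risk implies 10 more events per
# 100 patients; an SMD of 0.5 on a scale with SD 10 is a 5-point change.
```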

