
Francis DO, Chinnadurai S, Morad A, et al. Treatments for Ankyloglossia and Ankyloglossia With Concomitant Lip-Tie [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2015 May. (Comparative Effectiveness Reviews, No. 149.)


Methods

In this chapter, we document the procedures that the Vanderbilt Evidence-based Practice Center used to produce a Comparative Effectiveness Review on the approaches to treatment for ankyloglossia. These procedures follow the methods suggested in the Agency for Healthcare Research and Quality (AHRQ) Effective Health Care Program “Methods Guide for Effectiveness and Comparative Effectiveness Reviews.”13

Topic Refinement and Review Protocol

The topic for this report was nominated by the American Academy of Pediatrics in a public process using the Effective Health Care Web site. Working from the nomination, we drafted the initial Key Questions (KQs) and analytic framework and refined them with input from key informants representing the fields of pediatric care, pediatric otolaryngology, breastfeeding and lactation, dentistry, occupational therapy, and speech therapy. All members of the research team were required to submit information about potential conflicts of interest before initiation of the work. No members of the review team had any conflicts.

After review by AHRQ, the questions and framework were posted online for public comment. No changes to the questions or framework were recommended. We also developed population, interventions, comparators, outcomes, timing, and settings (PICOTS) criteria for the intervention KQs.

We identified technical experts on the topic to provide assistance during the project. The Technical Expert Panel (TEP), representing the fields of pediatric care, pediatric otolaryngology, breastfeeding and lactation, dentistry, and speech-language pathology, contributed to AHRQ's broader goals of (1) creating and maintaining science partnerships as well as public-private partnerships and (2) meeting the needs of an array of potential customers and users of its products. Thus, the TEP was both an additional resource and a sounding board during the project. The TEP included nine members serving as technical or clinical experts. To ensure robust, scientifically relevant work, we called on the TEP to review and provide comments as our work progressed. TEP members participated in conference calls and discussions through e-mail to:

  • Help to refine the analytic framework and KQs at the beginning of the project;
  • Discuss the preliminary assessment of the literature, including inclusion/exclusion criteria; and
  • Provide input on the information and domains included in evidence tables.

The final protocol was posted to the AHRQ Effective Health Care Web site.14

Literature Search Strategy

Search Strategy

To ensure comprehensive retrieval of relevant studies of therapies for children with ankyloglossia or ankyloglossia with concomitant tight labial frenulum (lip-tie), we searched four key databases: the MEDLINE® medical literature database via the PubMed® interface; the PsycINFO® psychology and psychiatry database; the Cumulative Index of Nursing and Allied Health Literature (CINAHL®); and EMBASE (Excerpta Medica Database), an international biomedical and pharmacological literature database, via the Ovid® interface. Search strategies applied a combination of controlled vocabulary terms (Medical Subject Headings [MeSH], PsycINFO headings, CINAHL headings, and Emtree headings, respectively) to focus specifically on concepts related to ankyloglossia, its treatment, and treatment harms. Literature searches were not restricted by year (i.e., we searched from the inception of each database to the present), given the need to capture variations in practice patterns and trends in breastfeeding over time.

We included only studies published in English, as a review of non-English citations retrieved by our MEDLINE search identified few relevant studies. Appendix A lists our search terms and strategies and the yield from each database. Searches were executed between September 2013 and August 2014.

We carried out hand searches of the reference lists of recent systematic reviews or meta-analyses of therapies for ankyloglossia; the investigative team scanned the reference lists of articles included after the full-text review phase for studies that potentially could meet our inclusion criteria.

As we did not review medications or devices, we did not request Scientific Information Packets or regulatory information. We reviewed abstracts presented at annual meetings of key scientific societies, including the American Academy of Pediatrics (AAP), the Pediatric Academic Societies (PAS), the Academy of Breastfeeding Medicine (ABM), the American Academy of Pediatric Dentistry (AAPD), the American Academy of Otolaryngology—Head and Neck Surgery (AAO-HNS), the American Speech-Language-Hearing Association (ASHA), the International Lactation Consultant Association (ILCA), Lactation Consultants of Australia and New Zealand (LCANZ), the College of Lactation Consultants of Western Australia (CLCWA), the American Orthodontic Society (AOS), and the American Association of Orthodontists (AAO). We identified relevant theses and dissertations through ProQuest Dissertations and Theses (PQDT).

Inclusion and Exclusion Criteria

Table 4 lists the inclusion/exclusion criteria we used based on our understanding of the literature, key informant and public comment during the topic-refinement phase, input from the TEP, and established principles of systematic review methods.

Table 4. Inclusion and exclusion criteria.

Study Selection

Once we identified articles through the electronic database searches and hand-searching, we examined abstracts of articles to determine whether studies met our criteria. Two reviewers separately evaluated the abstracts for inclusion or exclusion, using an Abstract Review Form (Appendix B). If one reviewer concluded that the article could be eligible for the review based on the abstract, we retained it. Following abstract review, two reviewers independently assessed the full text of each included study using a standardized form (Appendix B) that included questions stemming from our inclusion/exclusion criteria. Disagreements between reviewers were resolved by a senior reviewer. All abstract and full text reviews were conducted using the DistillerSR online screening application (Evidence Partners Incorporated, Ottawa, Ontario). Excluded studies, and the reasons for exclusion, are presented in Appendix C. Reviewers included three clinicians with expertise in pediatrics and/or otolaryngology and two expert systematic reviewers.

Data Extraction

The staff members and clinical experts who conducted this review jointly developed the evidence tables. We designed the tables to provide sufficient information to enable readers to understand the studies and to determine their quality, giving particular emphasis to essential information related to our KQs. We employed two evidence table templates to facilitate data extraction by study type: one for case series and one for all types of comparative studies. We based the format of our evidence tables on successful designs used in prior systematic reviews.

The team was trained to extract data by extracting several articles into evidence tables and then reconvening as a group to discuss the utility of the table design. We repeated this process through several iterations until we decided that the tables included the appropriate categories for gathering the information contained in the articles. All team members shared the task of initially entering information into the evidence tables. A second team member also reviewed the articles and edited all initial table entries for accuracy, completeness, and consistency. The two data extractors reconciled disagreements concerning the information reported in the evidence tables. The full research team met regularly during the article extraction period and discussed global issues related to the data extraction process. In addition to outcomes related to intervention effectiveness, we extracted all data available on harms. Harms encompass the full range of specific negative effects, including the narrower definition of adverse events.

The final evidence tables are presented in their entirety in Appendix D. Studies are presented in the evidence tables alphabetically by the last name of the first author. A list of abbreviations and acronyms used in the tables appears at the beginning of that appendix.

Data Synthesis

We considered conducting a meta-analysis, but the small number of studies, the variability in study designs, and the heterogeneity of interventions and outcomes made meta-analysis inappropriate. We completed evidence tables for all included studies; data are presented in summary tables and analyzed qualitatively in the text.

Quality (Risk of Bias) Assessment of Individual Studies

We used four tools to assess quality of individual studies: the Cochrane Risk of Bias Tool for Randomized Controlled Trials,16 a cohort study assessment instrument and a tool for case series, both adapted from RTI Item Bank questions,17 and a four-item harms assessment instrument for cohort studies derived from the McMaster Quality Assessment Scale of Harms (McHarm) for Harms Outcomes18 and the RTI Item Bank.17

The Cochrane Risk of Bias tool is designed for the assessment of studies with experimental designs and randomized participants. Fundamental domains include sequence generation, allocation concealment, blinding, completeness of outcome data, and selective reporting bias. The RTI Item Bank-based cohort instrument was used to assess the quality of nonrandomized studies (e.g., cohort and case-control studies). Questions assess the selection and follow-up of study groups, the comparability of study groups, and the ascertainment of outcomes of interest for cohort studies. The case series tool assesses attrition, blinding, appropriateness of outcome measures, and reporting bias. The harms assessment tool documents whether harms were prespecified and whether standard scales were applied. We did not assess the quality of case reports, which we used solely for harms data. All four tools are presented in Appendix E.

Quality assessment of each study was conducted by two team members independently using the forms presented in Appendix E. Any discrepancies were adjudicated through discussion between the assessors to reach consensus or via a senior reviewer. Investigators did not rely on the study design as described by authors of individual papers; rather, the methods section of each paper was reviewed to determine which rating tool to employ. The results of these tools were then translated to the AHRQ standard of “good,” “fair,” and “poor” quality designations as described below.

Determining Quality Ratings

  • We required that randomized controlled trials (RCTs) receive a positive score (i.e., low risk of bias) on all questions used to assess quality to receive a rating of good (equivalent to low risk of bias). RCTs had to receive at least five positive scores to receive a rating of fair (moderate risk of bias), and studies with four or fewer positive scores were considered poor quality (high risk of bias). We treated an “unclear” rating on an individual question as a positive rating as long as the consensus of the investigators assessing quality was that study outcomes were not likely to be biased by that factor.
  • We required that cohort studies receive positive scores on all elements to receive a rating of good; studies with no more than two negative scores were rated fair, and those with more than two negative scores were rated poor.
  • Case series, or pre-post studies, have an inherently high risk of bias. Nonetheless, prospective case series that enroll participants consecutively and control for potentially confounding factors may provide additional evidence to supplement comparative studies. We assessed case series using questions identified in the AHRQ Effective Health Care program's “Methods Guide for Effectiveness and Comparative Effectiveness Reviews”13 but did not assign a quality level for these studies, as it would be inappropriate to assess them on the same scale as prospective cohort and RCT designs. Rather, the elements on which they were scored and the results are presented in Appendix F.
  • For harms assessment, we required that studies receive a positive score (i.e., an affirmative response) on all four questions to receive a rating of good. Studies had to receive three positive scores to receive a rating of fair, and studies with fewer than three positive scores received a rating of poor.
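Purely as an illustration of the thresholds above (and not part of the review's actual workflow), the rating rules can be expressed as a small decision procedure. The function names and the boolean-list representation of item scores are hypothetical conveniences:

```python
def rate_rct(scores):
    """Map per-item RCT quality scores to an AHRQ rating.

    `scores` is a list of booleans, one per quality question; True is a
    positive (low risk of bias) score. Per the rules above, an "unclear"
    item is recorded as True when assessors judged it unlikely to bias
    the study outcomes.
    """
    positives = sum(scores)
    if positives == len(scores):
        return "good"    # all positive -> low risk of bias
    if positives >= 5:
        return "fair"    # at least five positives -> moderate risk of bias
    return "poor"        # four or fewer positives -> high risk of bias


def rate_cohort(scores):
    """Map per-element cohort-study scores to an AHRQ rating."""
    negatives = len(scores) - sum(scores)
    if negatives == 0:
        return "good"    # all elements positive
    if negatives <= 2:
        return "fair"    # no more than two negative scores
    return "poor"        # more than two negative scores


def rate_harms(scores):
    """Map the four harms-assessment items to an AHRQ rating."""
    positives = sum(scores)
    if positives == 4:
        return "good"    # affirmative on all four questions
    if positives == 3:
        return "fair"    # exactly three affirmative responses
    return "poor"        # fewer than three affirmative responses
```

For example, an RCT scored positive on five of seven items would be rated fair, while a cohort study with three negative elements would be rated poor.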

Strength of the Body of Evidence

We applied explicit criteria for rating the overall strength of the evidence for each key intervention-outcome pair for which the overall risk of bias was not overwhelmingly high. We considered the quantity of evidence (e.g., number of studies, aggregate ending sample sizes), the quality of evidence (from the quality ratings of individual articles), and the coherence or consistency of findings across similar and dissimilar studies and in comparison with known or theoretically sound clinical or behavioral knowledge.

The strength of evidence evaluation is that stipulated in the Effective Health Care Program's “Methods Guide for Effectiveness and Comparative Effectiveness Reviews”13 and in the updated strength of evidence guide,19 which emphasizes five major domains: study limitations (low, medium, or high level of limitation), consistency (inconsistency not present, inconsistency present, or unknown/not applicable), directness (direct or indirect), precision (precise or imprecise), and reporting bias. Study limitations are derived from the quality assessments of the individual studies that addressed the KQ and specific outcome under consideration. Each key outcome for each comparison of interest is given an overall evidence grade based on the ratings for the individual domains.

The overall strength of evidence was graded as outlined in Table 5. Two senior staff independently graded the body of evidence; disagreements were resolved as needed through discussion or third-party adjudication. We recorded strength of evidence assessments in tables, summarizing results for each outcome.

Table 5. Strength of evidence grades and definitions.


Applicability

We assessed the applicability of findings reported in the included literature to the general population of children with ankyloglossia by determining the population, intervention, comparator, and setting in each study and developing an overview of these elements for each intervention category. We anticipated that areas in which applicability would be especially important to describe would include the severity of ankyloglossia in the study population, the age range of the participants, and the setting in which the intervention took place. We also attempted to capture information about the clinical provider, including specialty and training. We described any needs related to the setting, including anesthesia, surgical environment, and materials for nonsurgical interventions.
