
Ayorinde AA, Williams I, Mannion R, et al. Publication and related bias in quantitative health services and delivery research: a multimethod study. Southampton (UK): NIHR Journals Library; 2020 Aug. (Health Services and Delivery Research, No. 8.33.)

Publication and related bias in quantitative health services and delivery research: a multimethod study.

Scientific summary

Background

Publication bias occurs when the publication or non-publication of research findings depends on the direction or strength of the results. Other related biases, such as p-hacking and outcome reporting bias, can also arise at the stages between the analysis of data and the publication of research findings. Although these biases, collectively termed here ‘publication and related bias’, have been widely documented in clinical research, little is known about their existence and magnitude in health services and delivery research, defined as ‘research that produces evidence on the quality, accessibility and organisation of health services’. In this project we aimed to collect prima facie evidence on publication and related bias in health services and delivery research. Using a systematic review of the pertinent literature and a combination of quantitative and qualitative methods, we examined the existence and potential impact of the bias, assessed current methods and practice for detecting and mitigating the bias in health services and delivery research systematic reviews, and explored the perceptions and experiences of health services and delivery research stakeholders concerning the bias.

Objectives

This study had five work packages, each with a corresponding objective.

Work package 1

  • To undertake a systematic review of empirical and methodological studies concerning the occurrence, potential impact and/or methodology related to publication and related bias in health services and delivery research.

Work package 2

  • To carry out a survey (meta-epidemiological study) of systematic reviews of intervention and association studies in health services and delivery research, to examine current practice and challenges in assessing publication bias during evidence synthesis.

Work package 3

  • To conduct in-depth case studies to evaluate the applicability of different methods for detecting and mitigating publication bias in health services and delivery research.

Work package 4

  • To retrospectively follow up the publication status of cohorts of health services and delivery research studies to directly observe publication bias.

Work package 5

  • To conduct semistructured interviews with key health services and delivery research stakeholders and a focus group discussion with patient representatives to explore their perceptions and experiences concerning publication and related bias.

Methods

Work package 1

We searched MEDLINE, EMBASE, Health Management Information Consortium, Cumulative Index to Nursing and Allied Health Literature, Web of Science™ (Clarivate Analytics, Philadelphia, PA, USA), Health Systems Evidence, Cochrane Effective Practice and Organisation of Care Review Group, and websites of key organisations linked to health services and delivery research. Initial searches were conducted in March 2017 and updated in July/August 2018. Subject experts were consulted to identify any additional studies. We included methodological studies that set out to investigate publication and related biases in health services and delivery research and systematic reviews of health services and delivery research topics that examined such bias as part of the review process. Information on study design, methods of investigating publication bias, key findings, limitations and conclusions reported by the authors was extracted from eligible studies. Citation screening was conducted independently by two reviewers. Data extraction was conducted by one reviewer and checked by another. Data were synthesised narratively.

The protocol of this systematic review was registered with PROSPERO, registration number CRD42016052333.

Work package 2

A stratified random sample of 200 systematic reviews of quantitative health services and delivery research published in English from 2007 to 2017 was selected from the Health Systems Evidence database. Half (n = 100) of the selected reviews (intervention reviews) concerned interventions to improve the effectiveness and efficiency of service delivery, for example by synthesising comparative studies. The other half (association reviews) evaluated associations between structure, process and outcome variables along the service delivery causal chain, mostly by synthesising observational studies.
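
The report does not describe sampling code. As a rough sketch only, the following Python/pandas fragment shows how such a stratified random sample (100 reviews per stratum) might be drawn; the file name and the column names (review_type, language, year) are hypothetical placeholders rather than actual fields of the Health Systems Evidence database.

    # Sketch only: drawing a stratified random sample of 100 reviews per stratum.
    # The file and column names are hypothetical placeholders.
    import pandas as pd

    reviews = pd.read_csv("health_systems_evidence_reviews.csv")

    # Restrict to English-language reviews published between 2007 and 2017.
    eligible = reviews[(reviews["language"] == "English") & reviews["year"].between(2007, 2017)]

    # Draw 100 intervention reviews and 100 association reviews at random.
    sample = eligible.groupby("review_type").sample(n=100, random_state=42)
    print(sample["review_type"].value_counts())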

We extracted data on the number of studies included in the systematic reviews; the inclusion of meta-analyses and whether or not the use of systematic review guidelines was reported; mention and assessment of publication bias or outcome reporting bias; and methods for detecting or mitigating the biases and/or reasons for no assessment. Journals were classified into those that did or did not formally endorse specific systematic review guidelines, such as Preferred Reporting Items for Systematic Reviews and Meta-Analyses, and journal impact factors were obtained. Three measures related to the awareness and actual practice of assessing publication and outcome reporting biases were evaluated: (1) mentioned publication bias (regardless of whether or not it was formally assessed), (2) (formally) assessed publication bias and (3) assessed outcome reporting bias. Factors associated with the three aforementioned outcome measures were explored using univariable and multivariable logistic regression. The associations were presented as odds ratios with 95% confidence intervals.
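
The report does not include its analysis code. The fragment below is a minimal sketch, in Python with statsmodels, of how univariable and multivariable logistic regression can yield odds ratios with 95% confidence intervals; the data file and variable names (mentioned_pub_bias, has_meta_analysis, guideline_endorsed, impact_factor) are hypothetical placeholders, not the study’s variables.

    # Minimal sketch: odds ratios with 95% confidence intervals from
    # univariable and multivariable logistic regression.
    # Variable names are hypothetical placeholders; the outcome is coded 0/1.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    reviews = pd.read_csv("review_sample.csv")  # one row per systematic review

    # Univariable model: inclusion of a meta-analysis as the only predictor.
    uni = smf.logit("mentioned_pub_bias ~ has_meta_analysis", data=reviews).fit(disp=False)

    # Multivariable model: adjust for guideline endorsement and journal impact factor.
    multi = smf.logit(
        "mentioned_pub_bias ~ has_meta_analysis + guideline_endorsed + impact_factor",
        data=reviews,
    ).fit(disp=False)

    def odds_ratios(fit):
        """Exponentiate coefficients and confidence limits to report ORs with 95% CIs."""
        table = np.exp(fit.conf_int())
        table.columns = ["2.5%", "97.5%"]
        table.insert(0, "OR", np.exp(fit.params))
        return table

    print(odds_ratios(uni))
    print(odds_ratios(multi))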

The protocol of this methodological overview of systematic reviews was registered with PROSPERO, registration number CRD42016052366.

Work package 3

We purposively sampled four systematic reviews in health services and delivery research for detailed case studies and examined the applicability of existing methods for assessing publication and related bias. We targeted systematic reviews of various sizes, but with a sufficient number of included studies (≥ 10) to allow statistical analyses such as funnel plots and regression tests, and covering major issues likely to be encountered during evidence synthesis in health services and delivery research. We also aimed for cases of general interest to health services and delivery research stakeholders. The four cases identified were:

  1. case study 1 – the association between weekend and weekday admissions and hospital mortality
  2. case study 2 – the association between organisational culture and climate and nurses’ job satisfaction
  3. case study 3 – the effectiveness of computerised physician order entry systems on medication errors and adverse events
  4. case study 4 – the effectiveness of standardised hand-off protocols on information relay, patient, provider and organisational outcomes.

For each case study, we examined the methods and findings related to publication and related bias presented by the original authors and highlighted issues particularly relevant to health services and delivery research. We obtained detailed numerical data for one of the case studies and applied five statistical techniques commonly used to assess publication and related bias: funnel plots, Egger’s regression test, Begg and Mazumdar’s rank correlation test, trim and fill, and meta-regression. We also explored the p-curve method for detecting possible p-hacking. The findings are presented alongside detailed evaluations of potential issues that could influence the validity of these statistical methods and the interpretation of their results.
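
For illustration only, the sketch below implements two of these diagnostics (a funnel plot and Egger’s regression test) together with the Begg and Mazumdar rank correlation in Python; trim and fill, meta-regression and p-curve analysis are better handled with dedicated meta-analysis software. The effect estimates and standard errors are invented placeholders, not data from any of the case studies.

    # Illustration only: publication-bias diagnostics from per-study effect
    # estimates and standard errors. The numbers are invented placeholders.
    import numpy as np
    import statsmodels.api as sm
    import matplotlib.pyplot as plt
    from scipy.stats import kendalltau

    effect = np.array([0.12, 0.30, 0.25, 0.08, 0.45, 0.18, 0.35, 0.22, 0.10, 0.40])
    se = np.array([0.05, 0.15, 0.10, 0.04, 0.20, 0.08, 0.18, 0.09, 0.06, 0.19])

    # Funnel plot: effect estimate against standard error, with the y-axis
    # inverted so that larger (more precise) studies sit at the top.
    plt.scatter(effect, se)
    plt.gca().invert_yaxis()
    plt.xlabel("Effect estimate")
    plt.ylabel("Standard error")
    plt.savefig("funnel_plot.png")

    # Egger's regression test: regress the standardised effect on precision;
    # an intercept that differs from zero suggests funnel-plot asymmetry.
    egger = sm.OLS(effect / se, sm.add_constant(1.0 / se)).fit()
    print("Egger intercept p-value:", egger.pvalues[0])

    # Begg and Mazumdar rank correlation: Kendall's tau between standardised
    # deviations from the pooled (fixed-effect) estimate and the study variances.
    weights = 1.0 / se**2
    pooled = np.sum(weights * effect) / np.sum(weights)
    adjusted_var = se**2 - 1.0 / np.sum(weights)
    tau, p_value = kendalltau((effect - pooled) / np.sqrt(adjusted_var), se**2)
    print("Begg rank correlation: tau =", tau, ", p =", p_value)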

Work package 4

We selected four cohorts of health services and delivery research studies with a quantitative component from prospective registries (inception cohorts) and conference abstracts (conference cohorts):

  1. Health Services Research Projects in Progress cohort – selected projects completed in 2012 from Health Services Research Projects in Progress, which is a US-based, publicly accessible prospective registry of health services and public health research (n = 100).
  2. National Institute for Health Research cohort – selected projects completed between 2007 and 2014 from a database of projects funded by the UK National Institute for Health Research’s Health Services and Delivery Research programme and its predecessors (n = 100).
  3. International Society for Quality in Health Care cohort – selected from abstracts of the International Society for Quality in Health Care 2012 conference (n = 50).
  4. Health Services Research UK cohort – selected from Health Services Research UK conferences, 2012–14 (n = 50).

We classified each study as an association or intervention study and by its source of data (routinely collected data/survey vs. bespoke data collection). For intervention studies, we also recorded design features, including the presence or absence of a concurrent control group and whether or not the study was a randomised controlled trial. The publication status of each study was verified online and by contacting researchers. Publication status was categorised as published (in academic journals), grey literature (available online in a form other than academic journals) or unpublished. Study findings were classified according to statistical significance, with a p-value of ≤ 0.05 considered statistically significant. For sensitivity analysis, study findings were also classified as ‘positive’ or ‘non-positive’ based on the comments of the original authors. Univariable and multivariable logistic regression analyses were conducted to investigate associations (expressed as odds ratios) between publication status and study features, statistical significance and positivity of findings. Study selection, data extraction and verification of publication status were conducted by one reviewer and checked by a second reviewer.
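
As a simple illustration of the type of association examined here (not the study’s actual analysis, which used univariable and multivariable logistic regression), the sketch below computes a crude odds ratio with a 95% confidence interval from a 2 × 2 table of publication status against statistical significance using statsmodels; the counts are invented placeholders.

    # Illustration only: crude odds ratio for journal publication by statistical
    # significance of findings, from a 2 x 2 table of invented counts.
    import numpy as np
    from statsmodels.stats.contingency_tables import Table2x2

    #                         published   not published
    # significant findings       150            40
    # non-significant findings    36            24
    table = Table2x2(np.array([[150, 40], [36, 24]]))

    print("Odds ratio: {:.2f}".format(table.oddsratio))
    low, high = table.oddsratio_confint()
    print("95% CI: {:.2f} to {:.2f}".format(low, high))
    print("p-value: {:.3f}".format(table.oddsratio_pvalue()))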

Work package 5

Twenty-four in-depth telephone interviews were conducted with key informants in the field of health services research. The sample covered a wide range of stakeholders, including researchers of varying seniority, editors, funders, service managers and clinicians. To explore the issues from a patient and service user perspective, we also conducted a focus group with eight patient and service user representatives. Recordings of the interviews and the focus group discussion were transcribed, and thematic coding was conducted using the qualitative coding software NVivo version 11 (QSR International, Warrington, UK).

Results

Work package 1

After screening a total of 7483 citations, we included four methodological studies of publication and related bias, three systematic reviews of substantive health services and delivery research topics that compared published with grey/unpublished literature, and 181 additional systematic reviews that reported assessment of publication and outcome reporting bias as part of the review process. Three of the four methodological studies examined the existence of publication bias in health informatics research, whereas the remaining study examined p-hacking and reporting bias in health economics. All four studies reported evidence suggesting the existence of publication bias, but all had methodological weaknesses. The three systematic reviews comparing published with grey and unpublished literature found significant differences between them in some, but not all, cases. The remaining 181 systematic reviews used predominantly statistical methods for detecting small study effects and therefore provided only indirect evidence on publication bias. Approximately half of these reported evidence of small study effects.

Work package 2

We found that 43% (85/200) of the sampled systematic reviews mentioned publication bias (whether or not it was formally assessed), and this was more common in intervention reviews (54%) than in association reviews (31%). However, only about 10% (19/200) formally assessed publication bias through statistical analysis, mostly using funnel plots and related methods. Outcome reporting bias was discussed and assessed in 34 (17%) systematic reviews, much more frequently in intervention reviews (30%) than in association reviews (4%). All reviews that mentioned outcome reporting bias assessed it as part of the quality assessment of included studies (mostly using the Cochrane risk-of-bias tool). An insufficient number of included studies, heterogeneity and the lack of pre-registered protocols were the commonly reported impediments to assessing publication and outcome reporting bias.

Among the review features examined, only the inclusion of a meta-analysis was significantly associated with mentioning and assessing publication bias in the multivariable analysis (odds ratio 4.02, 95% confidence interval 1.76 to 9.15). Use of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) guideline and being an intervention review were significantly associated with assessing outcome reporting bias in the multivariable analysis.

Work package 3

Across the four chosen case studies, which included two systematic reviews of association studies and two of intervention studies, between-study heterogeneity stood out as a major issue affecting the applicability of statistical methods for assessing publication bias and the interpretation of their findings. In the cases in which small study effects were found, several alternative explanations (in addition to publication bias) were plausible. The direction of small study effects, and the assumption that individual studies are random samples drawn from the target population, may need to be carefully assessed in systematic reviews of observational studies based on data from administrative databases covering the whole population.

Work package 4

Across the four cohorts of health services and delivery research studies, 62% (186/300) were published in academic journals (inception cohorts 70%, conference cohorts 47%). Findings for 20% of the 300 studies were available only as grey literature and 18% were unpublished. Publication in academic journals ranged from 75% in the Health Services Research Projects in Progress cohort to 26% in the International Society for Quality in Health Care cohort. Findings for all studies funded by the National Institute for Health Research, which mandated publication of study findings, were available either in academic journals or as technical reports, whereas non-publication was 68% in the International Society for Quality in Health Care cohort, in which many of the authors appeared to be affiliated with clinical rather than academic institutions. The majority of the studies reported some statistically significant findings (79%) and were considered to be positive (79%). In multivariable analyses, being a study from the International Society for Quality in Health Care cohort or the National Institute for Health Research cohort (compared with the Health Services Research Projects in Progress cohort) was associated with significantly lower odds of publication in academic journals, whereas positive study findings were associated with significantly higher odds of publication. Studies from the conference cohorts were also at significantly higher risk of non-publication (i.e. published neither in journals nor as grey literature), whereas statistically significant and positive findings were associated with significantly lower odds of non-publication.

Work package 5

Interviews with health services and delivery research stakeholders revealed a wide range of perceptions concerning publication and related bias in health services and delivery research, although most interviewees were unable to state with certainty how significant a problem publication bias was in the field. Some claimed it was ‘rampant’, whereas others believed that the strength of results was secondary to other potential sources of ‘novelty’ in shaping publication outcomes in health services and delivery research. Some interviewees reported that null results were likely to be submitted for publication in lower-impact journals or not submitted at all. Pre-registration of study protocols was supported by many as a way of alleviating publication bias, but a significant minority were concerned that it might create unnecessary constraints. Interviewed stakeholders also proposed other measures, such as the development of repositories of null findings and training to raise awareness of publication bias. The perceived impact of publication bias was attenuated by the often weak relationship between health services and delivery research and health service decision-making. In the focus group involving patient and service user representatives, additional issues were raised, such as the ethical implications of research waste, particularly in relation to research participation and the allocation of scarce health resources.

Conclusion

This project collected prima facie evidence of publication and related bias in health services and delivery research. We found a small number of studies suggesting the existence of the bias in this field. Our examination of 200 published health services and delivery research systematic reviews showed that documented awareness of publication and outcome reporting bias, and formal assessment of the bias, were low, particularly among reviews of association studies. Adherence to existing systematic review guidelines, which include items relating to publication and outcome reporting bias, could increase the practice of assessing the bias. Our case studies illustrated the challenges and caveats of using statistical methods to detect publication and related bias in health services and delivery research, owing to between-study heterogeneity and the potential confounding of the association between study size and effect size by many other factors. Follow-up of 300 health services and delivery research studies from four cohorts showed a wide range of rates of publication in academic journals and of non-publication, possibly influenced by funders’ publication policies and the motivation of researchers. Evidence of publication bias was found in analyses adjusting for key study features. Key informant interviews revealed diverse perceptions, but also substantial uncertainty, about the scale and impact of the bias among health services and delivery research stakeholders. The bias was perceived to be modified by various factors, which may differ between subfields of health services and delivery research. Emphasis on methodological novelty and a focus beyond summative assessments were thought to mitigate the risk of publication bias. There was general support for pre-registration of health services and delivery research protocols, but also some reservations concerning unwarranted restrictions. The methodological and epistemological diversity of health services and delivery research and the changing landscape of research publication need to be taken into account when interpreting these findings. Future studies collecting direct evidence from health services and delivery research cohorts, and exploration of optimal health services and delivery research practice for minimising the bias, are warranted.

Study registration

This study is registered as PROSPERO CRD42016052333 and CRD42016052366.

Funding

This project was funded by the National Institute for Health Research (NIHR) Health Services and Delivery Research programme and will be published in full in Health Services and Delivery Research; Vol. 8, No. 33. See the NIHR Journals Library website for further project information.

Copyright © Queen’s Printer and Controller of HMSO 2020. This work was produced by Ayorinde et al. under the terms of a commissioning contract issued by the Secretary of State for Health and Social Care. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.
Bookshelf ID: NBK561488
