Research Synthesis Methods
  • Ten questions to consider when interpreting results of a meta‐epidemiological study—the MetaBLIND study as a case
    Res. Synth. Methods (IF 5.043) Pub Date : 2020-01-20
    Helene Moustgaard; Hayley E Jones; Jelena Savović; Gemma L Clayton; Jonathan AC Sterne; Julian PT Higgins; Asbjørn Hróbjartsson

    Randomized clinical trials underpin evidence‐based clinical practice, but flaws in their conduct may lead to biased estimates of intervention effects and hence invalid treatment recommendations. The main approach to the empirical study of bias is to collate a number of meta‐analyses and, within each, compare the results of trials with and without a methodological characteristic such as blinding of participants and health professionals. Estimated within‐meta‐analysis differences are combined across meta‐analyses, leading to an estimate of mean bias. Such “meta‐epidemiological” studies are published in increasing numbers and have the potential to inform trial design, assessment of risk of bias, and reporting guidelines. However, their interpretation is complicated by issues of confounding, imprecision, and applicability. We developed a guide for interpreting meta‐epidemiological studies, illustrated using MetaBLIND, a large study on the impact of blinding. Applying generally accepted principles of research methodology to meta‐epidemiology, we framed 10 questions covering the main issues to consider when interpreting results of such studies, including risk of systematic error, risk of random error, issues related to heterogeneity, and theoretical plausibility. We suggest that readers of a meta‐epidemiological study reflect comprehensively on the research question posed in the study, whether an experimental intervention was unequivocally identified for all included trials, the risk of misclassification of the trial characteristic, and the risk of confounding, i.e., the adequacy of any adjustment for the likely confounders. We hope that our guide to interpretation of results of meta‐epidemiological studies is helpful for readers of such studies.
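
    The within-meta-analysis, then across-meta-analysis pooling described above can be illustrated with a small numerical sketch. This is not the MetaBLIND analysis code; the data are invented, and a plain DerSimonian-Laird estimator stands in for the models actually used.

      # Illustrative sketch: estimate mean bias as a pooled difference in log odds
      # ratios between trials with and without a characteristic (e.g., blinding),
      # first within each meta-analysis and then across meta-analyses.
      # Hypothetical data throughout.
      import numpy as np

      def iv_pool(yi, vi):
          """Fixed-effect inverse-variance pooled estimate and its variance."""
          w = 1.0 / np.asarray(vi)
          return np.sum(w * yi) / np.sum(w), 1.0 / np.sum(w)

      def dersimonian_laird(yi, vi):
          """Random-effects pooled estimate and standard error (DL tau^2)."""
          yi, vi = np.asarray(yi), np.asarray(vi)
          w = 1.0 / vi
          q = np.sum(w * (yi - np.sum(w * yi) / np.sum(w)) ** 2)
          tau2 = max(0.0, (q - (len(yi) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
          w_re = 1.0 / (vi + tau2)
          return np.sum(w_re * yi) / np.sum(w_re), np.sqrt(1.0 / np.sum(w_re))

      # one entry per meta-analysis: log ORs and variances for blinded (b) / unblinded (u) trials
      meta_analyses = [
          {"yi_b": [-0.40, -0.25], "vi_b": [0.04, 0.06],
           "yi_u": [-0.10, -0.35, -0.05], "vi_u": [0.05, 0.07, 0.09]},
          {"yi_b": [-0.60], "vi_b": [0.08],
           "yi_u": [-0.20, -0.30], "vi_u": [0.06, 0.10]},
      ]

      diffs, diff_vars = [], []
      for ma in meta_analyses:
          est_b, var_b = iv_pool(ma["yi_b"], ma["vi_b"])
          est_u, var_u = iv_pool(ma["yi_u"], ma["vi_u"])
          diffs.append(est_b - est_u)        # within-MA difference in log OR (log ratio of odds ratios)
          diff_vars.append(var_b + var_u)

      bias, se = dersimonian_laird(diffs, diff_vars)
      print(f"estimated mean bias (log ROR): {bias:.3f} (SE {se:.3f})")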

    Updated: 2020-01-21
  • A comparison of Bayesian and frequentist methods in random‐effects network meta‐analysis of binary data
    Res. Synth. Methods (IF 5.043) Pub Date : 2020-01-18
    Svenja E. Seide; Katrin Jensen; Meinhard Kieser

    The performance of statistical methods is often evaluated by means of simulation studies. In the case of network meta‐analysis of binary data, however, simulations are not currently available for many practically relevant settings. We perform a simulation study for sparse networks of trials under between‐trial heterogeneity and including multi‐arm trials. Results of the evaluation of two popular frequentist methods and a Bayesian approach using two different prior specifications are presented. Methods are evaluated using coverage, width of intervals, bias, and root mean squared error (RMSE). In addition, deviations from the theoretical SUCRAs (surface under the cumulative ranking curve values) or P‐scores of the treatments are evaluated. Under low heterogeneity and when a large number of trials informs the contrasts, all methods perform well with respect to the evaluated performance measures. Coverage is observed to be generally higher for the Bayesian than the frequentist methods. Credible intervals are wider than confidence intervals, and their width increases when a flatter prior is used for the between‐trial heterogeneity. Bias was generally small, but increased with heterogeneity, especially in netmeta. In some scenarios, the direction of bias differed between frequentist and Bayesian methods. The RMSE was comparable between methods, but larger in indirectly than in directly estimated treatment effects. The deviation of the SUCRAs or P‐scores from their theoretical values was mostly comparable over the methods, but differed depending on the heterogeneity and the geometry of the investigated network. Multivariate meta‐regression or Bayesian estimation using a half‐normal prior scaled to 0.5 seem to be promising with respect to the evaluated performance measures in network meta‐analysis of sparse networks.
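
    As a companion to the abstract above, the sketch below shows how SUCRA values are computed from posterior samples of relative treatment effects. The draws are simulated rather than taken from any of the evaluated methods, and lower effects are assumed to be better.

      # Illustrative sketch: SUCRA from posterior draws of treatment effects
      # (e.g., MCMC samples of log odds ratios versus a common reference).
      import numpy as np

      rng = np.random.default_rng(1)
      treatments = ["A", "B", "C", "D"]
      effects = rng.normal(loc=[-0.5, -0.3, 0.0, 0.1], scale=0.15, size=(4000, 4))

      ranks = effects.argsort(axis=1).argsort(axis=1) + 1      # rank 1 = smallest (best) effect
      n_t = len(treatments)
      rank_prob = np.array([[np.mean(ranks[:, j] == r) for r in range(1, n_t + 1)]
                            for j in range(n_t)])              # P(treatment j has rank r)

      # SUCRA_j = mean of the cumulative rank probabilities over ranks 1..n_t-1
      sucra = rank_prob[:, :-1].cumsum(axis=1).mean(axis=1)
      for t, s in zip(treatments, sucra):
          print(f"SUCRA({t}) = {s:.2f}")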

    Updated: 2020-01-21
  • Adjudication rather than experience of data abstraction matters more in reducing errors in abstracting data in systematic reviews
    Res. Synth. Methods (IF 5.043) Pub Date : 2020-01-18
    E Jian‐Yu; Ian J. Saldanha; Joseph Canner; Christopher H. Schmid; Jimmy T. Le; Tianjing Li

    During systematic reviews, “data abstraction” refers to the process of collecting data from reports of studies. The data abstractors’ level of experience may affect the accuracy of data abstracted. Using data from a randomized crossover trial in which different data abstraction approaches were compared, we examined the association between abstractors’ level of experience and accuracy of data abstraction.

    Updated: 2020-01-21
  • Meta‐analysis of full ROC curves with flexible parametric distributions of diagnostic test values
    Res. Synth. Methods (IF 5.043) Pub Date : 2020-01-17
    Annika Hoyer; Oliver Kuss

    Diagnostic accuracy studies often evaluate diagnostic tests at several threshold values, aiming to make recommendations on optimal thresholds for use in practice. Methods for meta‐analysis of full receiver operating characteristic (ROC) curves have been proposed, but still have deficiencies. We recently proposed a parametric approach to this task that is based on bivariate time‐to‐event models for interval‐censored data. To increase the flexibility of that approach, to cover a wide range of distributions of diagnostic test values, and to address the open point of model selection, we suggest here using the generalized F family of distributions, which includes previously used distributions for the bivariate time‐to‐event model as special cases. The results of a simulation study are given, as well as an illustration using an example of population‐based screening for type 2 diabetes mellitus.

    Updated: 2020-01-21
  • A simulation study to compare robust tests for linear mixed‐effects meta‐regression
    Res. Synth. Methods (IF 5.043) Pub Date : 2020-01-12
    Thilo Welz; Markus Pauly

    The explanation of heterogeneity when synthesizing different studies is an important issue in meta‐analysis. Besides including a heterogeneity parameter in the statistical model, it is also important to understand possible causes of between‐study heterogeneity. One possibility is to incorporate study‐specific covariates in the model that account for between‐study variability. This leads to linear mixed‐effects meta‐regression models. A number of alternative methods have been proposed to estimate the (co)variance of the estimated regression coefficients in these models, which subsequently drives differences in the results of statistical methods. To quantify this, we compare the performance of hypothesis tests for moderator effects based upon different heteroscedasticity consistent covariance matrix estimators and the (untruncated) Knapp‐Hartung method in an extensive simulation study. In particular, we investigate type 1 error and power under varying conditions regarding the underlying distributions, heterogeneity, effect sizes, number of independent studies, and their sample sizes. Based upon these results, we give recommendations for suitable inference choices in different scenarios and highlight the danger of using tests regarding the study‐specific moderators based on inappropriate covariance estimators.
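
    A minimal sketch of the comparison described above is given below: one moderator, a weighted meta-regression with a given tau^2, and the slope tested with the model-based variance, an HC0-type sandwich variance, and the (untruncated) Knapp-Hartung adjustment. Data are hypothetical and tau^2 is treated as known, unlike in the simulation study.

      import numpy as np
      from scipy import stats

      yi = np.array([0.20, 0.35, 0.10, 0.55, 0.40, 0.25])   # study effect estimates
      vi = np.array([0.02, 0.03, 0.04, 0.02, 0.05, 0.03])   # sampling variances
      xi = np.array([1.0, 2.0, 0.5, 3.0, 2.5, 1.5])         # study-level moderator
      tau2 = 0.01                                           # assumed between-study variance

      X = np.column_stack([np.ones_like(xi), xi])
      W = np.diag(1.0 / (vi + tau2))
      bread = np.linalg.inv(X.T @ W @ X)
      beta = bread @ X.T @ W @ yi
      resid = yi - X @ beta
      k, p = len(yi), X.shape[1]

      var_model = bread                                      # model-based covariance
      meat = X.T @ W @ np.diag(resid**2) @ W @ X
      var_hc0 = bread @ meat @ bread                         # HC0-type sandwich covariance
      q = np.sum(resid**2 * np.diag(W)) / (k - p)            # untruncated Knapp-Hartung factor
      var_kh = q * bread

      for name, V in [("model", var_model), ("HC0", var_hc0), ("KH", var_kh)]:
          se = np.sqrt(V[1, 1])
          pval = 2 * stats.t.sf(abs(beta[1] / se), df=k - p)   # t reference used throughout for simplicity
          print(f"{name:>5}: slope={beta[1]:.3f} SE={se:.3f} p={pval:.3f}")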

    Updated: 2020-01-13
  • Adding value to core outcome set development using multimethod systematic reviews
    Res. Synth. Methods (IF 5.043) Pub Date : 2020-01-08
    Ginny Brunton; James Webbe; Sandy Oliver; Chris Gale

    Trials evaluating the same interventions rarely measure or report identical outcomes. This limits the possibility of aggregating effect sizes across studies to generate high‐quality evidence through systematic reviews and meta‐analyses. To address this problem, core outcome sets (COS) establish agreed sets of outcomes to be used in all future trials. When developing COS, potential outcome domains are identified by systematically reviewing the outcomes of trials, and increasingly, through primary qualitative research exploring the experiences of key stakeholders, with relevant outcome domains subsequently determined through transdisciplinary consensus development. However, the primary qualitative component can be time consuming with unclear impact. We aimed to examine the potential added value of a qualitative systematic review alongside a quantitative systematic review of trial outcomes to inform COS development in neonatal care using case analysis methods. We compared the methods and findings of a scoping review of neonatal trial outcomes and a scoping review of qualitative research on parents', patients', and professional caregivers' perspectives of neonatal care. Together, these identified a wider range and greater depth of health and social outcome domains, some unique to each review, which were incorporated into the subsequent Delphi process and informed the final set of core outcome domains. Qualitative scoping reviews of participant perspectives research, used in conjunction with quantitative scoping reviews of trials, could identify more outcome domains for consideration and could provide greater depth of understanding to inform stakeholder group discussion in COS development. This is an innovation in the application of research synthesis methods.

    Updated: 2020-01-09
  • Combining threshold analysis and GRADE to assess sensitivity to bias in antidepressant treatment recommendations adjusted for depression severity
    Res. Synth. Methods (IF 5.043) Pub Date : 2020-01-08
    L Holper

    Threshold analysis has recently been proposed for use in combination with the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) in order to assess the sensitivity to plausible bias of treatment recommendations derived from Bayesian network meta‐analysis (NMA). Here, the aim was to apply the combination of threshold analysis and GRADE to judge quantitative and qualitative information on risk of bias in antidepressant treatment recommendations. The analysis was based on the data set provided by Cipriani et al. (The Lancet 2018) comparing 21 antidepressants in adult major depressive disorder (MDD). Primary outcomes were efficacy (response rate) and acceptability (dropout rate) adjusted for the covariate depression severity. The combined approach suggested that sensitivity to plausible bias was largest for the antidepressant recommendations top ranked by Cipriani et al., that is, amitriptyline, duloxetine, paroxetine, and venlafaxine in terms of efficacy, and agomelatine, escitalopram, paroxetine, and venlafaxine in terms of acceptability. Covariate ranges within which recommendations were most sensitive to plausible bias were very severe depression in terms of efficacy (smallest threshold, ie, the largest sensitivity, around 39 on the Hamilton Depression Rating Scale [HDRS]) and moderate depression in terms of acceptability (smallest thresholds around 16 and 35 HDRS). This indicates that treatment recommendations within these ranges are likely to change if plausible bias adjustments take place. The present findings may support decision makers in judging the sensitivity to plausible bias of current antidepressant treatment recommendations to accurately guide treatment decisions in MDD depending on depression severity.

    Updated: 2020-01-09
  • Construct validity of the Physiotherapy Evidence Database (PEDro) quality scale for randomized trials: Item response theory and factor analyses
    Res. Synth. Methods (IF 5.043) Pub Date : 2020-01-05
    Emiliano Albanese; Lukas Bütikofer; Susan Armijo‐Olivo; Christine Ha; Matthias Egger

    There is agreement that the methodological quality of randomized trials should be assessed in systematic reviews, but there is debate about how this should be done. We conducted a construct validation study of the Physiotherapy Evidence Database (PEDro) scale, which is widely used to assess the quality of trials in physical therapy and rehabilitation.

    Updated: 2020-01-06
  • Sensitivity analyses assessing the impact of early stopping on systematic reviews: recommendations for interpreting guidelines
    Res. Synth. Methods (IF 5.043) Pub Date : 2020-01-03
    Ian C. Marschner; Lisa M. Askie; I. Manjula Schou

    The CONSORT Statement says that data‐driven early stopping of a clinical trial is likely to weaken the inferences that can be drawn from the trial. The GRADE guidelines go further, saying that early stopping is a study limitation that carries the risk of bias, and recommending sensitivity analyses in which trials stopped early are omitted from evidence synthesis. Despite extensive debate in the literature over these issues, the existence of clear recommendations in high profile guidelines makes it inevitable that systematic reviewers will consider sensitivity analyses investigating the impact of early stopping. The purpose of this paper is to assess methodologies for conducting such sensitivity analyses, and to make recommendations about how the guidelines should be interpreted. We begin with a clarifying overview of the impacts of early stopping on treatment effect estimation in single studies and meta‐analyses. We then warn against naive approaches for conducting sensitivity analyses, including simply omitting trials stopped early from meta‐analyses. This approach underestimates treatment effects, which may have serious implications if cost‐effectiveness analyses determine whether treatments are made widely available. Instead, we discuss two unbiased approaches to sensitivity analysis, one of which is straightforward but statistically inefficient, and the other of which achieves greater statistical efficiency by making use of recent methodological developments in the analysis of clinical trials. We end with recommendations for interpreting: (a) the CONSORT Statement on reporting of reasons for early stopping, and (b) the GRADE guidelines on sensitivity analyses assessing the impact of early stopping.

    Updated: 2020-01-04
  • Influence diagnostics and outlier detection for meta‐analysis of diagnostic test accuracy
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-12-18
    Yuki Matsushima; Hisashi Noma; Tomohide Yamada; Toshi A. Furukawa

    Meta‐analyses of diagnostic test accuracy (DTA) studies have been gaining prominence in research in clinical epidemiology and health technology development. In these DTA meta‐analyses, some studies may have markedly different characteristics from the others and potentially be inappropriate to include. The inclusion of these “outlying” studies might lead to biases, yielding misleading results. In addition, there might be influential studies that have notable impacts on the results. In this article, we propose Bayesian methods for detecting outlying studies and their influence diagnostics in DTA meta‐analyses. Synthetic influence measures based on the bivariate hierarchical Bayesian random effects models are developed because the overall influence of an individual study should be assessed simultaneously on the two outcome variables, taking their correlation into account. We propose four synthetic measures for influence analyses: (a) relative distance, (b) standardized residual, (c) Bayesian p‐value, and (d) influence statistic on the area under the summary receiver operating characteristic curve. We also show that conventional univariate Bayesian influence measures can be applied to the bivariate random effects models, where they can be used as marginal influence measures. Most of these methods can be similarly applied to the frequentist framework. We illustrate the effectiveness of the proposed methods by applying them to a DTA meta‐analysis of ultrasound in screening for vesicoureteral reflux among children with urinary tract infections.

    Updated: 2019-12-19
  • The “realist search”: A systematic scoping review of current practice and reporting
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-12-15
    Andrew Booth; Simon Briscoe; Judy M. Wright

    The requirement that literature searches to identify studies for inclusion in systematic reviews be systematic, explicit, and reproducible extends, at least by implication, to other types of literature review. However, realist reviews commonly require literature searches that challenge systematic reporting; searches are iterative and involve multiple search strategies and approaches. Notwithstanding these challenges, reporting of the “realist search” can be structured to be transparent and to facilitate identification of innovative retrieval practices. Our six‐component search framework consolidates and extends the structure advanced by Pawson, one of the originators of realist review: formulating the question, conducting the background search, searching for program theory, searching for empirical studies, searching to refine program theory and identifying relevant mid‐range theory, and documenting and reporting the search process. This study reviews reports of search methods in 34 realist reviews published within the calendar year of 2016. Data from all eligible reviews were extracted using the search framework. Realist search reports poorly differentiate between the different search components. Review teams often conduct a single “big bang” multipurpose search to fulfill multiple functions within the review. However, it is acknowledged that realist searches are likely to be iterative and responsive to emergent data. Overall, the search for empirical studies appears most comprehensive in conduct and reporting detail. In contrast, searches to identify and refine program theory are poorly conducted, if at all, and poorly reported. Use of this framework offers greater transparency in conduct and reporting while preserving flexibility and methodological innovation.

    Updated: 2019-12-17
  • Developing methods for the overarching synthesis of quantitative and qualitative evidence: The interweave synthesis approach
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-12-13
    Jo Thompson Coon; Ruth Gwernan‐Jones; Ruth Garside; Michael Nunns; Liz Shaw; G.J. Melendez‐Torres; Darren Moore

    The incorporation of evidence derived from multiple research designs into one single synthesis can enhance the utility of systematic reviews making them more worthwhile, useful, and insightful. Methodological guidance for mixed‐methods synthesis continues to emerge and evolve but broadly involves a sequential, parallel, or convergent approach according to the degree of independence between individual syntheses before they are combined.

    Updated: 2019-12-17
  • Examining overlap of included studies in meta‐reviews: Guidance for using the corrected covered area index
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-12-10
    Emily A. Hennessy, Blair T. Johnson

    Overlap in meta‐reviews arises when the same primary studies are included in multiple similar reviews. It is an important area for research synthesists because overlap indicates the degree to which reviews address the same or different literatures of primary research. Current guidelines to address overlap suggest that assessing and documenting the degree of overlap in primary studies, calculated via the corrected covered area (CCA), is a promising method. Yet the CCA is a simple percentage of overlap, and current guidelines do not detail ways that reviewers can use the CCA as a diagnostic tool while also comprehensively incorporating these findings into their conclusions. Furthermore, we maintain that meta‐review teams must address non‐independence via overlap more thoroughly than by simply estimating and reporting the CCA. Instead, we recommend and elaborate five steps to take when examining overlap, illustrating these steps through the use of an empirical example of primary study overlap in a recently conducted meta‐review. This work helps to show that overlap of primary studies included in a meta‐review is not necessarily a bias but often can be a benefit. We also highlight further areas of caution in this task and the potential for the development of new tools to address non‐independence issues.
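
    The CCA itself is a one-line calculation, shown below on a hypothetical inclusion matrix: N is the total number of inclusions of primary studies across reviews, r the number of unique primary studies, and c the number of reviews.

      # Illustrative sketch: corrected covered area, CCA = (N - r) / (r * c - r)
      inclusion = {                      # review -> set of included primary studies
          "Review1": {"s1", "s2", "s3", "s4"},
          "Review2": {"s2", "s3", "s5"},
          "Review3": {"s1", "s3", "s6", "s7"},
      }

      N = sum(len(studies) for studies in inclusion.values())
      r = len(set().union(*inclusion.values()))
      c = len(inclusion)
      cca = (N - r) / (r * c - r)
      print(f"N={N}, r={r}, c={c}, CCA={cca:.1%}")   # values above ~15% are often read as very high overlap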

    Updated: 2019-12-11
  • A novel approach for identifying and addressing case‐mix heterogeneity in individual participant data meta‐analysis
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-12-02
    Tat‐Thang Vo, Raphael Porcher, Anna Chaimani, Stijn Vansteelandt

    Case‐mix heterogeneity across studies complicates meta‐analyses. As a result, treatments that are equally effective on patient subgroups may appear to have different effectiveness in patient populations with different case mix. It is therefore important that meta‐analyses be explicit about the patient population for which they describe the treatment effect. To achieve this, we develop a new approach for meta‐analysis of randomized clinical trials, which uses individual patient data (IPD) from all trials to infer the treatment effect for the patient population in a given trial, based on direct standardization using either outcome regression (OCR) or inverse probability weighting (IPW). Accompanying random‐effects meta‐analysis models are developed. The new approach enables disentangling heterogeneity due to case mix from heterogeneity due to reasons beyond case mix.

    Updated: 2019-12-03
  • A comparison of Bayesian synthesis approaches for studies comparing two means: A tutorial
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-11-29
    Han Du, Thomas N. Bradbury, Justin A. Lavner, Andrea L. Meltzer, James K. McNulty, Lisa A. Neff, Benjamin R. Karney

    Researchers often seek to synthesize results of multiple studies on the same topic to draw statistical or substantive conclusions and to estimate effect sizes that will inform power analyses for future research. The most popular synthesis approach is meta‐analysis. There have been few discussions and applications of other synthesis approaches. This tutorial illustrates and compares multiple Bayesian synthesis approaches (i.e., integrative data analyses, meta‐analyses, data fusion using augmented data‐dependent priors, and data fusion using aggregated data‐dependent priors) and discusses when and how to use these Bayesian synthesis approaches to combine studies that compare two independent group means or two matched group means. For each approach, fixed‐, random‐, and mixed‐effects models with other variants are illustrated with real data. R code is provided to facilitate the implementation of each method and each model. On the basis of these analyses, we summarize the strengths and limitations of each approach and provide recommendations to guide future synthesis efforts.
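
    As a minimal Bayesian synthesis example in the spirit of the tutorial (which provides its own, far more complete R code), the sketch below pools mean differences from several two-group studies under a conjugate normal fixed-effect model with a weakly informative prior. Data and prior are hypothetical.

      import numpy as np

      md = np.array([0.30, 0.10, 0.45, 0.25])    # study mean differences
      se = np.array([0.15, 0.20, 0.25, 0.10])    # their standard errors
      prior_mean, prior_sd = 0.0, 1.0            # normal prior on the overall mean difference

      post_prec = 1.0 / prior_sd**2 + np.sum(1.0 / se**2)
      post_mean = (prior_mean / prior_sd**2 + np.sum(md / se**2)) / post_prec
      post_sd = np.sqrt(1.0 / post_prec)
      print(f"posterior mean difference: {post_mean:.3f} (SD {post_sd:.3f})")
      print(f"95% credible interval: ({post_mean - 1.96 * post_sd:.3f}, {post_mean + 1.96 * post_sd:.3f})")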

    Updated: 2019-11-30
  • Individual participant data meta‐analysis of intervention studies with time‐to‐event outcomes: A review of the methodology and an applied example
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-11-23
    Valentijn M.T. de Jong, Karel G.M. Moons, Richard D. Riley, Catrin Tudur Smith, Anthony G. Marson, Marinus J.C. Eijkemans, Thomas P.A. Debray

    Many randomized trials evaluate an intervention effect on time‐to‐event outcomes. Individual participant data (IPD) from such trials can be obtained and combined in a so‐called IPD meta‐analysis (IPD‐MA), to summarize the overall intervention effect. We performed a narrative literature review to provide an overview of methods for conducting an IPD‐MA of randomized intervention studies with a time‐to‐event outcome. We focused on identifying good methodological practice for modeling frailty of trial participants across trials, modeling heterogeneity of intervention effects, choosing appropriate association measures, dealing with (trial differences in) censoring and follow‐up times, and addressing time‐varying intervention effects and effect modification (interactions).

    Updated: 2019-11-26
  • Difficulties arising in reimbursement recommendations on new medicines due to inadequate reporting of population adjustment indirect comparison methods
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-07-29
    Eileen M. Holmes, Joy Leahy, Cathal D. Walsh, Arthur White, Peter T. Donnan, Felicity Lamrock

    Indirect treatment comparisons are useful to estimate relative treatment effects when head‐to‐head studies are not conducted. Statisticians at the National Centre for Pharmacoeconomics Ireland (NCPE) and Scottish Medicines Consortium (SMC) assess the clinical and cost‐effectiveness of new medicines as part of multidisciplinary teams. We describe some shared observations on areas where reporting of population‐adjustment indirect comparison methods is causing uncertainty in our recommendations to decision‐making committees when assessing reimbursement of medicines.

    Updated: 2019-11-18
  • The value of a second reviewer for study selection in systematic reviews
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-07-18
    Carolyn R.T. Stoll, Sonya Izadi, Susan Fowler, Paige Green, Jerry Suls, Graham A. Colditz

    Although dual independent review of search results by two reviewers is generally recommended for systematic reviews, there are no consistent recommendations regarding the timing of the use of the second reviewer. This study compared a complete dual review approach, with two reviewers in both the title/abstract screening stage and the full‐text screening stage, with a limited dual review approach, with two reviewers only in the full‐text stage.

    Updated: 2019-11-18
  • Conduct and reporting of citation searching in Cochrane systematic reviews: A cross‐sectional study
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-07-04
    Simon Briscoe, Alison Bethel, Morwenna Rogers

    The search for studies for a systematic review should be conducted systematically and reported transparently to facilitate reproduction. This study aimed to report on the conduct and reporting of backward citation searching (ie, checking reference lists) and forward citation searching in a cross section of Cochrane reviews. Citation searching uses the citation network surrounding a source study to identify additional studies.

    Updated: 2019-11-18
  • Dealing with missing outcome data in meta‐analysis
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-06-09
    Dimitris Mavridis, Ian R. White

    Missing data result in less precise and possibly biased effect estimates in single studies. Bias arising from studies with incomplete outcome data is naturally propagated in a meta‐analysis. Conventional analysis using only individuals with available data is adequate when the meta‐analyst can be confident that the data are missing at random (MAR) in every study—that is, that the probability of missing data does not depend on unobserved variables, conditional on observed variables. Usually, such confidence is unjustified as participants may drop out due to lack of improvement or adverse effects. The MAR assumption cannot be tested, and a sensitivity analysis to assess how robust results are to reasonable deviations from the MAR assumption is important. Two methods may be used based on plausible alternative assumptions about the missing data. Firstly, the distribution of reasons for missing data may be used to impute the missing values. Secondly, the analyst may specify the magnitude and uncertainty of possible departures from the missing at random assumption, and these may be used to correct bias and reweight the studies. This is achieved by employing a pattern mixture model and describing how the outcome in the missing participants is related to the outcome in the completers. Ideally, this relationship is informed using expert opinion. The methods are illustrated in two examples with binary and continuous outcomes. We provide recommendations on what trial investigators and systematic reviewers should do to minimize the problem of missing outcome data in meta‐analysis.
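
    The pattern-mixture idea can be illustrated with a deliberately simplified sketch: the odds of the event among dropouts are assumed to be IMOR times the odds among completers, and the treatment effect of a single hypothetical trial is recomputed over a grid of IMOR values. The full approach described above additionally propagates the extra uncertainty and reweights studies in the meta-analysis.

      import numpy as np

      def adjusted_risk(events, observed, missing, imor):
          p_obs = events / observed
          odds_miss = imor * p_obs / (1 - p_obs)       # informative missingness odds ratio
          p_miss = odds_miss / (1 + odds_miss)
          return (observed * p_obs + missing * p_miss) / (observed + missing)

      # treatment arm: 30/100 events, 20 missing; control arm: 45/100 events, 10 missing
      for imor in [0.5, 1.0, 2.0]:                     # IMOR = 1 corresponds to missing at random
          p_t = adjusted_risk(30, 100, 20, imor)
          p_c = adjusted_risk(45, 100, 10, imor)
          log_or = np.log(p_t / (1 - p_t)) - np.log(p_c / (1 - p_c))
          print(f"IMOR={imor:.1f}: adjusted log OR = {log_or:.3f}")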

    Updated: 2019-11-18
  • Meta‐analysis and Mendelian randomization: A review
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-04-23
    Jack Bowden, Michael V. Holmes

    Mendelian randomization (MR) uses genetic variants as instrumental variables to infer whether a risk factor causally affects a health outcome. Meta‐analysis has been used historically in MR to combine results from separate epidemiological studies, with each study using a small but select group of genetic variants. In recent years, it has been used to combine genome‐wide association study (GWAS) summary data for large numbers of genetic variants. Heterogeneity among the causal estimates obtained from multiple genetic variants points to a possible violation of the necessary instrumental variable assumptions. In this article, we provide a basic introduction to MR and the instrumental variable theory that it relies upon. We then describe how random effects models, meta‐regression, and robust regression are being used to test and adjust for heterogeneity in order to improve the rigor of the MR approach.
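
    The basic inverse-variance weighted (IVW) combination of variant-specific estimates, together with Cochran's Q as the heterogeneity check mentioned above, looks as follows on hypothetical summary data (first-order standard errors, exposure uncertainty ignored).

      import numpy as np
      from scipy import stats

      beta_x = np.array([0.12, 0.08, 0.15, 0.05, 0.10])       # variant-exposure associations
      beta_y = np.array([0.030, 0.018, 0.045, 0.020, 0.024])  # variant-outcome associations
      se_y = np.array([0.010, 0.012, 0.015, 0.011, 0.009])    # SEs of the outcome associations

      ratio = beta_y / beta_x                # per-variant causal estimates (Wald ratios)
      se_ratio = se_y / np.abs(beta_x)       # first-order standard errors
      w = 1.0 / se_ratio**2

      ivw = np.sum(w * ratio) / np.sum(w)
      se_ivw = np.sqrt(1.0 / np.sum(w))
      q = np.sum(w * (ratio - ivw) ** 2)     # large Q suggests pleiotropy / invalid instruments
      p_het = stats.chi2.sf(q, df=len(ratio) - 1)
      print(f"IVW estimate {ivw:.3f} (SE {se_ivw:.3f}), Q = {q:.2f}, het. p = {p_het:.3f}")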

    Updated: 2019-11-18
  • Joint synthesis of conditionally related multiple outcomes makes better use of data than separate meta‐analyses
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-11-10
    Sumayya Anwer, A.E. Ades, Sofia Dias

    When there are structural relationships between outcomes reported in different trials, separate analyses of each outcome do not provide a single coherent analysis, which is required for decision‐making. For example, trials of intrapartum anti‐bacterial prophylaxis (IAP) to prevent early onset group B streptococcal (EOGBS) disease can report three treatment effects: the effect on bacterial colonisation of the newborn, the effect on EOGBS, and the effect on EOGBS conditional on newborn colonisation. These outcomes are conditionally related, or nested, in a multi‐state model.

    Updated: 2019-11-11
  • Pathway‐based meta‐analysis for partially paired transcriptomics analysis
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-11-10
    Wing Tung Fung, Joseph T. Wu, Wai Man Mandy Chan, Henry H. Chan, Herbert Pang

    Pathway‐based differential expression analysis allows the incorporation of biological domain knowledge into transcriptomics analysis to enhance our understanding of disease mechanisms. To integrate information among multiple studies at the pathway level, pathway‐based meta‐analysis can be performed. Paired or partially paired samples are common in biomedical research. However, there are currently no existing pathway‐based meta‐analysis methods appropriate for paired or partially paired study designs.

    Updated: 2019-11-11
  • Systematic reviews in health research.
    Res. Synth. Methods (IF 5.043) Pub Date : 2019-08-29
    Emily E Tanner-Smith, Matthias Egger, Julian Higgins

    Updated: 2019-11-01
  • Applications of text mining within systematic reviews.
    Res. Synth. Methods (IF 5.043) Pub Date : 2011-03-01
    James Thomas, John McNaught, Sophia Ananiadou

    Systematic reviews are a widely accepted research method. However, it is increasingly difficult to conduct them to fit with policy and practice timescales, particularly in areas which do not have well indexed, comprehensive bibliographic databases. Text mining technologies offer one possible way forward in reducing the amount of time systematic reviews take to conduct. They can facilitate the identification of relevant literature, its rapid description or categorization, and its summarization. In this paper, we describe the application of four text mining technologies, namely, automatic term recognition, document clustering, classification and summarization, which support the identification of relevant studies in systematic reviews. The contributions of text mining technologies to improve reviewing efficiency are considered and their strengths and weaknesses explored. We conclude that these technologies do have the potential to assist at various stages of the review process. However, they are relatively unknown in the systematic reviewing community, and substantial evaluation and methods development are required before their possible impact can be fully assessed. Copyright © 2011 John Wiley & Sons, Ltd.

    Updated: 2019-11-01
  • State of the art, state of the science?
    Res. Synth. Methods (IF 5.043) Pub Date : 2018-09-25
    Andrew Booth

    Updated: 2019-11-01
  • Innovation in information retrieval methods for evidence synthesis studies.
    Res. Synth. Methods (IF 5.043) Pub Date : 2018-09-04
    Suzy Paisley, Margaret J Foster

    Updated: 2019-11-01
  • GetReal in mathematical modelling: a review of studies predicting drug effectiveness in the real world.
    Res. Synth. Methods (IF 5.043) Pub Date : 2016-08-17
    Klea Panayidou, Sandro Gsteiger, Matthias Egger, Gablu Kilcher, Máximo Carreras, Orestis Efthimiou, Thomas P A Debray, Sven Trelle, Noemi Hummel,

    The performance of a drug in a clinical trial setting often does not reflect its effect in daily clinical practice. In this third of three reviews, we examine the applications that have been used in the literature to predict real-world effectiveness from randomized controlled trial efficacy data. We searched MEDLINE, EMBASE from inception to March 2014, the Cochrane Methodology Register, and websites of key journals and organisations and reference lists. We extracted data on the type of model and predictions, data sources, validation and sensitivity analyses, disease area and software. We identified 12 articles in which four approaches were used: multi-state models, discrete event simulation models, physiology-based models and survival and generalized linear models. Studies predicted outcomes over longer time periods in different patient populations, including patients with lower levels of adherence or persistence to treatment or examined doses not tested in trials. Eight studies included individual patient data. Seven examined cardiovascular and metabolic diseases and three neurological conditions. Most studies included sensitivity analyses, but external validation was performed in only three studies. We conclude that mathematical modelling to predict real-world effectiveness of drug interventions is not widely used at present and not well validated. © 2016 The Authors Research Synthesis Methods Published by John Wiley & Sons Ltd.

    Updated: 2019-11-01
  • New models for describing outliers in meta-analysis.
    Res. Synth. Methods (IF 5.043) Pub Date : 2015-11-28
    Rose Baker, Dan Jackson

    An unobserved random effect is often used to describe the between-study variation that is apparent in meta-analysis datasets. A normally distributed random effect is conventionally used for this purpose. When outliers or other unusual estimates are included in the analysis, the use of alternative random effect distributions has previously been proposed. Instead of adopting the usual hierarchical approach to modelling between-study variation, and so directly modelling the study-specific true underlying effects, we propose two new marginal distributions for modelling heterogeneous datasets. These two distributions are suggested because numerical integration is not needed to evaluate the likelihood. This makes the computation required when fitting our models much more robust. The properties of the new distributions are described, and the methodology is exemplified by fitting models to four datasets. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.

    Updated: 2019-11-01
  • Depicting estimates using the intercept in meta-regression models: The moving constant technique.
    Res. Synth. Methods (IF 5.043) Pub Date : 2011-09-01
    Blair T Johnson, Tania B Huedo-Medina

    In any scientific discipline, the ability to portray research patterns graphically often aids greatly in interpreting a phenomenon. In part to depict phenomena, the statistics and capabilities of meta-analytic models have grown increasingly sophisticated. Accordingly, this article details how to move the constant in weighted meta-analysis regression models (viz. "meta-regression") to illuminate the patterns in such models across a range of complexities. Although it is commonly ignored in practice, the constant (or intercept) in such models can be indispensable when it is not relegated to its usual static role. The moving constant technique makes possible estimates and confidence intervals at moderator levels of interest as well as continuous confidence bands around the meta-regression line itself. Such estimates, in turn, can be highly informative to interpret the nature of the phenomenon being studied in the meta-analysis, especially when a comparison with an absolute or a practical criterion is the goal. Knowing the point at which effect size estimates reach statistical significance or other practical criteria of effect size magnitude can be quite important. Examples ranging from simple to complex models illustrate these principles. Limitations and extensions of the strategy are discussed. Copyright © 2011 John Wiley & Sons, Ltd.
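
    The mechanics of the moving constant technique reduce to re-centring the moderator, as in the sketch below: after centring at a value x0, the intercept and its standard error give the predicted effect size and confidence interval at x0. Hypothetical data; a fixed-effect weighted regression is used for brevity.

      import numpy as np
      from scipy import stats

      yi = np.array([0.10, 0.25, 0.40, 0.55, 0.35])   # effect sizes
      vi = np.array([0.02, 0.03, 0.02, 0.04, 0.03])   # sampling variances
      xi = np.array([10.0, 20.0, 30.0, 40.0, 25.0])   # moderator (e.g., mean age)
      W = np.diag(1.0 / vi)

      def effect_at(x0):
          X = np.column_stack([np.ones_like(xi), xi - x0])   # "move" the constant to x0
          cov = np.linalg.inv(X.T @ W @ X)
          beta = cov @ X.T @ W @ yi
          z = stats.norm.ppf(0.975)
          se0 = np.sqrt(cov[0, 0])
          return beta[0], beta[0] - z * se0, beta[0] + z * se0

      for x0 in [15, 25, 35]:
          est, lo, hi = effect_at(x0)
          print(f"x0={x0}: estimate={est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")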

    Updated: 2019-11-01
  • A Pocock approach to sequential meta-analysis of clinical trials.
    Res. Synth. Methods (IF 5.043) Pub Date : 2013-12-19
    Jonathan J Shuster, Josef Neu

    Three recent papers have provided sequential methods for meta-analysis of two-treatment randomized clinical trials. This paper provides an alternate approach that has three desirable features. First, when carried out prospectively (i.e., we only have the results up to the time of our current analysis), we do not require knowledge of the information fraction (the fraction of the total information that is available at each analysis). Second, the methods work even if the expected values of the effect sizes vary from study to study. Finally, our methods have easily interpretable metrics that make sense under changing effect sizes. Although the other published methods can be adapted to be “group sequential” (recommended), meaning that a set number and timing of looks are specified, rather than looking after every trial, ours can be used in both a continuous or group sequential manner. We provide an example on the role of probiotics in preventing necrotizing enterocolitis in preterm infants.

    Updated: 2019-11-01
  • Meta-analysis of safety for low event-rate binomial trials.
    Res. Synth. Methods (IF 5.043) Pub Date : 2012-03-01
    Jonathan J Shuster, Jennifer D Guo, Jay S Skyler

    This article focuses on meta-analysis of low event-rate binomial trials. We introduce two forms of random effects: (1) 'studies at random' (SR), where we assume no more than independence between studies; and (2) 'effects at random' (ER), which forces the effect size distribution to be independent of the study design. On the basis of the summary estimates of proportions, we present both unweighted and study-size weighted methods, which, under SR, target different population parameters. We demonstrate mechanistically that the popular DerSimonian-Laird (DL) method, as DL actually warned in their paper, should never be used in this setting. We conducted a survey of the major cardiovascular literature on low event-rate studies and found DL using odds ratios or relative risks to be the clear method of choice. We looked at two high profile examples from diabetes and cancer, respectively, where the choice of weighted versus unweighted methods makes a large difference. A large simulation study supports the accuracy of the coverage of our approximate confidence intervals. We recommend that before looking at their data, users should prespecify which target parameter they intend to estimate (weighted vs. unweighted) but estimate the other as a secondary analysis. Copyright © 2012 John Wiley & Sons, Ltd.

    Updated: 2019-11-01
  • The effect direction plot: visual display of non-standardised effects across multiple outcome domains.
    Res. Synth. Methods (IF 5.043) Pub Date : 2013-06-25
    Hilary J Thomson, Sian Thomas

    Visual display of reported impacts is a valuable aid to both reviewers and readers of systematic reviews. Forest plots are routinely prepared to report standardised effect sizes, but where standardised effect sizes are not available for all included studies a forest plot may misrepresent the available evidence. Tabulated data summaries to accompany the narrative synthesis can be lengthy and inaccessible. Moreover, the link between the data and the synthesis conclusions may be opaque. This paper details the preparation of visual summaries of effect direction for multiple outcomes across 29 quantitative studies of the health impacts of housing improvement. A one page summary of reported health outcomes was prepared to accompany a 10 000-word narrative synthesis. The one page summary included details of study design, internal validity, sample size, time of follow-up, as well as changes in intermediate outcomes, for example, housing condition. This approach to visually summarising complex data can aid the reviewer in cross-study analysis and improve accessibility and transparency of the narrative synthesis where standardised effect sizes are not available. Copyright © 2012 John Wiley & Sons, Ltd.

    Updated: 2019-11-01
  • Influence diagnostics in meta-regression model.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-07-19
    Lei Shi, ShanShan Zuo, Dalei Yu, Xiaohua Zhou

    This paper studies influence diagnostics in the meta-regression model, including case-deletion diagnostics and local influence analysis. We derive the subset deletion formulae for the estimation of the regression coefficients and the heterogeneity variance and obtain the corresponding influence measures. Both the DerSimonian and Laird estimation method and the maximum likelihood estimation method in meta-regression are considered in deriving the results. Internal and external residuals and leverage measures are defined. Local influence analyses based on the case-weight, response, covariate, and within-variance perturbation schemes are explored. We introduce a method that simultaneously perturbs the responses, covariates, and within-study variances to obtain a local influence measure, which has the advantage of allowing the influence magnitude of influential studies to be compared across different perturbations. An example is used to illustrate the proposed methodology.

    Updated: 2019-11-01
  • Diagnostics for generalized linear hierarchical models in network meta-analysis.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-07-07
    Hong Zhao, James S Hodges, Bradley P Carlin

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data.

    Updated: 2019-11-01
  • An exploration of crowdsourcing citation screening for systematic reviews.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-07-06
    Michael L Mortensen, Gaelen P Adam, Thomas A Trikalinos, Tim Kraska, Byron C Wallace

    Systematic reviews are increasingly used to inform health care decisions, but are expensive to produce. We explore the use of crowdsourcing (distributing tasks to untrained workers via the web) to reduce the cost of screening citations. We used Amazon Mechanical Turk as our platform and 4 previously conducted systematic reviews as examples. For each citation, workers answered 4 or 5 questions that were equivalent to the eligibility criteria. We aggregated responses from multiple workers into an overall decision to include or exclude the citation using 1 of 9 algorithms and compared the performance of these algorithms to the corresponding decisions of trained experts. The most inclusive algorithm (designating a citation as relevant if any worker did) identified 95% to 99% of the citations that were ultimately included in the reviews while excluding 68% to 82% of irrelevant citations. Other algorithms increased the fraction of irrelevant articles excluded at some cost to the inclusion of relevant studies. Crowdworkers completed screening in 4 to 17 days, costing $460 to $2220, a cost reduction of up to 88% compared to trained experts. Crowdsourcing may represent a useful approach to reducing the cost of identifying literature for systematic reviews.
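
    The aggregation step can be as simple as the sketch below, which contrasts the most inclusive rule reported above ("include if any worker includes") with a majority vote on hypothetical worker decisions.

      votes = {                          # citation id -> list of worker include/exclude decisions
          "c1": [True, False, False],
          "c2": [True, True, False],
          "c3": [False, False, False],
      }

      for cid, v in votes.items():
          any_rule = any(v)                       # maximises sensitivity, excludes fewer irrelevant records
          majority_rule = sum(v) > len(v) / 2     # stricter: trades some sensitivity for precision
          print(f"{cid}: any-include={any_rule}, majority={majority_rule}")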

    Updated: 2019-11-01
  • Text mining for search term development in systematic reviewing: A discussion of some methods and challenges.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-07-01
    Claire Stansfield, Alison O'Mara-Eves, James Thomas

    Using text mining to aid the development of database search strings for topics described by diverse terminology has potential benefits for systematic reviews; however, methods and tools for accomplishing this are poorly covered in the research methods literature. We briefly review the literature on applications of text mining for search term development for systematic reviewing. We found that the tools can be used in 5 overarching ways: improving the precision of searches; identifying search terms to improve search sensitivity; aiding the translation of search strategies across databases; searching and screening within an integrated system; and developing objectively derived search strategies. Using a case study and selected examples, we then reflect on the utility of certain technologies (term frequency-inverse document frequency and Termine, term frequency, and clustering) in improving the precision and sensitivity of searches. Challenges in using these tools are discussed. The utility of these tools is influenced by the different capabilities of the tools, the way the tools are used, and the text that is analysed. Increased awareness of how the tools perform facilitates the further development of methods for their use in systematic reviews.
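
    A bare-bones version of the term frequency-inverse document frequency idea discussed above is sketched below: candidate search terms are ranked by TF-IDF across a handful of known relevant records. Real tools add term recognition, phrase detection, stemming, and stop-word handling; the records here are invented.

      import math
      from collections import Counter

      records = [
          "exercise therapy for chronic low back pain in adults",
          "effect of aerobic exercise on chronic pain and disability",
          "manual therapy and exercise for back pain a randomised trial",
      ]

      docs = [r.lower().split() for r in records]
      doc_freq = Counter(term for d in docs for term in set(d))

      scores = Counter()
      for d in docs:
          tf = Counter(d)
          for term, count in tf.items():
              idf = math.log(len(docs) / doc_freq[term]) + 1.0   # +1 keeps ubiquitous terms visible
              scores[term] += (count / len(d)) * idf

      for term, score in scores.most_common(8):
          print(f"{term:12s} {score:.3f}")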

    Updated: 2019-11-01
  • Authors' response to letter to the editor.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-06-21
    Michael Borenstein, Julian P T Higgins, Larry V Hedges, Hannah R Rothstein

    Updated: 2019-11-01
  • Practical challenges of I2 as a measure of heterogeneity.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-06-21
    David C Hoaglin

    Updated: 2019-11-01
  • Meta-analysis combining parallel and crossover trials using generalised estimating equation method.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-06-07
    François Curtin

    Clinical trials have different designs: in late-stage drug development the parallel trial design is the most frequent one, but the crossover design is not rare, and different techniques are used to analyse their results. Although both designs measure the same treatment effect, combining parallel and crossover trials in a meta-analysis is not straightforward. We present here a meta-analysis method based on generalised estimating equation (GEE) regression to combine aggregated results of crossover and parallel trials using a marginal estimation approach. This method is based on the fixed-effects meta-analytic model; it allows combining average outcomes belonging to the exponential family of distributions obtained from trials of different designs, and in particular from crossover trials with more than 2 periods and 2 treatments. By extending the methods proposed so far to combine the 2 trial designs, the GEE regression allows adjusting for bias, such as the carry-over effect typical of crossover trials. In this paper, the GEE meta-analysis method is compared to the classical weighted average method with examples of published and simulated meta-analyses. Although the GEE method can account for crossover specificities, it is limited by the lack of detailed trial information often encountered in reports of these trials.

    Updated: 2019-11-01
  • Evaluating the accuracy and economic value of a new test in the absence of a perfect reference test.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-05-26
    Xuanqian Xie, Alison Sinclair, Nandini Dendukuri

    BACKGROUND Streptococcus pneumoniae (SP) pneumonia is often treated empirically as diagnosis is challenging because of the lack of a perfect test. Using BinaxNOW-SP, a urinary antigen test, as an add-on to standard cultures may not only increase diagnostic yield but also increase costs. OBJECTIVE To estimate the sensitivity and specificity of BinaxNOW-SP and subsequently estimate the cost-effectiveness of adding BinaxNOW-SP to the diagnostic work-up. DESIGN We fit a Bayesian latent-class meta-analysis model to obtain estimates of BinaxNOW-SP accuracy that adjust for the imperfect accuracy of culture. Meta-analysis results were combined with information on prevalence of SP pneumonia to estimate the number of patients who are correctly classified under competing diagnostic strategies. Taking into consideration the cost of antibiotics, we determined the incremental cost of adding BinaxNOW-SP to the work-up per case correctly diagnosed. RESULTS The BinaxNOW-SP test had a pooled sensitivity of 0.74 (95% credible interval [CrI], 0.67-0.83) and a pooled specificity of 0.96 (95% CrI, 0.92-0.99). An overall increase in diagnostic accuracy of 6.2% due to the addition of BinaxNOW-SP corresponded to an incremental cost per case correctly classified of $582 Canadian dollars. CONCLUSIONS The methods we have described allow us to evaluate the accuracy and economic value of a new test in the absence of a perfect reference test using an evidence-based approach.

    Updated: 2019-11-01
  • The albatross plot: A novel graphical tool for presenting results of diversely reported studies in a systematic review.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-04-30
    Sean Harrison, Hayley E Jones, Richard M Martin, Sarah J Lewis, Julian P T Higgins

    Meta-analyses combine the results of multiple studies of a common question. Approaches based on effect size estimates from each study are generally regarded as the most informative. However, these methods can only be used if comparable effect sizes can be computed from each study, and this may not be the case due to variation in how the studies were done or limitations in how their results were reported. Other methods, such as vote counting, are then used to summarize the results of these studies, but most of these methods are limited in that they do not provide any indication of the magnitude of effect. We propose a novel plot, the albatross plot, which requires only a 1-sided P value and a total sample size from each study (or equivalently a 2-sided P value, direction of effect and total sample size). The plot allows an approximate examination of underlying effect sizes and the potential to identify sources of heterogeneity across studies. This is achieved by drawing contours showing the range of effect sizes that might lead to each P value for given sample sizes, under simple study designs. We provide examples of albatross plots using data from previous meta-analyses, allowing for comparison of results, and an example from when a meta-analysis was not possible.
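
    The contour logic behind the plot can be sketched as follows: under a simple two-arm design with equal groups, the standard error of a standardised mean difference is roughly 2/sqrt(n), so a given two-sided P value and total sample size imply an approximate effect size. Study data here are hypothetical.

      import numpy as np
      from scipy import stats

      def implied_effect(p_two_sided, n_total):
          z = stats.norm.isf(p_two_sided / 2)
          return z * 2.0 / np.sqrt(n_total)     # |d| consistent with that P value and sample size

      studies = [(0.04, 120), (0.30, 60), (0.001, 400)]   # (two-sided P, total n)
      for p, n in studies:
          print(f"P={p:<6} n={n:<4} implied |d| ~ {implied_effect(p, n):.2f}")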

    Updated: 2019-11-01
  • Meta-analysis combining parallel and cross-over trials with random effects.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-04-22
    François Curtin

    Meta-analysis can necessitate the combination of parallel and cross-over trial designs. Because of the differences in the trial designs and potential biases notably associated with the crossover trials, one often combines trials of the same design only, which decreases the power of the meta-analysis. To combine results of clinical trials from parallel and cross-over designs, we extend the method proposed in an accompanying study to account for random effects. We propose here a hierarchical mixed model allowing the combination of the 2 types of trial designs and accounting for additional covariates, where random effects can be introduced to account for heterogeneity in trial, treatment effect, and interactions. We introduce a multilevel model and a Bayesian hierarchical model for combined trial design meta-analysis. The analysis of the models by restricted iterative generalised least squares and Markov chain Monte Carlo is presented. Methods are compared in a combined design meta-analysis model on salt reduction. Both models and their respective advantages in the perspective of meta-analysis are discussed. However, the access to the trial data, in particular sequence and period data in cross-over trials, remains a major limitation to the meta-analytic combination of trial designs.

    Updated: 2019-11-01
  • Using the realist perspective to link theory from qualitative evidence synthesis to quantitative studies: Broadening the matrix approach.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-04-22
    Leonie van Grootel, Floryt van Wesel, Alison O'Mara-Eves, James Thomas, Joop Hox, Hennie Boeije

    BACKGROUND This study describes an approach for the use of a specific type of qualitative evidence synthesis in the matrix approach, a mixed studies reviewing method. The matrix approach compares quantitative and qualitative data on the review level by juxtaposing concrete recommendations from the qualitative evidence synthesis against interventions in primary quantitative studies. However, types of qualitative evidence syntheses that are associated with theory building generate theoretical models instead of recommendations. Therefore, the output from these types of qualitative evidence syntheses cannot directly be used for the matrix approach but requires transformation. This approach allows for the transformation of these types of output. METHOD The approach enables the inference of moderation effects instead of direct effects from the theoretical model developed in a qualitative evidence synthesis. Recommendations for practice are formulated on the basis of interactional relations inferred from the qualitative evidence synthesis. In doing so, we apply the realist perspective to model variables from the qualitative evidence synthesis according to the context-mechanism-outcome configuration. FINDINGS A worked example shows that it is possible to identify recommendations from a theory-building qualitative evidence synthesis using the realist perspective. We created subsets of the interventions from primary quantitative studies based on whether they matched the recommendations or not and compared the weighted mean effect sizes of the subsets. The comparison shows a slight difference in effect sizes between the groups of studies. The study concludes that the approach enhances the applicability of the matrix approach.

    Updated: 2019-11-01
  • Power analysis for random-effects meta-analysis.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-04-06
    Dan Jackson, Rebecca Turner

    One of the reasons for the popularity of meta-analysis is the notion that these analyses will possess more power to detect effects than individual studies. This is inevitably the case under a fixed-effect model. However, the inclusion of the between-study variance in the random-effects model, and the need to estimate this parameter, can have unfortunate implications for this power. We develop methods for assessing the power of random-effects meta-analyses, and the average power of the individual studies that contribute to meta-analyses, so that these powers can be compared. In addition to deriving new analytical results and methods, we apply our methods to 1,991 meta-analyses taken from the Cochrane Database of Systematic Reviews to retrospectively calculate their powers. We find that, in practice, 5 or more studies are needed to reasonably consistently achieve powers from random-effects meta-analyses that are greater than the studies that contribute to them. Not only is statistical inference under the random-effects model challenging when there are very few studies, but it is also less worthwhile in such cases. The assumption that meta-analysis will result in an increase in power is challenged by our findings.
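
    The power calculation referred to above has a closed form once the between-study variance is treated as known, as in this sketch (hypothetical sampling variances; two-sided alpha of 0.05; the cost of actually estimating tau^2, which the paper emphasises, is ignored here).

      import numpy as np
      from scipy import stats

      def re_meta_power(theta, vi, tau2, alpha=0.05):
          se = np.sqrt(1.0 / np.sum(1.0 / (np.asarray(vi) + tau2)))
          z_crit = stats.norm.isf(alpha / 2)
          lam = theta / se
          return stats.norm.sf(z_crit - lam) + stats.norm.cdf(-z_crit - lam)

      vi = [0.05, 0.08, 0.04, 0.10, 0.06]          # within-study sampling variances
      for tau2 in [0.0, 0.05, 0.15]:
          print(f"tau^2 = {tau2:.2f}: power = {re_meta_power(0.3, vi, tau2):.2f}")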

    Updated: 2019-11-01
  • Can abstract screening workload be reduced using text mining? User experiences of the tool Rayyan.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-04-05
    Hanna Olofsson, Agneta Brolund, Christel Hellberg, Rebecca Silverstein, Karin Stenström, Marie Österberg, Jessica Dagerhamn

    BACKGROUND One time-consuming aspect of conducting systematic reviews is the task of sifting through abstracts to identify relevant studies. One promising approach for reducing this burden uses text mining technology to identify those abstracts that are potentially most relevant for a project, allowing those abstracts to be screened first. OBJECTIVES To examine the effectiveness of the text mining functionality of the abstract screening tool Rayyan. User experiences were collected. METHODS Rayyan was used to screen abstracts for 6 reviews in 2015. After screening 25%, 50%, and 75% of the abstracts, the screeners logged the relevant references identified. A survey was sent to users. RESULTS After screening half of the search results with Rayyan, 86% to 99% of the references deemed relevant to the study were identified. Of those studies included in the final reports, 96% to 100% were already identified in the first half of the screening process. Users rated Rayyan 4.5 out of 5. DISCUSSION The text mining function in Rayyan successfully helped reviewers identify relevant studies early in the screening process.

    Updated: 2019-11-01
  • The loss of the NHS EED and DARE databases and the effect on evidence synthesis and evaluation.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-03-08
    Simon Briscoe, Chris Cooper, Julie Glanville, Carol Lefebvre

    Updated: 2019-11-01
  • Estimating data from figures with a Web-based program: Considerations for a systematic review.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-03-08
    Brittany U Burda, Elizabeth A O'Connor, Elizabeth M Webber, Nadia Redmond, Leslie A Perdue

    BACKGROUND Systematic reviewers often encounter incomplete or missing data, and the desired information may be difficult to obtain from a study author. Reviewers may therefore have to resort to estimating data from figures when little or no raw data appear in a study's text or tables. METHODS We discuss a case study in which participants used a publicly available Web-based program, called webplotdigitizer, to estimate data from 2 figures. We used the intraclass correlation coefficient and the accuracy of the estimates relative to the true data to inform considerations for using data estimated from figures in systematic reviews. RESULTS The estimates for both figures were consistent, although the estimates for the figure of a continuous outcome were slightly more dispersed. For the continuous outcome, the percent difference ranged from 0.23% to 30.35%, while the percent difference for the event rate ranged from 0.22% to 8.92%. For both figures, the intraclass correlation coefficient was excellent (>0.95). CONCLUSIONS When the information cannot be obtained from study authors, systematic reviewers should be transparent about estimating data from figures and should perform sensitivity analyses of the pooled results to reduce bias.
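    The consistency and accuracy checks described here are straightforward to reproduce. A minimal sketch, assuming the one-way ICC(1,1) of Shrout and Fleiss (the paper does not state which ICC variant was used) and invented data:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an n_targets x k_raters array
    of values extracted from a figure by several reviewers."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)         # between data points
    msw = ((x - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within data points
    return (msb - msw) / (msb + (k - 1) * msw)

def percent_difference(estimate, truth):
    """Absolute percent difference of an extracted value from the true value."""
    return abs(estimate - truth) / abs(truth) * 100

# Hypothetical example: 4 data points read off a figure by 3 reviewers,
# compared against the values reported in the study text.
true_vals = [12.0, 20.0, 31.0, 45.0]
extracted = [[11.8, 12.3, 12.1],
             [19.5, 20.4, 20.1],
             [30.6, 31.5, 31.2],
             [44.2, 45.9, 45.1]]
print(round(icc_oneway(extracted), 3))
print([round(percent_difference(e[0], t), 2) for e, t in zip(extracted, true_vals)])
```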

    Updated: 2019-11-01
  • Risk of bias in overviews of reviews: a scoping review of methodological guidance and four-item checklist.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-01-12
    Madeleine Ballard, Paul Montgomery

    OBJECTIVE To assess the conditions under which employing an overview of systematic reviews is likely to lead to a high risk of bias. STUDY DESIGN To synthesise existing guidance concerning overview practice, a scoping review was conducted. Four electronic databases were searched with a pre-specified strategy (PROSPERO 2015:CRD42015027592) ending October 2015. Included studies needed to describe or develop overview methodology. Data were narratively synthesised to delineate areas highlighted as outstanding challenges or where methodological recommendations conflict. RESULTS Twenty-four papers met the inclusion criteria. There is emerging debate regarding overlapping systematic reviews; systematic review scope; quality of included research; updating; and synthesising and reporting results. While three functions have been proposed for overviews (identify gaps, explore heterogeneity, summarise evidence), overviews cannot perform the first; are unlikely to achieve the second and third simultaneously; and can only perform the third under specific circumstances. Namely, when identified systematic reviews meet the following four conditions: (1) include primary trials that do not substantially overlap, (2) match overview scope, (3) are of high methodological quality, and (4) are up-to-date. CONCLUSION Considering the intended function of proposed overviews with the corresponding methodological conditions may improve the quality of this burgeoning publication type. Copyright © 2017 John Wiley & Sons, Ltd.

    Updated: 2019-11-01
  • Basics of meta-analysis: I² is not an absolute measure of heterogeneity.
    Res. Synth. Methods (IF 5.043) Pub Date : 2017-01-07
    Michael Borenstein, Julian P T Higgins, Larry V Hedges, Hannah R Rothstein

    When we speak about heterogeneity in a meta-analysis, our intent is usually to understand the substantive implications of the heterogeneity. If an intervention yields a mean effect size of 50 points, we want to know if the effect size in different populations varies from 40 to 60, or from 10 to 90, because this speaks to the potential utility of the intervention. While there is a common belief that the I² statistic provides this information, it actually does not. In this example, if we are told that I² is 50%, we have no way of knowing if the effects range from 40 to 60, or from 10 to 90, or across some other range. Rather, if we want to communicate the predicted range of effects, then we should simply report this range. This gives readers the information they think is being captured by I² and does so in a way that is concise and unambiguous. Copyright © 2017 John Wiley & Sons, Ltd.
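    A short numerical sketch of the point: two hypothetical meta-analyses can share the same mean effect and, plausibly, the same I² of about 50% (τ² comparable to the typical within-study variance in each), yet imply very different ranges of true effects. The approximate prediction interval below uses a normal rather than a t quantile purely to keep the sketch minimal; all numbers are invented:

```python
from math import sqrt
from statistics import NormalDist

def approx_prediction_interval(mu, se_mu, tau2, level=0.95):
    """Approximate range of true effects across populations:
    mu +/- z * sqrt(tau2 + se_mu^2). Higgins and colleagues use a
    t quantile with k-2 degrees of freedom; a normal quantile is
    used here only to keep the sketch short."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    half_width = z * sqrt(tau2 + se_mu ** 2)
    return mu - half_width, mu + half_width

# Same mean effect (50), very different heterogeneity in absolute terms:
print(approx_prediction_interval(mu=50, se_mu=1.0, tau2=25))   # roughly (40, 60)
print(approx_prediction_interval(mu=50, se_mu=2.0, tau2=412))  # roughly (10, 90)
```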

    Updated: 2019-11-01
  • Synthesis of linear regression coefficients by recovering the within-study covariance matrix from summary statistics.
    Res. Synth. Methods (IF 5.043) Pub Date : 2016-12-18
    Daisuke Yoneoka, Masayuki Henmi

    Recently, the number of regression models reported has increased dramatically in several academic fields. Within the context of meta-analysis, however, synthesis methods for such models have not developed at a commensurate pace. One difficulty hindering their development is the disparity in covariate sets across published models. If the sets of covariates differ across models, the interpretation of coefficients differs, making it difficult to synthesize them. Moreover, previous synthesis methods for regression models, such as multivariate meta-analysis, often run into problems because the covariance matrix of the coefficients (i.e., the within-study correlations) or individual patient data are not necessarily available. This study therefore describes a method to synthesize linear regression models with different covariate sets using a generalized least squares method involving bias correction terms. In particular, we also propose an approach to recover (at most) three correlations of covariates, which is required for calculating the bias term without individual patient data. Copyright © 2016 John Wiley & Sons, Ltd.
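    When every study reports the same covariate set together with the covariance matrix of its coefficients, the pooling step reduces to ordinary multivariate (fixed-effect) generalized least squares. The sketch below shows only that generic step, not the bias-corrected estimator or the correlation-recovery procedure proposed in the paper; the data are invented:

```python
import numpy as np

def gls_pool(betas, covs):
    """Generic GLS pooling of study-specific coefficient vectors `betas`
    (each of length p) with known within-study covariance matrices `covs`.
    Plain multivariate fixed-effect pooling; it assumes all studies
    report the same covariate set."""
    W = [np.linalg.inv(S) for S in covs]                 # precision weights
    wsum = sum(W)
    weighted = sum(w @ b for w, b in zip(W, betas))
    beta_pooled = np.linalg.solve(wsum, weighted)
    cov_pooled = np.linalg.inv(wsum)
    return beta_pooled, cov_pooled

# Hypothetical two-study example with two coefficients each.
b1, S1 = np.array([0.50, 1.20]), np.array([[0.04, 0.01], [0.01, 0.09]])
b2, S2 = np.array([0.65, 1.05]), np.array([[0.05, 0.00], [0.00, 0.08]])
beta, cov = gls_pool([b1, b2], [S1, S2])
print(beta.round(3), np.sqrt(np.diag(cov)).round(3))
```

    The paper's contribution addresses precisely the situations this sketch assumes away: covariate sets that differ across studies and coefficient covariance matrices that are not reported.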

    Updated: 2019-11-01
  • Using multiple group modeling to test moderators in meta-analysis.
    Res. Synth. Methods (IF 5.043) Pub Date : 2016-12-10
    Alexander M Schoemann

    Meta-analysis is a popular and flexible analysis that can be fitted within many modeling frameworks. Two methods of fitting meta-analyses that are growing in popularity are structural equation modeling (SEM) and multilevel modeling (MLM). By using SEM or MLM to fit a meta-analysis, researchers gain access to the powerful techniques associated with these frameworks. This paper details how to use one such technique, multiple group analysis, to test categorical moderators in meta-analysis. In a multiple group meta-analysis, a model is fitted to each level of the moderator simultaneously. By constraining parameters across groups, any model parameter can be tested for equality. Using multiple groups to test for moderators is especially relevant in random-effects meta-analysis, where both the mean and the between-studies variance of the effect size may be compared across groups. A simulation study and the analysis of a real data set are used to illustrate multiple group modeling with both SEM and MLM. Issues related to multiple group meta-analysis and future directions for research are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
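    As a rough frequentist analogue of the idea (fit a model to each level of the moderator and compare parameters across groups), the sketch below pools each subgroup with DerSimonian-Laird and compares the pooled means with a Wald test; it is not the SEM or MLM multiple group machinery described in the paper, and the effect sizes are invented.

```python
import numpy as np
from statistics import NormalDist

def dl_random_effects(y, v):
    """DerSimonian-Laird random-effects estimate for effects y with variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_re = 1 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    return mu, 1 / np.sum(w_re), tau2          # pooled mean, its variance, tau^2

def compare_groups(y1, v1, y2, v2):
    """Wald test for equality of the pooled means of two moderator levels."""
    m1, var1, t1 = dl_random_effects(y1, v1)
    m2, var2, t2 = dl_random_effects(y2, v2)
    z = (m1 - m2) / np.sqrt(var1 + var2)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return {"mu": (m1, m2), "tau2": (t1, t2), "z": z, "p": p}

# Hypothetical effect sizes and sampling variances for two moderator levels.
print(compare_groups([0.20, 0.35, 0.10, 0.42], [0.02, 0.03, 0.02, 0.04],
                     [0.55, 0.61, 0.48],       [0.03, 0.02, 0.05]))
```

    Constraining and formally testing the between-studies variances across groups, which SEM and MLM handle through constrained model fits, is only reported descriptively in this sketch.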

    Updated: 2019-11-01
  • Sequential change detection and monitoring of temporal trends in random-effects meta-analysis.
    Res. Synth. Methods (IF 5.043) Pub Date : 2016-12-10
    Samson Henry Dogo, Allan Clark, Elena Kulinskaya

    Temporal changes in the magnitude of effect sizes reported in many areas of research are a threat to the credibility of the results and conclusions of meta-analysis. Numerous sequential methods for meta-analysis have been proposed to detect changes and monitor trends in effect sizes so that a meta-analysis can be updated when necessary and interpreted in light of the time at which it was conducted. The difficulties of sequential meta-analysis under the random-effects model are caused by dependencies in increments introduced by the estimation of the heterogeneity parameter τ². In this paper, we propose the use of a retrospective cumulative sum (CUSUM)-type test with bootstrap critical values. This method allows retrospective analysis of the past trajectory of cumulative effects in random-effects meta-analysis and its visualization on a chart similar to a CUSUM chart. Simulation results show that the new method demonstrates good control of Type I error regardless of the number or size of the studies and the amount of heterogeneity. Application of the new method is illustrated with two examples of medical meta-analyses. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
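    A generic retrospective CUSUM-type check with bootstrap critical values can be sketched as follows. This is a simplified illustration of the general idea only (standardize the study effects, track the cumulative sum of deviations from the overall mean, and calibrate the maximum excursion by resampling), not the authors' test; the data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def cusum_stat(x):
    """Maximum absolute CUSUM of deviations from the overall mean, scaled."""
    x = np.asarray(x, float)
    s = np.cumsum(x - x.mean())
    return np.max(np.abs(s)) / (x.std(ddof=1) * np.sqrt(len(x)))

def bootstrap_cusum_test(effects, variances, tau2, n_boot=2000):
    """Generic retrospective CUSUM-type check for a shift over time in
    standardized study effects. Bootstrap critical values come from
    resampling studies with replacement, which breaks any temporal order."""
    z = np.asarray(effects, float) / np.sqrt(np.asarray(variances, float) + tau2)
    observed = cusum_stat(z)
    boot = [cusum_stat(rng.choice(z, size=len(z), replace=True)) for _ in range(n_boot)]
    p_value = np.mean(np.asarray(boot) >= observed)
    return observed, p_value

# Hypothetical chronologically ordered effects that drift upward over time.
effects = [0.10, 0.05, 0.12, 0.20, 0.35, 0.42, 0.50, 0.55]
variances = [0.04] * 8
print(bootstrap_cusum_test(effects, variances, tau2=0.01))
```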

    Updated: 2019-11-01
  • Interpretive analysis of 85 systematic reviews suggests that narrative syntheses and meta-analyses are incommensurate in argumentation.
    Res. Synth. Methods (IF 5.043) Pub Date : 2016-11-20
    G J Melendez-Torres, A O'Mara-Eves, J Thomas, G Brunton, J Caird, M Petticrew

    Using Toulmin's argumentation theory, we analysed the texts of systematic reviews in the area of workplace health promotion to explore differences in the modes of reasoning embedded in reports of narrative synthesis as compared with reports of meta-analysis. We used framework synthesis, grounded theory and cross-case analysis methods to analyse 85 systematic reviews addressing intervention effectiveness in workplace health promotion. Two core categories, or 'modes of reasoning', emerged to frame the contrast between narrative synthesis and meta-analysis: practical-configurational reasoning in narrative synthesis ('what is going on here? What picture emerges?') and inferential-predictive reasoning in meta-analysis ('does it work, and how well? Will it work again?'). Modes of reasoning examined quality and consistency of the included evidence differently. Meta-analyses clearly distinguished between warrant and claim, whereas narrative syntheses often presented joint warrant-claims. Narrative syntheses and meta-analyses represent different modes of reasoning. Systematic reviewers are likely to be addressing research questions in different ways with each method. It is important to consider narrative synthesis in its own right as a method and to develop specific quality criteria and understandings of how it is carried out, not merely as a complement to, or second-best option for, meta-analysis. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.

    Updated: 2019-11-01
  • Quality and clarity in systematic review abstracts: an empirical study.
    Res. Synth. Methods (IF 5.043) Pub Date : 2016-10-21
    Amy Y Tsou, Jonathan R Treadwell

    BACKGROUND Systematic review (SR) abstracts are important for disseminating evidence syntheses to inform medical decision making. We assess reporting quality in SR abstracts using PRISMA for Abstracts (PRISMA-A), Cochrane Handbook, and Agency for Healthcare Research & Quality guidance. METHODS We evaluated a random sample of 200 SR abstracts (from 2014) comparing interventions in the general medical literature. We assessed adherence to PRISMA-A criteria, problematic wording in conclusions, and whether "positive" studies described clinical significance. RESULTS On average, abstracts reported 60% of PRISMA-A checklist items (mean 8.9 ± 1.7, range 4 to 12). Eighty percent of meta-analyses reported quantitative measures with a confidence interval. Only 49% described effects in terms meaningful to patients and clinicians (e.g., absolute measures), and only 43% mentioned strengths/limitations of the evidence base. Average abstract word count was 274 (SD 89). Word count explained only 13% of score variability. PRISMA-A scores did not differ between Cochrane and non-Cochrane abstracts (mean difference 0.08, 95% confidence interval -1.16 to 1.00). Of 275 primary outcomes, 48% were statistically significant, 32% were not statistically significant, and 19% did not report significance or results. Only one abstract described clinical significance for positive findings. For "negative" outcomes, we identified problematic simple restatements (20%), vague "no evidence of effect" wording (9%), and wishful wording (8%). CONCLUSIONS Improved SR abstract reporting is needed, particularly reporting of quantitative measures (for meta-analysis), easily interpretable units, strengths/limitations of evidence, clinical significance, and clarifying whether negative results reflect true equivalence between treatments. Copyright © 2016 John Wiley & Sons, Ltd.

    Updated: 2019-11-01
  • Heterogeneity: multiplicative, additive or both?
    Res. Synth. Methods (IF 5.043) Pub Date : 2016-10-18
    Christopher H Schmid

    Updated: 2019-11-01
  • Estimation of the biserial correlation and its sampling variance for use in meta-analysis.
    Res. Synth. Methods (IF 5.043) Pub Date : 2016-09-16
    Perke Jacobs, Wolfgang Viechtbauer

    Meta-analyses are often used to synthesize the findings of studies examining the correlational relationship between two continuous variables. When only dichotomous measurements are available for one of the two variables, the biserial correlation coefficient can be used to estimate the product-moment correlation between the two underlying continuous variables. Unlike the point-biserial correlation coefficient, biserial correlation coefficients can therefore be integrated with product-moment correlation coefficients in the same meta-analysis. The present article describes the estimation of the biserial correlation coefficient for meta-analytic purposes and reports simulation results comparing different methods for estimating the coefficient's sampling variance. The findings indicate that commonly employed methods yield inconsistent estimates of the sampling variance across a broad range of research situations. In contrast, consistent estimates can be obtained using two methods that appear to be unknown in the meta-analytic literature. A variance-stabilizing transformation for the biserial correlation coefficient is described that allows for the construction of confidence intervals for individual coefficients with close to nominal coverage probabilities in most of the examined conditions. Copyright © 2016 John Wiley & Sons, Ltd.
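    The conversion at the heart of this estimator can be written in a few lines. A hedged sketch using the classical summary-statistics formula for the biserial correlation (the sampling-variance estimators compared in the paper are not reproduced here; the function name and the numbers are ours):

```python
from statistics import NormalDist

def biserial_from_summary(mean1, mean0, sd_y, p):
    """Biserial correlation from summary statistics: the group means of the
    continuous variable Y at the two levels of the dichotomized variable,
    the overall standard deviation of Y, and the proportion p in group 1.
    Uses the classical conversion
        r_b = (M1 - M0) * p * (1 - p) / (sd_y * phi(u)),  u = Phi^{-1}(p),
    where phi is the standard normal density."""
    nd = NormalDist()
    u = nd.inv_cdf(p)
    return (mean1 - mean0) * p * (1 - p) / (sd_y * nd.pdf(u))

# Hypothetical example: "passers" (p = 0.6) score on average 2 points higher
# than "non-passers" on a scale with overall SD 5.
print(round(biserial_from_summary(mean1=12.0, mean0=10.0, sd_y=5.0, p=0.6), 3))
```

    The point-biserial correlation for the same data, (M1 - M0) / sd_y * sqrt(p(1 - p)), is smaller in absolute value, which is why it cannot be pooled directly with product-moment correlations.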

    Updated: 2019-11-01
  • An appraisal of meta-analysis guidelines: how do they relate to safety outcomes?
    Res. Synth. Methods (IF 5.043) Pub Date : 2016-09-11
    Meg Bennetts, Ed Whalen, Sima Ahadieh, Joseph C Cappelleri

    Although well developed to assess efficacy questions, meta-analyses and, more generally, systematic reviews, have received less attention in application to safety-related questions. As a result, many open questions remain on how best to apply meta-analyses in the safety setting. This appraisal attempts to: (i) summarize the current guidelines for assessing individual studies, systematic reviews, and network meta-analyses; (ii) describe several publications on safety meta-analytic approaches; and (iii) present some of the questions and issues that arise with safety data. A number of gaps in the current quality guidelines are identified along with issues to consider when performing a safety meta-analysis. While some work is ongoing to provide guidance to improve the quality of safety meta-analyses, this review emphasizes the critical need for better reporting and increased transparency regarding safety data in the systematic review guidelines. Copyright © 2016 John Wiley & Sons, Ltd.

    Updated: 2019-11-01
Contents have been reproduced by permission of the publishers.