Current journal: BMC Medicine
  • Youth Depression Alleviation with Anti-inflammatory Agents (YoDA-A): a randomised clinical trial of rosuvastatin and aspirin
    BMC Med. (IF 8.285) Pub Date : 2020-01-17
    Michael Berk; Mohammadreza Mohebbi; Olivia M. Dean; Sue M. Cotton; Andrew M. Chanen; Seetal Dodd; Aswin Ratheesh; G. Paul Amminger; Mark Phelan; Amber Weller; Andrew Mackinnon; Francesco Giorlando; Shelley Baird; Lisa Incerti; Rachel E. Brodie; Natalie O. Ferguson; Simon Rice; Miriam R. Schäfer; Edward Mullen; Sarah Hetrick; Melissa Kerr; Susy M. Harrigan; Amelia L. Quinn; Catherine Mazza; Patrick McGorry; Christopher G. Davey

    Inflammation contributes to the pathophysiology of major depressive disorder (MDD), and anti-inflammatory strategies might therefore have therapeutic potential. This trial aimed to determine whether adjunctive aspirin or rosuvastatin, compared with placebo, reduced depressive symptoms in young people (15–25 years). YoDA-A, Youth Depression Alleviation with Anti-inflammatory Agents, was a 12-week triple-blind, randomised, controlled trial. Participants were young people (aged 15–25 years) with moderate to severe MDD (MADRS mean at baseline 32.5 ± 6.0; N = 130; age 20.2 ± 2.6; 60% female), recruited between June 2013 and June 2017 across six sites in Victoria, Australia. In addition to treatment as usual, participants were randomised to receive aspirin (n = 40), rosuvastatin (n = 48), or placebo (n = 42), with assessments at baseline and weeks 4, 8, 12, and 26. The primary outcome was change in the Montgomery-Åsberg Depression Rating Scale (MADRS) from baseline to week 12. At the a priori primary endpoint of MADRS differential change from baseline at week 12, there was no significant difference between aspirin and placebo (1.9, 95% CI (−2.8, 6.6), p = 0.433), or rosuvastatin and placebo (−4.2, 95% CI (−9.1, 0.6), p = 0.089). For rosuvastatin, secondary outcomes on self-rated depression and global impression, quality of life, functioning, and mania were not significantly different from placebo. Aspirin was inferior to placebo on the Quality of Life Enjoyment and Satisfaction Questionnaire (Q-LES-Q-SF) at week 12. Rosuvastatin was superior to aspirin on the MADRS, the Clinical Global Impressions Severity Scale (CGI-S), and the Negative Problem Orientation Questionnaire (NPOQ) at week 12. The addition of either aspirin or rosuvastatin did not confer any beneficial effect over and above routine treatment for depression in young people. 
Exploratory comparisons of secondary outcomes provide limited support for a potential therapeutic role for adjunctive rosuvastatin, but not for aspirin, in youth depression. Australian New Zealand Clinical Trials Registry, ACTRN12613000112763. Registered on 30/01/2013.
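The primary contrast reported above is a differential change from baseline with a 95% CI. A minimal sketch of that kind of estimate (using a simple normal approximation on made-up change scores, not the trial's data or its actual mixed-model analysis):

```python
import math

def diff_in_change(change_a, change_b):
    """Difference in mean change between two arms with a 95% CI
    (normal approximation; illustrative only)."""
    na, nb = len(change_a), len(change_b)
    ma = sum(change_a) / na
    mb = sum(change_b) / nb
    # Sample variances of the change scores in each arm
    va = sum((x - ma) ** 2 for x in change_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in change_b) / (nb - 1)
    diff = ma - mb
    se = math.sqrt(va / na + vb / nb)
    return diff, (diff - 1.96 * se, diff + 1.96 * se)

# Hypothetical MADRS change scores (negative = improvement)
drug = [-12, -8, -15, -10, -9, -14]
placebo = [-11, -7, -13, -9, -10, -12]
diff, ci = diff_in_change(drug, placebo)
```

A CI that spans zero, as for both comparisons in the abstract, corresponds to a non-significant difference at the 5% level.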

  • Pregnancy-specific malarial immunity and risk of malaria in pregnancy and adverse birth outcomes: a systematic review
    BMC Med. (IF 8.285) Pub Date : 2020-01-16
    Julia C. Cutts; Paul A. Agius; Zaw Lin; Rosanna Powell; Kerryn Moore; Bridget Draper; Julie A. Simpson; Freya J. I. Fowkes

    In endemic areas, pregnant women are highly susceptible to Plasmodium falciparum malaria characterized by the accumulation of parasitized red blood cells (pRBC) in the placenta. In subsequent pregnancies, women develop protective immunity to pregnancy-associated malaria and this has been hypothesized to be due to the acquisition of antibodies to the parasite variant surface antigen VAR2CSA. In this systematic review we provide the first synthesis of the association between antibodies to pregnancy-specific P. falciparum antigens and pregnancy and birth outcomes. We conducted a systematic review and meta-analysis of population-based studies (published up to 07 June 2019) of pregnant women living in P. falciparum endemic areas that examined antibody responses to pregnancy-specific P. falciparum antigens and outcomes including placental malaria, low birthweight, preterm birth, peripheral parasitaemia, maternal anaemia, and severe malaria. We searched 6 databases and identified 33 studies (30 from Africa) that met predetermined inclusion and quality criteria: 16 studies contributed estimates in a format enabling inclusion in meta-analysis and 17 were included in narrative form only. Estimates were mostly from cross-sectional data (10 studies) and were heterogeneous in terms of magnitude and direction of effect. Included studies varied in terms of antigens tested, methodology used to measure antibody responses, and epidemiological setting. Antibody responses to pregnancy-specific pRBC and VAR2CSA antigens, measured at delivery, were associated with placental malaria (9 studies) and may therefore represent markers of infection, rather than correlates of protection. Antibody responses to pregnancy-specific pRBC, but not recombinant VAR2CSA antigens, were associated with trends towards protection from low birthweight (5 studies). 
Whilst antibody responses to several antigens were positively associated with the presence of placental and peripheral infections, this review did not identify evidence that any specific antibody response is associated with protection from pregnancy-associated malaria across multiple populations. Further prospective cohort studies using standardized laboratory methods to examine responses to a broad range of antigens in different epidemiological settings and throughout the gestational period will be necessary to identify and prioritize pregnancy-specific P. falciparum antigens to advance the development of vaccines and serosurveillance tools targeting pregnant women.

  • Can real-world data really replace randomised clinical trials?
    BMC Med. (IF 8.285) Pub Date : 2020-01-15
    Sreeram V. Ramagopalan; Alex Simpson; Cormac Sammon

    Classically, randomised controlled trials (RCTs) are considered the gold standard for demonstrating product efficacy for the regulatory approval of medicines. However, as personalised medicine becomes increasingly common, patient recruitment into RCTs is affected and – sometimes – it is not possible to include a control arm [1]. Real-world data (RWD) are data that are collected outside of RCTs [2]. They are gaining increasing attention for their use in regulatory decision-making. The United States 21st Century Cures Act mandated that the US Food and Drug Administration (FDA) should provide guidance about the circumstances under which manufacturers can use RWD to support the approval of a medicine. More recently, investigators from the European Medicines Agency (EMA) detailed their views on this topic [3].

RWD for regulatory approval: opportunities and challenges

Eichler et al., from the EMA, state that, “the RCT will, in our view, remain the best available standard and be required in many circumstances, but will need to be complemented by other methodologies to address research questions where a traditional RCT may be unfeasible or unethical.” Thus, the gauntlet has been laid down for RWD to be used to support European regulatory approval. Indeed, RWD has been used by the EMA to approve several medicines for rare/orphan indications [4]. Eichler and colleagues, however, highlight that RWD methods must be critically appraised before they can be more widely accepted. They suggest that this appraisal can be undertaken via prospective validation of any proposed method with a pre-defined protocol.

Why the need for validation?

Studies of the concordance between the results of RCTs and RWD studies investigating the same research question have given mixed results [5, 6]. It has been suggested that this discordance can be attributed to differences in the populations being investigated, or bias in RWD studies as a result of lack of randomisation. 
Using an example of cancer risk in statin users, Dickerman and co-workers attempted to understand why RWD studies have shown a protective effect while RCTs showed no effect on neoplasm incidence [7]. One of the key principles of an RCT is to assess patient characteristics at baseline to check study eligibility based on inclusion/exclusion criteria. If eligibility is met, the next task is to randomise subjects into groups and, subsequently, to provide treatment as assigned for each group. Dickerman et al. operationalised a similar ‘target trial’ approach using RWD and followed up trial-eligible new users and non-users of statins to compare rates of cancer between these groups. Performing the analysis in this way enabled the researchers to show that results from RWD were in agreement with those from RCTs. Furthermore, previously reported differences were largely a result of two avoidable issues: immortal time and selection bias caused by the inclusion of prevalent statin users (prevalent users had to have survived without cancer up to baseline, leading to artificially lower rates of cancer in the statin group), rather than being attributable to the lack of randomisation per se. As Dickerman et al. acknowledge, a limitation of the outcome they studied is that confounding by indication (whereby the reason for prescribing a patient a medication is also associated with the outcome of interest) is unlikely to have a major role. Where the outcome is more likely to be affected by confounding by indication, then – to mimic the randomisation element of an RCT and appropriately compare treatment groups – RWD studies must carefully adjust for all baseline confounders. In this regard, Carrigan et al. recently reported results exploring a research question more likely to be affected by confounding by indication [8]: whether control groups generated from RWD could approximate the control arms used in published RCTs in non-small cell lung cancer. 
In 10 of the 11 analyses conducted, hazard ratio estimates for overall survival derived from comparing RWD control arms with the intervention arm from the RCT were similar to those seen in the original RCT comparison. However, the analyses showed that a simple ‘target trial’ alignment of the RWD arm with the trial inclusion/exclusion criteria could not fully replicate the RCT effect estimate; additional adjustment to control for confounding using propensity scores was required. The single non-concordant analysis was thought to be associated with a biomarker that was likely enriched in the RCT but was not recorded in the RWD and therefore could not be adjusted for. This exception to the overall consistency between RWD and RCT findings highlights the importance of having RWD with information available on all possible confounders, to avoid generating inaccurate results. These two recent studies show that analytical methods and approaches are in place to enable consistency between RCT and RWD results. Further evidence will arise from the FDA-funded RCT DUPLICATE project, which will investigate RCT–RWD concordance on a larger scale [9]. In light of this, the question arises: how many examples are required before regulators can begin to accept RWD for regulatory decision-making? Eichler et al. state that the answer is unlikely to be simple: decision-makers should perhaps first accept RWD analyses for situations in which there is a relatively small impact (e.g. label expansion) and then gradually expand acceptability as confidence in the method grows. Accumulating evidence suggests that appropriately conducted RWD studies have the potential to support regulatory decisions in the absence of RCT data. Further work may be needed to better delineate the settings in which RWD analyses can robustly and consistently match the results of RCTs and, more importantly, the settings in which they cannot. 
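The prevalent-user selection bias described above can be illustrated with a toy simulation (all numbers hypothetical; statin use is given no true effect on cancer, yet the prevalent-user design makes it look protective):

```python
import random

def simulate(n=100_000, seed=7):
    """Toy cohort in which statin use has NO true effect on cancer.
    A latent high-risk ('frail') subgroup raises cancer risk both before
    and after a hypothetical baseline visit. Requiring prevalent statin
    users to be cancer-free at baseline selects low-risk users and
    deflates their apparent cancer rate."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        statin = rng.random() < 0.5
        frail = rng.random() < 0.2
        p = 0.30 if frail else 0.01      # per-period cancer risk, identical in both arms
        early = rng.random() < p         # cancer before baseline
        late = rng.random() < p          # cancer after baseline
        rows.append((statin, early, late))

    # New-user design: follow everyone from treatment start, count any cancer.
    def any_cancer_rate(arm):
        sub = [r for r in rows if r[0] == arm]
        return sum(1 for r in sub if r[1] or r[2]) / len(sub)

    # Prevalent-user design: statin users must be cancer-free at baseline;
    # outcomes are then counted from baseline onward in both arms.
    statin_prev = [r for r in rows if r[0] and not r[1]]
    none_prev = [r for r in rows if not r[0]]
    prev_statin_rate = sum(1 for r in statin_prev if r[2]) / len(statin_prev)
    prev_none_rate = sum(1 for r in none_prev if r[2]) / len(none_prev)
    return any_cancer_rate(True), any_cancer_rate(False), prev_statin_rate, prev_none_rate
```

In the new-user comparison the two arms show essentially identical cancer rates; in the prevalent-user comparison the statin arm appears protected, purely through the baseline survival condition.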
After careful consideration of the potential for bias, regulators can then determine when they would unequivocally accept RWD in place of an RCT. If studies based on RWD are ever to replace RCTs, regulators may need to accept that the cost of accelerating patient access to treatment carries a higher level of decision-making uncertainty than that with which they are familiar.

References
1. Moscow JA, Fojo T, Schilsky RL. The evidence framework for precision cancer medicine. Nat Rev Clin Oncol. 2018;15:183.
2. McDonald L, Lambrelli D, Wasiak R, Ramagopalan SV. Real-world data in the United Kingdom: opportunities and challenges. BMC Med. 2016;14:97.
3. Eichler HG, Koenig F, Arlett P, Enzmann H, Humphreys A, Pétavy F, et al. Are novel, nonrandomised analytic methods fit for decision-making? The need for prospective, controlled, and transparent validation. Clin Pharmacol Ther. 2019. https://doi.org/10.1002/cpt.1638.
4. Cave A, Kurz X, Arlett P. Real-world data for regulatory decision making: challenges and possible solutions for Europe. Clin Pharmacol Ther. 2019;106:36.
5. Anglemyer A, Horvath HT, Bero L. Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev. 2014;4:MR000034.
6. Hemkens LG, Contopoulos-Ioannidis DG, Ioannidis JP. Agreement of treatment effects for mortality from routinely collected data and subsequent randomized trials: meta-epidemiological survey. BMJ. 2016;352:i493.
7. Dickerman BA, García-Albéniz X, Logan RW, Denaxas S, Hernán MA. Avoidable flaws in observational analyses: an application to statins and cancer. Nat Med. 2019:1–6.
8. Carrigan G, Whipple S, Capra WB, Taylor MD, Brown JS, Lu M, et al. Using electronic health records to derive control arms for early phase single-arm lung cancer trials: proof-of-concept in randomized controlled trials. Clin Pharmacol Ther. 2019. https://doi.org/10.1002/cpt.1586.
9. RCT DUPLICATE. Effectiveness research with real-world data to support FDA's regulatory decision making: a real-world evidence demonstration project. https://www.rctduplicate.org/projects.html. Accessed 21 Nov 2019.

Cite this article: Ramagopalan SV, Simpson A, Sammon C. Can real-world data really replace randomised clinical trials? BMC Med. 2020;18:13. https://doi.org/10.1186/s12916-019-1481-8

  • Vaccinating children against influenza: overall cost-effective with potential for undesirable outcomes
    BMC Med. (IF 8.285) Pub Date : 2020-01-14
    Pieter T. de Boer; Jantien A. Backer; Albert Jan van Hoek; Jacco Wallinga

    The present study aims to assess the cost-effectiveness of an influenza vaccination program for children in the Netherlands. This requires an evaluation of the long-term impact of such a program on the burden of influenza across all age groups, using a transmission model that accounts for the seasonal variability in vaccine effectiveness and the shorter duration of protection following vaccination as compared to natural infection. We performed a cost-effectiveness analysis based on a stochastic dynamic transmission model that has been calibrated to reported GP visits with influenza-like illness in the Netherlands over 11 seasons (2003/2004 to 2014/2015). We analyzed the costs and effects of extending the current program with vaccination of children aged 2–16 years at 50% coverage over 20 consecutive seasons. We measured the effects in quality-adjusted life-years (QALYs) and we adopted a societal perspective. The childhood vaccination program is estimated to have an average incremental cost-effectiveness ratio (ICER) of €3944 per QALY gained and is cost-effective in the general population (across 1000 simulations; conventional Dutch threshold of €20,000 per QALY gained). The childhood vaccination program is not estimated to be cost-effective for the target group itself, with an average ICER of €57,054 per QALY gained. Uncertainty analyses reveal that these ICERs hide a wide range of outcomes. Even though introduction of a childhood vaccination program decreases the number of infections, it tends to lead to larger epidemics: in 23.3% of 1000 simulations, the childhood vaccination program results in an increase in seasons with a symptomatic attack rate larger than 5%, which is expected to cause serious strain on the health care system. In 6.4% of 1000 simulations, the childhood vaccination program leads to a net loss of QALYs. These findings are robust across different targeted age groups and vaccination coverages. 
Modeling indicates that childhood influenza vaccination is cost-effective in the Netherlands. However, childhood influenza vaccination is not cost-effective when only outcomes for the children themselves are considered. In approximately a quarter of the simulations, the introduction of a childhood vaccination program increases the frequency of seasons with a symptomatic attack rate larger than 5%. The possibility of an overall health loss cannot be excluded.
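The headline quantity above, the ICER, is simply the incremental cost divided by the incremental health gain. A minimal sketch with made-up (non-study) numbers against the conventional Dutch threshold quoted in the abstract:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical programme: €5.0M extra cost buys 1,250 extra QALYs
ratio = icer(cost_new=25_000_000, cost_old=20_000_000,
             qaly_new=11_250, qaly_old=10_000)   # € per QALY gained
threshold = 20_000   # conventional Dutch threshold (€/QALY), as in the abstract
cost_effective = ratio < threshold
```

With these illustrative inputs the ratio is €4,000 per QALY, below the €20,000 threshold, so the hypothetical programme would be judged cost-effective.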

  • Reductions in sugar sales from soft drinks in the UK from 2015 to 2018
    BMC Med. (IF 8.285) Pub Date : 2020-01-13
    L. K. Bandy; P. Scarborough; R. A. Harrington; M. Rayner; S. A. Jebb

    The consumption of free sugars in the UK is more than double the guideline intake for adults and close to triple for children, with soft drinks representing a significant proportion. The aim of this study was to assess how individual soft drink companies and consumers have responded to calls to reduce sugar consumption, including the soft drink industry levy (SDIL), between 2015 and 2018. This was an annual cross-sectional study using nutrient composition data of 7377 products collected online, paired with volume sales data for 195 brands offered by 57 companies. The main outcome measures were sales volume, sugar content and volume of sugars sold by company and category, expressed in total and per capita per day terms. Between 2015 and 2018, the volume of sugars sold per capita per day from soft drinks declined by 30%, equivalent to a reduction of 4.6 g per capita per day. The sales-weighted mean sugar content of soft drinks fell from 4.4 g/100 ml in 2015 to 2.9 g/100 ml in 2018. The total volume sales of soft drinks that are subject to the SDIL (i.e. contain more than 5 g/100 ml of sugar) fell by 50%, while volume sales of low- and zero-sugar (< 5 g/100 ml) drinks rose by 40%. Action by the soft drinks industry to reduce sugar in products and change their product portfolios, coupled with changes in consumer purchasing, has led to a significant reduction in the total volume and per capita sales of sugars sold in soft drinks in the UK. The rate of change accelerated between 2017 and 2018, which also implies that the implementation of the SDIL acted as an extra incentive for companies to reformulate above and beyond what was already being done as part of voluntary commitments to reformulation, or changes in sales driven by consumer preferences.
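The "sales-weighted mean sugar content" used above is a volume-weighted average across products. A sketch with illustrative brand-level figures (not the study's data):

```python
def sales_weighted_sugar(brands):
    """Mean sugar content (g/100 ml) weighted by sales volume.
    `brands` is a list of (sugar g/100 ml, volume sold) pairs."""
    total_sugar = sum(sugar * volume for sugar, volume in brands)
    total_volume = sum(volume for _, volume in brands)
    return total_sugar / total_volume

# Hypothetical portfolio: a full-sugar, a mid-sugar, and a zero-sugar brand
brands = [(9.0, 100), (5.0, 100), (0.0, 200)]   # (g/100 ml, million litres)
mean_sugar = sales_weighted_sugar(brands)        # 3.5 g/100 ml
```

Note how the zero-sugar brand's large sales volume pulls the weighted mean well below the simple average of the sugar contents, which is the mechanism behind the fall from 4.4 to 2.9 g/100 ml reported above.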

  • Lifestyle factors and risk of multimorbidity of cancer and cardiometabolic diseases: a multinational cohort study
    BMC Med. (IF 8.285) Pub Date : 2020-01-10
    Heinz Freisling; Vivian Viallon; Hannah Lennon; Vincenzo Bagnardi; Cristian Ricci; Adam S. Butterworth; Michael Sweeting; David Muller; Isabelle Romieu; Pauline Bazelle; Marina Kvaskoff; Patrick Arveux; Gianluca Severi; Christina Bamia; Tilman Kühn; Rudolf Kaaks; Manuela Bergmann; Heiner Boeing; Anne Tjønneland; Anja Olsen; Kim Overvad; Christina C. Dahm; Virginia Menéndez; Antonio Agudo; Maria-Jose Sánchez; Pilar Amiano; Carmen Santiuste; Aurelio Barricarte Gurrea; Tammy Y. N. Tong; Julie A. Schmidt; Ioanna Tzoulaki; Konstantinos K. Tsilidis; Heather Ward; Domenico Palli; Claudia Agnoli; Rosario Tumino; Fulvio Ricceri; Salvatore Panico; H. Susan J. Picavet; Marije Bakker; Evelyn Monninkhof; Peter Nilsson; Jonas Manjer; Olov Rolandsson; Elin Thysell; Elisabete Weiderpass; Mazda Jenab; Elio Riboli; Paolo Vineis; John Danesh; Nick J. Wareham; Marc J. Gunter; Pietro Ferrari

    Although lifestyle factors have been studied in relation to individual non-communicable diseases (NCDs), their association with development of a subsequent NCD, defined as multimorbidity, has been scarcely investigated. The aim of this study was to investigate associations between five lifestyle factors and incident multimorbidity of cancer and cardiometabolic diseases. In this prospective cohort study, 291,778 participants (64% women) from seven European countries, mostly aged 43 to 58 years and free of cancer, cardiovascular disease (CVD), and type 2 diabetes (T2D) at recruitment, were included. Incident multimorbidity of cancer and cardiometabolic diseases was defined as an individual developing, one after the other, two diseases among first cancer at any site, CVD, and T2D. Multi-state modelling based on Cox regression was used to compute hazard ratios (HR) and 95% confidence intervals (95% CI) of developing cancer, CVD, or T2D, and subsequent transitions to multimorbidity, in relation to body mass index (BMI), smoking status, alcohol intake, physical activity, adherence to the Mediterranean diet, and their combination as a healthy lifestyle index (HLI) score. Cumulative incidence functions (CIFs) were estimated to compute 10-year absolute risks for transitions from healthy to cancer at any site, CVD (both fatal and non-fatal), or T2D, and to subsequent multimorbidity after each of the three NCDs. During a median follow-up of 11 years, 1910 men and 1334 women developed multimorbidity of cancer and cardiometabolic diseases. A higher HLI, reflecting healthy lifestyles, was strongly inversely associated with multimorbidity, with hazard ratios per 3-unit increment of 0.75 (95% CI, 0.71 to 0.81), 0.84 (0.79 to 0.90), and 0.82 (0.77 to 0.88) after cancer, CVD, and T2D, respectively. 
After T2D, the 10-year absolute risks of multimorbidity were 40% and 25% for men and women, respectively, with unhealthy lifestyle, and 30% and 18% for men and women with healthy lifestyles. Pre-diagnostic healthy lifestyle behaviours were strongly inversely associated with the risk of cancer and cardiometabolic diseases, and with the prognosis of these diseases by reducing risk of multimorbidity.
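The hazard ratios above are reported per 3-unit increment of the HLI score. Under a log-linear Cox model, a per-k-unit HR is the per-unit HR raised to the k-th power (HR_k = exp(k·β)), so increments can be rescaled directly. A small sketch using the 0.75 figure from the abstract:

```python
import math

def hr_per_increment(beta_per_unit, k):
    """Hazard ratio for a k-unit covariate increase under a
    log-linear Cox model: HR = exp(k * beta)."""
    return math.exp(k * beta_per_unit)

# Back out the per-unit log-hazard from the reported HR of 0.75 per 3 units
beta = math.log(0.75) / 3
hr_per_3 = hr_per_increment(beta, 3)   # recovers 0.75
hr_per_1 = hr_per_increment(beta, 1)   # same association on a per-unit scale
```

This rescaling is exact only under the model's log-linearity assumption; it is shown here to clarify how the "per 3-unit" estimates relate to per-unit effects, not as part of the study's own code.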

  • ‘Optimising’ breastfeeding: what can we learn from evolutionary, comparative and anthropological aspects of lactation?
    BMC Med. (IF 8.285) Pub Date : 2020-01-09
    Mary S. Fewtrell; Nurul H. Mohd Shukri; Jonathan C. K. Wells

    Promoting breastfeeding is an important public health intervention, with benefits for infants and mothers. Even modest increases in prevalence and duration may yield considerable economic savings. However, despite many initiatives, compliance with recommendations is poor in most settings – particularly for exclusive breastfeeding. Mothers commonly consult health professionals for infant feeding and behavioural problems. We argue that broader consideration of lactation, incorporating evolutionary, comparative and anthropological aspects, could provide new insights into breastfeeding practices and problems, enhance research and ultimately help to develop novel approaches to improve initiation and maintenance. Our current focus on breastfeeding as a strategy to improve health outcomes must engage with the evolution of lactation as a flexible trait under selective pressure to maximise reproductive fitness. Poor understanding of the dynamic nature of breastfeeding may partly explain why some women are unwilling or unable to follow recommendations. We identify three key implications for health professionals, researchers and policymakers. Firstly, breastfeeding is an adaptive process during which, as in other mammals, variability allows adaptation to ecological circumstances and reflects mothers’ phenotypic variability. Since these factors vary within and between humans, it is counterintuitive to expect that a ‘one size fits all’ approach will be appropriate for all mother-infant dyads; flexibility is expected. Secondly, from an anthropological perspective, lactation is a period of tension between mother and offspring due to genetic ‘conflicts of interest’. This may underlie common breastfeeding ‘problems’ including perceived milk insufficiency and problematic infant crying. Understanding this – and adopting a more flexible, individualised approach – may allow a more creative approach to solving these problems. 
Incorporating evolutionary concepts may enhance research investigating mother–infant signalling during breastfeeding; where possible, studies should be experimental to allow identification of causal effects and mechanisms. Finally, the importance of learned behaviour, social and cultural aspects of primate (especially human) lactation may partly explain why, in cultures where breastfeeding has lost cultural primacy, promotion starting in pregnancy may be ineffective. In such settings, educating children and young adults may be important to raise awareness and provide learning opportunities that may be essential in our species, as in other primates.

  • Mass cytometry analysis reveals a distinct immune environment in peritoneal fluid in endometriosis: a characterisation study
    BMC Med. (IF 8.285) Pub Date : 2020-01-07
    Manman Guo; Cemsel Bafligil; Thomas Tapmeier; Carol Hubbard; Sanjiv Manek; Catherine Shang; Fernando O. Martinez; Nicole Schmidt; Maik Obendorf; Holger Hess-Stumpp; Thomas M. Zollner; Stephen Kennedy; Christian M. Becker; Krina T. Zondervan; Adam P. Cribbs; Udo Oppermann

    Endometriosis is a gynaecological condition characterised by immune cell infiltration and distinct inflammatory signatures found in the peritoneal cavity. In this study, we aim to characterise the immune microenvironment in samples isolated from the peritoneal cavity in patients with endometriosis. We applied mass cytometry (CyTOF), a recently developed multiparameter single-cell technique, in order to characterise and quantify the immune cells found in peritoneal fluid and peripheral blood from endometriosis and control patients. Our results demonstrate the presence of more than 40 different distinct immune cell types within the peritoneal cavity. This suggests that there is a complex and highly heterogeneous inflammatory microenvironment underpinning the pathology of endometriosis. Stratification by clinical disease stages reveals a dynamic spectrum of cell signatures suggesting that adaptations in the inflammatory system occur due to the severity of the disease. Notably, among the inflammatory microenvironment in peritoneal fluid (PF), the presence of CD69+ T cell subsets is increased in endometriosis when compared to control patient samples. On these CD69+ cells, the expression of markers associated with T cell function are reduced in PF samples compared to blood. Comparisons between CD69+ and CD69− populations reveal distinct phenotypes across peritoneal T cell lineages. Taken together, our results suggest that both the innate and the adaptive immune system play roles in endometriosis. This study provides a systematic characterisation of the specific immune environment in the peritoneal cavity and identifies cell immune signatures associated with endometriosis. Overall, our results provide novel insights into the specific cell phenotypes governing inflammation in patients with endometriosis. This prospective study offers a useful resource for understanding disease pathology and opportunities for identifying therapeutic targets.

  • Multi-level transcriptome sequencing identifies COL1A1 as a candidate marker in human heart failure progression
    BMC Med. (IF 8.285) Pub Date : 2020-01-06
    Xiumeng Hua; Yin-Ying Wang; Peilin Jia; Qing Xiong; Yiqing Hu; Yuan Chang; Songqing Lai; Yong Xu; Zhongming Zhao; Jiangping Song

    Heart failure (HF) has been recognized as a global pandemic with a high rate of hospitalization, morbidity, and mortality. Although numerous advances have been made, its representative molecular signatures remain largely unknown, especially the role of genes in HF progression. The aim of the present prospective follow-up study was to reveal potential biomarkers associated with the progression of heart failure. We generated multi-level transcriptomic data from a cohort of left ventricular heart tissue collected from 21 HF patients and 9 healthy donors. Using Masson staining to calculate the fibrosis percentage for each sample, we applied a lasso regression model to identify the genes associated with fibrosis as well as progression. The genes were further validated by immunohistochemistry (IHC) staining in the same cohort and qRT-PCR using another independent cohort (20 HF and 9 healthy donors). Enzyme-linked immunosorbent assay (ELISA) was used to measure the plasma level in a validation cohort (139 HF patients) for predicting HF progression. Based on the multi-level transcriptomic data, we examined differentially expressed genes [mRNAs, microRNAs, and long non-coding RNAs (lncRNAs)] in the study cohort. The follow-up functional annotation and regulatory network analyses revealed their potential roles in regulating the extracellular matrix. We further identified several genes that were associated with fibrosis. By using the survival time before transplantation, COL1A1 was identified as a potential biomarker for HF progression and its upregulation was confirmed by both IHC and qRT-PCR. Furthermore, COL1A1 content ≥ 256.5 ng/ml in plasma was found to be associated with poor survival within 1 year from heart failure onset to transplantation [hazard ratio (HR) 7.4, 95% confidence interval (CI) 3.5 to 15.8, log-rank p value < 1.0 × 10−4]. 
Our results suggest that plasma COL1A1 may be a biomarker of HF associated with HF progression, in particular for predicting 1-year survival from HF onset to transplantation.

  • Drug-resistant enteric fever worldwide, 1990 to 2018: a systematic review and meta-analysis
    BMC Med. (IF 8.285) Pub Date : 2020-01-03
    Annie J. Browne; Bahar H. Kashef Hamadani; Emmanuelle A. P. Kumaran; Puja Rao; Joshua Longbottom; Eli Harriss; Catrin E. Moore; Susanna Dunachie; Buddha Basnyat; Stephen Baker; Alan D. Lopez; Nicholas P. J. Day; Simon I. Hay; Christiane Dolecek

    Antimicrobial resistance (AMR) is an increasing threat to global health. There are > 14 million cases of enteric fever every year and > 135,000 deaths. The disease is primarily controlled by antimicrobial treatment, but this is becoming increasingly difficult due to AMR. Our objectives were to assess the prevalence and geographic distribution of AMR in Salmonella enterica serovars Typhi and Paratyphi A infections globally, to evaluate the extent of the problem, and to facilitate the creation of geospatial maps of AMR prevalence to help targeted public health intervention. We performed a systematic review of the literature by searching seven databases for studies published between 1990 and 2018. We recategorised isolates to allow the analysis of fluoroquinolone resistance trends over the study period. The prevalence of multidrug resistance (MDR) and fluoroquinolone non-susceptibility (FQNS) in individual studies was illustrated by forest plots, and a random effects meta-analysis was performed, stratified by Global Burden of Disease (GBD) region and 5-year time period. Heterogeneity was assessed using the I² statistic. We present a descriptive analysis of ceftriaxone and azithromycin resistance. We identified 4557 articles, of which 384, comprising 124,347 isolates (94,616 S. Typhi and 29,731 S. Paratyphi A), met the pre-specified inclusion criteria. The majority (276/384; 72%) of studies were from South Asia; 40 (10%) articles were identified from Sub-Saharan Africa. With the exception of MDR S. Typhi in South Asia, which declined between 1990 and 2018, and MDR S. Paratyphi A, which remained at low levels, resistance trends worsened for all antimicrobials in all regions. We identified several data gaps in Africa and the Middle East. Incomplete reporting of antimicrobial susceptibility testing (AST) and lack of quality assurance were identified. Drug-resistant enteric fever is widespread in low- and middle-income countries, and the situation is worsening.
It is essential that public health and clinical measures, which include improvements in water quality and sanitation, the deployment of S. Typhi vaccination, and an informed choice of treatment, are implemented. However, there is no licenced vaccine for S. Paratyphi A. The standardised reporting of AST data and rollout of external quality control assessment are urgently needed to facilitate evidence-based policy and practice. PROSPERO CRD42018029432.
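The stratified random-effects pooling described above can be sketched with a DerSimonian-Laird estimator; the four study prevalences and sample sizes below are invented for illustration, not drawn from the review.

```python
import numpy as np

def random_effects_pool(p, n):
    """DerSimonian-Laird random-effects pooling of study prevalences.

    p : observed prevalence per study; n : sample size per study.
    Returns the pooled prevalence, tau^2 (between-study variance), and
    the I^2 heterogeneity statistic (percent).
    """
    p, n = np.asarray(p, float), np.asarray(n, float)
    var = p * (1 - p) / n                       # within-study variance (normal approx.)
    w = 1 / var                                 # fixed-effect weights
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)          # Cochran's Q
    df = len(p) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # DL estimate of between-study variance
    w_star = 1 / (var + tau2)                   # random-effects weights
    pooled = np.sum(w_star * p) / np.sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, tau2, i2

# Hypothetical MDR prevalence from four studies in one GBD region.
pooled, tau2, i2 = random_effects_pool([0.30, 0.45, 0.22, 0.38], [120, 85, 200, 150])
print(round(pooled, 3), round(i2, 1))
```

An I² near or above 75%, which is common in AMR prevalence data, signals that most of the observed variation reflects true between-study differences rather than sampling error.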

  • Travel time to health facilities in areas of outbreak potential: maps for guiding local preparedness and response
    BMC Med. (IF 8.285) Pub Date : 2019-12-30
    E. N. Hulland; K. E. Wiens; S. Shirude; J. D. Morgan; A. Bertozzi-Villa; T. H. Farag; N. Fullman; M. U. G. Kraemer; M. K. Miller-Petrie; V. Gupta; R. C. Reiner; P. Rabinowitz; J. N. Wasserheit; B. P. Bell; S. I. Hay; D. J. Weiss; D. M. Pigott

    Repeated outbreaks of emerging pathogens underscore the need for preparedness plans to prevent, detect, and respond. As countries develop and improve National Action Plans for Health Security, addressing subnational variation in preparedness is increasingly important. One facet of preparedness and mitigating disease transmission is health facility accessibility, linking infected persons with health systems and vice versa. Where potential patients can access care, local facilities must ensure they can appropriately diagnose, treat, and contain disease spread to prevent secondary transmission; where patients cannot readily access facilities, alternate plans must be developed. Here, we use travel time to link facilities and populations at risk of viral hemorrhagic fevers (VHFs) and identify spatial variation in these respective preparedness demands. We used geospatial resources of travel friction, pathogen environmental suitability, and health facilities to determine facility accessibility of any at-risk location within a country. We considered in-country and cross-border movements of exposed populations and highlighted vulnerable populations where current facilities are inaccessible and new infrastructure would reduce travel times. We developed profiles for 43 African countries. Resulting maps demonstrate gaps in health facility accessibility and highlight facilities closest to areas at risk for VHF spillover. For instance, in the Central African Republic, we identified travel times of over 24 h to access a health facility. Some countries had more uniformly short travel times, such as Nigeria, although regional disparities exist. For some populations, including many in Botswana, access to areas at risk for VHF nationally was low but proximity to suitable spillover areas in bordering countries was high. Additional analyses provide insights for considering future resource allocation. 
We provide a contemporary use case for these analyses for the ongoing Ebola outbreak. These maps demonstrate the use of geospatial analytics for subnational preparedness, identifying facilities close to at-risk populations for prioritizing readiness to detect, treat, and respond to cases and highlighting where gaps in health facility accessibility exist. We identified cross-border threats for VHF exposure and demonstrate an opportunity to improve preparedness activities through the use of precision public health methods and data-driven insights for resource allocation as part of a country’s preparedness plans.
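The accessibility surfaces described above rest on least-cost travel-time computation over a friction raster, which can be sketched as a multi-source Dijkstra search. The 3 × 3 grid and facility location here are toy assumptions; the published analysis used much larger gridded friction surfaces.

```python
import heapq

def travel_time(friction, facilities):
    """Multi-source Dijkstra: minutes to the nearest facility for each cell.

    friction : 2D list of minutes needed to cross each grid cell.
    facilities : list of (row, col) cells containing a health facility.
    """
    rows, cols = len(friction), len(friction[0])
    INF = float("inf")
    time = [[INF] * cols for _ in range(rows)]
    heap = [(0.0, r, c) for r, c in facilities]
    for _, r, c in heap:
        time[r][c] = 0.0
    heapq.heapify(heap)
    while heap:
        t, r, c = heapq.heappop(heap)
        if t > time[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + friction[nr][nc]
                if nt < time[nr][nc]:
                    time[nr][nc] = nt
                    heapq.heappush(heap, (nt, nr, nc))
    return time

# Toy 3x3 friction surface (minutes per cell); one facility at top-left.
grid = [[1, 1, 5],
        [1, 9, 5],
        [1, 1, 1]]
t = travel_time(grid, [(0, 0)])
print(t[2][2])  # → 4.0, routed around the high-friction centre cell
```

The same idea, run over national friction rasters with all known facilities as sources, yields the travel-time maps used to flag populations more than a threshold (e.g. 24 h) from care.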

  • The NASSS framework for ex post theorisation of technology-supported change in healthcare: worked example of the TORPEDO programme
    BMC Med. (IF 8.285) Pub Date : 2019-12-30
    Seye Abimbola; Bindu Patel; David Peiris; Anushka Patel; Mark Harris; Tim Usherwood; Trisha Greenhalgh

    Evaluation of health technology programmes should be theoretically informed, interdisciplinary, and generate in-depth explanations. The NASSS (non-adoption, abandonment, scale-up, spread, sustainability) framework was developed to study unfolding technology programmes in real time—and in particular to identify and manage their emergent uncertainties and interdependencies. In this paper, we offer a worked example of how NASSS can also inform ex post (i.e. retrospective) evaluation. We studied the TORPEDO (Treatment of Cardiovascular Risk in Primary Care using Electronic Decision Support) research programme, a multi-faceted computerised quality improvement intervention for cardiovascular disease prevention in Australian general practice. The technology (HealthTracker) had shown promise in a cluster randomised controlled trial (RCT), but its uptake and sustainability in a real-world implementation phase was patchy. To explain this variation, we used NASSS to undertake secondary analysis of the multi-modal TORPEDO dataset (results and process evaluation of the RCT, survey responses, in-depth professional interviews, videotaped consultations) as well as a sample of new, in-depth narrative interviews with TORPEDO researchers. Ex post analysis revealed multiple areas of complexity whose influence and interdependencies helped explain the wide variation in uptake and sustained use of the HealthTracker technology: the nature of cardiovascular risk in different populations, the material properties and functionality of the technology, how value (financial and non-financial) was distributed across stakeholders in the system, clinicians’ experiences and concerns, organisational preconditions and challenges, extra-organisational influences (e.g. policy incentives), and how interactions between all these influences unfolded over time. 
The NASSS framework can be applied retrospectively to generate a rich, contextualised narrative of technology-supported change efforts and the numerous interacting influences that help explain their successes, failures, and unexpected events. A NASSS-informed ex post analysis can supplement earlier, contemporaneous evaluations to uncover factors that were not apparent or predictable at the time but proved dynamic and emergent.

  • Suicide prevention and depression apps’ suicide risk assessment and management: a systematic assessment of adherence to clinical guidelines
    BMC Med. (IF 8.285) Pub Date : 2019-12-19
    Laura Martinengo; Louise Van Galen; Elaine Lum; Martin Kowalski; Mythily Subramaniam; Josip Car

    There are an estimated 800,000 suicides per year globally, and approximately 16,000,000 suicide attempts. Mobile apps may help address the unmet needs of people at risk. We assessed adherence of suicide prevention advice in depression management and suicide prevention apps to six evidence-based clinical guideline recommendations: mood and suicidal thought tracking, safety plan development, recommendation of activities to deter suicidal thoughts, information and education, access to support networks, and access to emergency counseling. A systematic assessment of depression and suicide prevention apps available in Google Play and Apple’s App Store was conducted. Apps were identified by searching 42matters in January 2019 for apps launched or updated since January 2017 using the terms “depression,” “depressed,” “depress,” “mood disorders,” “suicide,” and “self-harm.” General characteristics of apps, adherence with six suicide prevention strategies identified in evidence-based clinical guidelines using a 50-question checklist developed by the study team, and trustworthiness of the app based on HONcode principles were appraised and reported as a narrative review, using descriptive statistics. The initial search yielded 2690 potentially relevant apps. Sixty-nine apps met inclusion criteria and were systematically assessed. There were 20 depression management apps (29%), 3 (4%) depression management and suicide prevention apps, and 46 (67%) suicide prevention apps. Eight (12%) depression management apps were chatbots. Only 5/69 apps (7%) incorporated all six suicide prevention strategies. Six apps (6/69, 9%), including two apps available in both app stores and downloaded more than one million times each, provided an erroneous crisis helpline number. Most apps included emergency contact information (65/69 apps, 94%) and direct access to a crisis helpline through the app (46/69 apps, 67%). 
Non-existent or inaccurate suicide crisis helpline phone numbers were provided by mental health apps downloaded more than 2 million times. Only five out of 69 depression and suicide prevention apps offered all six evidence-based suicide prevention strategies. This demonstrates a failure of self-governance, and of quality and safety assurance, by the Apple and Google app stores and the health app industry. Governance levels should be stratified by the risks and benefits to users of the app, such as when suicide prevention advice is provided.

  • Delirium is prevalent in older hospital inpatients and associated with adverse outcomes: results of a prospective multi-centre study on World Delirium Awareness Day
    BMC Med. (IF 8.285) Pub Date : 2019-12-14

    Delirium is a common severe neuropsychiatric condition secondary to physical illness, which predominantly affects older adults in hospital. Prior to this study, the UK point prevalence of delirium was unknown. We set out to ascertain the point prevalence of delirium across UK hospitals and how this relates to adverse outcomes. We conducted a prospective observational study across 45 UK acute care hospitals. Older adults aged 65 years and older were screened and assessed for evidence of delirium on World Delirium Awareness Day (14th March 2018). We included patients admitted within the previous 48 h, excluding critical care admissions. The point prevalence of Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) delirium diagnosis was 14.7% (222/1507). Delirium presence was associated with higher Clinical Frailty Scale (CFS) scores: CFS 4–6 (frail) (OR 4.80, CI 2.63–8.74) and 7–9 (very frail) (OR 9.33, CI 4.79–18.17), compared to 1–3 (fit). However, higher CFS was associated with reduced delirium recognition (7–9 compared to 1–3; OR 0.16, CI 0.04–0.77). In multivariable analyses, delirium was associated with increased length of stay (+ 3.45 days, CI 1.75–5.07) and increased mortality (OR 2.43, CI 1.44–4.09) at 1 month. Screening for delirium was associated with an increased chance of recognition (OR 5.47, CI 2.67–11.21). Delirium is prevalent in older adults in UK hospitals but remains under-recognised. Frailty is strongly associated with the development of delirium, but delirium is less likely to be recognised in frail patients. The presence of delirium is associated with increased mortality and length of stay at 1 month. A national programme to increase screening has the potential to improve recognition.
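The headline figure is a simple proportion; as a sketch, the point prevalence and a Wilson 95% CI can be computed as below. The interval is our illustrative addition — the abstract reports only the point estimate.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion (k events in n trials)."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

k, n = 222, 1507              # DSM-5 delirium cases among screened inpatients
print(round(k / n * 100, 1))  # → 14.7
lo, hi = wilson_ci(k, n)
print(round(lo * 100, 1), round(hi * 100, 1))
```

The Wilson interval is preferred over the naive normal approximation for proportions near the tails, though at this sample size the two are close.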

  • Calibration: the Achilles heel of predictive analytics
    BMC Med. (IF 8.285) Pub Date : 2019-12-16
    Ben Van Calster; David J. McLernon; Maarten van Smeden; Laure Wynants; Ewout W. Steyerberg

    The assessment of calibration performance of risk prediction models based on regression or more flexible machine learning algorithms receives little attention. Herein, we argue that this needs to change immediately because poorly calibrated algorithms can be misleading and potentially harmful for clinical decision-making. We summarize how to avoid poor calibration at algorithm development and how to assess calibration at algorithm validation, emphasizing the balance between model complexity and the available sample size. At external validation, calibration curves require sufficiently large samples. Algorithm updating should be considered for appropriate support of clinical practice. Efforts are required to avoid poor calibration when developing prediction models, to evaluate calibration when validating models, and to update models when indicated. The ultimate aim is to optimize the utility of predictive analytics for shared decision-making and patient counseling.
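As a concrete illustration of the validation step argued for above, here is a minimal calibration-curve check in scikit-learn; the overconfident model is synthetic and contrived for demonstration.

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(1)

# True event probabilities and observed binary outcomes.
p_true = rng.uniform(0.05, 0.95, size=5000)
y = rng.binomial(1, p_true)

# A deliberately overconfident model: pushes probabilities toward 0 and 1.
p_model = p_true ** 2 / (p_true ** 2 + (1 - p_true) ** 2)

# Observed event rate vs mean predicted risk per bin; for a well-calibrated
# model the two columns track each other closely.
obs, pred = calibration_curve(y, p_model, n_bins=5)
for o, p in zip(obs, pred):
    print(f"predicted {p:.2f}  observed {o:.2f}")
```

Here the observed rates are pulled toward the centre relative to the model's extreme predictions — the classic overconfidence pattern that a calibration plot at external validation is designed to expose, and that recalibration (updating intercept and slope) can repair.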

  • Introduction of primary screening using high-risk HPV DNA detection in the Dutch cervical cancer screening programme: a population-based cohort study
    BMC Med. (IF 8.285) Pub Date : 2019-12-11
    Clare A. Aitken; Heleen M. E. van Agt; Albert G. Siebers; Folkert J. van Kemenade; Hubert G. M. Niesters; Willem J. G. Melchers; Judith E. M. Vedder; Rob Schuurman; Adriaan J. C. van den Brule; Hans C. van der Linden; John W. J. Hinrichs; Anco Molijn; Klaas J. Hoogduin; Bettien M. van Hemel; Inge M. C. M. de Kok

    In January 2017, the Dutch cervical cancer screening programme transitioned from cytomorphological to primary high-risk HPV (hrHPV) DNA screening, including the introduction of self-sampling, for women aged between 30 and 60 years. The Netherlands was the first country to switch to hrHPV screening at the national level. We investigated the health impact of this transition by comparing performance indicators from the new hrHPV-based programme with the previous cytology-based programme. We obtained data from the Dutch nationwide network and registry of histo- and cytopathology (PALGA) for 454,573 women eligible for screening in 2017 who participated in the hrHPV-based programme between 1 January 2017 and 30 June 2018 (maximum follow-up of almost 21 months) and for 483,146 women eligible for screening in 2015 who participated in the cytology-based programme between 1 January 2015 and 31 March 2016 (maximum follow-up of 40 months). We compared indicators of participation (participation rate), referral (screen positivity; referral rate) and detection (cervical intraepithelial neoplasia (CIN) detection; number of referrals per detected CIN lesion). Participation in the hrHPV-based programme was significantly lower than that in the cytology-based programme (61% vs 64%). Screen positivity and direct referral rates were significantly higher in the hrHPV-based programme (positivity rate: 5% vs 9%; referral rate: 1% vs 3%). CIN2+ detection increased from 11 to 14 per 1000 women screened. Overall, approximately 2.2 times more clinically irrelevant findings (i.e. ≤CIN1) were found in the hrHPV-based programme, compared with approximately 1.3 times more clinically relevant findings (i.e. CIN2+); this difference was mostly due to a national policy change recommending colposcopy, rather than observation, of hrHPV-positive, ASC-US/LSIL results in the hrHPV-based programme.
This is the first time that comprehensive results of nationwide implementation of hrHPV-based screening have been reported using high-quality data with a long follow-up. We have shown that both benefits and potential harms are higher in one screening round of a well-implemented hrHPV-based screening programme than in an established cytology-based programme. Lower participation in the new hrHPV programme may be due to factors such as invitation policy changes and the phased roll-out of the new programme. Our findings add further to evidence from trials and modelling studies on the effectiveness of hrHPV-based screening.

  • Therapeutic effect of nanoliposomal PCSK9 vaccine in a mouse model of atherosclerosis
    BMC Med. (IF 8.285) Pub Date : 2019-12-10
    Amir Abbas Momtazi-Borojeni; Mahmoud Reza Jaafari; Ali Badiee; Maciej Banach; Amirhossein Sahebkar

    Proprotein convertase subtilisin/kexin 9 (PCSK9) is an important regulator of the low-density lipoprotein receptor (LDLR) and of plasma levels of LDL cholesterol (LDL-C). PCSK9 inhibition is an efficient therapeutic approach for the treatment of dyslipidemia. We tested the therapeutic effect of a PCSK9 vaccine on dyslipidemia and atherosclerosis. A lipid film hydration method was used to prepare negatively charged nanoliposomes as a vaccine delivery system. An immunogenic peptide called immunogenic fused PCSK9-tetanus (IFPT) was incorporated on the surface of nanoliposomes using DSPE-PEG-maleimide lipid (L-IFPT) and adsorbed to Alhydrogel® (L-IFPTA+). The prepared vaccine formulation (L-IFPTA+) and empty liposomes (negative control) were inoculated four times at bi-weekly intervals in C57BL/6 mice on the background of a severe atherogenic diet and poloxamer 407 (thrice weekly) injection. Antibody titers were evaluated 2 weeks after each vaccination and at the end of the study in vaccinated mice. Effects of anti-PCSK9 vaccination on plasma concentrations of PCSK9 and its interaction with LDLR were determined using ELISA. To evaluate the inflammatory response, interferon-gamma (IFN-γ)- and interleukin (IL)-10-producing splenic cells were assayed using ELISpot analysis. The L-IFPTA+ vaccine induced a high IgG antibody response against the PCSK9 peptide in the vaccinated hypercholesterolemic mice. L-IFPTA+-induced antibodies specifically targeted PCSK9 and decreased its plasma concentration by up to 58.5% (− 164.7 ± 9.6 ng/mL, p = 0.0001) compared with the control. A PCSK9-LDLR binding assay showed that the generated antibodies could inhibit the PCSK9-LDLR interaction. The L-IFPTA+ vaccine reduced total cholesterol, LDL-C, and VLDL-C by up to 44.7%, 51.7%, and 19.2%, respectively, after the fourth vaccination booster, compared with the control group at week 8.
Long-term studies of vaccinated hypercholesterolemic mice revealed that the L-IFPTA+ vaccine was able to induce a long-lasting humoral immune response against PCSK9 peptide, which was paralleled by a significant decrease of LDL-C by up to 42% over 16 weeks post-prime immunization compared to control. Splenocytes isolated from the vaccinated group showed increased IL-10-producing cells and decreased IFN-γ-producing cells when compared with control and naive mice, suggesting the immune safety of the vaccine. L-IFPTA+ vaccine could generate long-lasting, functional, and safe PCSK9-specific antibodies in C57BL/6 mice with severe atherosclerosis, which was accompanied by long-term therapeutic effect against hypercholesterolemia and atherosclerosis.

  • Differential impact of malaria control interventions on P. falciparum and P. vivax infections in young Papua New Guinean children
    BMC Med. (IF 8.285) Pub Date : 2019-12-09
    Maria Ome-Kaius; Johanna Helena Kattenberg; Sophie Zaloumis; Matthew Siba; Benson Kiniboro; Shadrach Jally; Zahra Razook; Daisy Mantila; Desmond Sui; Jason Ginny; Anna Rosanas-Urgell; Stephan Karl; Thomas Obadia; Alyssa Barry; Stephen J. Rogerson; Moses Laman; Daniel Tisch; Ingrid Felger; James W. Kazura; Ivo Mueller; Leanne J. Robinson

    As malaria transmission declines, understanding the differential impact of intensified control on Plasmodium falciparum relative to Plasmodium vivax and identifying key drivers of ongoing transmission are essential to guide future interventions. Three longitudinal child cohort studies were conducted in Papua New Guinea before (2006/2007), during (2008) and after (2013) the scale-up of control interventions. In each cohort, children aged 1–5 years were actively monitored for infection and illness. The incidence of malaria episodes, the molecular force of blood-stage infections (molFOB) and the population-averaged prevalence of infections were compared across the cohorts to investigate the impact of intensified control in young children and the key risk factors for malaria infection and illness in 2013. Between 2006 and 2008, P. falciparum infection prevalence, molFOB, and clinical malaria episodes reduced by 47%, 59% and 69%, respectively, and by a further 49%, 29% and 75% from 2008 to 2013 (prevalence 41.6% to 22.1% to 11.2%; molFOB: 3.4 to 1.4 to 1.0 clones/child/year; clinical episode incidence rate (IR) 2.6 to 0.8 to 0.2 episodes/child/year). P. vivax clinical episodes declined at rates comparable to P. falciparum between 2006, 2008 and 2013 (IR 2.5 to 1.1 to 0.2), while P. vivax molFOB (2006, 9.8; 2008, 12.1) and prevalence (2006, 59.6%; 2008, 65.0%) remained high in 2008. However, by 2013, P. vivax molFOB (1.2) and prevalence (19.7%) had also substantially declined. In 2013, 89% of P. falciparum and 93% of P. vivax infections were asymptomatic, and 62% and 47%, respectively, were sub-microscopic. Area of residence was the major determinant of malaria infection and illness. Intensified vector control and routine case management had a differential impact on rates of P. falciparum and P. vivax infections but not on clinical malaria episodes in young children. This suggests comparable reductions in new mosquito-derived infections but a delayed impact on P. vivax relapsing infections due to a previously acquired reservoir of hypnozoites. This demonstrates the need to strengthen the implementation of P. vivax radical cure to maximise the impact of control in co-endemic areas. The high heterogeneity of malaria in 2013 highlights the importance of surveillance and targeted interventions to accelerate towards elimination.

  • The quality of preventive care for pre-school aged children in Australian general practice
    BMC Med. (IF 8.285) Pub Date : 2019-12-06
    Louise K. Wiles; Carl de Wet; Chris Dalton; Elisabeth Murphy; Mark F. Harris; Peter D. Hibbert; Charlotte J. Molloy; Gaston Arnolda; Hsuen P. Ting; Jeffrey Braithwaite

    Variable and poor care quality are important causes of preventable patient harm. Many patients receive less than recommended care, but the extent of the problem remains largely unknown. The CareTrack Kids (CTK) research programme sought to address this evidence gap by developing a set of indicators to measure the quality of care for common paediatric conditions. In this study, we focus on one clinical area, ‘preventive care’ for pre-school aged children. Our objectives were two-fold: (i) develop and validate preventive care quality indicators and (ii) apply them in general medical practice to measure adherence. Clinical experts (n = 6) developed indicator questions (IQs) from clinical practice guideline (CPG) recommendations using a multi-stage modified Delphi process, which were pilot tested in general practice. The medical records of Australian children (n = 976) from general practices (n = 80) in Queensland, New South Wales and South Australia identified as having a consultation for one of 17 CTK conditions of interest were retrospectively reviewed by trained paediatric nurses. Statistical analyses were performed to estimate percentage compliance and its 95% confidence intervals. IQs (n = 43) and eight care ‘bundles’ were developed and validated. Care was delivered in line with the IQs in 43.3% of eligible healthcare encounters (95% CI 30.5–56.7). The bundles of care with the highest compliance were ‘immunisation’ (80.1%, 95% CI 65.7–90.4), ‘anthropometric measurements’ (52.7%, 95% CI 35.6–69.4) and ‘nutrition assessments’ (38.5%, 95% CI 24.3–54.3); compliance was lowest for ‘visual assessment’ (17.9%, 95% CI 8.2–31.9), ‘musculoskeletal examinations’ (24.4%, 95% CI 13.1–39.1) and ‘cardiovascular examinations’ (30.9%, 95% CI 12.3–55.5). This study is the first known attempt to develop specific preventive care quality indicators and measure their delivery to Australian children in general practice.
Our findings that preventive care is not reliably delivered to all Australian children and that there is substantial variation in adherence with the IQs provide a starting point for clinicians, researchers and policy makers when considering how the gap between recommended and actual care may be narrowed. The findings may also help inform the development of specific improvement interventions, incentives and national standards.

  • Regulating digital health technologies with transparency: the case for dynamic and multi-stakeholder evaluation
    BMC Med. (IF 8.285) Pub Date : 2019-12-03
    Elena Rodriguez-Villa; John Torous

    The prevalence of smartphones today, paired with the increasing precision and therapeutic potential of digital capabilities, offers unprecedented opportunity in the field of digital medicine. Smartphones offer novel accessibility, unique insights into physical and cognitive behavior, and diverse resources designed to aid health. Many of these digital resources, however, are developed and shared at a faster rate than they can be assessed for efficacy, safety, and security—presenting patients and clinicians with the challenge of distinguishing helpful tools from harmful ones. Leading regulators, such as the FDA in the USA and the NHS in the UK, are working to evaluate the influx of mobile health applications entering the market. Efforts to regulate, however, are challenged by the need for more transparency. They require real-world data on the actual use, effects, benefits, and harms of these digital health tools. Given rapid product cycles and frequent updates, even the most thorough evaluation is only as accurate as the data it is based on. In this debate piece, we propose a complementary approach to ongoing efforts via a dynamic self-certification checklist. We outline how simple self-certification, validated or challenged by app users, would enhance transparency, engage diverse stakeholders in meaningful education and learning, and incentivize the design of safe and secure medical apps.

  • Patterns of symptoms before a diagnosis of first episode psychosis: a latent class analysis of UK primary care electronic health records
    BMC Med. (IF 8.285) Pub Date : 2019-12-04
    Ying Chen; Saeed Farooq; John Edwards; Carolyn A. Chew-Graham; David Shiers; Martin Frisher; Richard Hayward; Athula Sumathipala; Kelvin P. Jordan

    The nature of symptoms in the prodromal period of first episode psychosis (FEP) remains unclear. The objective was to determine the patterns of symptoms recorded in primary care in the 5 years before FEP diagnosis. The study was set within 568 practices contributing to a UK primary care health record database (Clinical Practice Research Datalink). Patients aged 16–45 years with a first coded record of FEP, and no antipsychotic prescription more than 1 year prior to FEP diagnosis (n = 3045), were age-, gender- and practice-matched to controls without FEP (n = 12,180). Fifty-five symptoms recorded in primary care in the previous 5 years, categorised into 8 groups (mood-related, ‘neurotic’, behavioural change, volition change, cognitive change, perceptual problem, substance misuse, physical symptoms), were compared between cases and controls. Common patterns of symptoms prior to FEP diagnosis were identified using latent class analysis. Median age at diagnosis was 30 years, and 63% were male. Non-affective psychosis (67%) was the most common diagnosis. Mood-related, ‘neurotic’, and physical symptoms were frequently recorded (> 30% of patients) before diagnosis, and behavioural change, volition change, and substance misuse were also common (> 10%). Prevalence of all symptom groups was higher in FEP patients than in controls (adjusted odds ratios 1.33–112). Median time from the first recorded symptom to FEP diagnosis was 2–2.5 years, except for perceptual problem (70 days). The optimal latent class model applied to FEP patients determined three distinct patient clusters: the ‘no or minimal symptom cluster’ (49%) had no or few symptoms recorded; the ‘affective symptom cluster’ (40%) mainly had mood-related and ‘neurotic’ symptoms; and the ‘multiple symptom cluster’ (11%) consulted for three or more symptom groups before diagnosis. Patients in the multiple symptom cluster were more likely to have drug-induced psychosis, to be female or obese, and to have a higher morbidity burden.
Affective and multiple symptom clusters showed a good discriminative ability (C-statistic 0.766; sensitivity 51.2% and specificity 86.7%) for FEP, and many patients in these clusters had consulted for their symptoms several years before FEP diagnosis. Distinctive patterns of prodromal symptoms may help alert general practitioners to those developing psychosis, facilitating earlier identification and referral to specialist care, thereby avoiding potentially detrimental treatment delay.

  • Determinants of high residual post-PCV13 pneumococcal vaccine-type carriage in Blantyre, Malawi: a modelling study
    BMC Med. (IF 8.285) Pub Date : 2019-12-05
    J. Lourenço; U. Obolski; T. D. Swarthout; A. Gori; N. Bar-Zeev; D. Everett; A. W. Kamng’ona; T. S. Mwalukomo; A. A. Mataya; C. Mwansambo; M. Banda; S. Gupta; N. French; R. S. Heyderman

    In November 2011, Malawi introduced the 13-valent pneumococcal conjugate vaccine (PCV13) into the routine infant schedule. Four to 7 years after introduction (2015–2018), rolling prospective nasopharyngeal carriage surveys were performed in the city of Blantyre. Carriage of Streptococcus pneumoniae vaccine serotypes (VT) remained higher than reported in high-income countries, and impact was asymmetric across age groups. A dynamic transmission model was fit to the survey data using a Bayesian Markov-chain Monte Carlo approach to obtain insights into the determinants of post-PCV13 age-specific VT carriage. Accumulation of naturally acquired immunity with age and age-specific transmission potential were both key to reproducing the observed data. VT carriage reduction peaked sequentially over time, earlier in younger and later in older age groups. Estimated vaccine efficacy (protection against carriage) was 66.87% (95% CI 50.49–82.26%), similar to previous estimates. Ten-year projected vaccine impact (VT carriage reduction) among 0–9-year-olds was lower than observed in other settings, at 76.23% (95% CI 68.02–81.96%), with sensitivity analyses demonstrating this to be mainly driven by a high local force of infection. There are both vaccine-related and host-related determinants of post-PCV13 pneumococcal VT transmission in Blantyre, with vaccine impact determined by an age-specific, local force of infection. These findings are likely to be generalisable to other Sub-Saharan African countries in which PCV impact on carriage (and therefore herd protection) has been lower than desired, and they have implications for the interpretation of post-PCV carriage studies and future vaccination programmes.

  • Causal association of type 2 diabetes with amyotrophic lateral sclerosis: new evidence from Mendelian randomization using GWAS summary statistics
    BMC Med. (IF 8.285) Pub Date : 2019-12-04
    Ping Zeng; Ting Wang; Junnian Zheng; Xiang Zhou

Associations between type 2 diabetes (T2D) and amyotrophic lateral sclerosis (ALS) were discovered in observational studies in both European and East Asian populations. However, whether such associations are causal remains largely unknown. We employed a two-sample Mendelian randomization approach to evaluate the causal relationship of T2D with the risk of ALS in both European and East Asian populations. Our analysis was implemented using summary statistics obtained from large-scale genome-wide association studies with ~660,000 individuals for T2D and ~81,000 individuals for ALS in the European population, and ~191,000 individuals for T2D and ~4100 individuals for ALS in the East Asian population. The causal relationship between T2D and ALS in both populations was estimated using inverse-variance-weighted methods and was further validated through extensive complementary and sensitivity analyses. Using multiple instruments strongly associated with T2D, a negative association between T2D and ALS was identified in the European population, with the odds ratio (OR) estimated to be 0.93 (95% CI 0.88–0.99, p = 0.023), while a positive association between T2D and ALS was observed in the East Asian population, with OR = 1.28 (95% CI 0.99–1.62, p = 0.058). These results were robust against instrument selection, various modeling misspecifications, and estimation biases, with Egger regression and MR-PRESSO ruling out horizontal pleiotropic effects of the instruments. However, no causal association was found between T2D-related exposures (including glycemic traits) and ALS in the European population. Our results provide new evidence supporting a causal neuroprotective role of T2D in ALS in the European population, and empirically suggestive evidence that T2D increases the risk of ALS in the East Asian population. These results have important implications for ALS pathology and pave the way for developing therapeutic strategies across multiple populations.
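The inverse-variance-weighted (IVW) estimator used above has a simple closed form. A minimal sketch, with all per-SNP summary statistics invented for illustration (they are not taken from the study):

```python
import math

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect IVW causal estimate from per-SNP summary statistics.

    Each instrument j contributes the ratio estimate beta_Yj / beta_Xj,
    weighted by (beta_Xj / se_Yj)^2, the approximate inverse variance of
    the ratio when exposure effects are treated as measured without error.
    """
    num = sum(bx * by / se ** 2
              for bx, by, se in zip(beta_exposure, beta_outcome, se_outcome))
    den = sum((bx / se) ** 2 for bx, se in zip(beta_exposure, se_outcome))
    return num / den

# Hypothetical summary statistics for three SNP instruments:
bx = [0.10, 0.15, 0.08]        # SNP -> T2D log-odds (exposure)
by = [-0.007, -0.011, -0.006]  # SNP -> ALS log-odds (outcome)
se = [0.003, 0.004, 0.0035]    # standard errors of the outcome effects

beta = ivw_estimate(bx, by, se)
print(f"IVW log-OR: {beta:.3f}, OR: {math.exp(beta):.3f}")
```

Random-effects IVW, Egger regression, and MR-PRESSO extend this basic estimator to detect and accommodate pleiotropy.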

  • Adherence to the World Cancer Research Fund/American Institute for Cancer Research cancer prevention recommendations and risk of in situ breast cancer in the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort
    BMC Med. (IF 8.285) Pub Date : 2019-12-02
    Nena Karavasiloglou; Anika Hüsing; Giovanna Masala; Carla H. van Gils; Renée Turzanski Fortner; Jenny Chang-Claude; Inge Huybrechts; Elisabete Weiderpass; Marc Gunter; Patrick Arveux; Agnès Fournier; Marina Kvaskoff; Anne Tjønneland; Cecilie Kyrø; Christina C. Dahm; Helene Tilma Vistisen; Marije F. Bakker; Maria-Jose Sánchez; María Dolores Chirlaque López; Carmen Santiuste; Eva Ardanaz; Virginia Menéndez; Antonio Agudo; Antonia Trichopoulou; Anna Karakatsani; Carlo La Vecchia; Eleni Peppa; Domenico Palli; Claudia Agnoli; Salvatore Panico; Rosario Tumino; Carlotta Sacerdote; Salma Tunå Butt; Signe Borgquist; Guri Skeie; Matthias Schulze; Timothy Key; Kay-Tee Khaw; Kostantinos K. Tsilidis; Merete Ellingjord-Dale; Elio Riboli; Rudolf Kaaks; Laure Dossus; Sabine Rohrmann; Tilman Kühn

Even though in situ breast cancer (BCIS) accounts for a large proportion of diagnosed breast cancers, few studies have investigated potential risk factors for BCIS. Their results suggest that some established risk factors for invasive breast cancer have a similar impact on BCIS risk, but large population-based studies on lifestyle factors and BCIS risk are lacking. Thus, we investigated the association between lifestyle and BCIS risk within the European Prospective Investigation into Cancer and Nutrition cohort. Lifestyle was operationalized by a score reflecting adherence to the World Cancer Research Fund/American Institute for Cancer Research (WCRF/AICR) cancer prevention recommendations. The recommendations utilized in these analyses were those pertinent to healthy body weight, physical activity, consumption of plant-based foods, energy-dense foods, red and processed meat, and sugary drinks and alcohol, as well as the recommendation on breastfeeding. Cox proportional hazards regression was used to assess the association between the lifestyle score and BCIS risk. The results are presented as hazard ratios (HR) with corresponding 95% confidence intervals (CI). After an overall median follow-up time of 14.9 years, 1277 BCIS cases were diagnosed. Greater adherence to the WCRF/AICR cancer prevention recommendations was not associated with BCIS risk (HR = 0.98, 95% CI 0.93–1.03 per one-unit increase; multivariable model). An inverse association between the lifestyle score and BCIS risk was observed in study centers where participants were recruited mainly via mammographic screening and attended additional screening throughout follow-up (HR = 0.85, 95% CI 0.73–0.99), but not in the remaining centers (HR = 0.99, 95% CI 0.94–1.05).
While we did not observe an overall association between lifestyle and BCIS risk, our results indicate that lifestyle is associated with BCIS risk among women recruited via screening programs and with regular screening participation. This suggests that a true inverse association between lifestyle habits and BCIS risk in the overall cohort may have been masked by a lack of information on screening attendance. The potential inverse association between lifestyle and BCIS risk in our analyses is consistent with the inverse associations between lifestyle scores and breast cancer risk reported from previous studies.

  • Insulin resistance and systemic metabolic changes in oral glucose tolerance test in 5340 individuals: an interventional study
    BMC Med. (IF 8.285) Pub Date : 2019-11-29
    Qin Wang; Jari Jokelainen; Juha Auvinen; Katri Puukka; Sirkka Keinänen-Kiukaanniemi; Marjo-Riitta Järvelin; Johannes Kettunen; Ville-Petteri Mäkinen; Mika Ala-Korpela

Insulin resistance (IR) is predictive of type 2 diabetes and associated with various metabolic abnormalities in fasting conditions. However, limited data are available on how IR affects metabolic responses in a non-fasting setting, yet this is the state people are mostly exposed to during waking hours in modern society. Here, we aim to comprehensively characterise the metabolic changes in response to an oral glucose tolerance test (OGTT) and assess the associations of these changes with IR. Blood samples were obtained at 0 (fasting baseline, right before glucose ingestion), 30, 60, and 120 min during the OGTT. Seventy-eight metabolic measures were analysed at each time point for a discovery cohort of 4745 middle-aged Finnish individuals and a replication cohort of 595 senior Finnish participants. We assessed the metabolic changes in response to glucose ingestion (percentage change relative to the fasting baseline) across the four time points and further compared the response profiles between five groups with different levels of IR and glucose intolerance. Further, the differences were tested with adjustment for covariates including sex, body mass index, systolic blood pressure, and fasting and 2-h glucose levels. The groups were defined as insulin sensitive with normal glucose (IS-NGT), insulin resistant with normal glucose (IR-NGT), impaired fasting glucose (IFG), impaired glucose tolerance (IGT), and new diabetes (NDM). IS-NGT and IR-NGT were defined as the first and fourth quartiles of fasting insulin among NGT individuals. Glucose ingestion induced multiple metabolic responses, including increased glycolysis intermediates and decreased branched-chain amino acids, ketone bodies, glycerol, and triglycerides. The IR-NGT subgroup showed smaller responses for these measures (mean 23%, interquartile range 9–34% at 120 min) compared to IS-NGT (34%, 23–44%, P < 0.0006 for difference, corrected for multiple testing). Notably, the three groups with glucose abnormality (IFG, IGT, and NDM) showed metabolic dysregulation similar to that of IR-NGT. The difference between IS-NGT and the other subgroups was largely explained by fasting insulin, but not by fasting or 2-h glucose. The findings were consistent after covariate adjustment and between the discovery and replication cohorts. Insulin-resistant non-diabetic individuals are exposed to a similar adverse postprandial metabolic milieu, and analogous cardiometabolic risk, as those with type 2 diabetes. The wide range of metabolic abnormalities associated with IR highlights the need for diabetes diagnostics and clinical care beyond glucose management.
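The per-measure response profiles described above are simple arithmetic: each post-load value is expressed as a percentage change relative to the fasting (0 min) sample. A minimal sketch with invented values (not data from the study):

```python
def pct_change_from_baseline(series):
    """Percent change of each OGTT time point relative to the fasting (first) value."""
    baseline = series[0]
    return [100.0 * (v - baseline) / baseline for v in series]

# Hypothetical metabolite concentrations at 0, 30, 60, and 120 min:
profile = pct_change_from_baseline([1.00, 1.20, 1.35, 1.10])
print(profile)  # 0% at baseline, then the rise-and-fall response shape
```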

  • Determinants and extent of weight recording in UK primary care: an analysis of 5 million adults’ electronic health records from 2000 to 2017
    BMC Med. (IF 8.285) Pub Date : 2019-11-29
    B. D. Nicholson; P. Aveyard; C. R. Bankhead; W. Hamilton; F. D. R. Hobbs; S. Lay-Flurrie

Excess weight and unexpected weight loss are associated with multiple disease states and increased morbidity and mortality, but weight measurement is not routine in many primary care settings. The aim of this study was to characterise who has had their weight recorded in UK primary care, how frequently, by whom, and in relation to which clinical events, symptoms and diagnoses. A longitudinal analysis of UK primary care electronic health record (EHR) data from 2000 to 2017 was performed. Descriptive statistics were used to summarise weight recording in terms of patient sociodemographic characteristics, health professional encounters, clinical events, symptoms and diagnoses. Negative binomial regression was used to model the likelihood of having a weight record each year, and Cox regression to model the likelihood of repeated weight recording. A total of 14,049,871 weight records were identified in the EHRs of 4,918,746 patients during the study period, representing 26,998,591 person-years of observation. Around a third of patients had a weight record each year. Forty-nine percent of weight records were repeated within a year, with an average time to a repeat weight record of 1.92 years. Weight records were most often taken by nursing staff (38–42%) and GPs (37–39%) as part of routine clinical care, such as chronic disease reviews (16%), medication reviews (6–8%) and health checks (6–7%), or were associated with consultations for contraception (5–8%), respiratory disease (5%) and obesity (1%). Patient characteristics independently associated with an increased likelihood of weight recording were as follows: female sex, younger and older adults, non-drinkers, ex-smokers, low or high BMI, being more deprived, diagnosed with a greater number of comorbidities and consulting more frequently. The effect of policy-level incentives to record weight did not appear to be sustained after they were removed. Weight recording is not a routine activity in UK primary care. It occurs for around a third of patients each year and is repeated on average every 2 years for these patients. It is more common among females, those with higher BMI and those with comorbidity. Incentive payments and their removal appear to be associated with increases and decreases in weight recording, respectively.

  • Economic analyses to inform public health decision-making for tuberculosis: the role of understanding implementation
    BMC Med. (IF 8.285) Pub Date : 2019-11-29
    Priya B. Shete; James G. Kahn

Tuberculosis (TB) kills more people than any other infectious disease in the world today [1]. The global public health response is rising accordingly. To address the considerable gaps in TB management in high-burden countries, there is increasing pressure for health systems to rapidly and effectively implement innovations in TB diagnosis and treatment. High-level policy guidance, such as the World Health Organization’s End TB Strategy [2], the Sustainable Development Goals (SDG) [3], and the United Nations’ (UN’s) High-Level Meeting in 2018 [4], all call for novel strategies to optimize the impact of these interventions. Modeling analyses, such as that recently published by Sohn and colleagues in BMC Medicine, are becoming increasingly instrumental in guiding policy decisions [5]. However, in most high-burden settings, it remains unknown how best to operationalize and scale up interventions at a sustainable cost – critical concerns of stakeholders and decision-makers. Research exists, but it generalizes poorly: delivery models for effective implementation of TB diagnosis and treatment innovations are often context-specific, influenced by local conditions in health system capacity and demand for services, and effectiveness is often narrowly or inconsistently defined. Together, these limitations greatly reduce the utility of cost-effectiveness and budgetary impact analyses, which are intended to inform the choices that programs must make about how to scale up novel technologies to maximize public health benefit. In their paper, Sohn and colleagues consider these extensive planning needs. The authors meticulously conceptualize and implement an epidemiologic and economic model to demonstrate how a national public health system, such as India’s Revised National Tuberculosis Programme (RNTCP), can design a cost-effective deployment of rapid point-of-care molecular diagnostic tests for TB (GeneXpert MTB/RIF; Cepheid, Sunnyvale, USA) [5].
They describe three main drivers of cost-effectiveness for GeneXpert implementation: volume of testing, costs of sputum transportation in a decentralized approach, and the level of pre-treatment loss to follow-up for patients presenting to microscopy centers. These drivers are not unique to India; they have been described in a variety of high TB burden settings in both low- and middle-income countries [6, 7]. The strengths of this analysis are in its use of sensitivity and threshold analyses to characterize the factors that public health policymakers can consider or manipulate in designing an efficient approach to GeneXpert deployment. By applying context-specific cost and utilization assumptions, public health policymakers can reveal target conditions under which scale-up of GeneXpert would be cost-effective from a health system perspective (a major goal for economic analyses). But is this type of analysis sufficient, and how do policymakers know these costs for complex and under-used field activities? Thus, challenges remain in comprehensive economic modeling of TB control. Drivers of GeneXpert cost-effectiveness, as described by Sohn and colleagues, notably depend on critical aspects of GeneXpert implementation and infrastructure. Human and capital costs related to infrastructure, capacity, and shared health systems resources, for example, are not routinely costed and, if mis-estimated, could skew results [8]. Empirical implementation costs that reflect supporting services or activities – either existing, or that would be required to achieve optimal effect – are left unacknowledged. Using the example of a decentralized approach to GeneXpert, this type of cost would include the true cost of starting, scaling and maintaining a sputum transportation network. 
Drawing on a previous cost-effectiveness analysis of GeneXpert implementation in India, Sohn and colleagues provide credible cost estimates; however, they also acknowledge the limited empirical implementation-based cost data that may affect their results [5]. More broadly, economic evaluations also usually omit the perspectives of decision-makers such as patients, clinics, individual providers, and community organizations, for whom measures of effect are often different from traditional clinical efficacy. Cost drivers can also affect care participation, which will ultimately affect the fidelity of implementation and outcomes [9]. It is a critical and continuing challenge to enhance cost and cost-effectiveness analyses to reflect real-world implementation and its constraints in TB. Many analyses would be more powerful for use in public health decision-making if they could realistically integrate costs for implementation strategies that optimize effectiveness. While they are a move in the right direction, studies using more real-world TB outcomes, such as case detection rates and pre-treatment loss to follow-up, are still the exception [8]. Additional outcomes of interest to stakeholders include those measuring integration, sustainability, and reach of novel interventions [10]. For example, while GeneXpert economic analyses acknowledge that decentralized testing services may become less cost-effective as the volume of testing decreases and infrastructure costs increase, they neglect the positive impact that such decentralization may have on improving the reach, accessibility, and equity of TB services for at-risk or underserved populations. Is it possible to quantify these less traditional outcomes and cost the intervention components necessary to achieve them in a way that is meaningful to decision-makers?
Perhaps more importantly, is there a way to integrate a more expansive interpretation of ‘effectiveness’ that allows decision-makers to choose implementation strategies for novel interventions that may be less efficient than the cost-effectiveness optimum, but ultimately more successful? In the future, innovative approaches to economic analyses that incorporate implementation factors will facilitate a more realistic and practical portrayal of the cost-effectiveness of TB innovations. This would provide a more actionable framework for public health decision-makers in programmatic and strategic planning to eliminate TB.

References
1. World Health Organization (WHO). Global Tuberculosis Programme. In: Global Tuberculosis Report 2018. Geneva: WHO; 2018.
2. Uplekar M, Weil D, Lonnroth K, Jaramillo E, Lienhardt C, Dias HM, et al. WHO's new end TB strategy. Lancet. 2015;385:1799–801.
3. Lonnroth K, Raviglione M. The WHO's new end TB strategy in the post-2015 era of the sustainable development goals. Trans R Soc Trop Med Hyg. 2016;110:148–50.
4. United Nations General Assembly. United to end tuberculosis: an urgent global response to a global epidemic. New York; 2018. https://www.un.org/pga/72/wp-content/uploads/sites/51/2018/09/Co-facilitators-Revised-text-Political-Declaraion-on-the-Fight-against-Tuberculosis.pdf. Accessed 11 Nov 2019.
5. Sohn H, Kasaie P, Kendall E, Gomez GB, Vassall A, Pai M, et al. Informing decision-making for universal access to quality tuberculosis diagnosis in India: an economic-epidemiological model. BMC Med. 2019;17:155.
6. Hsiang E, Little KM, Haguma P, Hanrahan CF, Katamba A, Cattamanchi A, et al. Higher cost of implementing Xpert® MTB/RIF in Ugandan peripheral settings: implications for cost-effectiveness. Int J Tuberc Lung Dis. 2016;20:1212–8.
7. Pooran A, Theron G, Zijenah L, Chanda D, Clowes P, Mwenge L, et al. Point of care Xpert MTB/RIF versus smear microscopy for tuberculosis diagnosis in southern African primary care clinics: a multicentre economic evaluation. Lancet Glob Health. 2019;7:e798–807.
8. Hauck K, Morton A, Chalkidou K, Chi YL, Culyer A, Levin C, et al. How can we evaluate the cost-effectiveness of health system strengthening? A typology and illustrations. Soc Sci Med. 2019;220:141–9.
9. Jones Rhodes WC, Ritzwoller DP, Glasgow RE. Stakeholder perspectives on costs and resource expenditures: tools for addressing economic issues most relevant to patients, providers, and clinics. Transl Behav Med. 2018;8:675–82.
10. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.

  • In utero exposure to mercury and childhood overweight or obesity: counteracting effect of maternal folate status
    BMC Med. (IF 8.285) Pub Date : 2019-11-28
    Guoying Wang; Jessica DiBari; Eric Bind; Andrew M. Steffens; Jhindan Mukherjee; Tami R. Bartell; David C. Bellinger; Xiumei Hong; Yuelong Ji; Mei-Cheng Wang; Marsha Wills-Karp; Tina L. Cheng; Xiaobin Wang

Low-dose mercury (Hg) exposure has been associated with cardiovascular diseases, diabetes, and obesity in adults, but the metabolic consequences of in utero Hg exposure are unknown. This study aimed to investigate the association between in utero Hg exposure and child overweight or obesity (OWO) and to explore whether adequate maternal folate can mitigate Hg toxicity. This prospective study included 1442 mother-child pairs recruited at birth and followed up to age 15 years. Maternal Hg in red blood cells and plasma folate levels were measured in samples collected 1–3 days after delivery (a proxy for third-trimester exposure). Adequate folate was defined as plasma folate ≥ 20.4 nmol/L. Childhood OWO was defined as body mass index ≥ 85th percentile for age and sex. The median (interquartile range) maternal Hg level was 2.11 (1.04–3.70) μg/L. The geometric mean (95% CI) maternal folate level was 31.1 (30.1–32.1) nmol/L. Maternal Hg levels were positively associated with child OWO from age 2–15 years, independent of maternal pre-pregnancy OWO, diabetes, and other covariates. Children in the highest quartile of Hg exposure had a 24% higher risk of OWO than those in the lowest quartile (RR = 1.24, 95% CI 1.05–1.47). Maternal pre-pregnancy OWO and/or diabetes additively enhanced Hg toxicity. The highest risk of child OWO was found among children of OWO and diabetic mothers in the top Hg quartile (RR = 2.06; 95% CI 1.56–2.71) compared to their counterparts. Furthermore, adequate maternal folate status mitigated Hg toxicity. Given top-quartile Hg exposure, adequate maternal folate was associated with a 34% reduction in child OWO risk (RR = 0.66, 95% CI 0.51–0.85) compared with insufficient maternal folate. There was a suggestive interaction between maternal Hg and folate levels on child OWO risk (p for interaction = 0.086).
In this US urban, multi-ethnic population, elevated in utero Hg exposure was associated with a higher risk of OWO in childhood, and such risk was enhanced by maternal OWO and/or diabetes and reduced by adequate maternal folate. These findings underscore the need to screen for Hg and to optimize maternal folate status, especially among mothers with OWO and/or diabetes.
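As a reminder of the arithmetic behind a relative risk such as RR = 1.24 for the top versus bottom Hg quartile, here is a minimal sketch with invented counts (the study's RRs come from adjusted regression models, not raw 2×2 tables):

```python
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Unadjusted risk ratio: risk in the exposed group divided by risk in the unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical: 124 of 360 children OWO in the top Hg quartile,
# versus 100 of 360 in the bottom quartile.
rr = relative_risk(124, 360, 100, 360)
print(f"RR = {rr:.2f}")  # RR = 1.24 for these invented counts
```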

  • Engagement, not personal characteristics, was associated with the seriousness of regulatory adjudication decisions about physicians: a cross-sectional study
    BMC Med. (IF 8.285) Pub Date : 2019-11-27
    Javier A. Caballero; Steve P. Brown

Outcomes of processes questioning a physician’s ability to practise (e.g. disciplinary or regulatory) may strongly impact their career and the care they provide. However, it is unclear what factors relate systematically to such outcomes. In this cross-sectional study, we investigated this via multivariate, step-wise statistical modelling of all 1049 physicians referred for regulatory adjudication at the UK medical tribunal from June 2012 to May 2017, within a population of 310,659. In order of increasing seriousness, outcomes were: no impairment (of ability to practise), impairment, suspension (of right to practise), or erasure (its loss). This gave adjusted odds ratios (ORs) for: age, race, sex, whether physicians first qualified domestically or internationally, area of practice (e.g. GP, specialist), source of initial referral, allegation type, whether physicians attended their outcome hearing, and whether they were legally represented at it. There was no systematic association between the seriousness of outcomes and the age, race, sex, domestic/international qualification, or area of practice of physicians (ORs, p ≥ 0.05), except for specialists, who tended to receive outcomes milder than suspension or erasure. Crucially, an apparent relationship of outcomes to age (Kruskal-Wallis, p = 0.009) or domestic/international qualification (χ², p = 0.014) disappeared once hearing attendance was controlled for (ORs, p ≥ 0.05). Both non-attendance and lack of legal representation were consistently related to more serious outcomes (ORs [95% confidence intervals], 5.28 [3.89, 7.18] and 1.87 [1.34, 2.60], respectively, p < 0.001). All else equal, personal characteristics and place of first qualification were unrelated to the seriousness of regulatory outcomes in the UK. Instead, engagement (attendance and legal representation), allegation type, and referral source were importantly associated with outcomes. All this may generalize to other countries and professions.

  • Can public health policies on alcohol and tobacco reduce a cancer epidemic? Australia's experience
    BMC Med. (IF 8.285) Pub Date : 2019-11-27
    Heng Jiang; Michael Livingston; Robin Room; Yong Gan; Dallas English; Richard Chenhall

Although long-term alcohol and tobacco use have been widely recognised as important risk factors for cancer, the impacts of alcohol and tobacco health policies on cancer mortality have not been examined in previous studies. This study aimed to estimate the association of key alcohol and tobacco policies or events in Australia with changes in overall and five specific types of cancer mortality between the 1950s and 2013. Annual population-based time-series data between 1911 and 2013 on per capita alcohol and tobacco consumption, together with head and neck (lip, oral cavity, pharynx, larynx and oesophagus), lung, breast, colorectal and anal, liver and total cancer mortality data from the 1950s to 2013, were collected from the Australian Bureau of Statistics and Cancer Council Victoria, the WHO Cancer Mortality Database and the Australian Institute of Health and Welfare. Policies significantly related to changes in alcohol and tobacco consumption were identified in an initial model. Intervention dummies with estimated lags were then developed based on these key alcohol and tobacco policies and events and inserted into time-series models to estimate the relation of the particular policy changes with cancer mortality. Liquor licence liberalisation in the 1960s was significantly associated with increases in the level of population drinking and thereafter in male cancer mortality. The introduction of random breath testing programs in Australia after 1976 was associated with a reduction in population drinking and thereafter in cancer mortality for both men and women. Meanwhile, the release of UK and US public health reports on tobacco in 1962 and 1964 and the ban on cigarette ads on TV and radio in 1976 were found to have been associated with a reduction in Australian tobacco consumption and thereafter a reduction in mortality from all cancer types except liver cancer.
Policy changes on alcohol and tobacco during the 1960s–1980s were associated with greater changes for men than for women, particularly for head and neck, lung and colorectum cancer sites. This study provides evidence that some changes to public health policies in Australia in the twentieth century were related to the changes in the population consumption of alcohol and tobacco, and in subsequent mortality from various cancers over the following 20 years.

  • Environmental enteric dysfunction: a review of potential mechanisms, consequences and management strategies
    BMC Med. (IF 8.285) Pub Date : 2019-11-25
    Kirkby D. Tickell; Hannah E. Atlas; Judd L. Walson

Environmental enteric dysfunction (EED) is an acquired enteropathy of the small intestine, characterized by enteric inflammation, villus blunting and a decreased villus-to-crypt ratio. EED has been associated with poor outcomes, including chronic malnutrition (stunting), wasting and reduced vaccine efficacy among children living in low-resource settings. As a result, EED may be a valuable interventional target for programs aiming to reduce childhood morbidity in low- and middle-income countries. Several highly plausible mechanisms link the proposed pathophysiology underlying EED to adverse outcomes, but causal attribution of these pathways has proved challenging. We provide an overview of recent studies evaluating the causes and consequences of EED. These include studies of the role of subclinical enteric infection as a primary cause of EED, and efforts to understand how EED-associated systemic inflammation and malabsorption may result in long-term morbidity. Finally, we outline recently completed and upcoming clinical trials that test novel interventions to prevent or treat this highly prevalent condition. Significant strides have been made in linking environmental exposure to enteric pathogens and toxins with EED, and in understanding the multifactorial mechanisms underlying this complex condition. Further insights may come from several ongoing and upcoming interventional studies trialing a variety of novel management strategies.

  • Growth faltering is associated with altered brain functional connectivity and cognitive outcomes in urban Bangladeshi children exposed to early adversity
    BMC Med. (IF 8.285) Pub Date : 2019-11-25
    Wanze Xie; Sarah K. G. Jensen; Mark Wade; Swapna Kumar; Alissa Westerlund; Shahria H. Kakon; Rashidul Haque; William A. Petri; Charles A. Nelson

    Stunting affects more than 161 million children worldwide and can compromise cognitive development beginning early in childhood. There is a paucity of research using neuroimaging tools in conjunction with sensitive behavioral assays in low-income settings, which has hindered researchers’ ability to explain how stunting impacts brain and behavioral development. We employed high-density EEG to examine associations among children’s physical growth, brain functional connectivity (FC), and cognitive development. We recruited participants from an urban impoverished neighborhood in Dhaka, Bangladesh. One infant cohort consisted of 92 infants whose height (length) was measured at 3, 4.5, and 6 months; EEG data were collected at 6 months; and cognitive outcomes were assessed using the Mullen Scales of Early Learning at 27 months. A second, older cohort consisted of 118 children whose height was measured at 24, 30, and 36 months; EEG data were collected at 36 months; and Intelligence Quotient (IQ) scores were assessed at 48 months. Height-for-age (HAZ) z-scores were calculated based on the World Health Organization standard. EEG FC in different frequency bands was calculated in the cortical source space. Linear regression and longitudinal path analysis were conducted to test the associations between variables, as well as the indirect effect of child growth on cognitive outcomes via brain FC. In the older cohort, we found that HAZ was negatively related to brain FC in the theta and beta frequency bands, which in turn was negatively related to children’s IQ score at 48 months. Longitudinal path analysis showed an indirect effect of HAZ on children’s IQ via brain FC in both the theta and beta bands. There were no associations between HAZ and brain FC or cognitive outcomes in the infant cohort. The association observed between child growth and brain FC may reflect a broad deleterious effect of malnutrition on children’s brain development. 
The mediation effect of FC on the relation between child growth and later IQ provides the first evidence suggesting that brain FC may serve as a neural pathway by which biological adversity impacts cognitive development.
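    The height-for-age z-scores used in this study follow the WHO growth standard, which is built on the LMS method. A minimal sketch of that calculation, using hypothetical L/M/S parameters rather than real WHO reference values:

    ```python
    from math import log

    def haz_lms(height_cm: float, L: float, M: float, S: float) -> float:
        """Height-for-age z-score via the LMS method (Box-Cox power L, median M, CV S)."""
        if L == 0:
            return log(height_cm / M) / S
        return ((height_cm / M) ** L - 1) / (L * S)

    # Hypothetical parameters for illustration only, not real WHO reference values:
    z = haz_lms(60.0, L=1.0, M=65.0, S=0.035)  # ≈ -2.20, i.e. stunted (< -2 SD)
    ```

    In practice the L, M and S values are looked up from the WHO tables by age and sex before the formula is applied.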

  • Tackling the triple threats of childhood malnutrition
    BMC Med. (IF 8.285) Pub Date : 2019-11-25
    Martha Mwangome; Andrew M. Prentice

    The term ‘double burden of malnutrition’ is usually interpreted in terms of the physical status of children: stunted and wasted children on the one hand and overweight/obese children on the other. There is a third category of malnutrition that can occur at either end of the anthropometric spectrum or, indeed, in children whose physical size may be close to ideal. This third type is most commonly articulated with the phrase ‘hidden hunger’ and is often illustrated by micronutrient deficiencies; thus, we refer to it here as ‘undernutrition’. As understanding of such issues advances, we realise that a myriad of factors may influence a child’s road to nutritional health. In this BMC Medicine article collection we consider these influences and the impact they have, such as: the state of the child’s environment; the effect this has on their risk of, and responses to, infection and on their gut; the consequences of poor nutrition on cognition and brain development; the key drivers of the obesity epidemic across the globe; and how undernourishment can affect a child’s body composition. This collection showcases recent advances in the field, but likewise highlights ongoing challenges in the battle to achieve adequate nutrition for children across the globe.

  • The epidemiological burden of obesity in childhood: a worldwide epidemic requiring urgent action
    BMC Med. (IF 8.285) Pub Date : 2019-11-25
    Mariachiara Di Cesare; Maroje Sorić; Pascal Bovet; J Jaime Miranda; Zulfiqar Bhutta; Gretchen A Stevens; Avula Laxmaiah; Andre-Pascal Kengne; James Bentham

    In recent decades, the prevalence of obesity in children has increased dramatically. This worldwide epidemic has important consequences, including psychiatric, psychological and psychosocial disorders in childhood and increased risk of developing non-communicable diseases (NCDs) later in life. Treatment of obesity is difficult and children with excess weight are likely to become adults with obesity. These trends have led member states of the World Health Organization (WHO) to endorse a target of no increase in obesity in childhood by 2025. Estimates of overweight in children aged under 5 years are available jointly from the United Nations Children’s Fund (UNICEF), WHO and the World Bank. The Institute for Health Metrics and Evaluation (IHME) has published country-level estimates of obesity in children aged 2–4 years. For children aged 5–19 years, obesity estimates are available from the NCD Risk Factor Collaboration. The global prevalence of overweight in children aged 5 years or under has increased modestly, but with heterogeneous trends in low- and middle-income regions, while the prevalence of obesity in children aged 2–4 years has increased moderately. In 1975, obesity in children aged 5–19 years was relatively rare, but was much more common in 2016. It is recognised that the key drivers of this epidemic form an obesogenic environment, which includes changing food systems and reduced physical activity. Although cost-effective interventions such as WHO ‘best buys’ have been identified, political will and implementation have so far been limited. There is therefore a need to implement effective programmes and policies in multiple sectors to address overnutrition, undernutrition, mobility and physical activity. To be successful, efforts to tackle the obesity epidemic must be a political priority, with these issues addressed both locally and globally. Work by governments, civil society, private corporations and other key stakeholders must be coordinated.

  • Determinants of linear growth faltering among children with moderate-to-severe diarrhea in the Global Enteric Multicenter Study
    BMC Med. (IF 8.285) Pub Date : 2019-11-25
    Rebecca L. Brander; Patricia B. Pavlinac; Judd L. Walson; Grace C. John-Stewart; Marcia R. Weaver; Abu S. G. Faruque; Anita K. M. Zaidi; Dipika Sur; Samba O. Sow; M. Jahangir Hossain; Pedro L. Alonso; Robert F. Breiman; Dilruba Nasrin; James P. Nataro; Myron M. Levine; Karen L. Kotloff

    Moderate-to-severe diarrhea (MSD) in the first 2 years of life can impair linear growth. We sought to determine risk factors for linear growth faltering and to build a clinical prediction tool to identify children most likely to experience growth faltering following an episode of MSD. Using data from the Global Enteric Multicenter Study of children 0–23 months old presenting with MSD in Africa and Asia, we performed log-binomial regression to determine clinical and sociodemographic factors associated with severe linear growth faltering (loss of ≥ 0.5 length-for-age z-score [LAZ]). Linear regression was used to estimate associations with ΔLAZ. A clinical prediction tool was developed using backward elimination of potential variables, with the Akaike Information Criterion used to select the best-fit model. Among the 5902 included children, the mean age was 10 months and 43.2% were female. Over the 50–90-day follow-up period, 24.2% of children had severe linear growth faltering and the mean ΔLAZ over follow-up was − 0.17 (standard deviation [SD] 0.54). After adjustment for age, baseline LAZ, and site, several factors were associated with a decline in LAZ: young age, acute malnutrition, hospitalization at presentation, non-dysenteric diarrhea, unimproved sanitation, lower wealth, fever, co-morbidity, or an IMCI danger sign. Compared to children 12–23 months old, those 0–6 months were more likely to experience severe linear growth faltering (adjusted prevalence ratio [aPR] 1.97 [95% CI 1.70, 2.28]), as were children 6–12 months of age (aPR 1.72 [95% CI 1.51, 1.95]). A prediction model that included age, wasting, stunting, presentation with fever, and presentation with an IMCI danger sign had an area under the ROC curve (AUC) of 0.67 (95% CI 0.64, 0.69). Risk scores ranged from 0 to 37, and a cut-off of 21 maximized sensitivity (60.7%) and specificity (63.5%). 
Younger age, acute malnutrition, MSD severity, and sociodemographic factors were associated with short-term linear growth deterioration following MSD. Data routinely obtained at MSD may be useful to predict children at risk for growth deterioration who would benefit from interventions.
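    The reported cut-off of 21 trades sensitivity (60.7%) against specificity (63.5%). How such a cut-off is evaluated can be sketched as follows; the scores and outcomes below are toy values, not GEMS data:

    ```python
    def sens_spec(scores, outcomes, cutoff):
        """Sensitivity and specificity of the rule 'score >= cutoff' for a binary outcome."""
        tp = sum(1 for s, y in zip(scores, outcomes) if s >= cutoff and y)
        fn = sum(1 for s, y in zip(scores, outcomes) if s < cutoff and y)
        tn = sum(1 for s, y in zip(scores, outcomes) if s < cutoff and not y)
        fp = sum(1 for s, y in zip(scores, outcomes) if s >= cutoff and not y)
        return tp / (tp + fn), tn / (tn + fp)

    # Toy data: risk scores on the 0-37 scale, outcome = severe linear growth faltering
    scores   = [4, 9, 15, 22, 28, 33]
    faltered = [0, 0, 1, 0, 1, 1]
    sens, spec = sens_spec(scores, faltered, cutoff=21)  # (2/3, 2/3) on this toy set
    ```

    Scanning all candidate cut-offs and keeping the one with the best sensitivity/specificity balance is the usual way such a threshold is chosen.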

  • Body composition of children with moderate and severe undernutrition and after treatment: a narrative review
    BMC Med. (IF 8.285) Pub Date : 2019-11-25
    Jonathan C. K. Wells

    Until recently, undernourished children were usually assessed using simple anthropometric measurements, which provide global assessments of nutritional status. There is increasing interest in obtaining more direct data on body composition to assess the effects of undernutrition on fat-free mass (FFM) and its constituents, such as muscle and organs, and on fat mass (FM) and its regional distribution. Recent studies show that severe acute undernutrition, categorised as ‘wasting’, is associated with major deficits in both FFM and FM that may persist in the long term. Fat distribution appears more central, but this is more associated with the loss of peripheral fat than with the elevation of central fat. Chronic undernutrition, categorised as ‘stunting’, is associated with deficits in FFM and in specific components, such as organ size. However, the magnitude of these deficits is reduced, or – in some cases – disappears, after adjustment for height. This suggests that FFM is largely reduced in proportion to linear growth. Stunted children vary in their FM – in some cases remaining thin throughout childhood, but in other cases developing higher levels of FM. The causes of this heterogeneity remain unclear. Several different pathways may underlie longitudinal associations between early stunting and later body composition. Importantly, recent studies suggest that short children are not at risk of excess fat deposition in the short term when given nutritional supplementation. The short- and long-term functional significance of FFM and FM for survival, physical capacity and non-communicable disease risk means that both tissues merit further attention in research on child undernutrition.

  • Challenges in management and prevention of ischemic heart disease in low socioeconomic status people in LLMICs
    BMC Med. (IF 8.285) Pub Date : 2019-11-26
    Rajeev Gupta; Salim Yusuf

    Cardiovascular diseases, principally ischemic heart disease (IHD), are the most important cause of death and disability in the majority of low- and lower-middle-income countries (LLMICs). In these countries, IHD mortality rates are significantly greater in individuals of a low socioeconomic status (SES). Three important focus areas for decreasing IHD mortality among those of low SES in LLMICs are (1) acute coronary care; (2) cardiac rehabilitation and secondary prevention; and (3) primary prevention. Greater mortality in low SES patients with acute coronary syndrome is due to lack of awareness of symptoms in patients and primary care physicians, delay in reaching healthcare facilities, non-availability of thrombolysis and coronary revascularization, and the non-affordability of expensive medicines (statins, dual anti-platelets, renin-angiotensin system blockers). Facilities for rapid diagnosis, together with accessible and affordable long-term IHD care at secondary and tertiary hospitals, are needed. A strong focus on the social determinants of health (low education, poverty, working and living conditions), greater healthcare financing, and efficient primary care is required. The quality of primary prevention needs to be improved with initiatives to eliminate tobacco and trans-fats and to reduce the consumption of alcohol, refined carbohydrates, and salt along with the promotion of healthy foods and physical activity. Efficient primary care with a focus on management of blood pressure, lipids and diabetes is needed. Task sharing with community health workers, electronic decision support systems, and use of fixed-dose combinations of blood pressure-lowering drugs and statins can substantially reduce risk factors and potentially lead to large reductions in IHD. Finally, training of physicians, nurses, and health workers in IHD prevention should be strengthened. The management and prevention of IHD in individuals with a low SES in LLMICs are poor. 
Greater availability, access, and affordability for acute coronary syndrome management and secondary prevention are important. Primary prevention should focus on tackling the social determinants of health as well as policy and individual interventions for risk factor control, supported by task sharing and use of technology.

  • Profiling Mycobacterium tuberculosis transmission and the resulting disease burden in the five highest tuberculosis burden countries
    BMC Med. (IF 8.285) Pub Date : 2019-11-22
    Romain Ragonnet; James M. Trauer; Nicholas Geard; Nick Scott; Emma S. McBryde

    Tuberculosis (TB) control efforts are hampered by an imperfect understanding of TB epidemiology. The true age distribution of disease is unknown because a large proportion of individuals with active TB remain undetected. Understanding of transmission is limited by the asymptomatic nature of latent infection and the pathogen’s capacity for late reactivation. A better understanding of TB epidemiology is critically needed to ensure effective use of existing and future control tools. We use an agent-based model to simulate TB epidemiology in the five highest TB burden countries—India, Indonesia, China, the Philippines and Pakistan—providing unique insights into patterns of transmission and disease. Our model replicates demographically realistic populations, explicitly capturing social contacts between individuals based on local estimates of age-specific contact in household, school and workplace settings. Time-varying programmatic parameters are incorporated to account for the local history of TB control. We estimate that the 15–19-year-old age group is involved in more than 20% of transmission events in India, Indonesia, the Philippines and Pakistan, despite representing only 5% of the local TB incidence. According to our model, childhood TB represents around one fifth of the incident TB cases in these four countries. In China, three quarters of incident TB were estimated to occur in the ≥ 45-year-old population. The calibrated per-contact transmission risk was found to be similar in each of the five countries despite their very different TB burdens. Adolescents and young adults are a major driver of TB in high-incidence settings. Relying only on the observed distribution of disease to understand the age profile of transmission is potentially misleading.
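    The calibrated per-contact transmission risk combines with each individual's number of contacts to drive onward spread. Under a simple independence assumption (an illustration only, not the paper's agent-based model), the probability that a case transmits to at least one contact is:

    ```python
    def p_any_transmission(n_contacts: int, per_contact_risk: float) -> float:
        """Probability that an infectious case transmits to at least one of n contacts,
        assuming independent, identical per-contact risk (an illustrative simplification)."""
        return 1 - (1 - per_contact_risk) ** n_contacts

    # Illustrative values only; the model calibrates the per-contact risk per country
    p = p_any_transmission(20, 0.002)  # ≈ 0.039
    ```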

  • Validation of the AJCC prognostic stage for HER2-positive breast cancer in the ShortHER trial
    BMC Med. (IF 8.285) Pub Date : 2019-11-21
    Maria Vittoria Dieci; Giancarlo Bisagni; Alba A. Brandes; Antonio Frassoldati; Luigi Cavanna; Francesco Giotta; Michele Aieta; Vittorio Gebbia; Antonino Musolino; Ornella Garrone; Michela Donadio; Anita Rimanti; Alessandra Beano; Claudio Zamagni; Hector Soto Parra; Federico Piacentini; Saverio Danese; Antonella Ferro; Katia Cagossi; Samanta Sarti; Anna Rita Gambaro; Sante Romito; Viviana Bazan; Laura Amaducci; Gabriella Moretti; Maria Pia Foschini; Sara Balduzzi; Roberto Vicini; Roberto D’Amico; Gaia Griguolo; Valentina Guarneri; Pier Franco Conte

    The 8th edition of the American Joint Committee on Cancer (AJCC) staging has introduced prognostic stage based on anatomic stage combined with biologic factors. We aimed to validate the prognostic stage in HER2-positive breast cancer patients enrolled in the ShortHER trial. The ShortHER trial randomized 1253 HER2-positive patients to 9 weeks or 1 year of adjuvant trastuzumab combined with chemotherapy. Patients were classified according to the anatomic and the prognostic stage. Distant disease-free survival (DDFS) was calculated from randomization to distant relapse or death. A total of 1244 patients were included. Compared to anatomic stage, the prognostic stage downstaged 41.6% (n = 517) of patients to a more favorable stage category. Five-year DDFS based on anatomic stage was as follows: IA 96.6%, IB 94.1%, IIA 92.4%, IIB 87.3%, IIIA 81.3%, IIIC 70.5% (P < 0.001). Five-year DDFS according to prognostic stage was as follows: IA 95.7%, IB 91.4%, IIA 86.9%, IIB 85.0%, IIIA 77.6%, IIIC 67.7% (P < 0.001). The C index was similar (0.69209 and 0.69249, P = 0.975). Within anatomic stage I, the outcome was similar for patients treated with 9 weeks or 1 year of trastuzumab (5-year DDFS 96.2% and 96.6%, P = 0.856). Within prognostic stage I, the outcome was numerically worse for patients treated with 9 weeks of trastuzumab (5-year DDFS 93.7% and 96.3%, P = 0.080). The prognostic stage downstaged 41.6% of patients, while maintaining a prognostic performance similar to that of the anatomic stage. The prognostic stage is valuable in counseling patients and may serve as a reference for clinical trial design. Our data do not support prognostic stage as guidance to de-escalate treatment. EUDRACT number: 2007-004326-25; NCI ClinicalTrials.gov number: NCT00629278.

  • Accuracy in detecting inadequate research reporting by early career peer reviewers using an online CONSORT-based peer-review tool (COBPeer) versus the usual peer-review process: a cross-sectional diagnostic study
    BMC Med. (IF 8.285) Pub Date : 2019-11-19
    Anthony Chauvin; Philippe Ravaud; David Moher; David Schriger; Sally Hopewell; Daniel Shanahan; Sabina Alam; Gabriel Baron; Jean-Philippe Regnaux; Perrine Crequit; Valeria Martinez; Carolina Riveros; Laurence Le Cleach; Alessandro Recchioni; Douglas G. Altman; Isabelle Boutron

    The peer review process has been questioned as it may fail to ensure the publication of high-quality articles. This study aimed to evaluate the accuracy in identifying inadequate reporting in RCT reports by early career researchers (ECRs) using an online CONSORT-based peer-review tool (COBPeer) versus the usual peer-review process. We performed a cross-sectional diagnostic study of 119 manuscripts, from BMC series medical journals, BMJ, BMJ Open, and Annals of Emergency Medicine reporting the results of two-arm parallel-group RCTs. One hundred and nineteen ECRs who had never reviewed an RCT manuscript were recruited from December 2017 to January 2018. Each ECR assessed one manuscript. To assess accuracy in identifying inadequate reporting, we used two tests: (1) ECRs assessing a manuscript using the COBPeer tool (after completing an online training module) and (2) the usual peer-review process. The reference standard was the assessment of the manuscript by two systematic reviewers. Inadequate reporting was defined as incomplete reporting or a switch in primary outcome and considered nine domains: the eight most important CONSORT domains and a switch in primary outcome(s). The primary outcome was the mean number of domains accurately classified (scale from 0 to 9). The mean (SD) number of domains (0 to 9) accurately classified per manuscript was 6.39 (1.49) for ECRs using COBPeer versus 5.03 (1.84) for the journal’s usual peer-review process, with a mean difference [95% CI] of 1.36 [0.88–1.84] (p < 0.001). Concerning secondary outcomes, the sensitivity of ECRs using COBPeer versus the usual peer-review process in detecting incompletely reported CONSORT items was 86% [95% CI 82–89] versus 20% [16–24] and in identifying a switch in primary outcome 61% [44–77] versus 11% [3–26]. 
The specificity of ECRs using COBPeer versus the usual process to detect incompletely reported CONSORT domains was 61% [57–65] versus 77% [74–81] and to identify a switch in primary outcome 77% [67–86] versus 98% [92–100]. Trained ECRs using the COBPeer tool were more likely to detect inadequate reporting in RCTs than the usual peer-review process used by journals. Implementing a two-step peer-review process could help improve the quality of reporting. ClinicalTrials.gov NCT03119376 (registered April 18, 2017).

  • Bleeding in cardiac patients prescribed antithrombotic drugs: electronic health record phenotyping algorithms, incidence, trends and prognosis
    BMC Med. (IF 8.285) Pub Date : 2019-11-20
    Laura Pasea; Sheng-Chia Chung; Mar Pujades-Rodriguez; Anoop D. Shah; Samantha Alvarez-Madrazo; Victoria Allan; James T. Teo; Daniel Bean; Reecha Sofat; Richard Dobson; Amitava Banerjee; Riyaz S. Patel; Adam Timmis; Spiros Denaxas; Harry Hemingway

    Clinical guidelines and public health authorities lack recommendations on scalable approaches to defining and monitoring the occurrence and severity of bleeding in populations prescribed antithrombotic therapy. We examined linked primary care, hospital admission and death registry electronic health records (CALIBER 1998–2010, England) of patients with newly diagnosed atrial fibrillation, acute myocardial infarction, unstable angina or stable angina with the aim to develop algorithms for bleeding events. Using the developed bleeding phenotypes, Kaplan-Meier plots were used to estimate the incidence of bleeding events and we used Cox regression models to assess the prognosis for all-cause mortality, atherothrombotic events and further bleeding. We present electronic health record phenotyping algorithms for bleeding based on bleeding diagnosis in primary or hospital care, symptoms, transfusion, surgical procedures and haemoglobin values. In validation of the phenotype, we estimated a positive predictive value of 0.88 (95% CI 0.64, 0.99) for hospitalised bleeding. Amongst 128,815 patients, 27,259 (21.2%) had at least 1 bleeding event, with 5-year risks of bleeding of 29.1%, 21.9%, 25.3% and 23.4% following diagnoses of atrial fibrillation, acute myocardial infarction, unstable angina and stable angina, respectively. Rates of hospitalised bleeding per 1000 patients more than doubled from 1.02 (95% CI 0.83, 1.22) in January 1998 to 2.68 (95% CI 2.49, 2.88) in December 2009 coinciding with the increased rates of antiplatelet and vitamin K antagonist prescribing. Patients with hospitalised bleeding and primary care bleeding, with or without markers of severity, were at increased risk of all-cause mortality and atherothrombotic events compared to those with no bleeding. 
For example, the hazard ratio for all-cause mortality was 1.98 (95% CI 1.86, 2.11) for primary care bleeding with markers of severity and 1.99 (95% CI 1.92, 2.05) for hospitalised bleeding without markers of severity, compared to patients with no bleeding. Electronic health record bleeding phenotyping algorithms offer a scalable approach to monitoring bleeding in the population. Bleeding has doubled in incidence since 1998, affects one in four cardiovascular disease patients, and is associated with poor prognosis. Efforts are required to tackle this iatrogenic epidemic.
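    The phenotyping algorithms combine several EHR signals. A toy sketch of such a rule follows, with entirely hypothetical field names and threshold; the published CALIBER algorithms are far more detailed:

    ```python
    def flag_hospitalised_bleeding(admission: dict) -> bool:
        """Toy bleeding-phenotype rule: a diagnosis code, a transfusion, or a large
        haemoglobin drop flags the admission. Field names and threshold are hypothetical."""
        hb_drop = (admission.get("hb_baseline") is not None
                   and admission.get("hb_nadir") is not None
                   and admission["hb_baseline"] - admission["hb_nadir"] >= 2.0)  # g/dL
        return bool(admission.get("bleeding_diagnosis_code")
                    or admission.get("transfusion")
                    or hb_drop)

    flag_hospitalised_bleeding({"hb_baseline": 13.0, "hb_nadir": 10.2})  # True
    ```

    Validating such a rule against chart review, as the authors did, yields the positive predictive value quoted above.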

  • NG2 glia regulate brain innate immunity via TGF-β2/TGFBR2 axis
    BMC Med. (IF 8.285) Pub Date : 2019-11-15
    Shu-zhen Zhang; Qin-qin Wang; Qiao-qiao Yang; Huan-yu Gu; Yan-qing Yin; Yan-dong Li; Jin-can Hou; Rong Chen; Qing-qing Sun; Ying-feng Sun; Gang Hu; Jia-wei Zhou

    Brain innate immunity is vital for maintaining normal brain functions. Immune homeostatic imbalances play pivotal roles in the pathogenesis of neurological diseases including Parkinson’s disease (PD). However, the molecular and cellular mechanisms underlying the regulation of brain innate immunity and their significance in PD pathogenesis are still largely unknown. Cre-inducible diphtheria toxin receptor (iDTR) and diphtheria toxin-mediated cell ablation was performed to investigate the impact of neuron-glial antigen 2 (NG2) glia on the brain innate immunity. RNA sequencing analysis was carried out to identify differentially expressed genes in mouse brain with ablated NG2 glia and lipopolysaccharide (LPS) challenge. Neurotoxin 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP)-treated mice were used to evaluate the neuroinflammatory response in the presence or absence of NG2 glia. The survival of dopaminergic neurons or glial cell activation was evaluated by immunohistochemistry. Co-cultures of NG2 glia and microglia were used to examine the influence of NG2 glia on microglial activation. We show that NG2 glia are required for the maintenance of immune homeostasis in the brain via transforming growth factor-β2 (TGF-β2)-TGF-β type II receptor (TGFBR2)-CX3C chemokine receptor 1 (CX3CR1) signaling, which suppresses the activation of microglia. We demonstrate that mice with ablated NG2 glia display a profound downregulation of the expression of microglia-specific signature genes and a marked inflammatory response in the brain following exposure to the endotoxin lipopolysaccharide. Gain- or loss-of-function studies show that NG2 glia-derived TGF-β2 and its receptor TGFBR2 in microglia are key regulators of the CX3CR1-modulated immune response. Furthermore, deficiency of NG2 glia contributes to neuroinflammation and nigral dopaminergic neuron loss in the MPTP-induced mouse PD model. 
These findings suggest that NG2 glia play a critical role in modulation of neuroinflammation and provide a compelling rationale for the development of new therapeutics for neurological disorders.

  • Unravelling the complex nature of resilience factors and their changes between early and later adolescence
    BMC Med. (IF 8.285) Pub Date : 2019-11-14
    J. Fritz; J. Stochl; E. I. Fried; I. M. Goodyer; C. D. van Borkulo; P. O. Wilkinson; A.-L. van Harmelen

    Childhood adversity (CA) is strongly associated with mental health problems. Resilience factors (RFs) reduce mental health problems following CA. Yet, knowledge on the nature of RFs is scarce. Therefore, we examined RF mean levels, RF interrelations, RF-distress pathways, and their changes between early (age 14) and later adolescence (age 17). We studied 10 empirically supported RFs in adolescents with (CA+; n = 631) and without CA (CA−; n = 499), using network psychometrics. All inter-personal RFs (e.g. friendships) showed stable mean levels between age 14 and 17, and three of seven intra-personal RFs (e.g. distress tolerance) changed in a similar manner in the two groups. The CA+ group had lower RFs and higher distress at both ages. Thus, CA does not seem to inhibit RF changes, but to increase the risk of persistently lower RFs. At age 14, but not 17, the RF network of the CA+ group was less positively connected, suggesting that RFs are less likely to enhance each other than in the CA− group. Those findings underpin the notion that CA has a predominantly strong proximal effect. RF-distress pathways did not differ in strength between the CA+ and the CA− group, which suggests that RFs have a similarly protective strength in the two groups. Yet, as RFs are lower and distress is higher, RF-distress pathways may overall be less advantageous in the CA+ group. Most RF interrelations and RF-distress pathways were stable between age 14 and 17, which may help explain why exposure to CA is frequently found to have a lasting impact on mental health. Our findings not only shed light on the nature and changes of RFs between early and later adolescence, but also offer some explanation as to why exposure to CA has stronger proximal effects and is often found to have a lasting impact on mental health.

  • Live birth rates and perinatal outcomes when all embryos are frozen compared with conventional fresh and frozen embryo transfer: a cohort study of 337,148 in vitro fertilisation cycles
    BMC Med. (IF 8.285) Pub Date : 2019-11-13
    Andrew D. A. C. Smith; Kate Tilling; Deborah A. Lawlor; Scott M. Nelson

    It is not known whether segmentation of an in vitro fertilisation (IVF) cycle, with freezing of all embryos prior to transfer, increases the chance of a live birth after all embryos are transferred. In a prospective study of UK Human Fertilisation and Embryology Authority data, we investigated the impact of segmentation, compared with initial fresh embryo followed by frozen embryo transfers, on live birth rate and perinatal outcomes. We used generalised linear models to assess the effect of segmentation in the whole cohort, with additional analyses within women who had experienced both segmentation and non-segmentation. We compared rates of live birth, low birthweight (LBW < 2.5 kg), preterm birth (< 37 weeks), macrosomia (> 4 kg), small for gestational age (SGA < 10th centile), and large for gestational age (LGA > 90th centile) for a given ovarian stimulation cycle accounting for all embryo transfers. We assessed 202,968 women undergoing 337,148 ovarian stimulation cycles and 399,896 embryo transfer procedures. Live birth rates were similar in unadjusted analyses for segmented and non-segmented cycles (rate ratio 1.05, 95% CI 1.02–1.08) but lower in segmented cycles when adjusted for age, cycle number, cause of infertility, and ovarian response (rate ratio 0.80, 95% CI 0.78–0.83). Segmented cycles were associated with increased risk of macrosomia (adjusted risk ratio 1.72, 95% CI 1.55–1.92) and LGA (1.51, 1.38–1.66) but lower risk of LBW (0.71, 0.65–0.78) and SGA (0.64, 0.56–0.72). With adjustment for blastocyst/cleavage-stage embryo transfer in those with data on this (329,621 cycles), results were not notably changed. Similar results were observed comparing segmented to non-segmented within 3261 women who had both and when analyses were repeated excluding multiple embryo cycles and multiple pregnancies. 
When analyses were restricted to women with a single embryo transfer, the transfer of a frozen-thawed embryo in a segmented cycle was no longer associated with a lower risk of LBW (0.97, 0.71–1.33) or SGA (0.84, 0.61–1.15), but the risk of macrosomia (1.74, 1.39–2.20) and LGA (1.49, 1.20–1.86) persisted. When the analyses for perinatal outcomes were further restricted to solely frozen embryo transfers, there was no strong statistical evidence for associations. Widespread application of segmentation and freezing of all embryos to unselected patient populations may be associated with lower cumulative live birth rates and should be restricted to those with a clinical indication.

  • Synthetic high-density lipoprotein nanoparticles for the treatment of Niemann–Pick diseases
    BMC Med. (IF 8.285) Pub Date : 2019-11-11
    Mark L. Schultz; Maria V. Fawaz; Ruth D. Azaria; Todd C. Hollon; Elaine A. Liu; Thaddeus J. Kunkel; Troy A. Halseth; Kelsey L. Krus; Ran Ming; Emily E. Morin; Hayley S. McLoughlin; David D. Bushart; Henry L. Paulson; Vikram G. Shakkottai; Daniel A. Orringer; Anna S. Schwendeman; Andrew P. Lieberman

    Niemann–Pick disease type C is a fatal and progressive neurodegenerative disorder characterized by the accumulation of unesterified cholesterol in late endosomes and lysosomes. We sought to develop new therapeutics for this disorder by harnessing the body’s endogenous cholesterol scavenging particle, high-density lipoprotein (HDL). Here we design, optimize, and define the mechanism of action of synthetic HDL (sHDL) nanoparticles. We demonstrate a dose-dependent rescue of cholesterol storage that is sensitive to sHDL lipid and peptide composition, enabling the identification of compounds with a range of therapeutic potency. Peripheral administration of sHDL to Npc1 I1061T homozygous mice mobilizes cholesterol, reduces serum bilirubin, reduces liver macrophage size, and corrects body weight deficits. Additionally, a single intraventricular injection into adult Npc1 I1061T brains significantly reduces cholesterol storage in Purkinje neurons. Since endogenous HDL is also a carrier of sphingomyelin, we tested the same sHDL formulation in the sphingomyelin storage disease Niemann–Pick type A. Utilizing stimulated Raman scattering microscopy to detect endogenous unlabeled lipids, we show significant rescue of Niemann–Pick type A lipid storage. Together, our data establish that sHDL nanoparticles are a potential new therapeutic avenue for Niemann–Pick diseases.

  • Representation of people with comorbidity and multimorbidity in clinical trials of novel drug therapies: an individual-level participant data analysis
    BMC Med. (IF 8.285) Pub Date : 2019-11-12
    Peter Hanlon; Laurie Hannigan; Jesus Rodriguez-Perez; Colin Fischbacher; Nicky J. Welton; Sofia Dias; Frances S. Mair; Bruce Guthrie; Sarah Wild; David A. McAllister

    Clinicians are less likely to prescribe guideline-recommended treatments to people with multimorbidity than to people with a single condition. Doubts as to the applicability of clinical trials of drug treatments (the gold standard for evidence-based medicine) when people have co-existing diseases (comorbidity) may underlie this apparent reluctance. Therefore, for a range of index conditions, we measured the comorbidity among participants in clinical trials of novel drug therapies and compared this to the comorbidity among patients in the community. Data from industry-sponsored phase 3/4 multicentre trials of novel drug therapies for chronic medical conditions were identified from two repositories: Clinical Study Data Request and the Yale University Open Data Access project. We identified 116 trials (n = 122,969 participants) for 22 index conditions. Community patients were identified from a nationally representative sample of 2.3 million patients in Wales, UK. Twenty-one comorbidities were identified from medication use based on pre-specified definitions. We assessed the prevalence of each comorbidity and the total number of comorbidities (level of multimorbidity), for each trial and in community patients. In the trials, the commonest comorbidities in order of declining prevalence were chronic pain, cardiovascular disease, arthritis, affective disorders, acid-related disorders, asthma/COPD and diabetes. These conditions were also common in community-based patients. Mean comorbidity count for trial participants was approximately half that seen in community-based patients. Nonetheless, a substantial proportion of trial participants had a high degree of multimorbidity. For example, in asthma and psoriasis trials, 10–15% of participants had ≥ 3 conditions overall, while in osteoporosis and chronic obstructive pulmonary disease trials 40–60% of participants had ≥ 3 conditions overall. 
Comorbidity and multimorbidity are less common in trials than in community populations with the same index condition. Comorbidity and multimorbidity are, nevertheless, common in trials. This suggests that standard, industry-funded clinical trials are an underused resource for investigating treatment effects in people with comorbidity and multimorbidity.

  • Aspirin for primary prevention of cardiovascular disease: a meta-analysis with a particular focus on subgroups
    BMC Med. (IF 8.285) Pub Date : 2019-11-04
    Georg Gelbenegger; Marek Postula; Ladislav Pecen; Sigrun Halvorsen; Maciej Lesiak; Christian Schoergenhofer; Bernd Jilma; Christian Hengstenberg; Jolanta M. Siller-Matula

The role of aspirin in primary prevention of cardiovascular disease (CVD) remains unclear. We aimed to investigate the benefit-risk ratio of aspirin for primary prevention of CVD with a particular focus on subgroups. Randomized controlled trials comparing the effects of aspirin for primary prevention of CVD versus control and including at least 1000 patients were eligible for this meta-analysis. The primary efficacy outcome was all-cause mortality. Secondary outcomes included cardiovascular mortality, major adverse cardiovascular events (MACE), myocardial infarction, ischemic stroke, and net clinical benefit. The primary safety outcome was major bleeding. Subgroup analyses involving sex, concomitant statin treatment, diabetes, and smoking were performed. Thirteen randomized controlled trials comprising 164,225 patients were included. The risk of all-cause and cardiovascular mortality was similar for aspirin and control groups (RR 0.98, 95% CI 0.93–1.02 and RR 0.99, 95% CI 0.90–1.08, respectively). Aspirin yielded a relative risk reduction (RRR) of 9% for MACE (RR 0.91; 95% CI, 0.86–0.95), 14% for myocardial infarction (RR 0.86; 95% CI, 0.77–0.95), and 10% for ischemic stroke (RR 0.90; 95% CI, 0.82–0.99), but was associated with a 46% relative risk increase of major bleeding events (RR 1.46; 95% CI, 1.30–1.64) compared with controls. Aspirin use did not translate into a net clinical benefit adjusted for event-associated mortality risk (mean 0.034%; 95% CI, − 0.18 to 0.25%).
The effect of aspirin showed an interaction in three patient subgroups: (i) in patients under statin treatment, aspirin was associated with a 12% RRR of MACE (RR 0.88; 95% CI, 0.80–0.96), an effect lacking in the no-statin group; (ii) in non-smokers, aspirin was associated with a 10% RRR of MACE (RR 0.90; 95% CI, 0.82–0.99), an effect not present in smokers; and (iii) in males, aspirin use resulted in an 11% RRR of MACE (RR 0.89; 95% CI, 0.83–0.95), with a non-significant effect in females. Aspirin use does not reduce all-cause or cardiovascular mortality and results in an insufficient benefit-risk ratio for CVD primary prevention. Non-smokers, patients treated with statins, and males had the greatest risk reduction of MACE across subgroups. PROSPERO CRD42019118474.
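As a numerical aside (the event counts below are hypothetical, not taken from the meta-analysis), the relationship between a pooled relative risk, its confidence interval, and the relative risk reduction quoted above can be sketched as:

```python
import math

def relative_risk(events_tx, n_tx, events_ctl, n_ctl):
    """Relative risk of an event in the treatment arm versus control,
    with a 95% CI computed on the log scale (large-sample Wald interval)."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    # Standard error of log(RR) from the usual large-sample formula
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical counts, chosen only to illustrate the arithmetic:
rr, ci = relative_risk(events_tx=273, n_tx=10_000, events_ctl=300, n_ctl=10_000)
rrr = 1 - rr  # relative risk reduction, e.g. RR 0.91 -> RRR 9%
```

This is only the single-study arithmetic; a pooled estimate such as the RR 0.91 above additionally weights each trial (e.g. by inverse variance) before combining.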

  • Does low-density lipoprotein cholesterol induce inflammation? If so, does it matter? Current insights and future perspectives for novel therapies
    BMC Med. (IF 8.285) Pub Date : 2019-11-01
    Ruurt A. Jukema; Tarek A. N. Ahmed; Jean-Claude Tardif

Dyslipidemia and inflammation are closely interrelated contributors in the pathogenesis of atherosclerosis. Disorders of lipid metabolism initiate an inflammatory and immune-mediated response in atherosclerosis, while low-density lipoprotein cholesterol (LDL-C) lowering has possible pleiotropic anti-inflammatory effects that extend beyond lipid lowering. Activation of the immune system/inflammasome destabilizes the plaque, which makes it vulnerable to rupture, resulting in major adverse cardiac events (MACE). The activated immune system potentially accelerates atherosclerosis, and atherosclerosis activates the immune system, creating a vicious circle. LDL-C enhances inflammation, which can be measured through multiple parameters like high-sensitivity C-reactive protein (hsCRP). However, multiple studies have shown that CRP is a marker of residual risk and not, itself, a causal factor. Recently, anti-inflammatory therapy has been shown to decelerate atherosclerosis, resulting in fewer MACE. Nevertheless, an important side effect of anti-inflammatory therapy is the potential for increased infection risk, stressing the importance of only targeting patients with high residual inflammatory risk. Multiple (auto-)inflammatory diseases are potentially related to/influenced by LDL-C through inflammasome activation. Research suggests that LDL-C induces inflammation; inflammation is of proven importance in atherosclerotic disease progression; anti-inflammatory therapies yield promise in lowering (cardiovascular) disease risk, especially in selected patients with high (remaining) inflammatory risk; and intriguing new anti-inflammatory developments, for example, in nucleotide-binding leucine-rich repeat-containing pyrin receptor inflammasome targeting, are currently underway, including novel pathway interventions such as immune cell targeting and epigenetic interference.
Long-term safety should be carefully monitored for these new strategies and cost-effectiveness carefully evaluated.

  • Effect of continued folic acid supplementation beyond the first trimester of pregnancy on cognitive performance in the child: a follow-up study from a randomized controlled trial (FASSTT Offspring Trial)
    BMC Med. (IF 8.285) Pub Date : 2019-10-31
    Helene McNulty; Mark Rollins; Tony Cassidy; Aoife Caffrey; Barry Marshall; James Dornan; Marian McLaughlin; Breige A. McNulty; Mary Ward; J. J. Strain; Anne M. Molloy; Diane J. Lees-Murdock; Colum P. Walsh; Kristina Pentieva

Periconceptional folic acid prevents neural tube defects (NTDs), but it is uncertain whether there are benefits for offspring neurodevelopment arising from continued maternal folic acid supplementation beyond the first trimester. We investigated the effect of folic acid supplementation during trimesters 2 and 3 of pregnancy on cognitive performance in the child. We followed up the children of mothers who had participated in a randomized controlled trial in 2006/2007 of Folic Acid Supplementation during the Second and Third Trimesters (FASSTT) and received 400 μg/d folic acid or placebo from the 14th gestational week until the end of pregnancy. Cognitive performance of children at 7 years was evaluated using the Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) and at 3 years using the Bayley Scales of Infant and Toddler Development (BSITD-III). From a total of 119 potential mother-child pairs, 70 children completed the assessment at age 7 years, and 39 at age 3 years. At 7 years, the children of folic acid-treated mothers scored significantly higher than the placebo group in word reasoning: mean 13.3 (95% CI 12.4–14.2) versus 11.9 (95% CI 11.0–12.8); p = 0.027; at 3 years, they scored significantly higher in cognition: 10.3 (95% CI 9.3–11.3) versus 9.5 (95% CI 8.8–10.2); p = 0.040. At both time points, greater proportions of children from folic acid-treated mothers compared with placebo had cognitive scores above the median values of 10 (girls and boys) for the BSITD-III, and 24.5 (girls) and 21.5 (boys) for the WPPSI-III tests.
When compared with a nationally representative sample of British children at 7 years, WPPSI-III test scores were higher in children from folic acid-treated mothers for verbal IQ (p < 0.001), performance IQ (p = 0.035), general language (p = 0.002), and full scale IQ (p = 0.001), whereas comparison of the placebo group with British children showed smaller differences in scores for verbal IQ (p = 0.034) and full scale IQ (p = 0.017) and no differences for performance IQ or general language. Continued folic acid supplementation in pregnancy beyond the early period recommended to prevent NTDs may have beneficial effects on child cognitive development. Further randomized trials in pregnancy with follow-up in childhood are warranted. ISRCTN19917787. Registered 15 May 2013.

  • Simulations for designing and interpreting intervention trials in infectious diseases.
    BMC Med. (IF 8.285) Pub Date : 2017-12-31
    M Elizabeth Halloran,Kari Auranen,Sarah Baird,Nicole E Basta,Steven E Bellan,Ron Brookmeyer,Ben S Cooper,Victor DeGruttola,James P Hughes,Justin Lessler,Eric T Lofgren,Ira M Longini,Jukka-Pekka Onnela,Berk Özler,George R Seage,Thomas A Smith,Alessandro Vespignani,Emilia Vynnycky,Marc Lipsitch

BACKGROUND Interventions in infectious diseases can have both direct effects on individuals who receive the intervention and indirect effects in the population. In addition, intervention combinations can have complex interactions at the population level, which are often difficult to adequately assess with standard study designs and analytical methods. DISCUSSION Herein, we urge the adoption of a new paradigm for the design and interpretation of intervention trials in infectious diseases, particularly with regard to emerging infectious diseases, one that more accurately reflects the dynamics of the transmission process. In an increasingly complex world, simulations can explicitly represent transmission dynamics, which are critical for proper trial design and interpretation. Certain ethical aspects of a trial can also be quantified using simulations. Further, after a trial has been conducted, simulations can be used to explore the possible explanations for the observed effects. CONCLUSION Much is to be gained through a multidisciplinary approach that builds collaborations among experts in infectious disease dynamics, epidemiology, statistical science, economics, simulation methods, and the conduct of clinical trials.
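The indirect, population-level effects the authors describe can be made concrete with a toy transmission simulation. This is only an illustrative sketch, not any of the simulation frameworks the paper discusses; the Reed-Frost chain-binomial model and all parameter values here are hypothetical choices:

```python
import random

def reed_frost_final_size(n, i0, p, coverage, efficacy, rng):
    """Final size of a chain-binomial (Reed-Frost) epidemic.

    Each susceptible is infected in a generation with probability
    1 - (1 - p)**I, where I is the current number of infectives.
    Vaccination is modelled as all-or-nothing: a covered individual
    is fully protected with probability `efficacy` (an assumption).
    """
    susceptible = 0
    for _ in range(n - i0):
        vaccinated = rng.random() < coverage
        if not (vaccinated and rng.random() < efficacy):
            susceptible += 1
    infectives, total = i0, i0
    while infectives and susceptible:
        risk = 1 - (1 - p) ** infectives
        new = sum(rng.random() < risk for _ in range(susceptible))
        susceptible -= new
        total += new
        infectives = new
    return total

def mean_final_size(runs, **kw):
    rng = random.Random(0)  # fixed seed so the simulation is reproducible
    return sum(reed_frost_final_size(rng=rng, **kw) for _ in range(runs)) / runs

# Hypothetical parameters: 200 people, 2 initial cases, per-contact
# transmission probability 0.01 (so R0 is roughly 2), vaccine efficacy 90%.
no_vax = mean_final_size(200, n=200, i0=2, p=0.01, coverage=0.0, efficacy=0.9)
half_vax = mean_final_size(200, n=200, i0=2, p=0.01, coverage=0.5, efficacy=0.9)
```

Covering half the population shrinks the epidemic for unvaccinated people too; an individually randomised analysis that ignores transmission dynamics would miss or misattribute exactly this indirect protection.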

  • A systematic review of multi-level stigma interventions: state of the science and future directions.
    BMC Med. (IF 8.285) Pub Date : 2019-02-17
    Deepa Rao,Ahmed Elshafei,Minh Nguyen,Mark L Hatzenbuehler,Sarah Frey,Vivian F Go

    BACKGROUND Researchers have long recognized that stigma is a global, multi-level phenomenon requiring intervention approaches that target multiple levels including individual, interpersonal, community, and structural levels. While existing interventions have produced modest reductions in stigma, their full reach and impact remain limited by a nearly exclusive focus targeting only one level of analysis. METHODS We conducted the first systematic review of original research on multi-level stigma-reduction interventions. We used the following eligibility criteria for inclusion: (1) peer-reviewed, (2) contained original research, (3) published prior to initiation of search on November 30, 2017, (4) evaluated interventions that operated on more than one level, and (5) examined stigma as an outcome. We stratified and analyzed articles by several domains, including whether the research was conducted in a low-, middle-, or high-income country. RESULTS Twenty-four articles met the inclusion criteria. The articles included a range of countries (low, middle, and high income), stigmatized conditions/populations (e.g., HIV, mental health, leprosy), intervention targets (e.g., people living with a stigmatized condition, health care workers, family, and community members), and stigma reduction strategies (e.g., contact, social marketing, counseling, faith, problem solving), with most using education-based approaches. A total of 12 (50%) articles examined community-level interventions alongside interpersonal and/or intrapersonal levels, but only 1 (4%) combined a structural-level intervention with another level. Of the 24 studies, only 6 (25%) were randomized controlled trials. While most studies (17 of 24) reported statistically significant declines in at least one measure of stigma, fewer than half reported measures of practical significance (i.e., effect size); those that were reported varied widely in magnitude and were typically in the small-to-moderate range. 
CONCLUSIONS While there has been progress over the past decade in the development and evaluation of multi-level stigma interventions, much work remains to strengthen and expand this approach. We highlight several opportunities for new research and program development.

  • Participatory praxis as an imperative for health-related stigma research.
    BMC Med. (IF 8.285) Pub Date : 2019-02-16
    Laurel Sprague,Rima Afifi,George Ayala,Musah Lumumba El-Nasoor

BACKGROUND Participatory praxis is increasingly valued for the reliability, validity, and relevance of research results that it fosters. Participatory methods become an imperative in health-related stigma research, where the constitutive elements of stigma, healthcare settings, and research each operate on hierarchies that push those with less social power to the margins. DISCUSSION Particularly for people who are stigmatized, participatory methods balance the scales of equity by restructuring power relationships. As such, participatory praxis facilitates a research process that is responsive to community-identified priorities, creates community ownership of the research, catalyzes policy change at multiple levels, and foregrounds and addresses risks to communities from participating in research. Additionally, through upholding the agency and leadership of communities facing stigma, it can help to mitigate stigma's harmful effects. Health-related stigma research can reduce the health inequities faced by stigmatized groups if funders and institutions require and reward community participation and if researchers commit to reflexive, participatory practices. A research agenda focused on participatory praxis in health-related stigma research could stimulate increased use of such methods. CONCLUSION For community-engaged practice to become more than an ethical aspiration, structural changes in the funding, training, publishing, and tenure processes will be necessary.

  • Universal health coverage for refugees and migrants in the twenty-first century.
    BMC Med. (IF 8.285) Pub Date : 2018-11-27
    Ibrahim Abubakar,Alimuddin Zumla

Migration is a determinant of health. Tackling the health needs of migrants and refugees will require action at the local, national, and global levels. Over the past 12 months, BMC Medicine has published a collection of articles under the title Migrant and Refugee Health (https://www.biomedcentral.com/collections/migrant-and-refugee-health) addressing a range of health issues affecting refugees and migrants in their countries of origin, in transit, and in their destination countries. In light of these articles, we herein discuss the complex and wide-ranging healthcare needs of different refugee groups in their destination countries as well as the need for accessible and culturally appropriate health services.

  • Cannabis use in first episode psychosis: what we have tried and why it hasn't worked.
    BMC Med. (IF 8.285) Pub Date : 2019-10-30
    Michael G McDonell,Oladunni Oluwoye

  • Adaptive trials, efficiency, and ethics.
    BMC Med. (IF 8.285) Pub Date : 2019-10-23
    Spencer Phillips Hey

  • C-peptide persistence in type 1 diabetes: 'not drowning, but waving'?
    BMC Med. (IF 8.285) Pub Date : 2019-09-29
    R David Leslie,Tanwi Vartak

  • Gene-environment interactions and vitamin D effects on cardiovascular risk.
    BMC Med. (IF 8.285) Pub Date : 2019-08-31
    Guido Iaccarino,Bruno Trimarco

  • New approaches for detecting cancer with circulating cell-free DNA.
    BMC Med. (IF 8.285) Pub Date : 2019-08-17
    Clare Fiala,Eleftherios P Diamandis

  • Surveillance and monitoring of antimicrobial resistance: limitations and lessons from the GRAM project.
    BMC Med. (IF 8.285) Pub Date : 2019-09-21
    Jesse Schnall,Arjun Rajkhowa,Kevin Ikuta,Puja Rao,Catrin E Moore

  • Iron deficiency during pregnancy is associated with a reduced risk of adverse birth outcomes in a malaria-endemic area in a longitudinal cohort study.
    BMC Med. (IF 8.285) Pub Date : 2018-09-21
    Freya J I Fowkes,Kerryn A Moore,D Herbert Opi,Julie A Simpson,Freya Langham,Danielle I Stanisic,Alice Ura,Christopher L King,Peter M Siba,Ivo Mueller,Stephen J Rogerson,James G Beeson

    BACKGROUND Low birth weight (LBW) and preterm birth (PTB) are major contributors to infant mortality and chronic childhood morbidity. Understanding factors that contribute to or protect against these adverse birth outcomes is an important global health priority. Anaemia and iron deficiency are common in malaria-endemic regions, but there are concerns regarding the value of iron supplementation among pregnant women in malaria-endemic areas due to reports that iron supplementation may increase the risk of malaria. There is a lack of evidence on the impact of iron deficiency on pregnancy outcomes in malaria-endemic regions. METHODS We determined iron deficiency in a cohort of 279 pregnant women in a malaria-endemic area of Papua New Guinea. Associations with birth weight, LBW and PTB were estimated using linear and logistic regression. A causal model using sequential mediation analyses was constructed to assess the association between iron deficiency and LBW, either independently or mediated through malaria and/or anaemia. RESULTS Iron deficiency in pregnant women was common (71% at enrolment) and associated with higher mean birth weights (230 g; 95% confidence interval, CI 118, 514; p < 0.001), and reduced odds of LBW (adjusted odds ratio, aOR = 0.32; 95% CI 0.16, 0.64; p = 0.001) and PTB (aOR = 0.57; 95% CI 0.30, 1.09; p = 0.089). Magnitudes of effect were greatest in primigravidae (birth weight 351 g; 95% CI 188, 514; p < 0.001; LBW aOR 0.26; 95% CI 0.10, 0.66; p = 0.005; PTB aOR = 0.39, 95% CI 0.16, 0.97; p = 0.042). Sequential mediation analyses indicated that the protective association of iron deficiency on LBW was mainly mediated through mechanisms independent of malaria or anaemia. 
CONCLUSIONS Iron deficiency was associated with substantially reduced odds of LBW predominantly through malaria-independent protective mechanisms, which has substantial implications for understanding risks for poor pregnancy outcomes and evaluating the benefit of iron supplementation in pregnancy. This study is the first longitudinal study to demonstrate a temporal relationship between antenatal iron deficiency and improved birth outcomes. These findings suggest that iron supplementation needs to be integrated with other strategies to prevent or treat infections and undernutrition in pregnancy to achieve substantial improvements in birth outcomes.

Contents have been reproduced by permission of the publishers.