Checklists for Authors Improve the Reporting of Basic Science Research.
Stroke (IF 8.3). Pub Date: 2019-11-13. DOI: 10.1161/strokeaha.119.027626
Jens Minnerup, Ulrich Dirnagl, Wolf-Rüdiger Schäbitz

See related article, p 291


New treatments are usually tested in animal studies to inform clinical trials. However, the value of animal experiments in predicting the effectiveness of a drug in patients has remained controversial.1 The disparity between results of experimental and clinical studies affects many research areas, for example, stroke, neurodegenerative disorders, and sepsis. Shortcomings in the design, conduct, analysis, and reporting of animal experiments contribute to translational failure.2 For example, omitting blinding and randomization in animal studies demonstrably leads to false positives and major overstatement of efficacy. Following the lead of Stroke, some journals and publishers introduced checklists for submitted manuscripts on experimental studies to prompt authors to disclose information about study design elements. In a recent article, Ramirez et al3 systematically reviewed 3 journals with such a checklist (Nature Medicine, Science Translational Medicine, Stroke) and 2 control journals without one to evaluate its effect on the quality of published experimental studies. Overall, >4000 articles published over periods of 9 to 18 years were included in the analysis. In summary, marked increases in the reporting of randomization, blinding, and sample size estimation were observed after implementation of checklists. Reassuringly, articles published in journals using checklists achieved relatively high reporting levels. Surprisingly, and as yet unexplained, however, quite a few studies in Nature Medicine and Science Translational Medicine that reported on randomization, blinding, or sample size calculations apparently did not apply them.


Studies published in Stroke showed no such discrepancy, suggesting greater robustness and methodological rigor of the basic science articles published in this journal. Conversely, reporting of study design elements in articles published in control journals without checklists did not change over time: randomization and blinding procedures were reported in only approximately one-third of the articles, whereas sample size calculations were reported in <10% of the studies. A positive effect of guidelines or checklists on reporting practice has also recently been demonstrated by others (The NPQIP Collaborative Group 2019),4 in addition to a general improvement in the reporting of study design elements in the field of focal cerebral ischemia research.5 By contrast, a randomized controlled trial found no effect of implementing a checklist on compliance with the ARRIVE (Animal Research: Reporting of In Vivo Experiments) reporting guidelines at the multidisciplinary journal PLOS ONE.6 Together with the findings of the Ramirez study, the available literature points to field-specific effects and, more specifically, indicates that preclinical cerebrovascular research may be a leader in the quest to improve the quality of published studies.


Although there is clearly still room for improvement (eg, a higher prevalence of important measures to prevent bias, such as randomization or blinding, as well as sample size calculations; inclusion of both sexes in studies), these data demonstrate that journals, editorial teams, and reviewers have an important and, we think, ethical mandate in scientific publishing and evidence generation.7 We posit that every journal publishing basic science articles should assess study quality. This, however, clearly increases the workload not only of investigators and authors but also of editors and reviewers, and it may be more difficult to implement in small journals and in the lower tiers of journal rankings.


The findings of Ramirez and colleagues are reassuring from the perspective of the journal Stroke. Basic science articles published in Stroke are selected not only for innovation and translational importance but also for methodological rigor. After now almost 50 years of publishing, the Stroke journal and its editorial team should continue their policy of publishing innovative and translationally important experimental studies of high methodological quality that contribute to better diagnosis and treatment of our patients.


Dr Schäbitz is associate editor for the journal Stroke. Dr Dirnagl received funding from the Berlin Institute of Health. The other author reports no conflicts.


The opinions expressed in this article are not necessarily those of the editors or of the American Heart Association.



