Editorial.
Ecological Applications (IF 4.3), Pub Date: 2020-05-20, DOI: 10.1002/eap.2136
David S. Schimel, Anne Marie Whelan

Scientists have many cultural tropes about how peer review goes awry, some of which are standing jokes. In my [Schimel’s] research group, we’ll often laugh over beers about a terrible internet video with the line “It’s reviewer three, it’s always reviewer three,” which plays on the idea that the third reviewer, called in when the first two don’t agree, is always the one who kills a paper!

There are serious concerns about bias in peer review, and for years, I, the editorial board, and the ESA journals staff have scrutinized our process and outcomes for evidence of systematic bias in how papers are handled, and for bias due to topic, author’s identity, national origin, or career stage. We know many of these types of bias have been incontrovertibly identified in other journals and areas of science, but in general, we haven’t found statistical evidence of systematic bias. That doesn’t preclude unfair outcomes in individual cases, but it suggests that, on balance, our editorial board and its members as individuals are doing a good job of focusing on scientific merit and the quality of the presentation in our papers. As Editor‐in‐Chief, I’d like to reflect on my experience in managing peer review, without violating its absolute confidentiality. Each of the points I’ll make is supported in my notes by specific cases, but those cases can’t be openly shared.

One of the oldest tropes in science, often used in writing and the cinema, is the idea that scientists, and especially “the establishment” of senior researchers and lab directors, resist new ideas. Challenging ideas may be perceived as threatening to established programs, or they may suggest that work by eminent researchers is wrong or at least misleading in some way. Our ecological science journals seem remarkably free of many of the biases authors worry about (https://doi.org/10.1002/bes2.1567), but in adjudicating many author appeals every year over two decades, I have found that reviewers do seem to hold novel concepts to a higher standard.

Reviewers’ concerns about papers with novel ideas manifest in a number of ways, some not unreasonable. Frequently, reviewers will look for a high level of certainty in analyses supporting a new idea. While this might seem reasonable – extraordinary claims require extraordinary evidence – the opposite is true as well: innovative and extraordinary ideas supported by reasonable evidence may well be worth a closer look by the community. Very few ideas in ecology are accepted on the basis of a single study; rather, those studies trigger a community response. How much evidence is needed for researchers to return to a topic and examine it in a new light? How much evidence should a paper suggesting a fresh look require?

I’ve reviewed in depth a number of seminal papers published in Ecological Applications, papers that have generated not only citations but also follow‐on tests and/or applications. Many of these highly influential papers came to our journal after a round of rejections at high‐profile journals such as Science, Nature, and PNAS, or they were rejected by our own ESA journals and were appealed, with additional review requested. Often these papers ended up being some of our highest‐impact papers: high impact in that they have many citations or, more importantly, that their citations are substantive, with the citing authors stating that the paper was an inspiration, proposed a hypothesis, or made some other substantive contribution, rather than being just one more paper in a list of prior work.

Many of these influential papers were challenged by reviewers, either on methodology or because their conclusions conflicted with other work the reviewer considered better substantiated, leading the reviewer to judge that the paper in hand needed stronger evidence. In other cases, reviewers have challenged technical details, admitting that the paper was interesting overall but might have methodological flaws. Once a reviewer identifies a potential methodological flaw, they only sometimes go on to ask, “Does this flaw increase uncertainty, or does it undermine the author’s case?” For instance, a flaw in sample size or replication might weaken a case without altering the author’s conclusions. Often, especially for novel papers, reviewers will raise the bar and ask for a perfect study.

In the applications of ecology, perfect studies are rare. Management units may simply be too large to replicate experimentally; processes may play out slowly, so that evidence must come from reconstructions, indirect inference, and relevant but not ideal observations or methods; and some management challenges cannot be replicated at all, for example the impacts of the 1988 fires in Yellowstone on its entire ecosystem, or the consequences of high‐flow experimental releases from Glen Canyon Dam.

In short, if you as an author feel your most exciting ideas receive rough treatment, you may well be correct. Don’t despair – we have a robust appeal process and always treat a courteous and collegial request for reconsideration with respect. We don’t accept all appeals but we do read them with care, aware that we may be considering our next seminal work!

Day in and day out, our editorial board exercises its judgment about how much evidence is enough for a new idea to be made available to our readers! We depend on our peer reviewers for technical evaluation, but we also carefully evaluate their comments. We are aware of the human impulse to react to a challenging idea, and we remind ourselves that some of our best papers received at least one round of very negative reviews! These reviews often led to an improved result or presentation, but taken at face value they would have supported rejection.

Ecological Society editors do not merely count votes from peer review, but read the reviews with care, to understand the critique and to sense the reviewer’s mindset in evaluating the work. We have certainly accepted flawed papers, as can be seen from vigorous exchanges in Letters, but the alternative seems to be, too often, rejecting important work. We encourage authors and our reviewers to remember that the most challenging papers may trigger a reaction, but to use that reaction to take a deeper look. Those papers often turn out to be game changers!

Ecological Applications continues to publish a variety of excellent papers spanning the discipline of ecology and its applications. This past year we received 837 submissions, up 8% from the previous 10‐year average. About 50% of submissions go out for peer review. The average time to reach a decision on peer‐reviewed manuscripts has been about 80 days in the past few years. It has become more difficult over the years to find two reviewers for a manuscript: on average we ask about five people in order to secure two reviewers, and for 9% of manuscripts sent out for review, we have had to ask 10 or more people to get two. For the most part, reviewers are very responsible, taking on average 22 days to review a manuscript. We cannot overstate how grateful we are for the service of our reviewers, including the ones who take more time to craft a thoughtful review. Despite increasing pressure to get manuscripts to publication ever faster, we continue to value a careful review process, which sometimes takes longer than authors might like but in the end results in a stronger paper.

Accepted manuscripts comprised 24% of final decisions in the past year, and 166 manuscripts were published in Ecological Applications in 2019. Time from acceptance to publication averaged a little over two months in the past year, less than half the time it took before we moved to publishing virtually all articles in Early View a couple of years ago.

Thanks to a system that was already operating remotely, with editors, reviewers, and authors from all over the world, our submission and review processes have continued uninterrupted in the age of COVID‐19. Most of the ESA journal staff have been working remotely from upstate New York since 2016, when we moved to a publishing partnership with John Wiley & Sons. Wiley is working diligently to minimize impacts; however, a primary vendor operates with staff in India, and delays may occur in production. We appreciate your understanding and patience during these trying times, as the impacts are also starting to show on our volunteer editors, reviewers, and authors, who are taking on the additional challenges of transitioning to online courses, adjusting research methods, providing child care, and more.



