A framework for assessing the peer review duration of journals: case study in computer science
Scientometrics (IF 3.9) Pub Date: 2020-11-05, DOI: 10.1007/s11192-020-03742-9
Besim Bilalli , Rana Faisal Munir , Alberto Abelló

In various fields, scientific article publication is a measure of productivity, and on many occasions it is used as a critical factor for evaluating researchers. Therefore, a lot of time is dedicated to writing articles that are then submitted for publication in journals. Nevertheless, the publication process in general, and the review process in particular, tends to be rather slow. This is the case, for instance, for computer science (CS) journals. Moreover, the process typically lacks transparency: information about the duration of the review process is at best provided in an aggregated manner, if it is made available at all. In this paper, we develop a framework as a step towards providing more reliable data on review duration. Based on this framework, we implement a tool, journal response time (JRT), that automatically extracts review process data and helps researchers find the average response times of journals, which can be used to study the duration of CS journals' peer review process. The information is extracted from the metadata of the published articles, when available. This study reveals that the response times publicly provided by publishers differ from the actual values obtained by JRT (e.g., for ten selected journals, the average duration reported by publishers deviates by more than 500% from the actual average computed from the data inside the articles). We suspect this stems from the fact that, when calculating the aggregated values, publishers also include the review time of rejected articles (including quick desk rejections that do not require reviewers).
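The abstract does not spell out how JRT extracts these dates, but the core computation (reading the received and accepted dates that many publishers embed in each article's record and averaging the differences) is easy to illustrate. Below is a minimal sketch, not the authors' implementation, that pulls this metadata from the public Crossref REST API; it assumes the publisher deposits the article history as "received"/"accepted" assertions with dates like "4 June 2020" (as Springer, the publisher of Scientometrics, commonly does), and the ISSN used here is an illustrative choice.

```python
import statistics
from datetime import datetime

import requests

CROSSREF_WORKS = "https://api.crossref.org/journals/{issn}/works"

def review_days(item):
    """Days from 'received' to 'accepted', read from the article's
    Crossref assertions; None if the dates are missing or unparsable."""
    dates = {}
    for assertion in item.get("assertion", []):
        name = assertion.get("name")
        if name in ("received", "accepted"):
            try:
                # Assumed date format, e.g. "4 June 2020" (varies by publisher).
                dates[name] = datetime.strptime(assertion["value"], "%d %B %Y")
            except ValueError:
                return None
    if "received" in dates and "accepted" in dates:
        return (dates["accepted"] - dates["received"]).days
    return None

def average_response_time(issn, sample_size=100):
    """Mean received-to-accepted duration (in days) over a sample of
    the journal's most recently published articles."""
    response = requests.get(
        CROSSREF_WORKS.format(issn=issn),
        params={"rows": sample_size, "sort": "published", "order": "desc"},
        timeout=30,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    durations = [d for d in (review_days(i) for i in items) if d is not None]
    return statistics.mean(durations) if durations else None

if __name__ == "__main__":
    # 0138-9130 is the ISSN of Scientometrics itself (illustrative).
    print(average_response_time("0138-9130"))
```

Note that rejected and desk-rejected submissions never appear in this per-article data, which is exactly why averages computed this way can differ so sharply from the aggregated figures publishers report.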
