In my main specialty, I am a gerontologist [1–3]. However, being the Associate Editor-in-Chief of the scientific journal of the School of Biology of Moscow State University, Vestnik Moskovskogo Universiteta, Seriya 16: Biologiya (Moscow University Biological Sciences Bulletin) [4–6], I pay much attention to scientometric studies of the current situation with scientific publications both in Russia and abroad. In 2014, our journal won the contest for state support of programs for the development and promotion of Russian scientific journals into the international scientific information space, organized by the Ministry of Education and Science of the Russian Federation and the National Electronic Information Consortium (NEICON). Having prevailed over more than 500 competing journals, Moscow University Biological Sciences Bulletin became one of the two winners in the “Biology” section and received financial support for 3 years. Under the terms of the grant, we had to carry out a number of reforms in our journal to meet the requirements of the global citation indexing systems (Web of Science, Scopus, etc.), which we did in the subsequent years, achieving some success [6]. In connection with this victory, we were invited to give a report [7] at the International Conference World-Class Scientific Publication—2015: Current World Trends and Practice in Editing, Publishing and Assessment of Scientific Publications, at which specialists (including representatives of the aforementioned global citation indexing systems) shared their ideas on organizing the process of publishing scientific works both in Russia and abroad. Over the next several years, we took part in these conferences annually (except, unfortunately, 2020, when the event was canceled due to the epidemiological situation) with reports on various aspects of this process [8–11]. The materials of these symposia were published as collections of articles, and it is the main ideas of our papers in those books that I would like to consider in this editorial note. In addition, it seemed reasonable to touch upon some of the problems that I mentioned in a short article based on a presentation at the International Conference “Bioinformatics of Genome Regulation and Structure/Systems Biology,” which, despite the coronavirus pandemic, was held in July 2020 in Novosibirsk [12]. Although that work is devoted mainly to a scientometric analysis of the situation with publications on gerontology, in my opinion, it may also be of interest to scientists of other specialties.

In recent decades, ideas of what constitutes a good scientific publication have changed dramatically. Earlier, it seems to me, attention was paid primarily to the content of an article, whereas today the emphasis in assessing its quality has sharply shifted towards the rank of the journal in which it is published.

At present, scientometric indicators of research performance (number of citations, impact factor, Hirsch index, SJR, etc.) are acquiring more and more importance for scientists working in various areas of science. Unfortunately, it is these indicators, and not the essence of the research performed by the authors (or of the concepts formulated by them), that are now often decisive for reviewers of the various foundations awarding the grants that make successful scientific work possible. In particular, as my experience shows, to receive a good grant for the study of aging mechanisms or for the development of model systems for the search for geroprotectors, it is no longer sufficient for specialists in the biology of aging to submit to a scientific foundation an application describing their advanced ideas or the interesting research methods they have developed [12]. Reviewers of applications pay attention primarily to the “quality” of the works already published by the applicants, as well as to the corresponding rankings of the authors. Here, “quality” and “rankings” mean just the above-mentioned scientometric indicators (both of the researchers and of the journals in which they publish their results). A vicious circle arises: to obtain high-quality scientific results, you need money, and you can get it only after publishing a significant number of “cool” articles.

In this situation, only periodicals with a high impact factor are currently considered “good” journals (there are also alternative approaches to such assessment, for example, those based on the number of downloads of papers from the websites of the respective publishers [7], but they have not yet become widespread). In other words, the more often articles from a particular journal are cited, the “better” it is. Accordingly, publications in such journals are automatically considered “good” articles, and “bad” articles are those that do not appear in such periodicals, although, in my opinion, bad articles are manuscripts written at a very low level in terms of the quality of ideas, data, language, mathematical processing of results, design of illustrations, and reference list. Most researchers are thus left to look for ways to publish weak works in “good” journals and then include references to these articles in grant applications. Of course, one can, like Grigory Perelman, publish on the Internet a proof of a conjecture that has defied everyone else and win a Fields Medal, but for this, alas, one has to be a real genius.

It should be noted that very weak articles are often encountered even in the most famous “good” periodicals, and some of them are then subjected to so-called retraction. Examples of retraction can be found in the practice of even such “monsters” of scientific publishing as Nature or The Lancet. This raises the question of how “bad” articles still penetrate the strong barriers built by authoritative editorial boards and reviewers, as well as by the whole staff of publishing houses who monitor compliance with the rules for preparing manuscripts. Possible answers to this question were considered by us in detail earlier [9].

By the way, regarding the journal Nature, which for most of us has for many years been considered the highest standard of a scientific periodical, I can say the following. As mentioned above, in my main specialty I am a gerontologist. Recently, I became interested in the methodological details of one work [13] published in this journal and devoted to the effect of a certain mutation on the lifespan of mice. I wanted to know how many animals the authors used in their experiments and how they analyzed the survival curves, since this naturally determines the significance of the results. To my surprise, I found no such information either in the article itself or in the Supplementary Materials published on the journal’s website. And this is an article that has been cited over 1800 times! And this is Nature!
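For readers less familiar with such experiments: the analysis in question usually amounts to comparing Kaplan–Meier survival curves of two groups of animals with a log-rank test, and the group sizes directly determine the power of that test. Below is a minimal sketch of such an analysis, assuming the open-source Python package lifelines; every number in it is synthetic and purely illustrative, with no relation to the cited paper.

```python
# A minimal sketch with synthetic data: all numbers are invented
# for illustration, not taken from any published work.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical lifespans (days) of two groups of mice; no censoring.
wildtype = rng.exponential(scale=800, size=30)  # n = 30 animals
mutant = rng.exponential(scale=950, size=30)    # n = 30 animals
observed = np.ones(30)                          # all deaths observed

# Kaplan-Meier survival curves for each group.
km_wt = KaplanMeierFitter().fit(wildtype, event_observed=observed, label="wild type")
km_mut = KaplanMeierFitter().fit(mutant, event_observed=observed, label="mutant")

# Log-rank test for the difference between the two curves.
result = logrank_test(wildtype, mutant,
                      event_observed_A=observed, event_observed_B=observed)

# Exactly the information a reader needs to judge the significance:
print(f"n = {len(wildtype)} vs {len(mutant)} animals per group")
print(f"median lifespan: {km_wt.median_survival_time_:.0f} vs "
      f"{km_mut.median_survival_time_:.0f} days")
print(f"log-rank p-value: {result.p_value:.3f}")
```

It is precisely these few numbers, the group sizes, the test used, and the resulting p-value, that I was unable to find in the publication in question.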

Apparently, in many cases, some periodicals from Q1 (the 25% of journals with the highest scientometric indicators; publications in them provide scientists with the high rankings necessary for obtaining grants, as well as for recertification or competitions for prestigious positions) do not take the peer reviewing and editing of submitted manuscripts seriously. Moreover, they may even be so-called “predatory journals,” whose emergence is associated with the increasingly widespread model in which publication is fully paid for by the authors [14, 15]. Such journals are mainly interested in attracting as much money as possible from authors and do not particularly care about WHAT is published.

Several years ago, I analyzed the situation with a small foreign publishing house specializing in biomedical journals [9]. It has existed for a little over 10 years and currently publishes four scientific journals. The impact factors of these journals reach five to six, and they can easily be classified as “highly ranked” (Q1). All of them operate under an open access model paid for by the authors. The cost of publishing an article varies within 3000–4000 US dollars. At the same time, the number of published articles is simply enormous. For example, in 2019, the most popular journal of this publishing house published nearly 18 000 (!) articles distributed over 52 (!) issues, that is, approximately 350 articles per issue, or about 50 articles every day. The time from submission to publication can be as short as 2 weeks (!), which completely excludes the possibility of normal peer reviewing and editing of manuscripts. It should be noted that this journal was mentioned by Jeffrey Beall, a well-known fighter for the purity of scientific publications [14, 15], on his website as a very likely “predator.” However, this, unfortunately, has not prevented the periodical from being indexed in the global citation systems. I would like to emphasize that, as my practice shows, normal peer reviewing of an article (taking into account the search for relevant peer reviewers as well as multiple rounds of correspondence between the editorial board, authors, and peer reviewers) cannot take less than 3–4 weeks and quite often requires up to 2–3 months of hard work. I cannot imagine how adequate blind peer reviewing of 18 000 articles a year can be accomplished!

It should be borne in mind that journals can be ranked in quite different ways. Most often, ranking is based on the impact factor (Web of Science) or CiteScore (Scopus). Until recently, the two were very similar: both were based on the number of citations, in a particular year, of the articles from a given journal published in the previous two (Web of Science) or three (Scopus) years. Therefore, the distribution by quartiles Q1–Q4 in the two systems was quite similar. However, on June 23, 2020, Scopus completely changed its approach to calculating CiteScore. Notably, an idea I had formulated some time ago and shared with colleagues from Scopus was taken into account. As noted above, the CiteScore of any periodical used to be calculated on approximately the same principle as the impact factor in Web of Science: the number of citations in a given year of the articles published in this journal in the previous 3 years was divided by the number of these articles. For example, for the CiteScore-2018 of journal “X,” the number of citations in 2018 (in any periodicals indexed in Scopus) of articles published in 2015–2017 in journal “X” was divided by the number of these articles. However, this approach created a certain “dead zone”: citations of works published in the same year, which occur quite often, especially if the article appeared at the very beginning of the year, were never counted. For example, if a publication appeared in January 2018, it could have been cited many times by December 2018, yet these citations were never taken into account when calculating either CiteScore (Scopus) or, by the way, the impact factor (Web of Science), because only citations of articles from past years were considered. The new CiteScore calculation method eliminated the “dead zone” problem: the number of citations over 4 years is now divided by the number of publications over the same 4 years. As a result, ALL references to the articles of a particular periodical are taken into account. Time will tell how much better this approach is than the previous one.
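In formula form (the notation here is mine, summarizing the verbal descriptions above, not Scopus’s official wording), the change can be written as follows:

```latex
% Old scheme, illustrated for the CiteScore-2018 of journal X:
\mathrm{CiteScore}^{\mathrm{old}}_{2018}
  = \frac{\text{citations in 2018 of articles of } X \text{ published in 2015--2017}}
         {\text{number of articles of } X \text{ published in 2015--2017}}

% New scheme (since June 2020), illustrated for CiteScore-2019:
\mathrm{CiteScore}^{\mathrm{new}}_{2019}
  = \frac{\text{citations in 2016--2019 of articles of } X \text{ published in 2016--2019}}
         {\text{number of articles of } X \text{ published in 2016--2019}}
```

Since the citation window now coincides with the publication window, citations received by an article in its year of publication no longer fall into a “dead zone.”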

However, it should be noted that, in the very popular SCImago Journal & Country Rank system, which is based on Scopus data, journals are ranked according to the SJR (SCImago Journal Rank) index. This is essentially the same citation rate of the journal, but weighted by the rank of the periodicals from which the citations come. If the rank of the citing journals is very high, the SJR indicator may exceed the Citations per document (2 years) indicator, which is a complete analogue of the impact factor; if that rank is fairly low, the SJR indicator will be lower (sometimes several times lower) than the Citations per document (2 years) indicator. In many scientific institutions, it is the SJR-based ranking that is used in the certification of teaching staff and researchers.
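Schematically, the difference between the two indicators can be expressed as follows (SCImago’s actual algorithm is iterative and considerably more involved; the notation below is mine and is meant only to convey the weighting idea):

```latex
% Plain citation rate (analogue of the impact factor): every citation
% counts as 1. Here c_{i \to j} is the number of citations from journal i
% to journal j, and N_j is the number of recent articles in journal j.
\text{Cites per doc}_j = \frac{\sum_i c_{i \to j}}{N_j}

% SJR-style indicator (schematic only): a citation from journal i is
% weighted by the prestige w_i of the citing journal, itself computed
% iteratively in a PageRank-like fashion, so citations coming from
% high-ranked journals count for more.
\mathrm{SJR}_j \;\propto\; \frac{\sum_i w_i \, c_{i \to j}}{N_j}
```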

It should be emphasized that each subject area and subject category has its own rankings and quartiles, which may not be related in any way to the corresponding indicators in other areas of science. For example, in the category “Gerontology,” an SJR of 0.84 is sufficient for a journal to get into Q1, whereas in the category “Aging” an SJR of 1.6 is already required. The SJR of the highest-ranked gerontological journal, Ageing Research Reviews, is 3.79, whereas its Citations per document (2 years) index today is already almost 11. However, it is very far from the “coolest” journal, Ca-A Cancer Journal for Clinicians, with an SJR of 88.19 and a Citations per document (2 years) of approximately 255 (all figures are given for 2019).

The pursuit of formal ratings and scores forces scientists to rush when writing scientific articles. They simply do not have enough time to thoroughly check their manuscripts for grammatical and stylistic errors or to analyze the data obtained accurately. Blunders in assessing the significance of the revealed patterns occur in many articles submitted to our journal, and peer reviewers are often unable even to check the correctness of the calculations, because they do not have access to the primary data obtained in the work.

A separate problem is reference lists. Ideally, authors should check every reference used in their work, especially since the Internet now allows doing this quickly and efficiently. In one of our publications [10], we proposed the following scheme for this process: (1) search for the article in the appropriate databases (PubMed, Google Scholar, etc.); (2) verification of the imprint data on the publisher’s website (Springer, Elsevier, Wiley, Taylor & Francis, etc.); (3) verification of the journal title abbreviation on the Web of Science website (Journal Title Abbreviations) or using the CAS Source Index (CASSI); and (4) bringing the reference into accordance with the citation format accepted by the journal in which the article is to be published. Unfortunately, my experience of working in several scientific journals shows that almost 100% of the manuscripts submitted to the editorial board contain incorrect or inadequately formatted references in the reference list.
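Much of steps (1) and (2) can, in fact, be automated. As a minimal illustration (this sketch is mine, not part of the scheme in [10]), the following Python fragment queries the public Crossref REST API for the registered metadata of a reference by its DOI; the DOI in the example is a deliberately hypothetical placeholder.

```python
# A minimal sketch, assuming only the public Crossref REST API
# (https://api.crossref.org) and the "requests" package.
import requests

def check_reference(doi: str) -> None:
    """Fetch the registered metadata for a DOI and print its imprint data."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    if resp.status_code == 404:
        print(f"{doi}: not registered with Crossref -- check this reference!")
        return
    resp.raise_for_status()
    msg = resp.json()["message"]
    title = (msg.get("title") or ["<no title>"])[0]
    journal = (msg.get("container-title") or ["<no journal>"])[0]
    year = msg.get("issued", {}).get("date-parts", [[None]])[0][0]
    # Compare these fields against the reference as typed by the author.
    print(f"{doi}: {title} // {journal}, {year}")

check_reference("10.1000/xyz123")  # hypothetical placeholder DOI
```

The metadata returned this way can then be compared, field by field, with the reference as typed by the author.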

However, it should be noted that some well-known publishing houses have adopted the practice, for journal articles and chapters in collective monographs, of allowing authors to choose the citation format themselves. In this case, the editorial staff retains the chosen reference format and does not edit the manuscript in this regard. For example, not so long ago we published two chapters in a book issued by the world-famous publishing house Taylor & Francis [16, 17]. This collective monograph contains 35 chapters written by various authors, and the format of citations and reference lists varies from chapter to chapter. Apparently, in the opinion of the editors and publishers of the book, this circumstance does not hamper an adequate perception of the material by the readers.

In any case, in my opinion, the use of “broken” references in published articles not only makes journals candidates for removal from serious global citation systems but also calls into question the scientific value of the concepts and experimental data presented in the articles.

Recently, it has often been said that the use of DOIs may help solve the problem of “crooked” references. Indeed, adding DOIs to reference lists reduces the probability of a reference that cannot be found. However, with a large number of references, this approach noticeably increases the volume of the article. In addition, as my experience shows, DOIs are also quite often printed with errors, and a DOI may lead nowhere even in the final version of the publication. Apparently, the situation can be improved only by careful work of the authors on their reference lists.
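At least such “dead” DOIs are easy to detect automatically: the doi.org proxy answers a registered DOI with an HTTP redirect and an unknown one with a 404 error. A minimal sketch of such a check (the DOIs in the list are placeholders for illustration only):

```python
# A minimal sketch of a "dead DOI" check, assuming only the standard
# behavior of the doi.org resolver and the "requests" package.
import requests

def doi_resolves(doi: str) -> bool:
    """Return True if doi.org knows this DOI, i.e., it leads somewhere."""
    resp = requests.head(f"https://doi.org/{doi}",
                         allow_redirects=False, timeout=30)
    return resp.status_code in (301, 302, 303, 307, 308)

# Placeholder DOIs for illustration only.
for doi in ["10.1000/182", "10.1234/obvious-typo"]:
    print(doi, "->", "resolves" if doi_resolves(doi) else "LEADS NOWHERE")
```

Of course, such a check only proves that a DOI leads somewhere, not that it leads to the work the author actually meant to cite; the latter still requires careful work with the reference itself.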

Thus, from my standpoint, in the struggle between the “informal” (focused only on the quality of the published articles) and the “formal” (taking into account mainly the scientometric indicators of authors and journals) approaches to scientific publications, the second option definitely wins. And the reason for this, in my opinion, lies mainly in the significant commercialization of the process. Publishing houses are interested in the authors’ money (it is easy to estimate how much the above-mentioned journal alone earns from 18 000 articles per year at approximately $4000 each: about 72 million dollars), and authors are interested in publications with high scientometric indices, which provide them with grants, positions, titles, and victories in competitions. I do not rule out that this is the “correct” strategy, but I still nostalgically recall the times when the “informal” approach to the assessment of scientific achievements reigned.