Editors of international scientific journals find it increasingly difficult to locate reviewers willing to evaluate manuscripts prior to publication (a process known as peer review). The most experienced researchers, who are best qualified to undertake these evaluations, increasingly decline peer-review requests, either because they lack the time or because they have reviewed so many manuscripts that they have grown weary of the task. As a result, editors often have to call upon less experienced researchers, or scientists from increasingly distant fields, often to the detriment of the quality of the review. So how did we arrive at this impasse, and what can we do to resolve it? The stakes are high, because the reputation of the whole peer-review system used in scientific publishing could be damaged in the long term.
The key point we wish to stress here is a pernicious trend within the current publication system: researchers in all fields of science are incited to submit ever more articles, while traditional scientific journals are encouraged to publish fewer and fewer of them. Scientific journals are engaged in a frantic race to obtain the highest possible Impact Factor (Garfield, 1955), leading them to reject most of the articles submitted to them (about 75% on average). Thus, before its publication, a scientifically robust article may be examined by as many as eight reviewers (plus handling editors) from four different journals, multiplying the number of requests for peer review. Conversely, scientists are often evaluated on the basis of the number of articles they have published in journals with a high Impact Factor, favouring publication in a particular journal over content, and quantity over quality. These two forces are behind the current dysfunction of the editorial system for peer-reviewed science, causing a gridlock that slows down science.
How can we break this vicious circle? Quite simply, by changing the bibliometric indices used! Such indices for evaluating journals and researchers were imposed on us only some sixty years ago. At the turn of the millennium, one of their original advocates already compared the mixed blessings of the journal Impact Factor to those of nuclear power: “I expected that it would be used constructively while recognizing that in the wrong hands it might be abused” (Garfield, 1999). Regrettable as it may be, we have no choice but to use publication indices. To ensure that their use creates a virtuous circle for science, they need to be freed from certain principles. Indeed, a possible solution that has little effect on journal rankings, but a radical effect on editorial policy, has been introduced by Google Scholar: the H5 index. Traditionally, the five-year Impact Factor of a journal (IF5), as computed from the Web of Science, is based on the mean number of citations per article. Maximizing it therefore requires eliminating a large number of scientifically robust articles adjudged, very subjectively, by the editor to be insufficiently “profitable” in terms of the journal's Impact Factor. By contrast, the H5 index does not take into account articles that attract few citations. In a new era based on the use of H5, which we would like to promote, editors would be able to publish articles unlikely to be frequently cited without the risk of lowering their journal's ranking. This would prevent the never-ending spiral of evaluation (see Figure 1, in which we compare the two editorial models for an identical rate of rejection). It is worth noting that the Google Scholar Metrics ranking based on the H5 index does not disrupt the ranking of academic journals (see inset in Figure 1 for plant science). Moreover, the two major generalist scientific journals, Nature and Science, rank No. 7 and No. 20, respectively, under IF5, but No. 1 and No. 3 under the H5 index.
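The contrast between the two indices can be illustrated with a short sketch. The citation counts below are invented for demonstration, and the functions are simplified stand-ins for the real IF5 and H5 calculations: a mean-citation metric is dragged down by rarely cited articles, whereas an h-type index ignores them entirely.

```python
def mean_citation_index(citations):
    """Mean citations per article, as in an Impact-Factor-style metric."""
    return sum(citations) / len(citations)

def h_index(citations):
    """Largest h such that at least h articles have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one journal's articles over five years.
articles = [50, 40, 30, 10, 5, 2, 1, 0, 0, 0]

print(mean_citation_index(articles))  # 13.8
print(h_index(articles))              # 5: five articles with >= 5 citations

# Publishing three more robust but rarely cited articles lowers the mean,
# yet leaves the h-index untouched.
print(mean_citation_index(articles + [0, 0, 0]))  # ~10.6
print(h_index(articles + [0, 0, 0]))              # still 5
```

Under the mean-based metric, the editor has an incentive to reject the low-citation articles; under the h-based metric, publishing them costs the journal nothing.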
Figure 1. The traditional editorial model aims to maximize the impact factor (IF) of journals, which implies a high rejection rate and obliges authors to resubmit their articles several times. In a model based on the H5 factor of journals, the articles are peer-reviewed just once, by a single journal, with the same overall rate of rejection for the two models (20%). The switch from an IF model to an H5 model would have little effect on the ranking of journals, as shown by the strong correlation between these two indices in plant sciences (inset). However, by greatly decreasing the editorial load, the use of the H5 factor would place science back at the heart of our vocation as researchers.
As far as authors are concerned, the challenge is to incite journals to adopt a policy of “ever better” rather than “ever more”, thereby favouring quality over quantity (i.e. fewer but better publications). As recommended by the DORA initiative (the San Francisco Declaration on Research Assessment), the Impact Factor of the journals in which authors have published should not be considered when evaluating individual researchers and research groups. Instead, the focus should be on the intrinsic scientific quality of the articles, possibly assessed on the basis of indices relating exclusively to the statistics of those articles. For example, a metric such as the quotient of the number of citations and the number of years since publication could be used for individual articles to paint a more impartial portrait of scientific merit, while acknowledging that the interest a piece of research attracts today may differ from the interest it attracts in the future.
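The per-article metric suggested above is a simple quotient; a minimal sketch (the function name and the guard for articles published in the current year are our own choices, not part of any standard definition) might look like:

```python
def citations_per_year(citations, years_since_publication):
    """Citations normalized by article age, so that older and newer
    articles can be compared on a roughly equal footing."""
    # Treat articles published within the last year as one year old
    # to avoid division by zero.
    return citations / max(years_since_publication, 1)

print(citations_per_year(30, 3))   # 10.0: a recent, well-cited article
print(citations_per_year(30, 10))  # 3.0: the same count over a longer life
```

Such a metric rewards articles for their own citation record rather than for the venue in which they appeared.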
Clearly, the current editorial system in scientific publishing, based on the Impact Factor, favours the “big players” of scientific publication. We are compelled to aim for journals with a high Impact Factor, where placing a manuscript is very difficult, but where, once accepted, the research shares in the journal's prestige. Meanwhile, the low chance of success encourages the creation of new journals and allows large for-profit publishers to negotiate price increases for their journal packages. The interests of science lie elsewhere, and we need to be aware that the current failings of this system impede the advance of scientific knowledge.
- Garfield E. 1955. Citation indexes for science: a new dimension in documentation through association of ideas. Science 122: 108-111. doi:10.1126/science.122.3159.108
- Garfield E. 1999. Journal impact factor: a brief review. Canadian Medical Association Journal 161: 979-980.