Manuscript Processing Times Are Negatively Correlated with Journal Impact Factors
A limited two-year time window is used to calculate a journal’s impact factor, which suggests that journals with faster publication times will have higher impact factors. To test this hypothesis, the manuscript processing times (median time to acceptance and median time to publication) were determined for articles from 42 journals selected from seven different research areas. Both acceptance time and publication time were found to be negatively correlated with impact factor. When analysed by research category, processing times were even more strongly negatively correlated with the group’s median impact factor.
impact factor, manuscript handling, acceptance time, publication time
Introduction
The impact factor was originally introduced as a means of selecting journals for inclusion in the Science Citation Index, now owned by Thomson Reuters (Garfield 1999). Its role has expanded dramatically since that time. For example, many administrators and funding bodies now evaluate researchers based on the impact factors of the journals in which they publish. This information is used to guide funding decisions (Adam 2002), the granting of tenure (Monastersky 2005), and even the disbursement of financial incentives (Fuyuno and Cyranoski 2006; Shao and Shen 2011). As a result of this prominence, the virtues and shortcomings of the impact factor are hotly debated in the literature (e.g., Vanclay 2012; Brody 2013). Therefore, given its current importance in funded science, it is critical that the behaviour and limitations of the journal impact factor be clearly understood.
The impact factor (IF) is calculated as the ratio of citations to citable items for a given time window. A journal’s impact factor for 2009, for example, is given by:
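The equation itself does not survive in this copy; the standard two-year Thomson Reuters definition it describes is:

```latex
\mathrm{IF}_{2009} \;=\;
\frac{\text{citations received in 2009 to items published in 2007 and 2008}}
     {\text{number of citable items published in 2007 and 2008}}
```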
Although five-year impact factors are also calculated and reported by Thomson Reuters, it is the two-year version which is used most frequently.
One less-appreciated aspect of the impact factor is the effect of the limited time window over which it is calculated (Falagas and Alexiou 2008; Garfield 1999). For example, citations of an article A published in January 2007 will only contribute to the impact factor in 2008 and 2009; citations received in 2007, or in 2010 or later years, will not count toward the impact factor. Assuming that most researchers will become aware of A only at the time of its publication, any article B which cites A must be written, reviewed, and published in under three years to contribute to the journal impact factor. This suggests that fields which publish more rapidly increase their likelihood of generating contributing citations within that time window. The theoretical models of Yu, Wang, and Yu (2005) and Yu, Guo, and Li (2006) have demonstrated such a link between publication times and impact factors; it has also been observed empirically in isolated research areas (Metcalfe 1995; Pautasso and Schäfer 2010). Nevertheless, a correlation between manuscript processing times and journal impact factors has yet to be shown to exist more broadly in the scientific literature.
The current work examines the correlation between manuscript processing times (i.e., time to acceptance and time to publication) and journal impact factors across a range of subject fields. This relationship is considered at the level of the individual journals but also at the level of journal categories. The latter is motivated by the recognition that only a small portion of a journal’s citations will come from itself; most are likely to come from other journals with a similar scope. For this study 42 journals, 6 from each of seven different subject categories, were selected, and their processing times were calculated and related to individual and group impact factors. The confirmation of an impact factor–time relationship will deepen our understanding of a number which has become a critical evaluation metric in modern science.
Materials and methods
Journal category selections
Impact factors for academic journals are calculated by Thomson Reuters based on entries in the Web of Science citation database. Thomson Reuters also groups each journal into one or more journal categories—of which there were 173 for the year 2009—for its Journal Citation Reports. The median impact factors for each of the 173 groups were ordered, and the values representing the 10th, 25th, 35th, 50th, 65th, 75th, and 90th percentiles were identified.
In addition to differences in median impact factor, the journal categories also vary greatly in other characteristics, such as the total number of journals included in each group, the total number of articles produced in a given year, and so on. Each of these variables could confound the attempt to link publication times with impact factors. To attempt to control for some of this variability, three such characteristics were selected to identify “similar” categories: the number of journals (J), the total number of articles (A), and the total number of citations (C). A vector distance from the median, d, for each journal category was calculated using:
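The equation is missing from this copy; a form consistent with the surrounding description (and with the normalization to [0, 1] noted below) would be:

```latex
d \;=\; \frac{\sqrt{(P_J - 50)^2 + (P_A - 50)^2 + (P_C - 50)^2}}{50\sqrt{3}}
```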
where the Pi terms represent the percentile rank of each category for the three parameters. Note that the denominator will normalize values to the range [0,1], where 0 represents those journal categories closest to the median in all three parameters.
The journals with the smallest distances (d < 0.1) were identified. From this subset of 38 journal categories, the 7 with median impact factors closest to the specified percentiles for all journal groups were selected (10th, 25th, 35th, 50th, 65th, 75th, and 90th). These 7 categories are presented in table 1.
Each journal group was selected from among the categories used in Thomson Reuters’ Journal Citation Reports for 2009 based on the objective criteria described above. This selection approach was intended to identify groups which represented a large range of impact factors but which were otherwise comparable along those three metrics (number of journals, number of articles, and number of citations). Furthermore, these selection criteria allowed us to simplify our statistical analysis and analyse the dependence of impact factor on journal processing times using a univariate model.
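As an illustration, the distance criterion can be sketched in a few lines of Python (a hypothetical implementation; the function name is mine, and the equation is reconstructed from the textual description, so the published form may differ in detail):

```python
import math

def category_distance(percentiles):
    """Normalized vector distance of a journal category from the median.

    `percentiles` holds the category's percentile ranks (0-100) for the
    number of journals (J), the total number of articles (A), and the
    total number of citations (C).  The denominator scales the distance
    to [0, 1], with 0 meaning the category sits at the 50th percentile
    on all three measures.
    """
    deviation = sum((p - 50.0) ** 2 for p in percentiles)
    return math.sqrt(deviation) / (50.0 * math.sqrt(3))

# Categories with d < 0.1 were retained as "similar" candidates.
similar = category_distance([52, 48, 51]) < 0.1
```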
Journal selections
Following the identification of the seven journal categories, the initial plan was to select seven journals from each category to represent the 10th, 25th, 35th, 50th, 65th, 75th, and 90th percentiles of impact factors. A journal was selected provided that (a) articles listed a “received” date as well as either a “revised” or “accepted” date, (b) at least four issues were published in 2009, (c) articles were published in English, and (d) the articles’ publication history data were either freely available online or the journal was one to which the author’s institution had an electronic subscription.1 This final criterion allowed for automated computation of the processing times, in turn permitting a larger number of articles to be included in the study.
Finding journals near the 10th percentile that met the above criteria proved to be impossible in many groups, so the requirement for a 10th-percentile journal was removed; this reduced the total number of journals selected to only six per category. For the remaining six cases, the qualifying journals closest to the designated percentiles were used. The journals selected for each category are given in table 2. It should be noted that, as a result of the selection criteria, impact factors vary considerably both within and between journal categories.
Date calculations
Electronic data mining was used to identify article publication history dates from either HTML (Hypertext Markup Language) or PDF (Portable Document Format) versions of the articles (Lievers and Pilkey 2012). Special issues and supplements presenting work from an academic conference were omitted from the analysis on the basis that the articles therein might have been reviewed and published using a non-standard editorial process.
Up to four dates were recorded for each article: the date received (drec), the date revised (drev), the date accepted (dacc), and the date published (dpub). The value of dpub was estimated to be the middle of the time period spanned by the issue in which the article appeared (Amat 2008). For monthly journals, dpub was taken to be the 15th day of that month. For bi-monthly journals, dpub was taken as the first day of the second month. Numbered issues with no specified dates were assigned a proportion of the year, based on the total number of issues, and dpub was assigned as the central date of that span.
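The issue-date estimation rules above can be sketched as follows (a minimal illustration; the function and parameter names are my own, not from the study):

```python
from datetime import date, timedelta

def estimate_pub_date(year, issue_number, issues_per_year, month=None):
    """Estimate d_pub as the midpoint of the period an issue spans.

    - Monthly journals: the 15th day of the issue's month.
    - Bi-monthly journals: the first day of the second month covered.
    - Numbered issues with no dates: each issue is assigned an equal
      fraction of the year, and d_pub is the centre of that span.
    """
    if month is not None and issues_per_year == 12:
        # Monthly journal: middle of the month.
        return date(year, month, 15)
    if month is not None and issues_per_year == 6:
        # Bi-monthly journal: first day of the second month it covers.
        return date(year, month + 1, 1)
    # Numbered issue: centre of its proportional share of the year.
    days_per_issue = 365.25 / issues_per_year
    offset = days_per_issue * (issue_number - 1) + days_per_issue / 2
    return date(year, 1, 1) + timedelta(days=offset)
```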
Following the automated date extractions, five randomly selected papers were sampled from each of the 42 journals for manual verification. No errors were found in the four dates extracted from these 210 articles. Please note that this sample included articles that were later excluded from analysis as described in the following section.
Data analysis
Two processing times were calculated based on the article data. Acceptance time was generally calculated as Δtacc = dacc – drec. When a revision date but no acceptance date was given, then Δtacc = drev − drec. Publication time was calculated as Δtpub = dpub − drec. Articles that lacked sufficient information to calculate both Δtacc and Δtpub were omitted from further analysis. In a few cases typographical errors in the dates resulted in negative times; these too were omitted. The total number of valid articles for each journal is given in table 2. Please note that all articles were published in issues with publication dates in 2009.
Processing times were calculated in days and converted to months by dividing by (365.25/12). This conversion was performed because months are a more intuitive unit for the times under consideration, and because the precision associated with days seemed unwarranted given the elasticity associated with identifying the date of publication.
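The two processing-time calculations, the fallback to the revision date, the exclusion rules, and the unit conversion can be expressed compactly (a sketch only; the helper name and signature are mine):

```python
from datetime import date

DAYS_PER_MONTH = 365.25 / 12  # convert day counts to months

def processing_times(d_rec, d_pub, d_acc=None, d_rev=None):
    """Return (acceptance time, publication time) in months, or None
    when the dates are insufficient or inconsistent (e.g. typographical
    errors producing negative intervals, which the study discarded)."""
    # Fall back on the revision date when no acceptance date is given.
    end = d_acc if d_acc is not None else d_rev
    if end is None:
        return None
    dt_acc = (end - d_rec).days / DAYS_PER_MONTH
    dt_pub = (d_pub - d_rec).days / DAYS_PER_MONTH
    if dt_acc < 0 or dt_pub < 0:
        return None  # a date contains a typographical error
    return dt_acc, dt_pub
```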
Linear regressions were performed relating median acceptance time (Δtacc) and median publication time (Δtpub) to journal impact factors. This relationship was examined both by using the individual journal impact factors and by grouping journals according to the median impact factor of their journal category. A linear regression was also performed between Δtacc and Δtpub to examine whether publication delays increased with longer times to acceptance. In addition to the linear regressions, Spearman’s rank correlation coefficient (Spearman’s ρ) was also calculated. A value of p < .05 was deemed to be significant, and a value of .05 ≤ p < .10 was interpreted as a trend towards significance.
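Both tests are available in SciPy, one possible toolkit (the article does not state which software was used); the journal data below are invented for illustration:

```python
from scipy.stats import linregress, spearmanr

# Hypothetical data: median acceptance time (months) per journal,
# paired with that journal's impact factor.
times = [2.0, 3.5, 5.0, 6.5, 8.0, 11.0]
impact_factors = [3.1, 2.8, 2.2, 2.0, 1.6, 1.1]

# Parametric test: ordinary least-squares linear regression.
fit = linregress(times, impact_factors)
print(f"R = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")

# Non-parametric check: Spearman's rank correlation.
rho, p = spearmanr(times, impact_factors)
print(f"rho = {rho:.2f}, p = {p:.4f}")
```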
Results
The relationships between journal impact factors and either median acceptance time or publication time are shown in figure 1. In both cases a significant (R = −0.31, p = .045; R = −0.33, p = .035) negative relationship was observed. That is, the median time to acceptance and publication decreased as the impact factor of a journal rose, irrespective of its journal category. Spearman’s correlation also indicated a negative relationship; however, it was significant only for acceptance time (ρ = −0.32, p = .038) and not publication time (ρ = −0.25, p = .116).
A similar behaviour is observed when these relationships are examined within the context of the journal categories (figure 2). A significant negative relationship is observed between the median impact factors of the journal categories and the manuscript processing times associated with the six journals within those categories (R = −0.46, p = .002; R = −0.38, p = .013). The Spearman’s correlation was also negative and significant (ρ = −0.55, p = .0001; ρ = −0.33, p = .031).
A positive linear relationship (R = 0.69, p < .001) was observed between acceptance and publication time (figure 3). The slope (0.94) is close to 1, which suggests that the delay between acceptance and publication remains roughly constant regardless of how quickly the review process is performed. The offset of the linear regression equation suggests that this acceptance-publication lag time is approximately 6.4 months. The results of the Spearman’s correlation were also significant (ρ = 0.67, p < .001).
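In equation form, the fitted relationship between the two median times (in months) reported above is approximately:

```latex
\Delta t_{\mathrm{pub}} \;\approx\; 0.94\,\Delta t_{\mathrm{acc}} + 6.4
```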
Discussion
A survey of 42 journals, 6 from each of seven different journal categories, was performed to relate impact factors with manuscript processing times. The median time to acceptance (Δtacc) and median time to publication (Δtpub) were both found to be significantly negatively related to the individual journals’ impact factors as well as the median impact factor for the journal group (p < .05). That is, processing times decreased with increasing impact factor.
The current results confirm the work of Pautasso and Schäfer (2010), who also found a negative relationship between journal impact factor and acceptance time in a group of 22 ecology journals. A similar observation was reported by Metcalfe (1995) in 28 biological and biomedical journals. As noted by Pautasso and Schäfer (2010), this speed of processing occurs despite the fact that journals with higher impact factors receive more submissions. Rousseau and Rousseau (2012) have recently shown that authors are willing to wait longer for editorial decisions for more prestigious journals; the negative correlation reported here suggests that they will not have to do so. Nevertheless, the current work demonstrates that these earlier observations extend beyond individual disciplines and are representative of a broader pattern in the scientific literature.
It should be noted that other studies have also reported a negative correlation between publication time and impact factor; however, they have not explicitly measured the processing times at a particular journal. For example, Ray, Berkwits, and Davidoff (2000) measured the time between rejection by one journal and subsequent publication in a second. While the impact factor of the publishing journal was observed to decrease with publication time, this delay does not necessarily reflect longer manuscript processing times at those journals. An alternate explanation would be that authors “just work their way down the ladder” (Adam and Knight 2002, 774) by successively submitting to progressively lower-impact journals until the manuscript is accepted. The work by de Marchi and Rocchi (2001) also showed a negative correlation but relied on average times self-reported by the journals themselves in response to a survey; the reply rate was 9.6%. The current study explicitly accounts for manuscript handling times on a paper-by-paper basis.
Median acceptance times varied from as little as 0.5 to over 13.5 months (15–412 days), while the median time to publication ranged from 5.7 to 20.2 months (173–615 days). Both of these ranges are wider than those previously reported in the literature (“Acceptance Rates” 2002; Labanaris et al. 2007; Pautasso and Schäfer 2010; Metcalfe 1995). For example, Amat (2008) reported acceptance and publication times of 101–292 and 181–491 days. It should be noted, however, that those previous studies have been limited to a single topic area. The scatter of figure 2 makes it clear that large inter- and intra-group differences exist.
The mechanisms which allow for more rapid processing in higher-impact journals warrant further investigation, particularly given the greater volume of manuscripts they receive (Pautasso and Schäfer 2010). Editorial policies are expected to have a huge impact on this process. For example, some journals employ a triage system which leads to over half of the submitted manuscripts being rejected without peer review (Crawford et al. 2008). Others impose strict time limits or deadlines for reviewers, which may be as little as two weeks (Drubin 2011). These types of techniques may allow editors and editorial boards to reduce the time needed to process and publish manuscripts.
As noted earlier, the publication date is difficult to identify with precision. The date of the journal issue may not reflect the time at which it was physically published. While an issue may have been released earlier or later than the cover date would indicate, those discrepancies should exist at the journal level and would be expected to be randomly distributed. Other potential sources of error include the selection of journals and journal categories. While every effort was made to select journal categories which differed primarily by median impact factor, it was not possible to control for all possible confounding variables. Journal groups were determined objectively by selecting research areas which were close to the median in three areas: the number of journals, the total number of articles, and the total number of citations.
Journals were selected from within each journal group only semi-objectively based on four pragmatic criteria. First, the journals had to provide information about the dates on which manuscripts were received, revised, and/or accepted. This information was needed for the study and, as shown by de Marchi and Rocchi (2001), is difficult to obtain directly from individual journals. Only journals publishing in English, with at least four issues a year, were included to avoid the confounding effect of language and artificially inflated publishing times. Finally, journals were also limited to those with electronically accessible publication data, available either by institutional subscription or to the general public. This criterion permitted the automated determination of publication times necessary to process the more than 4,700 articles included in this study.
The current work has demonstrated a statistically significant relationship between article processing times and journal impact factors using a univariate model. There are several journal-specific variables that are also known to affect the values of impact factors, such as the editorial practices and the overall prestige of the journal. These parameters have been ignored due to the difficulties associated with quantifying them. Nevertheless, future work should consider a more complex multivariate approach to investigate the effect and interdependence of additional explanatory variables, as well as a larger sample of journals and journal groups.
Advanced online publication has also been ignored in this study (Drummond and Reeves 2005), a decision dictated by the diverse practices of individual journals. Some journals post electronic versions of articles immediately after acceptance, others do so once corrected proofs are available, and some post electronic articles only in conjunction with the publication of the print version. Recent work by Tort, Targino, and Amaral (2012) has shown that a larger delay between online and print publication is associated with an increase in journal impact factor. This inflation is caused by artificially extending the citable time for an article. Future work should address the effects of manuscript handling times and the delay between online and print publication concurrently.
This article has also left unaddressed the effects that rapid processing times may have on the quality of the manuscript review process itself. While determination of an objective measure of “quality” is certainly a challenge, one possible metric would be the number of retracted or corrected articles. The rate of retractions in academic journals has been noted to be increasing, and many of these articles are retracted for methodological, data, or analysis errors (Steen 2011; Grieneisen and Zhang 2012). Peer review is not capable of identifying every possible error; however, more thorough review may be related to reduced rates of published corrections, errata, or retractions. Nevertheless, it would be difficult to disentangle the various effects of publishing delays, time spent actually reviewing the article, the quality of the review, the prestige of the journal, and so on.
A statistically significant relationship between article processing times and journal impact factors has been demonstrated across the seven journal categories studied; however, correlation does not indicate causation. Editors cannot necessarily expect to improve their journal’s impact factor simply by improving the speed of their manuscript review and processing system. For example, the level of previous publishing experience on the part of submitting authors (Yegros and Amat 2009) and the publishing model itself (Dong, Loh, and Mondry 2006) have also been shown to be correlated with manuscript processing times. Nevertheless, a faster review system is undeniably appealing to authors (Rowlands, Nicholas, and Huntington 2004; Ware and Monkman 2008). It may be that the reduced review times attract better authors, thereby raising the impact factor; however, more study is needed.
The acceptance and publication times of 4,735 articles from 42 journals, spanning seven journal categories, were determined. The results confirm that manuscripts are processed faster both in journals with higher impact factors and in journal categories with higher median impact factors. The effect of the limited time window on the calculation of the journal impact factor must be properly understood in order for this important metric to be interpreted and applied correctly.
blievers@laurentian.ca
Notes
1. This work was performed while the author was employed by the University of Virginia in Charlottesville, VA.