Manuscript Processing Times Are Negatively Correlated with Journal Impact Factors
Abstract

A journal’s impact factor is calculated over a limited two-year time window, which suggests that journals with faster publication times should have higher impact factors. To test this hypothesis, the manuscript processing times (median time to acceptance and median time to publication) were determined for articles from 42 journals selected from seven different research areas. Both acceptance time and publication time were found to be negatively correlated with impact factor. When analysed by research category, processing times were even more strongly negatively correlated with the group’s median impact factor.


Keywords

impact factor, manuscript handling, acceptance time, publication time


Introduction

The impact factor was originally introduced as a means of selecting journals for inclusion in the Science Citation Index, now owned by Thomson Reuters (Garfield 1999). Its role has expanded dramatically since that time. For example, many administrators and funding bodies now evaluate researchers based on the impact factors of the journals in which they publish. This information is used to guide funding decisions (Adam 2002), the granting of tenure (Monastersky 2005), and even the disbursement of financial incentives (Fuyuno and Cyranoski 2006; Shao and Shen 2011). As a result of this prominence, the virtues and shortcomings of the impact factor are hotly debated in the literature (e.g., Vanclay 2012; Brody 2013). Therefore, given its current importance in funded science, it is critical that the behaviour and limitations of the journal impact factor be clearly understood.

The impact factor (IF) is calculated as the ratio of citations to citable items for a given time window. A journal’s impact factor for 2009, for example, is given by:

$$\mathrm{IF}_{2009} = \frac{\text{citations in 2009 to items published in 2007 and 2008}}{\text{number of citable items published in 2007 and 2008}} \tag{1}$$

Although five-year impact factors are also calculated and reported by Thomson Reuters, it is the two-year version which is used most frequently.

One less-appreciated aspect of the impact factor is the effect of the limited time window over which it is calculated (Falagas and Alexiou 2008; Garfield 1999). For example, citations of an article A published in January 2007 will only contribute to the impact factor in 2008 and 2009; citations received in 2007, or in 2010 or later years, will not count toward the impact factor. Assuming that most researchers will become aware of A only at the time of its publication, any article B which cites A must be written, reviewed, and published in under three years to contribute to the journal impact factor. This suggests that fields which publish more rapidly increase their likelihood of generating contributing citations within that time window. The theoretical models of Yu, Wang, and Yu (2005) and Yu, Guo, and Li (2006) have demonstrated such a link between publication times and impact factors; it has also been observed empirically in isolated research areas (Metcalfe 1995; Pautasso and Schäfer 2010). Nevertheless, a correlation between manuscript processing times and journal impact factors has yet to be shown to exist more broadly in the scientific literature.
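The implication of this window can be made concrete with a short sketch. The code below is illustrative only (the function names are ours, not part of the original analysis); it expresses equation (1) and the eligibility rule it implies:

```python
def two_year_impact_factor(citations_in_year: int, citable_items_prev_two_years: int) -> float:
    """Equation (1): citations received this year to items published in the
    two preceding years, divided by the number of citable items from those years."""
    return citations_in_year / citable_items_prev_two_years

def citation_counts_toward_if(cited_year: int, citing_year: int, if_year: int) -> bool:
    """A citation contributes to the impact factor for `if_year` only if the
    cited article appeared in the two preceding years and the citing article
    appeared in `if_year` itself."""
    return citing_year == if_year and cited_year in (if_year - 1, if_year - 2)

# Article A published in January 2007, evaluated against the 2009 impact factor:
print(citation_counts_toward_if(2007, 2007, 2009))  # False: same-year citations never count
print(citation_counts_toward_if(2007, 2009, 2009))  # True: 2007 lies in the 2007-2008 window
print(citation_counts_toward_if(2007, 2010, 2009))  # False: the window has closed
```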

The current work examines the correlation between manuscript processing times (i.e., time to acceptance and time to publication) and journal impact factors across a range of subject fields. This relationship is considered both at the level of individual journals and at the level of journal categories. The latter is motivated by the recognition that only a small portion of a journal’s citations will come from itself; most are likely to come from other journals with a similar scope. For this study, 42 journals, 6 from each of seven different subject categories, were selected, and their processing times were calculated and related to individual and group impact factors. The confirmation of an impact factor–time relationship will deepen our understanding of a number that has become a critical evaluation metric in modern science.

Materials and methods

Journal category selections

Impact factors for academic journals are calculated by Thomson Reuters based on entries in the Web of Science citation database. Thomson Reuters also groups each journal into one or more journal categories—of which there were 173 for the year 2009—for its Journal Citation Reports. The median impact factors for each of the 173 groups were ordered, and the values representing the 10th, 25th, 35th, 50th, 65th, 75th, and 90th percentiles were identified.

In addition to differences in median impact factor, the journal categories also vary greatly in other characteristics, such as the total number of journals included in each group, the total number of articles produced in a given year, and so on. Each of these variables could confound the attempt to link publication times with impact factors. To attempt to control for some of this variability, three such characteristics were selected to identify “similar” categories: the number of journals (J), the total number of articles (A), and the total number of citations (C). A vector distance from the median, d, for each journal category was calculated using:

$$d = \frac{\sqrt{(P_J - 50)^2 + (P_A - 50)^2 + (P_C - 50)^2}}{50\sqrt{3}} \tag{2}$$

where P_J, P_A, and P_C represent the percentile rank of each category for the three parameters. Note that the denominator normalizes values to the range [0,1], where 0 represents journal categories at the median in all three parameters.
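Equation (2) is straightforward to compute from the three percentile ranks. The sketch below is illustrative, with hypothetical input values:

```python
import numpy as np

def category_distance(p_journals: float, p_articles: float, p_citations: float) -> float:
    """Equation (2): normalized vector distance of a journal category from the
    median. Inputs are the category's percentile ranks (0-100) for the number
    of journals (J), total articles (A), and total citations (C); 0 means the
    category sits at the median on all three parameters, and 1 is the maximum."""
    deviations = np.array([p_journals, p_articles, p_citations]) - 50.0
    return float(np.linalg.norm(deviations) / (50.0 * np.sqrt(3.0)))

# A hypothetical category at the 60th percentile for journals and articles
# and the 40th for citations lies fairly close to the median:
print(round(category_distance(60, 60, 40), 2))  # 0.2
```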

The journal categories with the smallest distances (d < 0.1) were identified. From this subset of 38 categories, the 7 with median impact factors closest to the specified percentiles for all journal groups (10th, 25th, 35th, 50th, 65th, 75th, and 90th) were selected. These 7 categories are presented in table 1.

Table 1. Seven journal categories and associated publication data for 2009, including the median impact factor for 2009 (IF2009)

Each journal group was selected from among the categories used in Thomson Reuters’s Journal Citation Reports for 2009 based on the objective criteria described above. This selection approach was intended to identify groups which represented a large range of impact factors but which were otherwise comparable along those three metrics (number of journals, number of articles, and number of citations). Furthermore, these selection criteria allowed us to simplify our statistical analysis and analyse the dependence of impact factor on journal processing times using a univariate model.

Journal selections

Following the identification of the seven journal categories, the initial plan was to select seven journals from each category to represent the 10th, 25th, 35th, 50th, 65th, 75th, and 90th percentiles of impact factors. A journal was selected provided that (a) articles listed a “received” date as well as either a “revised” or “accepted” date, (b) at least four issues were published in 2009, (c) articles were published in English, and (d) the articles’ publication history data were either freely available online or the journal was one to which the author’s institution had an electronic subscription.1 This final criterion allowed for automated computation of the processing times, in turn permitting a larger number of articles to be included in the study.

Finding journals near the 10th percentile that met the above criteria proved to be impossible in many groups, so the requirement for a 10th-percentile journal was removed; this reduced the total number of journals selected to only six per category. For the remaining six cases, the qualifying journals closest to the designated percentiles were used. The journals selected for each category are given in table 2. It should be noted that, as a result of the selection criteria, impact factors vary considerably both within and between journal categories.

Date calculations

Electronic data mining was used to identify article publication history dates from either HTML (Hypertext Markup Language) or PDF (Portable Document Format) versions of the articles (Lievers and Pilkey 2012). Special issues and supplements presenting work from an academic conference were omitted from the analysis on the basis that the articles therein might have been reviewed and published using a non-standard editorial process.
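The extraction step can be approximated with simple pattern matching over the article text. The pattern below is a hypothetical example of such a rule; actual markup varies considerably between publishers:

```python
import re

# Hypothetical pattern: matches strings such as "Received: 12 January 2008".
HISTORY_PATTERN = re.compile(
    r"(Received|Revised|Accepted)[:\s]+(\d{1,2}\s+\w+\s+\d{4})", re.IGNORECASE
)

def extract_history_dates(text: str) -> dict:
    """Return a mapping such as {'received': '12 January 2008', ...}."""
    return {label.lower(): datestr for label, datestr in HISTORY_PATTERN.findall(text)}

sample = "Received: 12 January 2008; Revised: 2 April 2008; Accepted: 15 May 2008"
print(extract_history_dates(sample))
```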

Up to four dates were recorded for each article: the date received (drec), the date revised (drev), the date accepted (dacc), and the date published (dpub). The value of dpub was estimated to be the middle of the time period spanned by the issue in which the article appeared (Amat 2008). For monthly journals, dpub was taken to be the 15th day of that month. For bi-monthly journals, dpub was taken as the first day of the second month. Numbered issues with no specified dates were assigned a proportion of the year, based on the total number of issues, and dpub was taken as the central date of that span.
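A minimal sketch of these three assignment rules follows; the functions are illustrative, not the original extraction code:

```python
from datetime import date, timedelta

def dpub_monthly(year: int, month: int) -> date:
    # Monthly journals: the middle of the cover month, i.e., the 15th.
    return date(year, month, 15)

def dpub_bimonthly(year: int, first_month: int) -> date:
    # Bi-monthly journals: the first day of the second month of the issue span.
    return date(year, first_month + 1, 1)

def dpub_numbered(year: int, issue: int, issues_per_year: int) -> date:
    # Numbered, undated issues: give each issue an equal share of the year
    # and take the central date of that span.
    span = 365.25 / issues_per_year
    return date(year, 1, 1) + timedelta(days=span * (issue - 1) + span / 2)

print(dpub_monthly(2009, 3))      # 2009-03-15
print(dpub_bimonthly(2009, 5))    # 2009-06-01 (a May-June issue)
print(dpub_numbered(2009, 2, 4))  # 2009-05-17 (issue 2 of 4)
```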

Following the automated date extractions, five randomly selected papers were sampled from each of the 42 journals for manual verification. No errors were found in the four dates extracted from these 210 articles. Note that this sample included articles that were later excluded from analysis, as described in the following section.

Table 2. List of journals by category showing the impact factor for 2009 (IF2009) and the total number of articles from 2009 included in the analysis



Data analysis

Two processing times were calculated based on the article data. Acceptance time was generally calculated as Δtacc = dacc − drec. When a revision date but no acceptance date was given, then Δtacc = drev − drec. Publication time was calculated as Δtpub = dpub − drec. Articles that lacked sufficient information to calculate both Δtacc and Δtpub were omitted from further analysis. In a few cases typographical errors in the dates resulted in negative times; these too were omitted. The total number of valid articles for each journal is given in table 2. Please note that all articles were published in issues with publication dates in 2009.

Processing times were calculated in days and converted to months by dividing by (365.25/12). This conversion was performed because months are a more intuitive unit for the times under consideration, and because the precision associated with days seemed unwarranted given the elasticity associated with identifying the date of publication.
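Combining the two calculations, a sketch of the processing-time computation, including the exclusion rules and the conversion to months, might look as follows (the records shown are hypothetical):

```python
import pandas as pd

DAYS_PER_MONTH = 365.25 / 12  # average month length used for the conversion

# Hypothetical article records; in the study these came from the mined dates.
articles = pd.DataFrame({
    "d_rec": pd.to_datetime(["2008-01-12", "2008-03-02", "2008-06-20"]),
    "d_acc": pd.to_datetime(["2008-05-15", pd.NaT, "2008-04-01"]),
    "d_rev": pd.to_datetime([pd.NaT, "2008-07-10", pd.NaT]),
    "d_pub": pd.to_datetime(["2009-02-15", "2009-01-15", "2009-03-15"]),
})

# Fall back to the revision date when no acceptance date is given.
d_end = articles["d_acc"].fillna(articles["d_rev"])
articles["dt_acc"] = (d_end - articles["d_rec"]).dt.days / DAYS_PER_MONTH
articles["dt_pub"] = (articles["d_pub"] - articles["d_rec"]).dt.days / DAYS_PER_MONTH

# Negative times (typographical errors) are excluded; NaN comparisons
# evaluate to False, so records with missing dates drop out here too.
valid = articles[(articles["dt_acc"] > 0) & (articles["dt_pub"] > 0)]
print(valid[["dt_acc", "dt_pub"]].round(1))
```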

Linear regressions were performed relating median acceptance time (Δtacc) and median publication time (Δtpub) to journal impact factors. This relationship was examined both by using the individual journal impact factors and by grouping journals according to the median impact factor of their journal category. A linear regression was also performed between Δtacc and Δtpub to examine whether publication delays increased with longer times to acceptance. In addition to the linear regressions, Spearman’s rank correlation coefficient (Spearman’s ρ) was calculated. A value of p < .05 was deemed to be significant, and a value of .05 ≤ p < .10 was interpreted as a trend towards significance.
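Both statistics are available as standard routines; for example (with hypothetical per-journal medians):

```python
from scipy.stats import linregress, spearmanr

# Hypothetical per-journal values: median acceptance time (months) and impact factor.
dt_acc = [2.1, 3.5, 4.0, 5.2, 6.8, 8.3]
impact = [4.9, 3.1, 2.8, 2.2, 1.5, 1.1]

fit = linregress(dt_acc, impact)        # least-squares fit: slope, intercept, R, p
rho, p_rho = spearmanr(dt_acc, impact)  # rank-based correlation

print(f"R = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```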

Figure 1. Relationship between individual journals’ impact factors and median acceptance time (Δtacc) and median publication time (Δtpub), in months, for each of the 42 journals included in this study. Linear regression lines are indicated in bold, and regression equations are shown.

Results

The relationships between journal impact factors and either median acceptance time or publication time are shown in figure 1. In both cases a significant (R = −0.31, p = .045; R = −0.33, p = .035) negative relationship was observed. That is, the median time to acceptance and publication decreased as the impact factor of a journal rose, irrespective of its journal category. Spearman’s correlation also indicated a negative relationship; however, it was significant only for acceptance time (ρ = −.32, p = .038) and not publication time (ρ = −.25, p = .116).

Figure 2. Relationship between a journal group’s median impact factor and the median acceptance time (Δtacc) and median publication time (Δtpub), in months, for each of the 42 journals included in this study. Linear regression lines are indicated in bold, and regression equations are shown.

Figure 3. Relationship between individual journals’ median publication time (Δtpub) and median acceptance time (Δtacc), in months, for each of the 42 journals included in this study. The linear regression line is indicated in bold, and the regression equation is shown.

A similar behaviour is observed when these relationships are examined within the context of the journal categories (figure 2). A significant negative relationship was found between the median impact factors of the journal categories and the manuscript processing times associated with the six journals within those categories (R = −0.46, p = .002; R = −0.38, p = .013). The Spearman’s correlation was also negative and significant (ρ = −0.55, p = .0001; ρ = −0.33, p = .031).

A positive linear relationship (R = 0.69, p < .001) was observed between acceptance and publication time (figure 3). The slope (0.94) is close to 1, which suggests that the delay between acceptance and publication remains roughly constant regardless of how quickly the review process is performed. The intercept of the linear regression equation suggests that this acceptance-to-publication lag is approximately 6.4 months. The results of the Spearman’s correlation were also significant (ρ = 0.67, p < .001).

Discussion

A survey of 42 journals, 6 from each of seven different journal categories, was performed to relate impact factors with manuscript processing times. The median time to acceptance (Δtacc) and median time to publication (Δtpub) were both found to be significantly negatively related to the individual journals’ impact factors as well as the median impact factor for the journal group (p < .05). That is, processing times decreased with increasing impact factor.

The current results confirm the work of Pautasso and Schäfer (2010), who also found a negative relationship between journal impact factor and acceptance time in a group of 22 ecology journals. A similar observation was reported by Metcalfe (1995) in 28 biological and biomedical journals. As noted by Pautasso and Schäfer (2010), this speed of processing occurs despite the fact that journals with higher impact factors receive more submissions. Rousseau and Rousseau (2012) have recently shown that authors are willing to wait longer for editorial decisions from more prestigious journals; the negative correlation reported here suggests that they will not have to do so. More importantly, the current work demonstrates that these earlier observations extend beyond individual disciplines and are representative of a broader pattern in the scientific literature.

It should be noted that other studies have also reported a negative correlation between publication time and impact factor; however, they have not explicitly measured the processing times at a particular journal. For example, Ray, Berkwits, and Davidoff (2000) measured the time between rejection by one journal and subsequent publication in a second. While the impact factor of the publishing journal was observed to decrease with publication time, this delay does not necessarily reflect longer manuscript processing times at those journals. An alternate explanation would be that authors “just work their way down the ladder” (Adam and Knight 2002, 774) by successively submitting to progressively lower-impact journals until the manuscript is accepted. The work by de Marchi and Rocchi (2001) also showed a negative correlation but relied on average times self-reported by the journals themselves in response to a survey; the reply rate was 9.6%. The current study explicitly accounts for manuscript handling times on a paper-by-paper basis.

Median acceptance times varied from as little as 0.5 to over 13.5 months (15–412 days), while the median time to publication ranged from 5.7 to 20.2 months (173–615 days). Both of these ranges are wider than those previously reported in the literature (“Acceptance Rates” 2002; Labanaris et al. 2007; Pautasso and Schäfer 2010; Metcalfe 1995). For example, Amat (2008) reported acceptance and publication times of 101–292 and 181–491 days, respectively. It should be noted, however, that those previous studies have been limited to a single topic area. The scatter of figure 2 makes it clear that large inter- and intra-group differences exist.

The mechanisms which allow for more rapid processing in higher-impact journals warrant further investigation, particularly given the greater volume of manuscripts they receive (Pautasso and Schäfer 2010). Editorial policies are expected to play a substantial role in this process. For example, some journals employ a triage system which leads to over half of the submitted manuscripts being rejected without peer review (Crawford et al. 2008). Others impose strict time limits or deadlines for reviewers, which may be as little as two weeks (Drubin 2011). Such techniques may allow editors and editorial boards to reduce the time needed to process and publish manuscripts.

As noted earlier, the publication date is difficult to identify with precision. The date of the journal issue may not reflect the time at which it was physically published. While an issue may have been released earlier or later than the cover date would indicate, those discrepancies should exist at the journal level and would be expected to be randomly distributed. Other potential sources of error include the selection of journals and journal categories. While every effort was made to select journal categories which differed primarily by median impact factor, it was not possible to control for all possible confounding variables. Journal groups were determined objectively by selecting research areas which were close to the median on three parameters: the number of journals, the total number of articles, and the total number of citations. In particular, equation (2) was employed to select the categories closest to the median on each of these three parameters and thereby permit a univariate analysis.

Journals were selected from within each journal group only semi-objectively based on four pragmatic criteria. First, the journals had to provide information about the dates on which manuscripts were received, revised, and/or accepted. This information was needed for the study and, as shown by de Marchi and Rocchi (2001), is difficult to obtain directly from individual journals. Only journals publishing in English, with at least four issues a year, were included to avoid the confounding effect of language and artificially inflated publishing times. Finally, journals were also limited to those with electronically accessible publication data, available either by institutional subscription or to the general public. This criterion permitted the automated determination of publication times necessary to process the more than 4,700 articles included in this study.

The current work has demonstrated a statistically significant relationship between article processing times and journal impact factors using a univariate model. Several journal-specific variables are also known to affect the values of impact factors, such as editorial practices and the overall prestige of the journal. These parameters have been ignored here because of the difficulties associated with quantifying them. Nevertheless, future work should consider a more complex multivariate approach to investigate the effect and interdependence of additional explanatory variables, as well as a larger sample of journals and journal groups.

Advance online publication has also been ignored in this study (Drummond and Reeves 2005), a decision dictated by the diverse practices of individual journals. Some journals post electronic versions of articles immediately after acceptance, others do so once corrected proofs are available, and some post electronic articles only in conjunction with the publication of the print version. Recent work by Tort, Targino, and Amaral (2012) has shown that a larger delay between online and print publication is associated with an increase in journal impact factor. This inflation is caused by artificially extending the citable time for an article. Future work should address the effects of manuscript handling times and the delay between online and print publication concurrently.

This article has also left unaddressed the effects that rapid processing times may have on the quality of the manuscript review process itself. While determination of an objective measure of “quality” is certainly a challenge, one possible metric would be the number of retracted or corrected articles. The rate of retractions in academic journals has been noted to be increasing, and many of these articles are retracted for methodological, data, or analysis errors (Steen 2011; Grieneisen and Zhang 2012). Peer review is not capable of identifying every possible error; however, more thorough review may be related to reduced rates of published corrections, errata, or retractions. Nevertheless, it would be difficult to disentangle the various effects of publishing delays, time spent actually reviewing the article, the quality of the review, the prestige of the journal, and so on.

A statistically significant relationship between article processing times and journal impact factors has been demonstrated across the seven journal categories studied; however, correlation does not indicate causation. Editors cannot necessarily expect to improve their journal’s impact factor simply by improving the speed of their manuscript review and processing system. For example, the level of previous publishing experience on the part of submitting authors (Yegros and Amat 2009) and the publishing model itself (Dong, Loh, and Mondry 2006) have also been shown to be correlated with manuscript processing times. Nevertheless, a faster review system is undeniably appealing to authors (Rowlands, Nicholas, and Huntington 2004; Ware and Monkman 2008). It may be that the reduced review times attract better authors, thereby raising the impact factor; however, more study is needed.

The acceptance and publication times of 4,735 articles from 42 journals, spanning seven journal categories, were determined. The results confirm that manuscripts are processed faster both in journals with higher impact factors and in journal categories with higher median impact factors. The effect of the limited time window on the calculation of the journal impact factor must be properly understood in order for this important metric to be interpreted and applied correctly.

W. Brent Lievers
Bharti School of Engineering, Laurentian University, Sudbury, Ontario
blievers@laurentian.ca

Notes

1. This work was performed while the author was employed by the University of Virginia in Charlottesville, VA.

References

“Acceptance Rates and Publication Times.” 2002. Journal of Orthodontics 29 (3): 171–72. http://dx.doi.org/10.1093/ortho/29.3.171.
Adam, D. 2002. “The Counting House.” Nature 415 (6873): 726–29. http://dx.doi.org/10.1038/415726a. Medline:11845174
Adam, D., and J. Knight. 2002. “Journals under Pressure: Publish, and Be Damned.” Nature 419 (6909): 772–76. http://dx.doi.org/10.1038/419772a. Medline:12397323
Amat, C. B. 2008. “Editorial and Publication Delay of Papers Submitted to 14 Selected Food Research Journals: Influence of Online Posting.” Scientometrics 74 (3): 379–89. http://dx.doi.org/10.1007/s11192-007-1823-8.
Brody, S. 2013. “Impact Factor: Imperfect but Not Yet Replaceable.” Scientometrics 96 (1): 255–57. http://dx.doi.org/10.1007/s11192-012-0863-x.
Crawford, J. M., C. M. Ketcham, R. Braylan, L. Morel, N. Terada, J. R. Turner, and A. T. Yachnis. 2008. “The Publishing Game: Reflections of an Editorial Team.” Laboratory Investigation 88 (12): 1258–63. http://dx.doi.org/10.1038/labinvest.2008.113. Medline:19020521
de Marchi, M., and M. Rocchi. 2001. “The Editorial Policies of Scientific Journals: Testing an Impact Factor Model.” Scientometrics 51 (2): 395–404. http://dx.doi.org/10.1023/A:1012705818635.
Dong, P., M. Loh, and A. Mondry. 2006. “Publication Lag in Biomedical Journals Varies due to the Periodical’s Publishing Model.” Scientometrics 69 (2): 271–86. http://dx.doi.org/10.1007/s11192-006-0148-3.
Drubin, D. G. 2011. “Any Jackass Can Trash a Manuscript, but It Takes Good Scholarship to Create One (How MBoC Promotes Civil and Constructive Peer Review).” Molecular Biology of the Cell 22 (5): 525–27. http://dx.doi.org/10.1091/mbc.E11-01-0002. Medline:21357757
Drummond, C.W.E., and D. S. Reeves. 2005. “Reduced Time to Publication and Increased Rejection Rate.” Journal of Antimicrobial Chemotherapy 55 (6): 815–16. http://dx.doi.org/10.1093/jac/dki121.
Falagas, M. E., and V. G. Alexiou. 2008. “The Top-Ten in Journal Impact Factor Manipulation.” Archivum Immunologiae et Therapiae Experimentalis 56 (4): 223–26. http://dx.doi.org/10.1007/s00005-008-0024-5. Medline:18661263
Fuyuno, I., and D. Cyranoski. 2006. “Cash for Papers: Putting a Premium on Publication.” Nature 441 (7095): 792. http://dx.doi.org/10.1038/441792b. Medline:16778850
Garfield, E. 1999. “Journal Impact Factor: A Brief Review.” Canadian Medical Association Journal 161 (8): 979–80. Medline:10551195
Grieneisen, M. L., and M. Zhang. 2012. “A Comprehensive Survey of Retracted Articles from the Scholarly Literature.” PLoS ONE 7 (10): e44118. http://dx.doi.org/10.1371/journal.pone.0044118. Medline:23115617
Labanaris, A. P., A. P. Vassiliadu, E. Polykandriotis, J. Tjiawi, A. Arkudas, and R. E. Horch. 2007. “Impact Factors and Publication Times for Plastic Surgery Journals.” Plastic and Reconstructive Surgery 120 (7): 2076–81. http://dx.doi.org/10.1097/01.prs.0000295985.51578.77. Medline:18090778
Lievers, W. B., and A. K. Pilkey. 2012. “Characterizing the Frequency of Repeated Citations: The Effects of Journal, Subject Area, and Self-citation.” Information Processing and Management 48 (6): 1116–23. http://dx.doi.org/10.1016/j.ipm.2012.01.009.
Metcalfe, N. B. 1995. “Journal Impact Factors.” Nature 376 (6543): 720. http://dx.doi.org/10.1038/376720b0. Medline:7651526
Monastersky, R. 2005. “The Number That’s Devouring Science.” Chronicle of Higher Education 52 (8): A12–17.
Pautasso, M., and H. Schäfer. 2010. “Peer Review Delay and Selectivity in Ecology Journals.” Scientometrics 84 (2): 307–15. http://dx.doi.org/10.1007/s11192-009-0105-z.
Ray, J., M. Berkwits, and F. Davidoff. 2000. “The Fate of Manuscripts Rejected by a General Medical Journal.” American Journal of Medicine 109 (2): 131–35. http://dx.doi.org/10.1016/S0002-9343(00)00450-2. Medline:10967154
Rousseau, S., and R. Rousseau. 2012. “Interactions between Journal Attributes and Authors’ Willingness to Wait for Editorial Decisions.” Journal of the American Society for Information Science and Technology 63 (6): 1213–25. http://dx.doi.org/10.1002/asi.22637.
Rowlands, I., D. Nicholas, and P. Huntington. 2004. “Scholarly Communication in the Digital Environment: What Do Authors Want?” Learned Publishing 17 (4): 261–73. http://dx.doi.org/10.1087/0953151042321680.
Shao, J., and H. Shen. 2011. “The Outflow of Academic Papers from China: Why Is It Happening and Can It Be Stemmed?” Learned Publishing 24 (2): 95–97. http://dx.doi.org/10.1087/20110203.
Steen, R. G. 2011. “Retractions in the Scientific Literature: Is the Incidence of Research Fraud Increasing?” Journal of Medical Ethics 37 (4): 249–53. http://dx.doi.org/10.1136/jme.2010.040923. Medline:21186208
Tort, A.B.L., Z. H. Targino, and O. B. Amaral. 2012. “Rising Publication Delays Inflate Journal Impact Factors.” PLoS ONE 7 (12): e53374. http://dx.doi.org/10.1371/journal.pone.0053374. Medline:23300920
Vanclay, J. K. 2012. “Impact Factor: Outdated Artefact or Stepping-Stone to Journal Certification?” Scientometrics 92 (2): 211–38. http://dx.doi.org/10.1007/s11192-011-0561-0.
Ware, M., and M. Monkman. 2008. Peer Review in Scholarly Journals: Perspective of the Scholarly Community—an International Study. London: Publishing Research Consortium. http://www.publishingresearch.org.uk/documents/PeerReviewFullPRCReport-final.pdf.
Yegros, A. Y., and C. B. Amat. 2009. “Editorial Delay of Food Research Papers Is Influenced by Authors’ Experience but Not by Country of Origin of the Manuscripts.” Scientometrics 81 (2): 367–80. http://dx.doi.org/10.1007/s11192-008-2164-y.
Yu, G., R. Guo, and Y.-J. Li. 2006. “The Influence of Publication Delays on Three ISI Indicators.” Scientometrics 69 (3): 511–27. http://dx.doi.org/10.1007/s11192-006-0167-0.
Yu, G., X.-H. Wang, and D.-R. Yu. 2005. “The Influence of Publication Delays on Impact Factors.” Scientometrics 64 (2): 235–46. http://dx.doi.org/10.1007/s11192-005-0249-4.
