Scholar as E-Publisher: The Future Role of [Anonymous] Peer Review within Online Publishing
The advent of online journals has opened a vast opportunity for small journals published by a variety of institutions. It also has given scholars many more options, from general publications to a growing number of journals addressing very narrowly defined subjects, and it suggests that in the near future the role of online journals and peer review will change radically. The author proposes roles for user-generated content and university libraries in the evaluation and publication of research.
Keywords: peer review, online academic publishing, user-generated content, university libraries
The advent of online journals has opened a massive opportunity for the creation of small journals published by a variety of institutions, nongovernmental organizations, and universities. It also has given scholars many more options, from more general publications to a variety of journals addressing very narrowly defined subjects. This shift toward online journal publishing has been constant and irresistible. Starting in the mid-1990s, publishers of print journals began putting all or some limited amount of their content online, usually through database subscriptions paid by universities. The bundling of journals into databases by publishers, often with little thought as to their mutual relevance, resulted not only in an economic crisis for libraries but in an inability of librarians to craft collections to institutional needs.1
And as the cost of subscriptions to academic journals and databases rose, the cost of the software and data storage necessary to publish online dropped. At the turn of the twenty-first century, a few small online-only journals had appeared, most sponsored by a university or [End Page 428] foundation; soon to follow were dozens, then hundreds, of specialized online-only journals joining a migration of print journals that allowed limited access to articles. A comprehensive examination of all academic journals by the Association of Research Libraries (ARL) tracked the upsurge in publication of scholarly research online. The January 1991 edition of the ARL Directory of Electronic Journals reported 110 journals online, likely accessed via File Transfer Protocol (FTP), since the Web was still a few years away; by 1998, the number had jumped to more than 6,000.2 By 2007, the ARL reported that 60 per cent of 20,000 peer-reviewed journals were available online in some form.3 Publisher and cataloguer EBSCO provides access to more than 22,000 online journals as of April 2010.4
The appearance of these journals on Web sites was driven by demand. Online journals are far and away the preferred method—especially for younger faculty members—to access research. These new researchers have warmly embraced the shift online of print journals and the appearance of 'born online' journals. 'Scholarship, particularly in science, is becoming increasingly born-digital and networked digitally,' and younger patrons of libraries and other research sources overwhelmingly prefer electronic access to journal research over print.5 Ware notes a conversation with a librarian at a large research institution: 'The librarian concluded [from a study he had conducted] that on present trends, there would be little demand for print journals within five years.'6
Concurrently at least one major grant provider—the National Institutes of Health (NIH)—has required publicly funded research be made available online at no charge. NIH's requirement specifies that the research it has funded be made available via open access online one year after its publication in an academic journal, regardless of whether that journal is a for-profit publication.7 Thus, after appearing for some period of time within a 'traditional' journal environment, the research would be spun out to the public, typically within a university's electronic reserves (e-reserves), specifically structured to hold such works. This new mandate on publicly funded research is likely to be adopted by other research-funding government agencies and by non-government organizations. Further, it is not a stretch to imagine that the one-year delay between initial publication and open access will become shorter.
The appearance of academic research within a university's e-reserves raises many issues. Given this digital 'publication' format, what prevents [End Page 429] the research from stepping around the journal phase and going directly to the e-reserves? And, given this direct author-to-publication model, what role will journals play, specifically in the areas of peer review and editing? How will academics know which are the 'acceptable' research papers and which might be found to lack appropriate filtering, such as that provided by peer review? The purpose of this article is to discuss the rise of online journals and the historical role of peer review and anonymous peer review, then push further and propose a more narrowly defined/refined publishing/evaluation model for the future.
Online Journal Publishing: A Recent History
If the rapidly increasing production costs of print journals and the equally rapidly decreasing costs of online journals have played a part in driving research to the Web, so has the ability of library archivists to invoke far more powerful search routines to find these electronic data. Consider that an academic journal provides a shortcut for researchers looking for prior discussion and data in their areas of interest; that is, before search engines and library e-reserves, researchers in mass communication could count on Journalism History as a place where they would find, logically, articles dealing with the history of journalism. This seems rather simplistic; however, this 'subject shortcut' might be seen as the first of the roles traditionally played by academic journals to be replaced by searchable electronic databases. Even today, users of various databases interact less with Web addresses (URLs) than with specific journals. These same researchers could simplify their searches by using meta-databases—collections of various related databases. The library card catalogue, in the past, allowed researchers to find related information residing within the stacks; find one relevant work and you could expect to find other books on similar subjects nearby. The result was a balkanization of research driven by the separation of one area from other areas, one set of shelves from other sets of shelves, one floor of stacks from other stacks—cross-referenced through massive subject guides such as Sociological Abstracts.
Digital databases are also somewhat guilty of balkanization; some fields have their own journals in subject-specific collections that may be difficult to find or inaccessible to researchers outside the field. Yet this need not be the future pattern as we move toward e-reserves managed by what might be called 'millennial librarians' trained to provide [End Page 430] a more semantic approach to Web storage. Using carefully designed metadata descriptions or semantic 'objects' of research, the managers of a university's e-reserves could ensure that researchers would be able to find the appropriate sources and data sets. In fact, to some degree such expert 'hunters' of data will be vital as Web sites grow beyond the billions and Web pages grow past the trillions. As envisioned by Web creator Tim Berners-Lee, the Semantic Web makes possible very powerful searches for data, searches not possible with the separate databases and their artificial divisions from other databases. Berners-Lee defines this more powerful search environment as 'a web of data that can be processed directly and indirectly by machines.'8 It acts similarly to intelligent agents with higher search capabilities, using its own Web code language composed of objects defined and used in conjunction with other objects included within a data set.
The Semantic Web does not include artificial intelligence, something often associated with current discussions regarding other intelligent agents. Rather, it relies on rules of inference that create a pathway between different data sets. Instead of looking for specific metadata filled with specific words, the Semantic Web uses objects to find specific data that include the rules of inference—objects—that will make it possible for a researcher to find the precise document or data set necessary for a given project. Some of the tools associated with the Semantic Web are the Resource Description Framework (RDF); corresponding data formats such as RDF/XML, N3, Turtle, and N-Triples; and schemas such as the Web Ontology Language (OWL). The resulting knowledge-base system would allow future researchers not only to find a relevant article but then to find the associated data set, how it was created, what other data sets have been created based on it, and what other findings have been published, enhancing access to relevant information and rendering that access faster and more accurate than is possible today. 'Search quality,' to use Google's term, is enhanced and made more accurate, which is the goal of any researcher using the Web to find information. It is as if the library's card catalogue could offer a researcher not only the best reference card but also all the related cards associated with that single card. Add to this hyper-powered cloud computing—wherein literally hundreds of thousands of servers act as one massive electronic field of data, software, and other informational sources—and researchers will find not only the research they need but also a place to store it, modify it, and share it. [End Page 431]
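The 'rules of inference' described above can be made concrete with a toy sketch. Real Semantic Web systems store subject-predicate-object triples in RDF and apply OWL vocabularies; the plain-Python version below, with hypothetical names such as `derivedFrom` and `data:C`, only illustrates the idea of following inference links to find every ancestor of a data set.

```python
# Toy illustration of Semantic Web-style inference in plain Python.
# Real systems would use RDF triples and OWL vocabularies; the
# predicate and identifier names below are hypothetical.

def derived_from_closure(triples, dataset):
    """Follow 'derivedFrom' links transitively to find every ancestor
    data set of the given one -- a minimal 'rule of inference'."""
    ancestors = set()
    frontier = [dataset]
    while frontier:
        current = frontier.pop()
        for subj, pred, obj in triples:
            if subj == current and pred == "derivedFrom" and obj not in ancestors:
                ancestors.add(obj)
                frontier.append(obj)
    return ancestors

# Hypothetical metadata: each triple is (subject, predicate, object).
triples = [
    ("article:42", "usesDataset", "data:C"),
    ("data:C", "derivedFrom", "data:B"),
    ("data:B", "derivedFrom", "data:A"),
]
```

Asking for the closure of `data:C` would surface both `data:B` and `data:A`, the kind of chained answer a keyword search over separate databases cannot give.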
All of this requires content management beyond that needed by a single journal or a single database. This is the final factor that renders the argument over print versus online moot. We are past the point of wondering whether the research of the future will be published online, as John Peters pointed out more than a decade ago.9 We have not yet fully reached the phase of 'scholarly skywriting' suggested at the time by Stevan Harnad, wherein researchers—through university libraries—would post their as-yet-unfinished articles online, seeking comment and improvement (though this 'commons' approach will soon be upon us).10 And, while overestimating the role of existing publishers in the online movement, Gregory Newby predicted as early as 1996 that the scholar-as-publisher model would eliminate the journal-as-publisher model entirely.11 This movement toward direct access to all research reflects, in part, what Wang Feng-Nian has described as a key innovative element of scholarly work in the new century—the online editor:
If unable to accept new ideas and apt to neglect them, [print] editors will function to hinder civilization and social progress. In this sense, enhancing editors' innovative spirit is beneficial for accelerating human civilization.12
It is a perfect storm: rising costs and prices for print, lower online publishing budgets, and the appearance of a generation of editors comfortable with the Web environment. And the rapid growth in the number of online journals, especially those 'born online,' is related to three factors: economy, software development, and researcher preference.
The economic issues outlined at a Stanford University Libraries colloquium in 2006 addressing the online journal movement included
• the rise in academic journal prices of 215 per cent between 1986 and 2003, compared with a 68-per-cent rise in the consumer price index over the same period;
• the fact that for-profit journals charged three times the per-page cost charged by not-for-profit journals; and
• the fact that 100 per cent of the articles in four leading economics journals, and 73 per cent of articles in all economic journals, could be found online for free.13 [End Page 432]
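The first figure in the list above is easier to grasp in real terms. A brief arithmetic sketch, using only the two percentages cited from the Stanford colloquium, shows that journal prices nearly doubled even after inflation:

```python
# Back-of-envelope: the 215% nominal rise in journal prices (1986-2003)
# set against the 68% rise in the consumer price index cited above.
journal_factor = 1 + 2.15   # prices multiplied by 3.15
cpi_factor = 1 + 0.68       # general price level multiplied by 1.68

# Inflation-adjusted increase in journal prices over the period.
real_increase = journal_factor / cpi_factor - 1
```

That works out to roughly an 88 per cent increase in inflation-adjusted terms, which is the gap libraries had to absorb out of flat or shrinking budgets.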
Notably, two years before the Stanford colloquium, that university's faculty senate had passed a resolution encouraging its faculty to factor in the price of a journal when considering where to publish research. The colloquium itself was described as a response to the 'crisis in journal pricing.'14 Indeed, many university libraries were developing strategies to deal with the 'current cancellation crisis such as electronic document delivery, resource sharing and electronic journals.'15
In 1998, Carol Tenopir and Donald King cited research by Hal Varian suggesting that the cost of producing a quarterly, special-purpose, non-STM academic print journal was roughly $120,000 per issue; the estimated institutional subscription fee was $200 per issue for non-profit journals and $600 per issue for for-profit journals.16 Add to that, he noted, Michael Lesk's estimated increase in subscription cost for such a journal of between 48 per cent and 93 per cent over ten years,17 together with an estimated per-subscriber cost for some journal articles of $200 or more, and the result is an economic model that is difficult to sustain.
Varian concluded that reducing the costs of academic communication would require re-engineering the manuscript-handling process. Using electronic distribution could cut costs within the editorial process by 50 per cent, he suggested. Add to this the reduction of shelf space in libraries, the costs to monitor holdings, the ease of online searches, and the ability to store accompanying support documents such as images, data sets, and (though these are not mentioned by Varian) audio and video files, and cost savings could be significant. 'When everything is electronic,' Varian noted, 'publications will have much more general forms, new filtering and refereeing mechanisms will be used, [but] archiving and standardization will remain a problem.'18
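The figures cited from Varian lend themselves to a simple break-even sketch. The calculation below is a simplification under stated assumptions: it treats subscription revenue as the only income, and it applies Varian's estimated 50 per cent editorial saving to the whole per-issue cost rather than to the editorial portion alone.

```python
# Back-of-envelope sketch using the figures cited above (Varian, via
# Tenopir and King): roughly $120,000 production cost per issue, and
# institutional prices of $200 (non-profit) or $600 (for-profit) per
# issue. Applying Varian's 50% editorial-cost saving to the whole
# per-issue cost is an illustrative simplification.
import math

COST_PER_ISSUE = 120_000

def breakeven_subscribers(price_per_issue, cost=COST_PER_ISSUE):
    """Subscribers needed for subscription revenue to cover one issue."""
    return math.ceil(cost / price_per_issue)

non_profit = breakeven_subscribers(200)
for_profit = breakeven_subscribers(600)
electronic = breakeven_subscribers(200, COST_PER_ISSUE * 0.5)
```

On these assumptions a non-profit journal needs 600 institutional subscribers per issue to break even, a for-profit journal 200; halving costs through electronic distribution brings the non-profit figure down to 300, which is what makes small online-only journals plausible.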
Roger Clarke and Danny Kingsley suggest that this movement toward an open-access (OA) model will not come without a 'spirited' defence from the 'for-profit corporations that have grown rich through exploitation of their multiple- and mini-monopolies' within the academic publishing world.19 Publishers' death-like grip on access to the research expected at a top-ranked university library was almost complete by the end of the millennium,20 with annual subscription prices increasing at alarming rates. University libraries at the turn of this century consistently faced increased costs just to hold onto the subscriptions they already had, with little or no room to add new publications. Indeed, sit in on any faculty committee dealing with university library holdings [End Page 433] and the conversation almost always includes some discussion of what journals will be kept, added, and deleted to fit the coming year's budget. It is no small matter for some: a library's holdings are taken into account in ranking academic libraries and universities,21 though the value assigned to this measure may be fading.22
Traditional publishers have reacted strongly and predictably to this rising popularity of the OA movement. As noted by several researchers and news organizations,23 in early 2007 the publishing giants hired lobbyists whose sole intent would be to discredit the OA movement while extolling existing publishing houses as protectors of the peer-review system. The same sources noted at the time that this response could be understood within the context of the monetary threat most publishers would perceive in online OA as well as a genuine fear of the unstable (perhaps 'unsettled' would be a better descriptor) nature of electronic archives.
Dozens of new software packages, many of them available at little or no cost, have made the labour of creating and maintaining an online journal easier. This new software makes it possible for any university, any academic department—in fact, any faculty member—to establish an online journal. All of the economics that historically reserved publishing to the wealthiest foundations and individuals have been reversed; in fact, entry costs are now so low that the single remaining barrier is often the absence of the desire to create a particular journal. What has resulted is the appearance of very narrowly defined journals that would not have been considered remotely feasible two decades ago.
More than two dozen of these software packages are listed, with short descriptions, on the Web site of the Scholarly Publishing and Academic Resources Coalition (SPARC).24 SPARC, a division of the Association of Research Libraries, provides information on new technologies and strategies, among other things, intended to assist online academic journals. Of these, a few are free (open-source) software packages, including ePress, published by the University of Surrey;25 Open Journal Systems, published by the Public Knowledge Project at Simon Fraser University;26 and Zope.27 Like many of these open-source software packages, Open Journal Systems (OJS) offers substantial support to editors in the way of file management and workflow coordination: [End Page 434]
Open Journal Systems (OJS) is a journal management and publishing system that has been developed by the Public Knowledge Project through its federally funded efforts to expand and improve access to research.
1. OJS is installed locally and locally controlled.
2. Editors configure requirements, sections, review process, etc.
3. Online submission and management of all content.
4. Subscription module with delayed open access options.
5. Comprehensive indexing of content part of global system.
6. Reading Tools for content, based on field and editors' choice.
7. Email notification and commenting ability for readers.
8. Complete context-sensitive online Help support.28
These relatively new software solutions—most created within the past decade—have had a significant impact on the publishing landscape. The costs of creating an online journal, in terms of both online and offline management, are significantly reduced. The software provides tracking of submissions, reviewers, and publication, all within an online environment. The need to print, mail, and re-mail manuscripts, and ultimately to return revised manuscripts to authors, has been eliminated by moving all these functions to a secure Web area that provides easy downloads and uploads, along with valuable tracking and logging of the entire process. For any team considering a new online journal, these systems substantially reduce the administrative burden.
A study by researchers at Drexel University showed a significant preference among graduate students, but less among faculty, for electronic materials over print journals.29 Two other researchers—tracking acceptance of electronic materials among faculty—found a much higher rate for all groups, in large part because of the 24/7 availability of research materials:
Our in-depth interviews with faculty indicate a high degree of comfort with electronic access to journal literature. The scholars we spoke with clearly recognized the convenience of 24/7 access from home or [End Page 435] office. Like many librarians, most faculty would prefer to retain print just in case, but when confronted with forced choices, the overwhelming majority either supported more electronic access at the cost of print retention or felt unequipped to make this choice.30
One significant piece of earlier research dug deeper than most. Varian's 'The Future of Electronic Journals,' presented at a conference at Emory University in Atlanta in April 1997, addressed the future evolution of online journals. Varian proposed a supply-and-demand model for publishing scholarly work, concluding that, for most universities, 'the ability . . . to attract top-flight researchers depends on the size of the collection of the library. Threats to cancel journal subscriptions are met with cries of outrage by faculty.'31
Given the economics, the software available, and the preferences of researchers (especially younger ones), let us accept that, at some point in the near future, all academic research will be on a university server available to any researcher, without the need for registration or subscription fees. How would such research be vetted to ensure its quality? How would academic research articles published on a university's DSpace be peer-reviewed to ensure that only the best is actually publicly available? This is an interesting dilemma: What research, evaluated by what reviewers, within what matrix of control, actually makes it into the light of day? And, given that the research would still require editing and—presumably—review of some sort, what role might these (purportedly) soon-to-be 'unemployed' academic print journal editors and reviewers play in this new world?
The Historical Role of Peer Review
As Harnad suggested in 2001, the Faustian relationship between authors and publishers is a well-tooled model not likely to give way without a fight from some academic authors, who mistrust electronic archives, and almost all print publishers, who are deeply entrenched in the scroll era.32 This trust in the author-university-publisher-research model has its merits. The large publisher has a more substantial monetary investment—and thus faces a higher risk than the small publisher—in ensuring that a journal is held to high standards. Authors are assured full academic credit for appearing in the 'right' journals. Universities can tout their 'well-published' researchers as reflecting the quality of the [End Page 436] institution itself. Perceived failure to maintain such high academic standards might lead to an exodus of authors and a decline in author submissions and, as a result, in library subscriptions.
Of course, to suggest that peer review is an august tradition, unspotted by controversy, is a bit naïve. Roughly a quarter of a century ago, two professors tested the peer-review process in place at a dozen highly regarded academic psychology journals by resubmitting twelve articles published between eighteen months and two years earlier in each of these journals under fictitious names and institutions. They reported that three had been caught as resubmissions, one was accepted, and eight were rejected. The rationale for the rejections was, in many cases, that the articles contained 'serious methodological flaws.' As the researchers noted, 'a major portion of the criticism of the journal review system has concerned the reliability of peer review'; their findings suggested that the high rejection rates of previously published articles might be related to author standing, institutional standing, peer bias, and poor reviewer performance.33
Research published in 2001 suggested that women face a much higher hurdle than men in getting their articles published, because of gender bias and nepotism on the part of reviewers and editors. The authors suggested that to avoid the loss of a 'large pool of promising talent,' the peer-review process needed retooling to create 'built in resistances to the weaknesses of human nature.'34 Other researchers have found similar weaknesses within the peer-review system, a system intended to ensure that only the best research is published.35
Despite its frailties, peer review is still valued as a method of identifying research appropriate for publication and blocking work that might be considered inappropriate. This is the model that academia has relied upon, in one form or another, for more than 400 years. As noted by the UK Parliament's Select Committee on Science and Technology in 2004, the concept of peer review in scientific research was established by Henry Oldenburg in 1665 to provide researchers with a 'publication run by an independent third-party that would faithfully record the name of a discoverer, the date the paper was submitted and a description of the discovery.'36 This publication, Philosophical Transactions, was owned by Oldenburg but relied upon the Royal Society of London to provide peer review. Authors of scientific discoveries would flock to Oldenburg's journal, secure in the knowledge their work would be shared and 'safe in the knowledge that their "rights" as "first discoverers" were protected.'37 [End Page 437]
Oldenburg's journal provided registration, dissemination, peer review, and an archival record. These functions are seen today as the primary roles of any academic journal. What has changed in the last century is the manner in which peer review is conducted.
The Rise of Anonymous Peer Review
The exact beginnings of anonymous peer review are a bit more vague than those of peer review itself. Some researchers have suggested that the practice of providing anonymity to reviewers began shortly after World War II, with the intent to generate more candid evaluations unaffected by personal feelings or institutional biases. Many who fear that a return to the previous model would result in far more errors fiercely defend the tradition of anonymous peer review. The battle has raged for decades between those who believe that the anonymity of peer review assures that only the best research is published and those who suggest that the model is filled with bias and error.
The presumed need for peer review and for anonymity of that review are separate arguments. The use of the opinions of learned researchers in a particular field as a benchmark for research publishing is not without its critics, as previously discussed; but this is a tradition reaching back centuries. The more recently adopted practice of anonymous peer review has generated much more controversy. Some have suggested that requiring reviewers to sign their opinions would lead to a lowering of standards without conferring any advantage. Susan van Rooyen et al. argued in 1998, however, that the 'blinding and unmasking [of reviewers' identities] made no editorially significant difference to review quality, reviewers' recommendations, or time taken to review.'38 Instead, they suggested, 'other considerations should guide decisions as to the form of peer review adopted by a journal, and improvements in the quality of peer review should be sought elsewhere.'39
A 2001 editorial in Nature argues that, in spite of its failings, the system of peer review is sound and reliable:
As is the case with any process, peer review is not an infallible system and to a large extent depends on the integrity and competence of the people involved and the degree of editorial oversight and quality assurance of the peer review process itself. Nonetheless we are satisfied that publishers are taking reasonable measures to maintain high standards of peer review.40 [End Page 438]
However, others argue that anonymity allows for the equivalent of academic bullying and introduces a degree of elitism that should not have any part in academic research publishing. The tales are many of renowned scholars' being snubbed in their early research—research that would later be hailed as profound, such as that by Gregor Mendel, Joseph Fourier, Edwin G. Krebs, and John James Waterston.41 Peters writes that
a scholarly journal can be likened to a club where non-members will not be told the house rules, but are expected to know them, and will not be admitted if they transgress.42
Peer review also has been criticized as too slow, too harsh, too peremptory, and of little assistance to the author in her or his efforts to improve the work.43 Reviews can take months to complete; reviewer comments can be furious in nature, almost as if the reviewer feels insulted at having to read the article in the first place. And few authors with any experience in submitting their work for review can say that they have never received a one-sentence rebuff. Unfortunately, the opportunity to improve on the proposed work is too often missed by reviewers, who may be dismissive with short rejections lacking any details or rationale.
At its core, the current process of peer review often gives no opportunity for the dismissed author to argue against the rejections offered by the secret panel arranged against the submission. And when rejections are short, with little or no reasoning provided, authors are left with no way to counter the outcome.
Peters likens peer review to an employer-employee relationship:
It is perfectly possible to make hard criticism in a way which others can consume. Granted, it takes more work. But how, for example, do you tell an employee you like and who is generally doing well and who has a great future that he or she has messed up? With care and empathy I think—because you want them to understand what they have done 'wrong,' and improve it, without getting disillusioned or hostile. As reviewers, we don't always take time and care to do that.44
Of course, it is doubtful that Peters would suggest that employers leave unsigned criticisms (anonymous peer reviews) on employees' desks. A key point here is that suggestions shared by those interested in the research might lead to better research. Yet the competitive nature of print [End Page 439] journals, with their limited space, engendered a sense of 'me against them.' Today, limits on the amount of research that can be published are no longer just a function of economics or available space in a print journal; in a 'commons' area, the vastness of space available may lead not only to better work by a researcher but, by extension, to valuable responses from readers that can lead to better research. 'If replaced by a system of open commentary and ongoing revision,' Fiona Godlee writes, 'in which responsibility for quality control is shared by many rather than depending on the necessarily subjective judgments of a chosen few, . . . [this] should not spell disaster.'45 If the intent is to provide the best research results, as Godlee argues, why not provide the best critiques within the best environment?
Nora Newcombe has suggested that the first of five proposed 'commandments' for peer-reviewed journals should be to judge 'scientific articles only [on] the validity of their logic and the strength of their evidence.'46 She goes on to ask that academic journals adhere to the judgements and rules of their peer-review systems: 'despite all [the] problems . . . no one has invented a better alternative.'47
Tom Jefferson, on the other hand, feels it is high time that peer review, as currently structured, be discarded. Quoting Richard Smith, former editor of the British Medical Journal, Jefferson criticizes peer review in the pharmaceutical industry as 'a process that research has always shown to be an ineffective lottery prone to bias and abuse.'48
A Suggested New Model: Online Publication and Revision
Yet this recent battle over anonymity of reviewers may miss the point, or has at least been rendered moot by the Internet. The options for a future publishing model are numerous, if only because the economics make it so. This is probably best exemplified by Chris Anderson's Long Tail theory,49 used most commonly to describe the impact of the Web on business models. The Long Tail also can aptly suggest a future for academic journals. With the cost of publishing a new journal dropping so low as to rely more on desire than on funding, new journals on the most narrowly defined subjects will begin to appear. These journals may attract only handfuls of readers. They may reflect the desires of an institution, a university, a college, a department, or even just a few faculty members. And, by itself, this new model of publishing might survive for some time, were it not for the much more simplified model [End Page 440] just on the horizon: direct publication by authors. However, both of these models will still require editorial staff and some minor technical support. From where will these funds derive?
Perhaps one source might be the funds previously used by libraries to purchase access to private databases. Logically, the public that funds research would like to be able to access the results. The desire for research to be publicly available will no doubt drive more and more works to university e-reserves, where they will be immediately accessible. The cost savings in new journal subscriptions to university libraries could eventually run into the billions of dollars, which may be sufficient to provide the editorial support necessary to ensure that published material is grammatically correct and otherwise accurate. In addition, these editors might add the semantics that may be used in future online journals to provide greater depth to the information presented.50
But what of peer review? Let us consider three new models for the peer review of academic research, given the assumption that the research in question has been published on a university's e-reserves, either as part of a university-sponsored journal or by an author. In all cases, the journal or author will have notified appropriate peers of the publication, although, in at least one of the cases described below, this may not be necessary.
Peer Review by Rankings
One method to guide researchers in determining what research has met an appropriate level of competence is to provide a ranking, similar in many ways to those used to evaluate movies. A group of researchers, who might be identified by what was once a publishing journal—say, Journalism and Mass Communication Quarterly (JMCQ)—would examine new research published on university servers worldwide and rank that research within levels of acceptability, or simply on a pass/fail basis. Appropriate links to the articles would be provided. The actual publishing of the journal (as well as the editing that would precede publication today) would occur elsewhere. In addition, the ranking could be accompanied by some suggestions for improvement or areas of future research. This communal behaviour could foster improvements both in the research and in researchers.
This method would also produce quicker reviews, thus addressing one of the most common complaints about the peer-review system. [End Page 441] Publication of authors' work would allow other groups to coalesce with the purpose of commenting on one work or on a group of works. The collaborative nature of the reviews could be subject driven in numerous ways. Why not a group that 'meets' within a review structure to discuss the latest articles in mass communications dealing with agenda setting? Or new survey methods? Or the latest postings relating to cultural theory? Again, this reflects the nature of the Long Tail theory: the costs of posting research related to a narrow, but common, area of interest are minimal compared to the return on investment for those involved.
Peer Review within a Commons Area
While a full discussion of the concept of an academic commons is beyond the scope of this work, few major universities in the world are not already supporting or contemplating supporting such a community. Universities are fostering these communities to enhance collaboration among faculty, and, as might be expected, they come in all shapes and sizes.
The use of a university commons with respect to academic publishing might occur on two levels: pre-publication review within the university and post-publication comment. In the former case, colleagues within a university or department might be engaged in offering suggestions to researchers nearing publication. The work in question could be shared within an online 'commons' that allows readers to offer comments and suggestions for improving the work.
In the latter model, the work could attract comments from academics outside the college or university. Of course, some control over the process would be required, most likely within a user group, much like the 'manager' role in Usenet groups in the early days of the Internet or, more recently, a message-board moderator. Either way—pre-publication review or post-publication comment—user-generated content (UGC) has proved key to the success of many corporate sites, and should be seriously considered as a goal for any academic journal. UGC is one of many possible artefacts of social networks that can lead to a more enriched conversation about new research findings. It challenges the elitism of anonymous peer review, replacing it with a wide-open, robust discussion. Of course, such a discussion may require moderation to ensure that the supportive intent is protected. [End Page 442]
Peer Review within Weblogs
Finally, some form of reader feedback should be provided. Whether posted on a Weblog (blog) or on a more traditional message board, comments and suggestions can lead to improvements to the research itself. Feedback can be presented in various ways: a comments area can be provided if the article is published online, or a link can be provided to a separate message area (the URL for this message area could also be included in the print version of the article). Of course, such comment areas could also attract harsh responses that offer little constructive criticism, but with proper moderation, message boards can be a useful resource. In addition, these boards could be restricted to a set of academics pre-qualified to comment on the research, who could be identified by the journal editors or drawn from the suggestions of the journal editorial board; these 'reviewers' would not be anonymous. This model has its precedents. Peters and Ceci's article in Behavioral and Brain Sciences includes responses from more than fifty academics to their research on 'published articles submitted again.'51 The research was intended to reveal the flaws in peer review, and the authors note their finding that eight of the nine articles they studied, which had been accepted roughly two years earlier by the same journals, were rejected when resubmitted. Many of the responses are quite extensive and, overall, represent a lively discussion among researchers about the flaws in peer review and, perhaps more importantly, the flaws in the research itself.
However, of all of those who commented on Peters and Ceci's article, either for or against, none suggested, for instance, that the science itself had moved forward and rendered the research of less value. Given the highly structured, 'locked up' nature of print material, there is little opportunity for direct comment on the research; instead, research is criticized more indirectly, typically in papers published later. Within the static print environment, updating, revising, correcting, and improving research is difficult, if not impossible. What is printed is printed, and it may be cited forever, despite whatever confounding information may later arise. It is up to a researcher to find all possible confounding information, rather than simply seeing it alongside the original research, as would be possible in an online environment. Furthermore, a researcher's ability to defend his or her published research is similarly tied up in later publications that may or may not be found by those reading the initial criticism. [End Page 443]
This new publish-and-review-by-all model may be the most controversial: it suggests that research, once published, can be modified—essentially, that it can be corrected based on comments outside the traditional group of select reviewers. Of course, it may also suggest that progress in any field depends on the aggressive collaborative commentary offered in an open market. What this model of allowing more direct commentary and subsequent modification of published research (in ways that would preserve the original work) would require is a more nimble academic community, one willing to see progress as the ultimate goal. In many ways, the suggested model might feel and look very much like a blog, where ideas in postings are immediately challenged and, perhaps, corrected. This type of research environment would be vital, interactive, and far richer than the comparatively static and, it must be admitted, slow publishing environment (even online) that we have today.
As Wang notes, 'the competition among periodicals and the ever-emerging new ideas compel every journal toward constant innovations.'52 Just as journalists 'set free' (fired) from their newspaper and television organizations are re-bundling themselves as online news sources, so editors and reviewers cut loose from economically failing publishing houses must re-package themselves as the online academic specialists they are. At the same time, universities—and especially their libraries—must step forward and ensure that the savings accrued from online open-access academic research will be used, in part, to support these new 'review boards.' Academia cannot afford to lose the expertise of editors, nor the keen insights of peer reviewers. Universities must change the traditional evaluation of academic editorship as 'service' and consider it as part of a faculty member's scholarly research. This change in the value attached to editorship and peer review is especially important because many small journals will not be able to provide any financial reward or even a 'buy-out' of faculty time. Finally, academia must embrace the change in the peer-review process as not only a critique of the presented research but also a tool to improve that research. Adding more voices to the review discussion may be a source of distress for some, but it will result in a richer, more valuable conversation that will lead to greater success and progress. [End Page 444]
Thomas H.P. Gould was a reporter, a magazine publisher, the president of an ad agency, and a new media consultant in the last century. He joined the faculty of Kansas State University in 1998, after completing his doctorate at the University of North Carolina at Chapel Hill. He founded Orion Online Design & Management, a student group that designs and manages Web sites for clients across the nation. His main area of research is online publishing, specifically academic journals and peer review, and the creation and maintenance of Learning, Teaching, and Research Commons. He is in the midst of writing a book on the role of university libraries in information gathering and publishing. Journal articles on the horizon include a bibliographic tracking of the patterns of online mass communication research, an examination of perceptions of 'born online' academic journals among faculty and tenure committees, and an argument for a total re-examination of the anonymous peer-review system.
1. Donna Packer, 'Acquisitions Allocations: Fairness, Equity and Bundled Pricing,' Portal: Libraries and the Academy 1, 3 (July 2001): 209-24, 209
2. Dru Mogge, 'Seven Years of Tracking Electronic Publishing: The ARL "Directory of Electronic Journals, Newsletters and Academic Discussion Lists,"' Library Hi Tech 17, 1 (1999): 17-25
3. Richard K. Johnson and Judy Luther, The E-Only Tipping Point for Journals: What's Ahead in the Print-to-Electronic Transition Zone (Washington, DC: Association of Research Libraries 2007)
5. Mike Ware, 'E-Only Journals: Is It Time to Drop Print?' Learned Publishing 18, 3 (July 2005): 193-9, 199
9. John Peters, 'The Hundred Years War Started Today: An Exploration of Electronic Peer Review,' Journal of Electronic Publishing 1, 1/2 (May 1996), doi:10.3998/3336451.0001.117
10. Stevan Harnad, 'Sorting the Esoterica from the Exoterica: There's Plenty of Room in Cyberspace: Response to Fuller,' Information Society 11, 4 (October-December 1995): 305-19
12. Wang Feng-Nian, 'On the Innovative Spirit of Academic Journal Editors,' Journal of Scholarly Publishing 38, 3 (April 2007): 156-61, 160
13. Barbara Palmer, 'Ongoing Crisis in Academic-Journal Pricing Is the Focus of Recent Colloquium: Attendees Agree High Costs of Subscriptions Are Unsustainable and Electronic Distribution Has Radically Changed Publishing,' Stanford News Service, 15 November 2006
15. Linden Sweeney, 'The Future of Academic Journals: Considering the Current Situation in Academic Libraries,' New Library World 98, 1 (1997): 5-9, 6
16. Carol Tenopir and Donald W. King, 'Trends in Scientific Scholarly Journal Publishing,' Journal of Scholarly Publishing 28, 3 (April 1997): 135-70, citing Hal R. Varian, 'Some Speculations about the Evolution of Academic Electronic Publishing' (paper presented at the Scholarly Communication and Technology Conference, Emory University, Atlanta, GA, April 1997). Varian's research was later published in the Journal of Electronic Publishing (see note 18 below).
17. Michael Lesk, Books, Bytes and Bucks: Practical Digital Libraries, ed. Jennifer Mann (San Francisco: Morgan Kaufmann 1997); Hal R. Varian, 'The Future of Electronic Journals,' Journal of Electronic Publishing 4, 1 (1998), doi:10.3998/3336451.0004.105
18. Varian, 'The Future of Electronic Journals,' para. 46
19. Roger Clarke and Danny Kingsley, 'ePublishing's Impacts on Journals and Journal Articles,' Journal of Internet Commerce 7, 1 (March 2008): 120-51, 141
20. William Loughner, 'Top Ten Science Publishers Take 76 Percent of Science Budget,' Newsletter on Serials Pricing Issues 221, 3 (20 May 1999), available at http://www.lib.unc.edu/prices/1999/PRIC221.HTML#221.3
21. Kendon Stubbs, 'Lies, Damned Lies . . . and ARL Statistics?' Minutes of the 108th Meeting of the Association of Research Libraries (Minneapolis: ARL 1986)
22. Martha Kyrillidou, 'Research Library Trends: ARL Statistics,' Journal of Academic Librarianship 26, 6 (November 2000): 427-36; Martha Kyrillidou and William Crowe, 'In Search of New Measures,' ARL: A Bimonthly Report 197 (April 1998); Thomas E. Nisonger, Evaluation of Library Collections, Access and Electronic Resources: A Literature Guide and Annotated Bibliography (Westport, CT: Libraries Unlimited 2003)
23. David Biello, 'Open Access to Science Under Attack,' Scientific American (26 January 2007), available at http://www.scientificamerican.com/article.cfm?id=open-access-to-science-un; Mark Chillingworth, 'Leaked Plan to Attack Open Access Has Science in Uproar: PR Advice Backfires in Exposed Email Thread,' Information World Review (5 February 2007), available at http://www.iwr.co.uk/information-world-review/news/2174291/leaked-plan-attack-open-access; Jim Giles, [End Page 446] 'Journal Publishers Lock Horns with Free Information Movement,' Nature 445 (25 January 2007): 347; Jennifer Howard, 'Anti-Open Access by Publishing Group Loses Another University Press,' Chronicle of Higher Education (4 October 2007), available at http://chronicle.com/article/Anti-Open-Access-Effort-by/39710/Onepage
24. SPARC, 'Journal Management Systems,' http://www.arl.org/sparc/publisher/journal_management.shtml
29. Irma F. Dillon and Karla Hahn, 'Are Researchers Ready for the Electronic-Only Journal Collection? Results of a Survey at the University of Maryland,' Libraries and the Academy 2, 3 (July 2002): 375-90
30. Janet P. Palmer and Mark Sandler, 'What Do Faculty Want?' Library Journal 128, 1 (Winter 2003): 26-9, 28
31. Hal Varian, 'The Future of Electronic Journals' (paper presented at the Scholarly Communication and Technology Conference, Emory University, Atlanta, April 1997)
32. Stevan Harnad, 'For Whom the Gate Tolls? Free the Online-Only Refereed Literature' [communication], American Scientist forum (1998), available at http://www.cindoc.csic.es/cybermetrics/articulos.asp?art=288
33. Douglas P. Peters and Stephen J. Ceci, 'Peer-Review Practices of Psychological Journals: The Fate of Published Articles Submitted Again,' Behavioral and Brain Sciences 5, 2 (June 1982): 187-95 (responses 196-255), 187
34. Christine Wennerås and Agnes Wold, 'Nepotism and Sexism in Peer Review,' in Women, Science, and Technology, ed. Mary Wyer et al. (New York: Routledge 2001): 46-52, 52
35. Rex Dalton, 'Peers Under Pressure' (Abstract), Nature 413, 6852 (13 September 2001): 103; Tom Jefferson, 'Peer Review and Publishing: It's Time to Move the Agenda On,' Lancet 366, 9482 (July 2005): 283-4; Bryan D. Neff and Julian D. Olden, 'Is Peer Review a Game of Chance?' Bioscience 56, 4 (April 2006): 333-42; Michael J. Mahoney, 'Publication Prejudices: An Experimental Study of Confirmatory Bias in the Peer Review System,' Cognitive Therapy and Research 1, 1 (June 1977): 161-75; Peters, 'The Hundred Years War'; David Shulenburger, 'On Scholarly Evaluation and Scholarly Communication,' College & Research Libraries News 62, 8 (September 2001), available at http://www.ala.org/ala/mgrps/divs/acrl/publications/crlnews/2001/sep/scholarlyevaluation.cfm [End Page 447]
36. UK, House of Commons, 'The Origin of the Scientific Journal and the Process of Peer Review' (annex 1 to the report of the Select Committee on Science and Technology), available at http://eprints.ecs.soton.ac.uk/13105/2/399we23.htm
38. Susan von Rooyen, Fiona Godlee, Stephen Evans, Richard Smith, and Nick Black, 'Effects of Blinding and Unmasking on Quality of Peer Review,' Journal of the American Medical Association 280, 3 (15 July 1998): 234-7, 234
40. Quoted in UK, House of Commons, 'Select Committee on Science and Technology—Tenth Report' (20 July 2004), s. 207, para. 1
41. Michael Gordon, 'Evaluating the Evaluators,' New Scientist (10 February 1977): 342-3
42. Peters, 'The Hundred Years War,' para. 20
43. Dale J. Benos et al., 'The Ups and Downs of Peer Review,' Advances in Physiology Education 31, 2 (June 2007): 145-52; Allen Clark, Jill Singleton-Jackson, and Ron Newsom, 'Journal Editing: Managing the Peer Review Process for Timely Publication of Articles,' Publishing Research Quarterly 16, 3 (Fall 2000): 62; Lisa Guernsey and Vincent Kiernan, 'Journals See the Internet as a Tool in the Peer-Review System,' Chronicle of Higher Education 45, 30 (2 April 1999): A29-31; Bryan D. Neff and Julian D. Olden, 'Is Peer Review a Game of Chance?' Bioscience 56, 4 (April 2006): 333-42
44. John Peters, 'The Hundred Years War'
45. Fiona Godlee, 'Making Reviewers Visible: Openness, Accountability, and Credit (Commentaries),' Journal of the American Medical Association 287, 21 (June 2002): 2762
46. Nora Newcombe, 'Five Commandments for APA,' American Psychologist 57, 3 (March 2002): 202-205
48. Tom Jefferson, 'Peer Review and Publishing: It's Time to Move the Agenda on,' The Lancet 366, 9482 (July 2005): 283-4, 283
49. Chris Anderson, 'The Long Tail,' Wired 12, 10 (October 2004), available at http://www.wired.com/wired/archive/12.10/tail.html
50. Thomas H.P. Gould, 'A Baker's Dozen of Issues Facing Online Academic Journal Start-Ups,' Web Journal of Mass Communication Research 14 (March 2009), available at http://www.scripps.ohiou.edu:16080/wjmcr/vol14/14-b.html
51. Peters and Ceci, 'Peer-Review Practices of Psychological Journals'
52. Wang, 'On the Innovative Spirit of Academic Journal Editors,' 156 [End Page 448]