- Statistics, Measures and Quality Standards for Assessing Digital Reference Library Services: Guidelines and Procedures (review)
- portal: Libraries and the Academy
- Johns Hopkins University Press
- Volume 3, Number 4, October 2003
- pp. 692-693
Statistics, Measures and Quality Standards for Assessing Digital Reference Library Services: Guidelines and Procedures, Charles R. McClure, R. David Lankes, Melissa Gross, and Beverly Choltco-Devlin. Syracuse, NY: Information Institute of Syracuse, School of Information Studies, Syracuse University; Tallahassee, FL: Information Use Management and Policy Institute, School of Information Studies, Florida State University. 104 p. $25.00.
The authors of this volume observe correctly that there is no shortage of useful information on the development and implementation of virtual reference services. Indeed, three of the authors, Lankes, McClure (no relation to the reviewer), and Gross, together with Jeffrey Pomerantz, have themselves edited a thoughtful collection of essays on the subject, Implementing Digital Reference Services: Setting Standards and Making It Real (New York: Neal-Schuman, 2003). Less studied, perhaps because more difficult or perhaps because less valued in our impatient culture, has been the related issue of assessment. In their latest volume, McClure, Lankes, Gross, and Beverly Choltco-Devlin tackle the problem of defining and measuring quality in the digital realm.
The book is the product of the Assessing Quality in Digital Reference Project, a response to concerns over the lack of assessment strategies for digital reference, voiced at the 2000 Virtual Reference Desk Conference. With financial support from fifteen participating institutions, ranging from public and academic libraries to groups such as OCLC and the Digital Library Federation, McClure and Lankes, the authors of the original proposal, designed a multi-phased project with a decidedly practical emphasis. Following the recruitment of participants and a literature review, the researchers conducted a series of site visits to study best practices in the area of virtual reference assessment. From these examples and from their own vast experience with assessment, the authors produced a draft manual for field testing at several of the participating libraries. The initial proposal, bibliography, and various progress reports are available on the project's Web site at <http://quartz.syr.edu/quality>, and two pieces relating to the project are included in Implementing Digital Reference Services.
The authors believe that virtual reference must be conceived and evaluated as an integral part of a library's larger reference mission. More importantly, they remind us that virtual reference, like all reference [End Page 692] work, must fulfill a meaningful and definable purpose. Borrowing techniques from traditional reference assessment and supplementing these with tools developed specifically for the digital environment, the authors identified 35 statistics and measures by which to evaluate virtual reference services and establish quality standards. In addition to the obvious descriptive measures related to usage and question type, they explored methods for analyzing user satisfaction, cost, staff time, and answer success rates. Because the authors do not expect any program to use all of the measures, they present each item as a discrete entity, complete with discussion of definition, rationale, data collection procedures, and related "Issues and Considerations." While this decision results in occasionally tedious and repetitious reading (particularly in the data collection sections), the result is a manual that can be used piecemeal to create an assessment plan tailored to a library's particular needs and interests. The "Rationale" and "Issues" sections frequently focus on the policy implications of virtual reference with regard to such wide-ranging issues as privacy, peer review, collection development, and the gap between user expectations and staffing realities. Finally, the discussion of each measure directs the reader to relevant items in an appendix of "Sample Forms, Reports, Logs, Worksheets and Survey Instruments."
The researchers state emphatically in the introduction what the manual is not—that it is neither a practical guide to implementing a virtual reference service nor a research methods text. Yet the manual is more than the authors claim and something of what they deny. They urge us to adopt an attitude in which ongoing assessment is pursued with multiple commitments to guarantee its success: full administrative support and a willingness to respond to the findings of an evaluation; the active involvement of the...