Evaluation of Library Services
Module 2, Min-Yen KAN
Fundamentals of LIS
Why Evaluation?
- Run as a business: need to justify costs and expenditure
- Quantitative data analysis necessitated by the evolution into automated and digital libraries
- Need benchmarks to evaluate the effectiveness of the library
Quantitative metrics
- Circulation per capita
- Library visits per capita
- Program attendance per capita
- ________________
- ________________

Output measures for public libraries, Zweizig and Rodger (1982)
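As a minimal sketch of how these per-capita measures are computed (with made-up numbers; the definitions assume the usual reading of "per capita" as an annual count divided by the population served):

```python
# Hypothetical yearly statistics for a small public library.
population = 52_000           # residents in the service area (assumed)
annual_circulation = 208_000  # items checked out during the year
annual_visits = 156_000       # gate-count visits during the year

circulation_per_capita = annual_circulation / population  # 4.0
visits_per_capita = annual_visits / population            # 3.0
```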
Evaluation types
- Macroevaluation
  - Quantitative
  - Degree of exposure
- Microevaluation
  - Diagnostic
  - Gives a rationale for performance
Macroevaluation
- Axiom: the more a book in a library is exposed, the more effective the library.
- Defining "an exposure" as a simple count
  - Pros: easy; can be done at different levels of granularity
  - Cons: five 1-day borrowings count as five times the exposure of one 5-day borrowing
- __________________________
More exact ways to quantify exposure
- Item-use days: Meier (1961)
  - But a book borrowed for five days may not be used at all
- Effective user hours: De Prospo et al. (1973)
  - Sample users in the library
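The distortion in the simple exposure count can be sketched in a few lines (hypothetical loan records; item-use days follows Meier's idea of summing days on loan rather than counting transactions):

```python
# Each list holds the days-on-loan of individual borrowings of one book.
loans_a = [1, 1, 1, 1, 1]  # five separate 1-day borrowings
loans_b = [5]              # one 5-day borrowing

def exposure_count(loans):
    """Simple exposure: every borrowing counts as one unit."""
    return len(loans)

def item_use_days(loans):
    """Item-use days: total days the item was out on loan."""
    return sum(loans)

counts = (exposure_count(loans_a), exposure_count(loans_b))    # (5, 1)
use_days = (item_use_days(loans_a), item_use_days(loans_b))    # (5, 5)
```

The simple count scores book A five times higher, while item-use days treats the two usage patterns identically.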
Bang for the buck?
- ___________________________, the greater the exposure.
Macroevaluation - Conclusions
- In general, more exact measures require sampling and tend towards microevaluation
  - So it's a continuum after all
- Administrators use a battery of measures, not a single one, to measure effectiveness – Spray (1976)
Microevaluation Axes
- Quality
- Time
- Costs (including human effort)
- User satisfaction (ultimately, they are bearing the library's operating costs)
Microevaluation Trends
- The more concrete the need, the easier it is to evaluate
- Failure is harder to measure than success
  - Case 1: got a sub-optimal resource
  - Case 2: got some material but not all
"Technical Services" versus Public Services

Quality
- Technical Services
  1. Selection and acquisition: size, appropriateness, and balance of the collection
  2. Cataloging and indexing: accuracy, consistency, and completeness
- Public Services
  1. Range of services offered
  2. Helpfulness of shelf order and guidance
  3. Catalog: completeness, accuracy, and ease of use
  4. Reference and retrieval: completeness, accuracy, and percentage success
  5. Document delivery: percentage success

Time
- Technical Services
  1. Delays in acquisition
  2. Delays in cataloging
  3. Productivity of staff
- Public Services
  1. Hours of service
  2. Response time
  3. Loan periods

Cost
- Technical Services
  1. Unit cost to purchase
  2. Unit cost to process: accession, classify, catalog
- Public Services
  1. Effort of use: location of library, physical accessibility of the collection, assistance from staff
  2. Charges levied
Principle of Least Effort
- Zipf's Law (1949): "Least Effort"
- A corollary, Mooers' Law (1960): "An information retrieval system will tend not to be used whenever it is more troublesome for a customer to have information than for him not to have it."
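As a numeric illustration of Zipf's law (synthetic numbers, not library data): the frequency of the r-th most common item is roughly proportional to 1/r, so rank times frequency stays approximately constant.

```python
# Ideal Zipfian frequencies for ranks 1..10, scaled by an assumed
# top frequency C. In real usage data the fit is only approximate.
C = 1200  # frequency of the most common item (assumed)
frequencies = [C / rank for rank in range(1, 11)]

# rank * frequency is constant (= C) in the idealized distribution
products = [rank * f for rank, f in enumerate(frequencies, start=1)]
```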
More on accessibility and convenience

Expanding on this, Allen and Gerstberger (1967) note:
- Perceived accessibility is the most important determinant of the overall extent to which an information channel is used.
- The more experience a user has with a channel, the more accessible he or she will perceive it to be.
- After the user finds an accessible source of information, he or she will screen it on the basis of other factors (e.g., technical quality).
- High motivation to find specific information may prompt users to seek out less accessible sources of information.
Accessibility versus Motivation
- A supply-and-demand relationship
Materials-centered Collection Evaluation

What's the purpose…
- … of the collection
  - Who's the readership: academic, public?
- … of the evaluation
  - Document change in demand?
  - Justify funding?
  - ___________________
  - ___________________
Principled methods for materials-based evaluations
- Checklist: use standard reference bibliographies to check the collection against
- Citation: use an initial seed of resources to search for resources that cite them and are cited by them

Are these methods really distinct? How do people compile bibliographies in the first place?
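The citation method above can be sketched as a breadth-first expansion over a citation graph, followed by a check against the collection's holdings. The graph, document IDs, and holdings below are entirely hypothetical:

```python
from collections import deque

cites = {      # document -> documents it cites (hypothetical)
    "seed1": ["a", "b"],
    "a": ["c"], "b": [], "c": [], "d": [],
}
cited_by = {   # document -> documents that cite it (hypothetical)
    "seed1": ["d"],
    "a": [], "b": [], "c": [], "d": [],
}

def expand(seeds):
    """Breadth-first expansion along both citation directions."""
    seen, queue = set(seeds), deque(seeds)
    while queue:
        doc = queue.popleft()
        for nbr in cites.get(doc, []) + cited_by.get(doc, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

holdings = {"seed1", "a", "d"}       # what the collection holds
found = expand(["seed1"])            # {"seed1", "a", "b", "c", "d"}
missing = found - holdings           # candidate gaps: {"b", "c"}
```

Items in `missing` are the citation-connected resources the collection lacks, which is the evaluative output of the method.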
Use-centered Collection Evaluation Methodologies
- Circulation
  - General
  - Interlibrary Loan (ILL)
- In-house uses
  - Stack
  - Catalog
Effectiveness as Circulation

Collection Mapping
- Idea: build the collection in parts
  - Prioritize and budget specific subjects: shrink, grow, or keep constant
- Evaluate subjects according to specific use
  - Which courses each subject serves, and what each course's needs are
Use Factors
- Age
- Language
- Subject
- Shelf arrangement
- Quality
- Expected use
- Popularity
- Information chain placement
In-House Use Evaluation Methods
- Mostly done by sampling
  - Table counting
  - Slip
  - Interviews
  - Observation
Material Availability
- The myth: if we have it, you can get it.
- The reality: if we have it, you have a chance of getting it.
Evaluating Catalog Use
- Usability evaluation
  - Does the interface allow you to find things the way you want?
  - Experiment on finding a set of resources
  - We return to this issue in the UI module
- Analysis of transaction logs
  - Different types of searches: known-item, by subject
  - We return to this issue in the Bibliometrics module
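A minimal sketch of transaction-log analysis: classify each logged catalog search as known-item (title or author lookup) versus subject, then tally. The log format and field names are assumed purely for illustration:

```python
from collections import Counter

log = [  # hypothetical transaction-log entries
    {"index": "title",   "query": "war and peace"},
    {"index": "author",  "query": "tolstoy"},
    {"index": "subject", "query": "russian literature"},
    {"index": "subject", "query": "19th century novels"},
]

def search_type(entry):
    """Title/author lookups are known-item; everything else is subject."""
    return "known-item" if entry["index"] in ("title", "author") else "subject"

tallies = Counter(search_type(e) for e in log)
# tallies == Counter({"known-item": 2, "subject": 2})
```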
References
- Baker and Lancaster (1991). The Measurement and Evaluation of Library Services. Information Resources Press. (On reserve)