
Reference Assessment Programs: Evaluating Current and Future Reference Services

Dr. John V. Richardson Jr.
Professor of Information Studies
UCLA Department of Information Studies

Presentation Outline

– Why Survey Our Users?
– Question Design and Validity Concerns
– Methodological Issues
– Mini Case Studies
– Recommended Readings

Why Survey Our Users?

– Need to know what we don't know
– Satisfaction and dissatisfaction
– Loyalty and the Internet
– User needs and expectations
– Can't design effective new programs without this knowledge
– Best practices

Question Design and Validity Concerns

Eight issues must be addressed to ensure the validity of survey results:
– Intent of the question
– Clarity of the question
– Unidimensionality
– Scaling
– Number of questions to include
– Timing of administration
– Question order
– Sample sizes

1. Intent of the Question

RUSA Behavioral Guidelines (1996)
– Approachability
– Interest in the query
– Active listening skills

UniFocus (300 factor analyses of the hospitality industry)
– Friendliness
– Helpfulness or accuracy
– Promptness of service

2. Clarity of the Question

Data from unclear questions may be invalid

Use instructions to enhance question clarity

Mini Case Study

What is the literal correct answer to the question posed?

3. Unidimensionality

Unidimensionality is a statistical concept that describes the extent to which a set of questions all measure the same topic
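One common, if rough, statistical check on whether a set of questions all measures the same topic is an internal-consistency statistic such as Cronbach's alpha. The sketch below is not from the original presentation; it assumes hypothetical 5-point ratings and simply shows the calculation.

```python
# A minimal sketch: Cronbach's alpha as a rough internal-consistency check
# on a set of questions, computed from hypothetical 5-point ratings.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = questions (numeric ratings)."""
    k = items.shape[1]                         # number of questions
    item_vars = items.var(axis=0, ddof=1)      # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 users answering 4 satisfaction questions.
responses = np.array([
    [5, 5, 4, 5],
    [4, 4, 4, 3],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

A high alpha suggests the questions hang together, although a scale can still be multidimensional despite a high alpha, so this is a screening check rather than proof of unidimensionality.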

Constellation of Attitudes

– Satisfaction
– Delight
– Intent to Return
– Feelings about Experiences
– Value
– Loyalty

RUSA Behavioral Guidelines

Approachability

Interest in the query

Active listening skills

4. Scaling

Three key characteristics:
– Does the scale have the right number of points (called response options)?
– Are the words used to describe the scale points appropriate?
– Is there a midpoint or neutral point on the scale?

A. Response Options

A common four-point scale:
– Very good, good, fair, and poor

The distance between "very good" and "good" is not the same as the distance between "fair" and "poor"

Assigning the numeric values 4, 3, 2, and 1 to these options may therefore lead to invalid results…
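A minimal sketch of the coding concern, not from the original presentation: scoring the labels as 4, 3, 2, and 1 quietly assumes the options are evenly spaced, and a different (here entirely hypothetical) spacing shifts the summary statistic.

```python
# Sketch: the same answers summarized under two numeric codings of a
# four-point scale. The "unequal" spacing is purely hypothetical, used
# only to show how the mean moves when the equal-spacing assumption fails.
from statistics import mean

answers = ["very good", "good", "good", "fair", "poor", "very good"]

equal_spacing   = {"very good": 4, "good": 3,   "fair": 2, "poor": 1}
unequal_spacing = {"very good": 4, "good": 3.5, "fair": 2, "poor": 0.5}  # hypothetical

print("mean, equal spacing:  ", round(mean(equal_spacing[a] for a in answers), 2))
print("mean, unequal spacing:", round(mean(unequal_spacing[a] for a in answers), 2))
```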

Mini Case Study

What is the distance between each of these response options?

B1. Scale Anchors

Very Satisfied      Very Dissatisfied
Very Much Agree     Very Much Disagree
Very Positive       Very Negative
Very Valuable       Very Costly
Very Enjoyable      Very Unpleasant
Very Friendly       Very Unfriendly

Mini Case Study

What are the scale anchors here?

B2. Seven Point Scales

Scale A: Very Good (7) … Very Poor (1), N/A (0)

Scale B: Excellent (7) … Very Poor (1), N/A (0)

Scale C: Outstanding (7) … Disappointing (1), N/A (0)

C. Wording of Options

The only difference among the preceding scales is the response anchors…

– Is very good a rigorous enough expectation?
– Would excellent be better?
– What about outstanding?

Mini Case Study

How many response points are there?

What is the level of expectation?

D. Midpoint or Neutral Point

The rate of skipped questions increases when a neutral response is not included

Use an odd number of response points

Also, a neutral response provides a way to treat missing data
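One simple (and debatable) way to act on this is to substitute the neutral point for skipped items. The sketch below is not from the original presentation; it assumes a 1-5 scale with 3 as the neutral point and uses hypothetical response data.

```python
# Sketch: treating skipped answers (None) as the neutral midpoint of a
# hypothetical 1-5 scale. This is one simple imputation choice, not a rule.
MIDPOINT = 3  # assumed neutral point on a 1-5 scale

raw_answers = [5, 4, None, 2, None, 5]  # hypothetical single-respondent data
filled = [a if a is not None else MIDPOINT for a in raw_answers]

print(filled)  # [5, 4, 3, 2, 3, 5]
```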

Mini Case Study

What’s the midpoint?

5. Number of Questions

Short enough
– So that users will answer all the questions

Long enough
– So that enough information is gathered for decision-making purposes

A. Longer Surveys

Take more time and effort on the part of the respondent

A high perceived "cost of completion" results in surveys being partially or completely unanswered

B. Likelihood of Complete Responses

The higher the salience or importance of the topic to the user, the greater the likelihood that they will complete a longer survey

Multiple questions measuring a single attitude make for longer surveys, although they also aid in evaluating user attitudes

6. Timing and Ease

During or immediately following the transaction
– Blurring together?

Cards or mail method (IVR = interactive voice response)

Delay seems to cause more positive results

Electronic reference allows for ease of administration (more on PaSS™ later)

7. Question Order

Specific questions first
– Technology, resources, or staffing

More general questions second
– Value, overall satisfaction, intent to return
– Halo Effect

Four-question survey: one overall and three specific questions
– Asking the general question last produces better data

Mini Case Study

8. Sample Sizes

Depends upon:
– Population size
– Error rate
– Confidence level

Consult a table of sample sizes

A. Error Rate

Defined as the precision of measurement

Accurate to plus or minus some figure

Has to be precise enough to know which direction service quality is going (i.e., up or down)

B. Confidence

Refers to the overall confidence in the results:
– A .99 confidence level means that one can be relatively certain that the results are within the stated range 99% of the time
– A .95 confidence level is common
– A .90 confidence level is less common; it requires fewer respondents, but will result in a less accurate survey (see the sketch below)
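To make the trade-off concrete, the sketch below is an illustration rather than part of the original presentation: it assumes a large population, a worst-case proportion of 0.5, a plus-or-minus 5-point error rate, and the standard normal critical values for each confidence level.

```python
# Sketch: how the chosen confidence level changes the required sample size,
# assuming a large population, worst-case proportion p = 0.5, and +/-5% error.
import math

Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}  # standard normal critical values
ERROR_RATE = 0.05                             # assumed precision: +/- 5 points

for confidence, z in Z.items():
    n = math.ceil(z**2 * 0.25 / ERROR_RATE**2)  # n = z^2 * p(1-p) / d^2
    print(f"{confidence:.2f} confidence -> about {n} respondents")
```

Dropping from .99 to .90 confidence roughly cuts the required respondents from about 664 to about 271 at the same error rate, which is the trade-off the bullet above describes.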

C. Population and Sample

Population (N) refers to the people of interest

Sample (n) refers to the people measured to represent the population

Response rate is the proportion of those sampled who actually respond to the survey

D. Population & Sample Size

N (population)   n (sample)
100              80
200              132
500              217
1000             278
10000            370
20000            377

SOURCE: Robert V. Krejcie and Daryle W. Morgan, "Determining Sample Size for Research Activities," Educational and Psychological Measurement 30 (Autumn 1970): 607-610.
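The cited article derives these figures from a chi-square formula; the sketch below is a reconstruction using the assumptions commonly attributed to Krejcie and Morgan (a .95 confidence level, P = .50, and a .05 degree of accuracy) and reproduces the table values above.

```python
# Sketch of the chi-square sample-size formula behind the Krejcie & Morgan
# (1970) table, with the usual assumptions: X^2 = 3.841 (the .95 chi-square
# value at 1 degree of freedom), P = 0.50, and d = 0.05.
CHI_SQ, P, D = 3.841, 0.50, 0.05

def sample_size(population: int) -> int:
    """Required sample size n for a population of size N."""
    numerator = CHI_SQ * population * P * (1 - P)
    denominator = D**2 * (population - 1) + CHI_SQ * P * (1 - P)
    return round(numerator / denominator)

for n_pop in (100, 200, 500, 1000, 10000, 20000):
    print(f"N = {n_pop:>6}  ->  n = {sample_size(n_pop)}")
```

Note how the required sample grows only slowly with population size: a population of 20,000 still needs only 377 respondents, which is why consulting a table (or the formula) beats guessing.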

Appropriate Sample Sizes

Case Studies

Much of the extant surveying of reference service is inadequate and misleading, and can result in poor decision-making

Improving user service means understanding what leads to satisfied and loyal users

Patron Satisfaction Survey (PaSS)™
– http://www.vrtoolkit.net/PaSS.html

Recommended Bibliographies

1,000 citations to reference studies at
– http://purl.org/net/reference

300 citations to virtual reference studies at
– http://purl.org/net/vqa

Best Single Overview

Richardson, “The Current State of Research on Reference Transactions,” In Advances in Librarianship, vol. 26, pages 175-230, edited by Frederick C. Lynden. New York: Academic Press, 2002.

Recommended Readings

Saxton and Richardson, Understanding Reference Transactions (2002)
– Most complete list of dependent and independent variables used in the study of reference service

McClure et al., Statistics, Measures and Quality Standards (2002)
– Most complete list of measures for virtual reference work

Questions and Answers

What do you want to know now?
