Whither Survey Response Rates – Do They Still Matter? Presented to: MRIA Ottawa Chapter, October 26, ...


Agenda

How Important Are Response Rates?

What Is Happening With Response Rates?

Measuring Response Rates

Does Any Of This Really Matter?

What is the Issue Regarding Response Rates?

Telephone survey response rates have been declining over the past few decades from a high of 60% in the early years.

Range of factors seen to contribute to declining rates:

Answering machines, voice mail, call blocking, caller ID, etc.

Refusals: time constraints, general cynicism, inconvenience, privacy and confidentiality concerns, etc.

Cell-only households now becoming an issue – up to almost 10% in some areas of the U.S.

Result of declining response rates?

High non-response = risk of lower quality data

Increased cost and time to reach target response rates

For some, “response rate” is seen as only measure of survey “quality”

Response Rate Not the Only Factor in Determining Survey Quality

Apart from Response Rates, There Are Many Other Factors Affecting Survey Quality

Sampling errors

Universe definition

Sample design

Sample source

Non-sampling errors

Data collection methods

Interviewers, coders, data processing

Respondent boredom

Analysis

How Important is “Response Rate”?

Higher response rates always desirable

But response rates should be only one factor weighed alongside research design and budgetary issues

Avoid effects of other sources of error

Looking at research objectives, allocate resources where maximum benefit achieved

In many commercial surveys, response rate not even an issue (primarily quota samples)

Low response rates need not always be cause for concern

Key issue: how survey respondents differ from non-respondents

Bias from non-response will only be an issue when responders differ from non-responders

What is Happening to Response Rates?

The PMRS Response Rate Committee measured refusal rates in 1995, 1999, 2002 and again in 2005. Up until 2002, refusal rates increased and response rates fell.

When analyzed on an incremental, year-by-year basis, the 2002 survey suggested that for one-time studies, the rate of refusals was accelerating.

One-time Telephone Studies, Incidence 50% Plus (February 1 – June 30)

                    1995   1999   2002   2005
Refusal Rate         66%    68%    78%     ?
Response Rate        16%    17%    12%     ?

(Refusal Rate = Refusals / Total Asked; Response Rate = Cooperative Contacts / Total Eligible Numbers)

Average Annual Increase                1995–1999   1999–2002
Increase in refusal rate per year         0.5%        3.3%

Data for 2005 are not yet available, so it is not clear whether this process has continued, although results I will present in a few minutes suggest average response rates may be in the 10% to 12% range in 2005/2006.

What is Happening to Response Rates? … cont’d

The longer the interview, the higher the refusal rate. The 2002 data showed this impact very clearly.

Aggregate Refusal Rate (%)

Interview Length (Minutes)    <10   10–19   20+
1995                           50     59    68
1999                           45     62    63
2002                           65     74    80

Standardized Response Rate Calculation

Why a Standard Method of Measuring Response Rates?

MRIA has recently adopted a “Standard Method of Measuring Response Rates” as a result of a request from the Federal Government.

Literature reviews among a range of sources unearthed a myriad of “acceptable” definitions of Response Rate. The American Association for Public Opinion Research (AAPOR) alone publishes at least six different calculation methods that it deems to be acceptable under varying circumstances.

The goal for the Response Rate Committee became one of developing a response rate definition that would let research buyers compare levels of fieldwork effort and productivity across research suppliers. With this goal clearly in mind, the Committee endorsed a response rate calculation method that it considered to be the most appropriate for reporting call outcomes at the data collection stage of a telephone survey.

En route, the committee consulted with Statistics Canada and with members of AIRMS Quebec. Both groups endorsed the concept.

How Do We Measure Response Rates – MRIA Approved Definition

Empirical Method of Response Rate Calculation

Empirical Calculation for Data Collection Example (every household qualifies)

Total Numbers Attempted                               4,000
Invalid (NIS, fax/modem, business/non-res.)           1,000

Unresolved (U)
  Busy, no answer, answering machine                    900
  Total U                                               900

In-scope, non-responding (IS)
  Language problem                                      100
  Illness, incapable                                     50
  Selected respondent not available                     100
  Household refusal                                     500
  Respondent refusal                                    250
  Qualified respondent break-off                         50
  Total IS                                            1,050

In-scope, responding units (R)
  Language disqualify                                   ---
  No one 18+                                            ---
  Other disqualify                                      ---
  Completed interviews                                1,050
  Total R                                             1,050

Response Rate = R / (U + IS + R) = 1,050 / (900 + 1,050 + 1,050) = 35%
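To make the arithmetic concrete, here is a minimal Python sketch (not part of the original presentation) that applies the MRIA formula, Response Rate = R / (U + IS + R), to the dispositions in the example above. The function name and disposition labels are illustrative only.

```python
# Minimal sketch of the MRIA empirical response rate calculation.
# Disposition labels mirror the worked example above; names are illustrative.

def mria_response_rate(unresolved, in_scope_nonresponding, responding):
    """Response Rate = R / (U + IS + R), returned as a percentage."""
    u = sum(unresolved.values())
    is_ = sum(in_scope_nonresponding.values())
    r = sum(responding.values())
    return 100.0 * r / (u + is_ + r)

# Figures from the example (4,000 numbers attempted, 1,000 invalid).
unresolved = {"busy, no answer, answering machine": 900}
in_scope_nonresponding = {
    "language problem": 100,
    "illness, incapable": 50,
    "selected respondent not available": 100,
    "household refusal": 500,
    "respondent refusal": 250,
    "qualified respondent break-off": 50,
}
responding = {"completed interviews": 1050}  # no disqualifies: every household qualifies

print(f"{mria_response_rate(unresolved, in_scope_nonresponding, responding):.0f}%")  # 35%
```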

High or Low Response Rates – Does It Really Matter?

Presented to the MRIA Annual Conference, June 2006, by Gary Halpenny and Don Ambrose on behalf of the MRIA Response Rate Committee

High or Low Response Rate – Does It Really Matter?

Telephone surveys have been under attack recently on the grounds that “Results are no longer accurate nor representative”

Low response rates are cited as the reason

However, a growing body of research begs to differ

A number of investigative projects in the U.S. have shown:

For most commercial and public opinion applications, a 30% response rate produces essentially the same results as a 50% response rate

High or Low Response Rate – Does It Really Matter? … cont’d

Some of the research literature:

In 1997, two identical surveys, one at a 61% response rate and the other at 36%, produced no meaningful differences

This project was replicated in 2003 with 51% and 27% response rates and with similar results

Researchers concluded “carefully conducted polls with relatively low response rates still yield representative samples and accurate data” (Keeter et al., Pew Research)

High or Low Response Rate – Does It Really Matter? … cont’d

The reality today is that few commercial telephone surveys even approach the 30% level

The demand for faster turnaround means most telephone response rates are now in the 10% to 20% range

Quick 1- or 2-day polls can yield even lower rates

The Critical Issue!

Can response rates at these levels still produce accurate and meaningful data?

Clearly more research was needed

MRIA’s Research Project

The Plan

In 2005, the MRIA Response Rate Committee sponsored research to investigate whether response rates as low as 10% can still produce reliable and useful data.

Five Canadian research companies that regularly conduct national omnibus surveys volunteered to combine efforts.

Stage 1

Using an identical 5-minute question set, each company completed approximately 250 interviews on a single wave of its Omnibus in January 2006.

1,238 completed interviews in total

4 days in-field

9% aggregate response rate

Stage 2

Using the same 5-minute question set, each company completed a second sample of approximately 250 interviews over January/February 2006.

1,273 completed interviews in total

4-to-5 weeks in-field

First refusals recontacted

31% aggregate response rate

Both Samples Were:

National RDD, age 18+

Weighted to Census for the following (a weighting sketch appears after this list):

Age

Gender

Province

Community size
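The slides do not spell out how the Census weighting was carried out, so the following is only a rough sketch of simple cell weighting against hypothetical Census shares; the participating suppliers may well have used raking across age, gender, province and community size instead, and every number below is invented for illustration.

```python
# Illustrative cell-weighting sketch (hypothetical targets and sample);
# the real studies may have used raking rather than simple cell weights.
from collections import Counter

def cell_weights(sample_cells, census_share):
    """Weight for each cell = Census share / sample share."""
    n = len(sample_cells)
    sample_share = {cell: count / n for cell, count in Counter(sample_cells).items()}
    return {cell: census_share[cell] / sample_share[cell] for cell in sample_share}

# One entry per respondent, keyed by (age group, gender) for brevity.
respondents = [("18-34", "F"), ("18-34", "M"), ("35-54", "F"),
               ("55+", "F"), ("55+", "M"), ("55+", "F")]
census = {("18-34", "F"): 0.15, ("18-34", "M"): 0.15,
          ("35-54", "F"): 0.18, ("35-54", "M"): 0.18,
          ("55+", "F"): 0.18, ("55+", "M"): 0.16}

for cell, w in cell_weights(respondents, census).items():
    print(cell, round(w, 2))
```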

Fieldwork Undertaken By:

Ipsos-Reid

Maritz

Opinion Search

Synovate

TNS-Canadian Facts

Record of Call Comparison

Table on next slide indicates that additional call attempts yield three main benefits:

Higher contact ratio (lower proportion of busy/no answer)

Completion/refusal ratio increases from .26 to .77

Means that fewer good telephone numbers are required to yield the same number of interviews

Disposition of Last Attempt            9% RR    31% RR

Valid numbers attempted               14,832     4,348
                                        100%      100%
Busy/No Answer (U)                     5,843       780
Refused (IS)                           4,826     1,647
Other Non-Responding (IS)              2,820       569
Cooperative Respondents (R)            1,343     1,352
Response Rate = R / (U + IS + R)        9.1%     31.1%
Disqualified                             105        79
Completed Interviews                   1,238     1,273

Key Findings

Both Studies Yield Identical Results for:

Incidence of food items used in past 6 months

List of items bought in last 12 months

Appliances in household

Print media readership – not title specific

Incidence of travel outside Canada

Personal access to the internet

Cell phone ownership and carrier used

Food Items Used in Past 6 Months

Results Identical

9% RR 31% RR Sig. Diff. *

Eggs 97 96 N

Cold Cereals 86 86 N

Cheese (Not processed) 69 71 N

Honey 67 66 N

Frozen Pizza 55 55 N

* At 90% level of confidence
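The “Sig. Diff.” flags are reported at the 90% confidence level, but the slides do not say which test was used. A conventional choice for percentages like these is a two-proportion z-test; the sketch below applies it to the cheese row (69% vs. 71%) using the two study sample sizes, and treating this as the committee’s actual procedure would be an assumption.

```python
# Illustrative two-proportion z-test at the 90% confidence level.
# The actual test behind the "Sig. Diff." column is not documented.
from math import sqrt
from statistics import NormalDist

def two_prop_z(p1, n1, p2, n2):
    """z statistic and two-sided p-value for H0: the proportions are equal."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Cheese (not processed): 69% at 9% RR (n=1,238) vs. 71% at 31% RR (n=1,273).
z, p = two_prop_z(0.69, 1238, 0.71, 1273)
print(f"z = {z:.2f}, p = {p:.3f}, significant at 90%: {p < 0.10}")  # not significant
```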

Items Bought in the Last 12 Months

Same result regardless of whether category incidence is high, medium or low

9% RR 31% RR Sig. Diff.

Men’s or Women’s Clothing 93 93 N

Sunscreen / Suntan Lotion 54 54 N

Paint or Stain 52 51 N

Camping Equipment 23 23 N

Car Polish / Wax 21 20 N

Traveler’s Cheques 8 8 N

Appliances in Household

Similar findings for both commonplace and more esoteric items

9% RR 31% RR Sig. Diff.

Microwave oven 95 95 N

Automatic Dishwasher 63 61 N

Gas BBQ 59 57 N

Security System 34 37 N

Espresso/Cappuccino Maker 14 13 N

Print Media Readership

Similar estimates of generic print media consumption

9% RR 31% RR Sig. Diff.

Read a Daily Newspaper

- Yesterday 60 60 N

- Past Week 84 84 N

Last Time Read a Magazine

- Yesterday 38 40 N

- Past Week 72 72 N

Traveled Outside Canada in Past 12 Months

Parallel results for both business and personal travel behaviour

9% RR 31% RR Sig. Diff.

For Personal 32 33 N

For Business 8 9 N

Personal Access to The Internet

Penetration levels virtually identical

9% RR 31% RR Sig. Diff.

Any Access 76 76 N

At Home 70 70 N

At Work 45 46 N

Cell Phones

No differences in either ownership incidence or carrier share

9% RR 31% RR Sig. Diff.

Has a Cell Phone 58 58 N

Cellular Provider *

Bell 29 28 N

Telus 25 27 N

Rogers 25 25 N

Fido 6 6 N

Other 11 11 N

* Base Total Cell Phone Owners

Credit Card Ownership and Usage

Differences are found here.

Higher response rate yields higher incidence of credit card ownership

Among card owners, high RR yields a higher incidence of owning American Express and a lower incidence of MasterCard

Posit that the higher RR captures a more upscale, harder-to-find group of people, but this is not proven in the demographics

Equally likely to be a statistical anomaly

No differences in card used most often

9% RR 31% RR Sig. Diff.

Has any Credit Cards 78 82 + 5

Specific Cards Owned *

Visa 67 70 N

MasterCard 52 48 -4

American Express 13 18 + 5

Diners 1 1 N

Any Department Store 45 46 N

Any Gasoline Company 15 14 N

Average # of Cards Owned * 2.4 2.5 N

* Base: Total Credit Card Owners


Credit Cards Used Most Often

Base = Owners of Credit Cards 9% RR 31% RR Sig. Diff.

Visa 49 52 N

MasterCard 30 29 N

American Express 3 4 N

Any Department Store Card 3 3 N

Any Gasoline Company Card 1 1 N

Claimed usage level unaffected by higher response rate.

12 Attitudinal Statements Measured

Mean scores the same on 11 attributes out of 12

Difference on the statement related to shopping was statistically significant but would not have changed the interpretation

9% RR 31% RR Sig. Diff.

I like to try new and different products 6.0 6.0 N

I am willing to pay extra to save time 5.4 5.4 N

I lead a fairly busy social life 5.9 6.0 N

A person’s career should be their 1st priority 4.9 4.9 N

TV is a primary source of entertainment 5.7 5.7 N

I have more self-confidence than most people my age 6.9 6.9 N

I keep up-to-date with changes in style 5.4 5.3 N

I am careful of what I eat 7.2 7.2 N

I go out with friends a great deal of the time 5.1 5.2 N

To me shopping is a chore rather than a pleasure 6.1 5.9 - 0.2

I prefer to postpone a purchase rather than buy on credit 6.6 6.7 N
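For the mean scores, the comparable check would be a two-sample test of means. The slides report means only, so the standard deviation in the sketch below is a made-up placeholder; under that assumption (an SD of about 2.5 and the two study sample sizes), the 0.2-point gap on the shopping statement does test as significant at the 90% level, consistent with the bullet above.

```python
# Illustrative large-sample comparison of two mean attitude scores.
# The standard deviation is hypothetical; the slides report means only.
from math import sqrt
from statistics import NormalDist

def mean_diff_test(m1, sd1, n1, m2, sd2, n2):
    """z statistic and two-sided p-value for the difference of two means."""
    se = sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    z = (m1 - m2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Shopping statement: 6.1 (9% RR, n=1,238) vs. 5.9 (31% RR, n=1,273),
# assuming a standard deviation of roughly 2.5 on the rating scale.
z, p = mean_diff_test(6.1, 2.5, 1238, 5.9, 2.5, 1273)
print(f"z = {z:.2f}, p = {p:.3f}, significant at 90%: {p < 0.10}")
```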


Conclusions

Previous findings are corroborated – “carefully conducted polls with relatively low response rates still yield representative samples and accurate data”

Important that all other aspects of good survey design must also be present:

The set of telephone numbers is a randomly drawn, representative sample of the universe

Respondent selection at HH level is as random as possible

The data are weighted appropriately

Conclusions… cont’d

High response rates are still achievable for studies where this is an important design criterion

Fast field turnaround and high response rates are incompatible

Available time to complete the fieldwork is the main factor

More focus on the sample management process is required, e.g. call scheduling, elapsed time between attempts, etc.

Where Next?

Will repeat this test in January 2007.

Can the overall findings be replicated?

Are the few data differences found real or merely random data anomalies?

Modify the question set somewhat

Replace the attitudinal questions with questions related to public policy

Online Research

Online Surveys

Fastest growing methodology in North America

Primarily opt-in panels, but also client lists and pop-ups

Is “Response Rate” a valid term within this environment?

None of the standard criteria for true random sampling hold (unless we are doing a random sample of internet panel members)

What, then, do we use as measures of field effort and data quality?

Online Surveys … cont’d

Lots of activity around online standards and Response Rates

ISO standards in process of development

MRIA standards developed

The Response Rate Committee is working with internet providers, looking at data quality and measures of “success rate” for online surveys:

A. Total invitations (broadcast or pop-ups)
B. Undeliverables (nil for pop-ups)
C. Net usable invitations (C = A – B)
D. Total completes
E. Qualified break-offs
F. Disqualified
G. Not responded
H. Quota filled

Contact Rate = (D + E + F + H) / C
Success Rate = (D + F + H) / C
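A minimal sketch of the contact-rate and success-rate arithmetic; only the two formulas come from the slide, and the invitation counts in the example call are invented.

```python
# Minimal sketch of the online "contact rate" and "success rate" formulas.
# The counts passed in below are invented for illustration.

def online_rates(invitations, undeliverable, completes, breakoffs,
                 disqualified, quota_filled):
    net_usable = invitations - undeliverable                         # C = A - B
    contact = (completes + breakoffs + disqualified + quota_filled) / net_usable
    success = (completes + disqualified + quota_filled) / net_usable
    return contact, success

contact, success = online_rates(invitations=10_000, undeliverable=400,
                                completes=1_500, breakoffs=200,
                                disqualified=300, quota_filled=100)
print(f"Contact rate: {contact:.1%}, success rate: {success:.1%}")
# -> Contact rate: 21.9%, success rate: 19.8%
```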

Conclusions

Response Rates continue to be of concern, and efforts to at least maintain current levels of respondent cooperation are needed

However, a well-designed and managed survey with a lower response rate is unlikely to result in a different management decision than would have been made if the response rate had been higher

Cost, time and overall research objectives must all be part of the decision process