
Page 1: Comparator Selection in Observational Comparative Effectiveness Research

Prepared for: Agency for Healthcare Research and Quality (AHRQ)
www.ahrq.gov

Page 2: Outline of Material

This presentation will:
• Show how to choose concurrent, active comparators from the same source population (or justify the use of no-treatment comparisons, historical comparators, or different data sources)
• Discuss potential bias (and methods to minimize it) associated with comparator choice
• Define time 0 for all comparator groups in describing planned analyses

Page 3: Introduction

In comparative effectiveness research, the choice of comparator directly affects the clinical implications, interpretation, and validity of study results.

Treatment decisions are based on factors associated with the underlying disease and its severity, general health status or frailty, quality of life, and patient preferences. As a result, there is potential for confounding by indication or severity and for selection bias associated with different comparison groups.

Internal validity relies on defining an appropriate dose, intensity of treatment, and exposure window for comparator groups.

Page 4: Consequences of Comparator Choice (1 of 2)

Confounding arises when a risk factor for the study outcome of interest directly or indirectly affects exposure (e.g., treatment assignment).

The magnitude of potential confounding is generally expected to be smaller when the comparator:
• Has the same indication
• Has similar contraindications
• Shares the same treatment modality (e.g., tablet or capsule)

Conduct sensitivity analyses to quantify the effects of potential unmeasured confounding (see the sketch below).
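The presentation does not name a specific sensitivity analysis; one widely used option is the E-value of VanderWeele and Ding, which asks how strongly an unmeasured confounder would have to be associated with both treatment choice and outcome to fully explain away an observed risk ratio. A minimal Python sketch, using hypothetical effect estimates:

```python
import math

def e_value(rr, ci_limit=None):
    """E-value sensitivity analysis for an estimated risk ratio: the minimum
    strength of association an unmeasured confounder would need with both
    treatment choice and outcome to explain away the observed estimate.
    Pass the confidence limit closest to the null (1.0) as ci_limit."""
    def _e(r):
        r = 1.0 / r if r < 1.0 else r        # symmetric for protective effects
        return r + math.sqrt(r * (r - 1.0))

    result = {"e_value_point": _e(rr)}
    if ci_limit is not None:
        # If the interval crosses the null, no confounder is needed to reach it.
        crosses_null = (rr - 1.0) * (ci_limit - 1.0) <= 0
        result["e_value_ci"] = 1.0 if crosses_null else _e(ci_limit)
    return result

# Hypothetical example: observed RR = 1.8 for drug A vs. drug B initiators,
# lower 95% confidence limit = 1.3.
print(e_value(1.8, ci_limit=1.3))   # ≈ {'e_value_point': 3.0, 'e_value_ci': 1.92}
```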

Page 5: Consequences of Comparator Choice (2 of 2)

Exposure misclassification:
• Arises when exposure measurement differs between the exposure and comparator groups
• Is often more complex in comparative effectiveness research, since each group represents active treatment (nonuse of the exposure treatment does not imply use of the comparator treatment)
• Can differ in each group, especially if different treatment modalities are used
• Should be assessed separately for the exposure and comparison groups

Page 6: Spectrum of Possible Comparisons (1 of 3)

Alternative treatments
• The most common scenario and typically the least biased
• More clinically meaningful and methodologically valid
• Could still result in confounding by severity if not adequately controlled through design/analysis

No treatment/testing
• Absence of exposure, or absence of exposure combined with use of an unrelated treatment (active comparator)
• The choice of time 0 must be clinically appropriate in order to reduce bias

Page 7: Spectrum of Possible Comparisons (2 of 3)

Usual or standard care
• Develop a valid operational definition for the care and for the time of initiation (none, a single treatment, or a set of treatment/testing modalities)
• Real-world use must be understood for a proper definition
• Can vary across geographic regions and treatment settings or change over time; avoid a "wastebasket definition"

Historical comparison
• Used when there is a dramatic shift from one treatment to another
• May be the only choice when selection for a new treatment is strong and uncontrollable and randomization is unethical or unrealistic
• Vulnerable to confounding by indication/severity when this information is unmeasured (can be addressed by instrumental variable analysis using calendar time; see the sketch below)
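The slide mentions instrumental variable analysis using calendar time without giving details. Below is a minimal sketch, not taken from the presentation, of one simple form of that idea: treating a pre/post calendar period around the practice shift as a binary instrument and computing the Wald (ratio) estimate. Column names (`period`, `new_treatment`, `outcome`) are hypothetical.

```python
import pandas as pd

def wald_iv_estimate(df: pd.DataFrame,
                     instrument: str = "period",        # 1 = after the practice shift
                     treatment: str = "new_treatment",  # 1 = received the new treatment
                     outcome: str = "outcome") -> float:
    """Wald (ratio) instrumental variable estimate of the treatment effect,
    using calendar period as the instrument. Valid only under the usual IV
    assumptions: the period strongly predicts treatment uptake, affects the
    outcome only through treatment, and shares no unmeasured causes with it."""
    by_period = df.groupby(instrument)[[treatment, outcome]].mean()
    effect_on_outcome = by_period.loc[1, outcome] - by_period.loc[0, outcome]
    effect_on_uptake = by_period.loc[1, treatment] - by_period.loc[0, treatment]
    return effect_on_outcome / effect_on_uptake

# Hypothetical usage with one row per patient:
# df = pd.DataFrame({"period": [...], "new_treatment": [...], "outcome": [...]})
# print(wald_iv_estimate(df))
```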

Page 8: Spectrum of Possible Comparisons (3 of 3)

Comparison groups from different data sources
• Multiple data sources can be linked to enhance the validity of observational comparative effectiveness studies
• Residual confounding might occur because of incomparability of information in the exposure and comparison groups, or because observed and unobserved domains differ when the groups are sampled differently or drawn from different source populations
• Generalizability becomes an issue when the exposure and comparison groups come from different databases

Page 9: Operationalizing the Comparison Group in Comparative Effectiveness Research (1 of 2)

Indication
• Another treatment used for the same indication as the exposure treatment is typically chosen as the comparison group
• For treatments approved for multiple indications, the appropriate indication must be ensured by defining the indication and restricting the study population

Initiation
• A new-user design prevents underascertainment of early events and avoids selection bias arising from prevalent users (see the sketch below)
• Inclusion of prevalent users may be justified when outcomes are rare or occur only after long periods of use
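Neither the data layout nor the variable names below come from the presentation. This is a minimal pandas sketch, assuming a claims-like table with one row per dispensing, of how the indication restriction and the new-user design described above might be operationalized, with time 0 anchored at first initiation of either drug.

```python
import pandas as pd

WASHOUT_DAYS = 365                             # assumed washout defining "new" use
STUDY_DRUG, COMPARATOR = "drug_A", "drug_B"    # hypothetical active comparators
TARGET_INDICATIONS = {"I10"}                   # hypothetical indication codes

def build_new_user_cohort(rx: pd.DataFrame) -> pd.DataFrame:
    """One row per eligible new user, with treatment group and time 0.

    Expects one row per dispensing with (hypothetical) columns: patient_id,
    drug, dispense_date, indication_code, enrollment_start."""
    rx = rx[rx["drug"].isin([STUDY_DRUG, COMPARATOR])].copy()
    rx["dispense_date"] = pd.to_datetime(rx["dispense_date"])
    rx["enrollment_start"] = pd.to_datetime(rx["enrollment_start"])

    # Time 0 = first observed dispensing of either the study drug or the
    # comparator; the earliest row also fixes the treatment group.
    cohort = (rx.sort_values("dispense_date")
                .groupby("patient_id", as_index=False)
                .first()
                .rename(columns={"dispense_date": "time_zero", "drug": "group"}))

    # New-user restriction: require a full washout period of observation
    # before time 0 (no earlier dispensing can exist, since time 0 is the
    # first observed dispensing, but the lookback must be long enough).
    lookback_ok = (cohort["time_zero"] - cohort["enrollment_start"]
                   >= pd.Timedelta(days=WASHOUT_DAYS))

    # Indication restriction: keep patients initiating for the indication
    # under study, as recommended for multi-indication treatments.
    right_indication = cohort["indication_code"].isin(TARGET_INDICATIONS)

    return cohort[lookback_ok & right_indication]
```

The washout length, drug names, and indication codes are placeholders; in a real protocol they would come from the study question and the coding systems of the data source.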

Page 10: Operationalizing the Comparison Group in Comparative Effectiveness Research (2 of 2)

Exposure time window
• The period in which a therapeutic benefit and/or risk would plausibly occur
• Use sensitivity analyses to assess whether results are sensitive to different specifications of the exposure window(s) (see the sketch below)

Nonadherence
• May differ between the treatment and comparators
• Treatment effects should be compared at the adherence levels observed in clinical practice, rather than adjusting for the difference in adherence

Dose/intensity of drug comparison
• Assess and report dose in each group
• Make comparisons at clinically equivalent dose levels
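As an illustration of the exposure-window sensitivity analysis mentioned above (not a method prescribed by the presentation), the sketch below recomputes a crude incidence rate ratio under several candidate risk windows, reusing the hypothetical cohort columns and group labels from the previous sketch.

```python
import pandas as pd

CANDIDATE_WINDOWS_DAYS = [30, 90, 180, 365]   # assumed candidate risk windows

def rate_ratio_by_window(cohort: pd.DataFrame) -> pd.DataFrame:
    """Crude incidence rate ratio (drug_A vs. drug_B) under several
    exposure-window specifications.

    Assumes hypothetical datetime columns: time_zero, event_date (NaT if no
    event), end_of_followup, plus the group label column."""
    rows = []
    for window in CANDIDATE_WINDOWS_DAYS:
        window_end = cohort["time_zero"] + pd.Timedelta(days=window)
        # Stop follow-up at the event, end of data, or window end, whichever
        # comes first (NaT event dates are ignored by min).
        stop = pd.concat([cohort["event_date"], cohort["end_of_followup"],
                          window_end], axis=1).min(axis=1)
        person_days = (stop - cohort["time_zero"]).dt.days.clip(lower=0)
        had_event = cohort["event_date"].notna() & (cohort["event_date"] <= stop)

        grp = (pd.DataFrame({"group": cohort["group"],
                             "pt": person_days, "event": had_event})
                 .groupby("group").agg(events=("event", "sum"), pt=("pt", "sum")))
        rates = grp["events"] / grp["pt"]
        rows.append({"window_days": window,
                     "rate_ratio": rates["drug_A"] / rates["drug_B"]})
    return pd.DataFrame(rows)
```

Estimates that shift materially across windows indicate that conclusions depend on the exposure-window specification and should be reported alongside the primary analysis.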

Page 11: Considerations for Comparisons Across Different Treatment Modalities (1 of 3)

Confounding by indication or severity:
• Medications may be used for patients with milder disease, while surgery might be reserved for those with more severe disease.

Selection of healthier patients to receive more invasive treatments:
• Sicker patients are less likely to be considered for invasive procedures.
• Selection becomes more problematic in comparisons across different treatment modalities.

Page 12: Considerations for Comparisons Across Different Treatment Modalities (2 of 3)

Time from disease onset to treatment:
• Pay careful attention to the time from initial diagnosis and to the general sequence of different treatment modalities; this is needed to prevent immortal person-time bias (see the sketch below).

Different magnitude of misclassification in drug versus procedure comparisons:
• Misclassification of exposure might be greater with drugs than with devices/procedures.
• Pharmacy records do not provide information on actual intake.
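To make the immortal person-time warning concrete, here is a small hypothetical sketch (column names are assumptions, not from the presentation): if follow-up for a surgically treated group were started at diagnosis even though classification as "treated" requires surviving until surgery, the diagnosis-to-surgery interval would be immortal time credited to surgery. Anchoring each group's time 0 at its own treatment initiation avoids that.

```python
import pandas as pd

def assign_time_zero(cohort: pd.DataFrame) -> pd.DataFrame:
    """Anchor follow-up at treatment initiation for every comparison group.

    Assumes hypothetical columns: diagnosis_date, treatment_start (first
    dispensing date or procedure date), and group. Starting follow-up at
    diagnosis_date instead would attribute the interval before treatment,
    during which patients must remain alive and event-free to be classified
    as treated, to the treatment group, producing immortal person-time bias."""
    out = cohort.copy()
    out["time_zero"] = pd.to_datetime(out["treatment_start"])
    # The interval that must NOT be counted as treated follow-up:
    out["immortal_days_avoided"] = (
        out["time_zero"] - pd.to_datetime(out["diagnosis_date"])
    ).dt.days
    return out
```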

Page 13: Considerations for Comparisons Across Different Treatment Modalities (3 of 3)

Provider effects in using devices or surgeries:
• Consider the characteristics of the operating physician and the institution where the device implantation or surgery was carried out.
• Be aware of the documented direct relationship between the level of physician experience and better patient outcomes for complex procedures.

Adherence to drugs and device failure or removal:
• Requires assumptions in most data sources.
• May be appropriate to compare without adjusting, as this reflects real-world use.

Page 14: Conclusions

Understanding the impact of comparator choice on study design is important.

Selection of the comparator group should be driven primarily by a comparative effectiveness question prioritized by the stakeholder community.

An overriding consideration is the generation of evidence that can directly inform decisions about treatments, testing, or health care–delivery systems.

Some study questions may not be answerable validly because of intractable bias in observational comparative effectiveness research.

Page 15: Summary Checklist

Guidance: Choose concurrent, active comparators from the same source population (or justify use of no-treatment comparisons/historical comparators/different data sources).
Key considerations:
• Comparator choice should be driven primarily by a comparative effectiveness question prioritized by the informational needs of the stakeholder community, and secondarily as a strategy to minimize bias.

Guidance: Discuss potential bias associated with comparator choice and methods to minimize such bias, when possible.
Key considerations:
• Be sure to also describe how study design and analytic methods will be used to minimize bias.

Guidance: Define time 0 for all comparator groups in describing planned analyses.
Key considerations:
• The choice of time 0, particularly for no-treatment or usual-care comparisons, should be carefully considered in light of potential immortal time bias and prevalent-user bias.
• Employ a new-user design as the default, if possible.