
TRANSCRIPT

16 January 2014

The Truth About Local vs. Central Image Review

About ICON

• ICON plc is a global provider of outsourced development services to the pharmaceutical, biotechnology and medical device industries.

• The company specialises in the strategic development, management and analysis of programs that support clinical development - from compound selection to Phase I-IV clinical studies.

• ICON currently operates from 77 locations in 38 countries and has approximately 10,300 employees.

• Further information is available at www.iconplc.com

ICON Signature Series

• ICON Signature Series is our thought leadership program that offers expert insights into value-driven strategies for clinical development.

• The program features ICON and external experts in all aspects of clinical development and post-approval product value strategies.

• For a list of featured topics and upcoming events go to: http://www.iconplc.com/icon-views/

Agenda

• Image Review History

• Proposed New Model

• Audit Methods

• Financial Impact

A recording of this Webinar is also available

To view, click the link below:
http://www.iconplc.com/webinar/14/TheTruthaboutLocalvsCentralImageReview.wmv


Introductions

David Raunig, Ph.D.
Senior Vice President, Medical and Scientific Affairs

David Raunig has worked extensively in the statistical analysis and design of nonclinical and clinical biomarker studies. He has 15 years of experience as a research statistician in the pharmaceutical industry and directed statistical support for both preclinical and clinical imaging at Pfizer Global Research and Development, working closely with molecular imaging and pharmacometrics groups to develop novel biomarkers and to design and analyze early- to late-phase clinical trials. He was one of the first statisticians involved in FDA biomarker qualification and is a co-inventor of random sample pixel superresolution. He presently chairs the QIBA Technical Performance Metrology Working Group and is working on real-time reader-performance monitoring algorithms, imaging biomarker qualification for hemarthropathy, and performance characteristics for AD biomarkers.

Introductions

Gregory Goldmacher, M.D., Ph.D.
Senior Director, Medical & Scientific Affairs, Head of Oncology Imaging, ICON Clinical Research

Dr. Goldmacher is a radiologist by training. He leads oncology imaging for ICON, and also oversees projects in rheumatology, cardiovascular, pulmonary, CNS, and infectious disease, as well as diagnostic agent trials.

He has given numerous lectures and published papers in the academic literature on radiology and its application in clinical trials. He has developed methods for standardized imaging response assessment and has trained radiologists, oncologists, and study staff in the United States and worldwide.

He has a leading role in the development and validation of novel imaging biomarkers as a member of the Steering Committee of the Quantitative Imaging Biomarkers Alliance (QIBA), and co-chairs the QIBA Committee on Volumetric CT.

Background

• Clinton-Kessler Oncology Initiative (1996)
  – Tumor size as evidence of benefit
  – Imaging-based endpoints
  – No defined process
  – RECIST in development

Early Discussions

• Central vs. local
  – Bias, errors, fraud
  – Blinding investigators is hard

• Reader variability
  – Two heads are better than one… but need ONE answer
  – What design?
    • “Consensus” → loudest voice wins
    • “2+1” model wins out (Obuchowski 2004)

• Need performance monitoring
  – FDA request / statistician recommendations
  – Monitor for independence and lack of bias

• Cost
  – Central 2+1 teams vs. local reader
  – CRO costs: who manages sites?

• Claim: local reads identical or better
  – Meta-analysis → apparently equivalent results
    • Only 11 of 27 independent
    • Internal review of 6 studies → ~10% increased variance
  – Assumption of no local bias
    • Notable exceptions

Recent Interest in Local Evaluations

Society for Clinical Trials 2013, 21 May 2013

Audit Methods

• ODAC Meeting, July 2012
  – Audit methods: NCI and Industry
  – Cost-savings opinions offered
  – Evidence of local-to-central equivalence
  – Limits:
    • Phase IIb/III trials
    • Solid tumors
    • PFS / TTE

Proposed Audit Approach

[Flow diagram] Collect all scans centrally → collect local reads → central read of a sample of scans → compare the sample to the local read results. No bias → local results confirmed; possible bias → full independent review.
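A minimal sketch of the branch logic in this flow, in Python. The function name, inputs, and the discordance-rate trigger are illustrative assumptions; the webinar does not specify how "possible bias" is quantified:

```python
def audit_decision(local_reads, central_sample_reads, max_discordance=0.1):
    """Decide between confirming local results and a full independent review.

    local_reads / central_sample_reads: dicts mapping scan ID to an
    assessment (e.g., a RECIST category) for the centrally read sample.
    """
    n = len(central_sample_reads)
    discordant = sum(
        local_reads[scan_id] != central_read
        for scan_id, central_read in central_sample_reads.items()
    )
    if discordant / n <= max_discordance:
        return "local results confirmed"   # no bias detected
    return "full independent review"       # possible bias

# Example: 1 disagreement in a 4-scan sample exceeds a 10% trigger
local = {"s1": "PR", "s2": "SD", "s3": "PD", "s4": "SD"}
central = {"s1": "PR", "s2": "PD", "s3": "PD", "s4": "SD"}
print(audit_decision(local, central))  # full independent review
```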

Implementation

• Details of method undetermined
  – Blinded vs. unblinded
  – NCI vs. industry vs. study-specific
  – Sample size
  – Sampling: random vs. block vs. site vs. region

• No ODAC or FDA recommendation

Audit Methods

Audit Methodologies – Stated Objectives

• Primary
  – Guard against falsely declaring a therapeutic intervention as better than the comparator
  – Allow local evaluation to enhance patient information

• Secondary
  – LE = variability seen in practice
  – Elimination of central reviewer disagreement

• False objective warning!
  – An audit of local evaluations does not protect against informative censoring by the site

Local Evaluator Statistical Assumptions

• Local evaluators have results equivalent to BICR
  – 11 verified publications of successful results
  – Simulations done on studies where LE and BICR agree

• Local evaluators have more patient information than displaced central radiologists
  – Published successful trials → equal information
  – Published unsuccessful trials → not equal information

• BICR is not biased
  – LE-BICR discordance → LE is biased, not BICR

• Audit conducted by the 2+1 central read paradigm
  – Single reader → more variability → less power

Statistical Design: Hypotheses Tests

• Null hypothesis
  – LE PFS NOT better than BICR PFS
    • HR(local) = HR(central)
    • PFS(local) = PFS(central)
    • Other endpoints?
  – Accept H0 → accept LE results

• Alternative hypothesis
  – LE PFS BETTER than BICR PFS
    • HR(local) < HR(central)
    • PFS(local) > PFS(central)
    • Other endpoints?
  – Accept HA → 100% central review
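Stated compactly in notation (a plain restatement of the slide, where HR denotes the treatment-vs-comparator hazard ratio under each read):

```latex
\begin{aligned}
H_0 &: \mathrm{HR}_{\text{local}} = \mathrm{HR}_{\text{central}}
  && \text{LE PFS not better than BICR PFS; accept } H_0 \Rightarrow \text{accept LE results} \\
H_A &: \mathrm{HR}_{\text{local}} < \mathrm{HR}_{\text{central}}
  && \text{LE PFS better than BICR PFS; accept } H_A \Rightarrow 100\%\text{ central review}
\end{aligned}
```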

Audit Methodology Options

• NCI method
  – Evaluates central review HR < threshold
  – Follows a successful LE result (HR < 1.0) at end of study

• Industry method
  – Evaluates non-directional bias across treatment arms
  – May be done at interim or end of study

• Mixed reviewer
  – Local + central with central adjudication
  – Immediate blinded “audit” of all reads

• Study-specific
  – Requires FDA approval
  – Difficult?

Audit Methodology – NCI Method

• Sample size factors
  – HR dependent
    • 100% audit at HR ≈ 0.6–0.7
  – Minimum Important Difference (MID)
    • HR ≤ 1.0
    • MID ≈ 0.9 → 100% audit highly likely

• Sensitivity/specificity
  – Set by study / 95%

• Timing
  – End of trial
  – HR(local) significant (< 1.0) → audit

[Figure: audit decision rule, plotting HR(audit) with its upper 95% CI on a 0.0–1.0 scale against the MID threshold]
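A minimal sketch of the decision rule the figure shows: accept the local result only if the upper 95% confidence bound on the audit (central-read) HR stays below the MID. The variance approximation var(log HR) ≈ 4/events for a 1:1 randomized trial is a standard large-sample shortcut, not something stated in the webinar:

```python
import math
from statistics import NormalDist

def audit_upper_ci_below_mid(hr_audit, events, mid=0.9, alpha=0.05):
    """True -> local results confirmed; False -> full central review.

    Uses the large-sample approximation var(log HR) ~= 4 / events
    for a 1:1 randomized trial to build the confidence interval.
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)       # 1.96 for a 95% CI
    se_log_hr = math.sqrt(4.0 / events)
    upper = math.exp(math.log(hr_audit) + z * se_log_hr)
    return upper < mid

# Audit HR of 0.60 with 165 BICR events: upper bound ~0.81 < 0.9
print(audit_upper_ci_below_mid(0.60, events=165))  # True
```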

NCI Method Audit Size Simulation

Median Hazard Ratio   MID = 1.0   MID = 0.9   BICR Events
0.48                  66%         100%        121
0.51                  63%         100%        189
0.54                  28%         37%         357
0.73                  57%         100%        630
0.73                  100%        100%        165

From: Dodd LE, Korn EL, Freidlin B, et al. An audit strategy for time-to-event outcomes measured with error: Application to five randomized controlled trials in oncology. Clinical Trials. 2013; 10: 754-60.

Audit Methodology – Industry Method

• Sample size
  – Number of events
  – Criteria for discordance

• Timing
  – Interim or end of trial
  – Clinical cutoff
  – Caution: multiple evaluations increase the chance of a spurious finding

• Sensitivity/specificity
  – 80+ / 80+ with 80 events and a discordance cutoff of 0.1

• Conditional
  – HR extremely low → no audit
  – Caution (everolimus study)
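A minimal sketch of the differential-discordance check this method describes: compute the LE-vs-BICR discordance rate in each arm and flag possible bias if the rates differ by more than the cutoff (0.1 on the slide). The data layout and exact discordance definition here are illustrative assumptions; Amit's working-group paper, cited on the next slide, describes the actual method:

```python
def discordance_rate(le_events, bicr_events):
    """Fraction of subjects where the local evaluation and BICR disagree
    on whether a PFS event occurred (booleans, one per subject)."""
    assert len(le_events) == len(bicr_events)
    return sum(a != b for a, b in zip(le_events, bicr_events)) / len(le_events)

def flags_differential_discordance(trt_le, trt_bicr, ctl_le, ctl_bicr,
                                   cutoff=0.1):
    """Non-directional check across treatment arms: True -> possible bias."""
    diff = abs(discordance_rate(trt_le, trt_bicr)
               - discordance_rate(ctl_le, ctl_bicr))
    return diff > cutoff

# Example: 20% discordance in the treatment arm vs. 5% in control -> flagged
trt_le, trt_bicr = [True] * 20, [True] * 16 + [False] * 4
ctl_le, ctl_bicr = [True] * 20, [True] * 19 + [False] * 1
print(flags_differential_discordance(trt_le, trt_bicr, ctl_le, ctl_bicr))  # True
```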

Industry Method Audit Sample Size - Simulation

Amit O. A sample-based approach for independent review of PFS using differential discordance. PFS Independent Review Working Group; Oct 2009.

Mixed Reviewer

• Identical to central review
  – Blinded

• Sample size
  – 100% of patients
  – No sampling

• Timing
  – Continuous

• Sensitivity/specificity
  – Assisted by adjudicator selection

• Additional: no delay in results

Audit Options

• Central audit
  – Single central auditor
  – 2+1 central auditor
  – Considerations
    • Rolling audit or end of study → timing
    • 2+1 more precise for comparison (smaller confidence limits)

• Confirmation of progression
  – Protects against informative censoring
  – Protects against event loss
  – Not a part of the audit

Financial Impact

Proposed New Approach

[Flow diagram, repeated from the Proposed Audit Approach slide: collect all scans centrally → collect local reads → central read of a sample of scans (the audit) → compare to the local read results; no bias → local results confirmed, possible bias → full independent review]

Financial Implications of Audits

• Central imaging costs

• Local read costs

• Trial size impact

• Market delay risk

Central Imaging Costs

• Fixed costs
  – Startup documentation
  – System programming
  – Reader training

• Variable costs (cost → driver)
  – Project management/tech ops → duration
  – Site initiation and training → number of sites
  – Image collection and QC → number of timepoints
  – Central reads → number of timepoints

[Pie chart: breakdown of central imaging costs into Fixed, Site, Monthly, Image, Read, and Ops/Mgmt segments (32%, 24%, 17%, 15%, 6%, 5%), with variable costs incurred “per scan” or “per site/month/study”]

Historical Average Trial

• 8 recent solid tumor trials

Variable     Average
Subjects     700
Timepoints   4200
Sites        119
Duration     46 months

Central Read Costs

• Collect all scans, 30% audit (2+1 design)

• If 15% go to full read → savings of 18%
  – Optimistic!
  – 18% of $3M = $540K savings (worked through after the table below)

• For a $100M trial, that is 0.54% of the total trial cost

Variable                        Average
Subjects                        700
Timepoints                      4200
Sites                           119
Duration                        46 months
Total central imaging costs     $3.0 M
Projected savings with audit    $660 K (22%)
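A worked version of the savings arithmetic from the bullets above (the 15%/18% figures are the slide's; the rest is straightforward arithmetic):

```python
total_central_cost = 3_000_000          # $3.0 M average central imaging cost
savings_rate = 0.18                     # ~18% saved if only 15% go to full read
savings = total_central_cost * savings_rate
print(f"${savings:,.0f}")               # $540,000

trial_cost = 100_000_000                # $100 M trial
print(f"{savings / trial_cost:.2%}")    # 0.54% of total trial cost
```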

Local Read Costs

• Currently free

• If FDA wants auditable results
  – Need a local read system with an audit trail
  – Estimate: $2,000 per site
  – 119 sites → additional cost of $238,000

• Local readers might want to be paid (see the worked arithmetic after this list)
  – Note: imaging may not be performed at the PI’s facility
  – Note: some hospitals do not allow direct contracts with radiology
  – Estimate: $50 per timepoint
  – 4,200 timepoints → additional cost of $210,000

• Additional monitoring visits
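The added-cost arithmetic from the bullets above, worked through (the per-site and per-timepoint estimates are the slide's):

```python
sites, per_site_system = 119, 2_000        # audit-trail read system per site
timepoints, per_read_fee = 4_200, 50       # reader fee per timepoint
print(f"${sites * per_site_system:,}")     # $238,000 for read systems
print(f"${timepoints * per_read_fee:,}")   # $210,000 in reader fees
```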

Trial Size Costs

• Local readers
  – Large, unevenly trained group
  – IMI site survey: dedicated radiologist = 40%

• Higher variability → need more subjects to reach the endpoint

• 700-subject trial
  – Average cost per subject: ~$50K
  – Marginal cost per subject: ~$20K
  – ~10% increased variance → 70 extra subjects × $20K = $1.4M additional cost (worked through after this list)

• Additional duration
  – Recruit 10% more subjects
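The trial-size arithmetic from the bullets above, worked through:

```python
subjects = 700
extra_fraction = 0.10         # ~10% more subjects to offset increased variance
marginal_cost = 20_000        # marginal cost per added subject
extra_cost = subjects * extra_fraction * marginal_cost
print(f"${extra_cost:,.0f}")  # $1,400,000 additional cost
```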

Marketing Delay Risk

• If the audit fails → full central read
  – i.e., if the upper bound on HR > MID

• A typical central read happens throughout the trial
  – Can be essentially real time

• A post-audit read begins only after the trial is done

• Estimated market delay: 3 months
  – If HR = 0.3, risk = 15%
  – Loss depends on monthly revenue and margins (a hedged sketch follows)
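The slide stops at "depends on monthly revenue and margins"; one way to risk-weight the delay is a simple expected-loss calculation. The formula and the monthly-margin figure are illustrative assumptions, not from the webinar:

```python
p_audit_fails = 0.15          # slide's risk estimate at HR = 0.3
delay_months = 3              # slide's estimated market delay
monthly_margin = 10_000_000   # hypothetical product margin per month
expected_loss = p_audit_fails * delay_months * monthly_margin
print(f"${expected_loss:,.0f}")  # $4,500,000 expected loss
```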

Overall Financial Implications

Item           Savings / cost
Central read   $540 K savings
Local read     Free / $200 K / $400 K
Trial size     $1,400 K cost
Market delay   Unknown cost, but real risk

Conclusions

• Audits are statistically viable

• The statistical assumptions behind the need for audits are not validated

• Cost savings unlikely

• Cost increase possible

• Design requires careful consideration of statistical, imaging, and clinical trial needs

Like to know more?

Enquiries@iconplc.com

Twitter handle: @ICONplc

