
Evaluation of Prediction Models for Marketing Campaigns

Author: Saharon Rosset
Advisor: Dr. Hsu
Graduate: Lin Yan-Cheng

Abstract

Discusses model-evaluation criteria with respect to their robustness under changing population distributions

Example: Response Rate in customer retention

Agenda

Introduction

Model Evaluation
– Planning Campaigns
– Performance Measures

Prediction Model Performance
– From Sample to Population
– Confidence Intervals

Case Study

Conclusion

Opinion

Motivation

When dealing with marketing applications, the issue of evaluating prediction models is twofold:
– Evaluation has to be statistically sound
– Evaluation should assess the models’ utility from a business perspective

Objective

To discuss some applicable model-evaluation and selection criteria

Model Evaluation

Evaluate the models’ performance on an independent test set

Adjust the models’ scores to fit the full-population distribution when it is expected to differ from the sample distribution used for training and testing

Planning Campaigns

Campaign success is measured by the number of responders captured within the targeted population

This amount can be measured in two different ways:

– Lift: How much better are we doing by using our model to select the target population, relative to a random selection of the target population?

– RR: How frequently do we expect to encounter a responder when running our campaign?

Performance Measures

A, B : Total number of responders and non-responders, respectively

Aj, Bj: Total number of responders and non-responders, respectively, in the j-th top quantile.

j·(A+B) = Aj+Bj: all cases in the j-th top quantile

A/(A+B): overall response rate

Measures at Pre-Specified Cutoff Points

Response Rate
– RR(j) = Aj / (Aj + Bj)

Lift
– Lift(j) = RR(j) / (A/(A+B)) = (Aj/(Aj+Bj)) / (A/(A+B))

Response Non-Response Ratio
– RNR(j) = (Aj/A) / (Bj/B)
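
As an illustration, a minimal sketch of computing these cut-point measures from responder counts (the function name and example numbers below are assumptions, not from the original slides):

# Sketch: cut-point measures from responder / non-responder counts.
# A, B  : total responders / non-responders
# Aj, Bj: responders / non-responders in the j-th top quantile
def cut_point_measures(Aj, Bj, A, B):
    rr = Aj / (Aj + Bj)              # RR(j): response rate within the quantile
    overall_rr = A / (A + B)         # overall response rate
    lift = rr / overall_rr           # Lift(j): gain over random targeting
    rnr = (Aj / A) / (Bj / B)        # RNR(j): response / non-response ratio
    return rr, lift, rnr

# Example: top decile holds 40 of 200 responders and 160 of 9800 non-responders
print(cut_point_measures(40, 160, 200, 9800))   # roughly (0.2, 10.0, 12.25)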

Comparison of Cut-Point Measures

Prediction Model Performance

Performance measures are usually calculated on a test sample data set

These measures need to be adjusted to the full population

From Sample to Population

A, B : the number of responders and non-responders in the FP (full population), respectively.

a, b : the number of responders and non-responders in the TS (Test Set), respectively.

ai, bi: the number of responders and non-responders in percentile i in the TS

Transformation

Extrapolate each percentile pair (ai, bi) in the TS to (Ai, Bi) in the FP

Ai = ai · (A / a); similarly, Bi = bi · (B / b)

When the extrapolated (Ai, Bi) pairs do not add up to FP percentiles, TS percentiles are merged or split in order to obtain FP percentiles
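
A rough sketch of this sample-to-population transformation, assuming the TS percentile counts and the FP totals A and B are known (the names and the NumPy-based re-binning are illustrative assumptions, not the paper's exact procedure):

# Sketch: extrapolate test-set (TS) percentile counts to the full population (FP)
# using Ai = ai*(A/a) and Bi = bi*(B/b), then merge/split into FP percentiles.
import numpy as np

def extrapolate_percentiles(ts_counts, a, b, A, B, n_bins=100):
    # ts_counts: list of (ai, bi) pairs, highest-scoring TS percentile first
    Ai = np.array([ai * A / a for ai, _ in ts_counts], dtype=float)
    Bi = np.array([bi * B / b for _, bi in ts_counts], dtype=float)
    cum_cases = np.concatenate([[0.0], np.cumsum(Ai + Bi)])
    cum_resp = np.concatenate([[0.0], np.cumsum(Ai)])
    # Merging / splitting TS bins to hit FP percentile boundaries amounts to
    # linear interpolation of cumulative responders against cumulative cases.
    edges = np.linspace(0.0, A + B, n_bins + 1)
    resp_at_edges = np.interp(edges, cum_cases, cum_resp)
    fp_resp = np.diff(resp_at_edges)        # extrapolated responders per FP percentile
    fp_nonresp = np.diff(edges) - fp_resp   # extrapolated non-responders per FP percentile
    return fp_resp, fp_nonresp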

Confidence Intervals

Percentile point estimators are not sufficient for evaluating a model’s predictive ability

Confidence intervals are needed to predict a model’s performance on future data
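
For example, a simple normal-approximation (binomial) interval for a quantile’s response rate could be sketched as follows; this is an illustrative assumption, not necessarily the interval construction used in the paper:

# Sketch: 95% normal-approximation confidence interval for RR(j),
# treating the Aj responders among the Aj + Bj targeted cases as binomial.
import math

def rr_confidence_interval(Aj, Bj, z=1.96):
    n = Aj + Bj
    rr = Aj / n
    half_width = z * math.sqrt(rr * (1 - rr) / n)
    return max(0.0, rr - half_width), min(1.0, rr + half_width)

# Example: 40 responders among 200 targeted cases in the top decile
print(rr_confidence_interval(40, 160))   # roughly (0.145, 0.255)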

Case Study

Amdocs is a leading provider of CRM, Billing and Order Management solutions to the communications and IP industry worldwide

Consider a prediction model for a retention campaign, in which responders are potential churners and the overall response rate is the overall churn rate

Legacy model vs. New model

Initially, the legacy model’s RR at the top 10% was 2.75 times better than the new model’s, but the two were evaluated on different test populations: the churn rate was 4.5 times higher in the legacy model’s population
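
Assuming the 4.5 factor refers to the overall churn rate of the legacy test population relative to the new one, a back-of-the-envelope comparison on Lift (rather than raw RR) looks like this sketch, with the two ratios taken from the slide above:

# Sketch: re-compare the models on Lift instead of RR.
rr_ratio = 2.75          # legacy RR at top 10% / new RR at top 10%
churn_rate_ratio = 4.5   # legacy population churn rate / new population churn rate
lift_ratio = rr_ratio / churn_rate_ratio   # Lift_legacy / Lift_new
print(lift_ratio)        # ~0.61: on Lift, the new model actually comes out ahead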

RR vs. Lift vs. RNR

Conclusion

Discussed a few model-evaluation criteria with respect to their robustness under changing population distributions

RR is a non-robust measure; the Lift and RNR measures are recommended instead

Opinion

We need to consider the robustness of each measure in our own setting before drawing such a conclusion.
