Hi Members,

 

I built a churn prediction model. In my training dataset, and in fact in my entire population, I have about 12% churners and 88% non-churners. When I looked at the overall model accuracy it was pretty good (88%), but the prediction accuracy on the churners themselves is very low, at 27%. Given the high percentage of non-churners in my dataset, I end up with a high overall accuracy.

 

Can someone tell me whether this can be considered a good model for prediction purposes? Please also give me some input on how to interpret the classification table.

 

Thanks for your help in advance

 

best,

Hari


Replies to This Discussion

Hari, what are the specific statistics associated with the 88% and 27% prediction accuracy figures?

 

-Ralph Winters

 

Ralph, thanks for your response.

 

If I understood your question correctly:

 

88% is the overall model accuracy: the model predicted 88% of all cases correctly.

27% is the share of the actual churners that the model predicted correctly. Below is the classification table in case I am still not being clear.

 

Counts:

Actual \ Predicted      0      1   Total
0                      84      4      88
1                       8      4      12

Row percentages:

Actual \ Predicted      0      1   Total
0                     95%     5%    100%
1                     67%    33%    100%
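A minimal sketch of how the per-class figures in a table like this can be computed, assuming scikit-learn is available; the arrays below just reproduce the 100-case counts above and are not the real data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# Hypothetical labels reproducing the table (1 = churner, 0 = non-churner).
y_true = np.array([0] * 88 + [1] * 12)
y_pred = np.array([0] * 84 + [1] * 4 + [0] * 8 + [1] * 4)

cm = confusion_matrix(y_true, y_pred)        # rows = actual, columns = predicted
print(cm)                                    # [[84  4], [ 8  4]]
print(cm.diagonal() / cm.sum(axis=1))        # per-class accuracy: ~0.95 and ~0.33
print((y_true == y_pred).mean())             # overall accuracy: 0.88
print(classification_report(y_true, y_pred)) # precision and recall for each class
```

The second per-class number (the churner recall) is the figure the discussion is really about.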

 

You are only working with 100 observations?  If so, you don't have enough data to make any inferences.

 

-Ralph Winters

Ralph, I am working with about 9,000 observations... I just scaled the table to a 100-respondent basis...

 

Hari

Hari,

No - you can't call it a good model. Whether a model is good or bad depends on its application. In the domain you are talking about, we are more interested in catching a true churner than in catching a true non-churner. The "% misclassified" metric gives equal credit for catching a churner and a non-churner; in other words, it puts an equal penalty on missing a churner and missing a non-churner. Hence this metric doesn't do our job.

In such cases, you should use the ROC (Receiver Operating Characteristic) curve, which is a plot of % true positives against % false positives. Typically it works like this: say your model is a logistic model, and every case in your data gets a score from the model. Then you say: if the score is below 0.8, I predict non-churner; if it is above 0.8, I predict churner.

Now from your data you can find, if you use 0.8 as the cutoff, what % of true churners you predict correctly (true positives) and what % of true non-churners you wrongly label as churners (false positives). Call the first number the true positive rate and the second the false positive rate, and plot the point (false positive rate, true positive rate). Vary your cutoff (say from 0.05 to 0.99) and keep plotting the points. This is your ROC curve.

 

Now decide upfront (BEFORE looking at the ROC) how much % true positives you want. The ROC then tells you what your cutoff should be and how high a false positive rate you have to tolerate to get there.
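A minimal sketch of the procedure Angshu describes, using a synthetic stand-in dataset and a scikit-learn logistic regression (none of the data or names below come from Hari's actual model):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an imbalanced churn dataset (roughly 12% churners).
X, y = make_classification(n_samples=9000, weights=[0.88], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)[:, 1]

# Sweep the cutoff and record (false +ve rate, true +ve rate) at each value.
# The row whose TPR matches the % true positives you decided on upfront tells
# you which cutoff to use and how many false positives you have to tolerate.
for cutoff in np.arange(0.05, 1.00, 0.05):
    pred = (scores >= cutoff).astype(int)
    tpr = ((pred == 1) & (y_test == 1)).sum() / (y_test == 1).sum()
    fpr = ((pred == 1) & (y_test == 0)).sum() / (y_test == 0).sum()
    print(f"cutoff={cutoff:.2f}  FPR={fpr:.2f}  TPR={tpr:.2f}")

# roc_curve performs the same sweep over every distinct score in one call.
fpr, tpr, thresholds = roc_curve(y_test, scores)
```

Plotting tpr against fpr gives the ROC curve described above.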

 

Hope this LONG reply helps.

 

angshu

 

 

Hi Angshu,

 

Thanks so much for your response... this is really helpful. I am a little curious about how useful a lift curve / gains chart is for judging the robustness of the model.

 

For example, my model shows a really high area under the curve (given how well it predicts the true non-churners)... is there a way to tell from the lift curve that the model does not actually perform well?
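For context on what the gains/lift view looks like, here is one common way to build a cumulative gains table by score decile; the scores and labels below are made up for illustration and are not Hari's model output:

```python
import numpy as np
import pandas as pd

# Hypothetical churn scores and true labels (1 = churner) for 9,000 cases.
rng = np.random.default_rng(0)
y = (rng.random(9000) < 0.12).astype(int)
scores = np.clip(0.12 + 0.3 * y + rng.normal(0, 0.15, 9000), 0, 1)

# Rank cases by score, cut into deciles, and see how many churners each decile captures.
df = pd.DataFrame({"y": y, "score": scores}).sort_values("score", ascending=False)
df["decile"] = np.repeat(np.arange(1, 11), len(df) // 10)

gains = df.groupby("decile")["y"].agg(["sum", "count"])
gains["cum_pct_churners"] = gains["sum"].cumsum() / gains["sum"].sum()  # gains curve
gains["lift"] = (gains["sum"] / gains["count"]) / y.mean()              # decile lift
print(gains)
```

If each of the top deciles captures only about 10% of the churners (so the cumulative column tracks the diagonal and the lift stays near 1), the model is not separating churners from non-churners, however good the overall accuracy looks.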

 

Thanks for your help in advance...

 

Best

Hari  

Hari,

Area under the ROC may be good for comparing the overall performance of two competing models. Of course, if the area is very high or very low then it is easy to say the model is very good or very bad. But in most cases, if your model is reasonable, the area will be somewhere in the middle.

To cut a long story short, you may report that area to give some idea of performance, but as you are seeing, it suffers from the same kind of problem. So it is better to do one of the following (a short sketch of both options follows this list):

a) either fix the % true positives beforehand, then choose the model and report the false positive rate at that point, OR

b) decide beforehand what false positive rate is tolerable, then choose the model and report the true positive rate at that point.
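A small sketch of both options, reusing the same kind of synthetic stand-in data as in the earlier ROC sketch (the 0.80 and 0.10 targets are arbitrary examples, not recommendations):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an imbalanced churn dataset (roughly 12% churners).
X, y = make_classification(n_samples=9000, weights=[0.88], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)[:, 1]

print("AUC:", roc_auc_score(y_test, scores))   # one number for comparing models

fpr, tpr, thresholds = roc_curve(y_test, scores)

# (a) fix the true positive rate upfront, report the false positive rate there.
target_tpr = 0.80
i = np.argmax(tpr >= target_tpr)               # first point reaching the target TPR
print(f"(a) cutoff={thresholds[i]:.2f}  TPR={tpr[i]:.2f}  FPR={fpr[i]:.2f}")

# (b) fix the tolerable false positive rate, report the true positive rate there.
max_fpr = 0.10
j = np.searchsorted(fpr, max_fpr, side="right") - 1   # last point within the budget
print(f"(b) cutoff={thresholds[j]:.2f}  FPR={fpr[j]:.2f}  TPR={tpr[j]:.2f}")
```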

 

Regards,

Angshu

Hi Hari,

I am working on a similar churn model and am getting a high misclassification rate on churners, just like you did. How did you fix your problem? I am looking for ideas to work with.

Regards,

Lily
