A Data Science Central Community
Yes it is.
If your event rate is 10%, the predicted probabilities will cluster around 0.1, and hence the cut-off point will also be around 0.1.
If your event rate is 70%, the predicted probabilities will cluster around 0.7, and hence the cut-off point will also be around 0.7.
Hope it helps.
If you are talking about the cut-off point for the probability value, you can decide it in two ways:
1. Calculate the misclassification cost for different probability values, and choose the one with the least misclassification cost.
2. Draw a lift chart for the probability values, i.e., the number of accurately classified events per decile (in essence, the results of a KS test). The point where you get the highest separation between events and non-events is the cut-off point for your probability.
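Both approaches above can be sketched in a few lines. This is a minimal illustration on simulated scores and labels (all data, costs, and distributions here are assumed for the example, not taken from the thread):

```python
import numpy as np

# Hypothetical data: ~10% event rate, with events scoring higher on average.
rng = np.random.default_rng(0)
n = 1000
y = (rng.random(n) < 0.10).astype(int)
p = np.clip(rng.normal(0.08 + 0.25 * y, 0.10), 0.0, 1.0)

# Method 1: sweep cutoffs and pick the one with the least misclassification cost.
# Assumed costs: a missed event (false negative) hurts more than a false alarm.
cost_fn, cost_fp = 5.0, 1.0
thresholds = np.linspace(0.01, 0.99, 99)
costs = []
for t in thresholds:
    pred = (p >= t).astype(int)
    fn = np.sum((pred == 0) & (y == 1))
    fp = np.sum((pred == 1) & (y == 0))
    costs.append(cost_fn * fn + cost_fp * fp)
best_cost_cutoff = thresholds[int(np.argmin(costs))]

# Method 2 (KS-style): cutoff maximizing the separation between the event
# and non-event score distributions, i.e. TPR - FPR at each threshold.
sep = [np.mean(p[y == 1] >= t) - np.mean(p[y == 0] >= t) for t in thresholds]
best_ks_cutoff = thresholds[int(np.argmax(sep))]

print(best_cost_cutoff, best_ks_cutoff)
```

With a low event rate and asymmetric costs, both cutoffs typically land well below 0.5, which is the point the replies below make as well.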
Hope this helps
If your event rate is around 17% and you say that at a 50% cutoff you're getting a very good classification, something is fishy! How can a logistic model trained on only 17% events classify better than the information the dataset actually contains?
Unless your measure of accuracy of fit is something other than misclassification! Remember, the model usually fits the remaining 83% well, so misclassification there will be low compared to the 17%. But I'm unsure how you're getting a more accurate result at a 50% cutoff in terms of misclassification, since a decrease on one side will increase it on the other.
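A quick back-of-the-envelope calculation shows why raw accuracy misleads at a 17% event rate (the numbers below are a toy example, not from the original question):

```python
# At a 17% event rate, a classifier that never predicts the event is
# already 83% "accurate" -- so a high accuracy at a 50% cutoff may just
# mean very few predicted probabilities exceed 0.5 at all.
n, event_rate = 1000, 0.17
events = int(n * event_rate)        # 170 events
non_events = n - events             # 830 non-events

accuracy_all_negative = non_events / n
print(accuracy_all_negative)  # → 0.83
```

This is why the replies emphasize misclassification cost and separation measures over plain accuracy.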
The best way to find the cutoff is by plotting it for different values, as already suggested, but it usually turns out to be around the event rate! In cases where you fit multiple logistic models for homogeneous segments, you can generally lift the cutoff point; otherwise not, in my experience!
Would be interesting to know what you find out...
I would assume that Hari has balanced the data set to a near 50% event rate and has not accounted for the balanced sampling in his question.
If the a priori probability of the event is 17% in the original sample and the balanced sampling yields a near/exact 50% probability, then a 50% cutoff will minimize misclassification.
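If that is the case, the predicted probabilities from the balanced sample can be mapped back to the original prior with a standard odds correction. A minimal sketch, assuming a 17% true event rate and a 50-50 training sample (the rates and function name are illustrative):

```python
# Prior correction for probabilities estimated on a balanced sample:
# rescale the model's odds by the ratio of true prior odds to sample odds.
pi = 0.17   # assumed true event rate in the population
rho = 0.50  # assumed event rate in the balanced training sample

def correct_prior(p_balanced, pi=pi, rho=rho):
    """Map a probability from the balanced sample back to the true prior."""
    odds = p_balanced / (1 - p_balanced)
    odds_true = odds * (pi / (1 - pi)) / (rho / (1 - rho))
    return odds_true / (1 + odds_true)

# A 50% score under balanced sampling maps back to exactly the 17% prior,
# which is consistent with a 50% cutoff on the balanced data behaving like
# an event-rate cutoff on the original data.
print(round(correct_prior(0.5), 3))  # → 0.17
```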
The key point is balancing the prediction of true positives against the false positives that come with them, so use an ROC curve.
Another method is to use a cost/revenue function for true positives vs. false positives, so that a loss/profit measure does the balancing of true positives against false positives.
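The cost/revenue approach can be sketched as a threshold sweep that maximizes profit instead of minimizing misclassification. Everything here is assumed for illustration: the simulated scores, the 17% event rate, and the per-outcome dollar values:

```python
import numpy as np

# Hypothetical data: ~17% event rate, events score higher on average.
rng = np.random.default_rng(42)
y = (rng.random(2000) < 0.17).astype(int)
p = np.clip(rng.normal(0.12 + 0.30 * y, 0.12), 0.0, 1.0)

# Assumed business values: revenue per true positive, cost per false positive.
rev_tp, cost_fp = 10.0, 2.0

# Sweep thresholds and keep the one with the highest expected profit.
thresholds = np.linspace(0.01, 0.99, 99)
profits = [
    rev_tp * np.sum((p >= t) & (y == 1)) - cost_fp * np.sum((p >= t) & (y == 0))
    for t in thresholds
]
best_profit_cutoff = thresholds[int(np.argmax(profits))]
print(best_profit_cutoff)
```

Unlike a pure misclassification criterion, the chosen cutoff here moves with the assumed payoff ratio: the cheaper a false positive is relative to a captured event, the lower the optimal threshold.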