Logistic Regression - Hosmer Lemeshow test (AnalyticBridge forum thread, newest replies first)

Nur Andi (2012-03-09):
<p>I'm also confused about this test for binary logistic regression and had a similar problem to Hariharan's. Thank you for the advice; the significant H-L statistic in my model was resolved by following the suggestions in this thread.</p>
Jane Hu (2011-07-28):
<p>I have the same issue with this statistic. I think it is a sample size issue.</p>
<p>I used the VIF option under PROC REG to make sure the variables entered into the logistic model are not highly correlated, so correlation is not the issue.</p>
<p>My logistic model also has a very high KS value.</p>
<p>I suspect it is a binning issue when dealing with a very small group of responders. A low response rate also tends to generate logistic models with a high KS. The H-L test is essentially a chi-square test performed on 10 bins of predicted probability.</p>
<p>When responders are very thin, a single responder falling into a different bin can change the outcome of this chi-square test a lot.</p>
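The computation described above — a chi-square over roughly equal-size groups of predicted probability — can be sketched in a few lines. This is a minimal NumPy illustration of the statistic itself, not the exact procedure a SAS package runs; the function name and test data are mine:

```python
import numpy as np

def hosmer_lemeshow(y_true, p_hat, n_bins=10):
    """Hosmer-Lemeshow statistic: a chi-square over groups (deciles by
    default) of predicted probability. Compare against a chi-square
    distribution with n_bins - 2 degrees of freedom."""
    order = np.argsort(p_hat)
    y = np.asarray(y_true, dtype=float)[order]
    p = np.asarray(p_hat, dtype=float)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(p)), n_bins):
        obs = y[idx].sum()          # observed events in the group
        exp = p[idx].sum()          # expected events = sum of probabilities
        n = len(idx)
        # (O - E)^2 / (N * pbar * (1 - pbar)), written with pbar = E / N
        stat += (obs - exp) ** 2 / (exp * (1.0 - exp / n))
    return stat
```

With 10 bins the reference distribution is chi-square with 8 degrees of freedom (5% critical value about 15.51). When events are sparse, a single event landing on the other side of a bin boundary shifts both an observed and an expected count, which is exactly the fragility described above.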
paul d (2010-10-12):
Hi, when evaluating predictions, look at the initial breakdown in the data. You can get a good overall hit rate (I use 80% as a simple rule of thumb), but also ask what your sensitivity and specificity were. In other words, does your model classify both conditions you are modelling (outcome A and outcome B) well? Having a high percentage of cases in one group, and getting those classified correctly, can make your overall hit rate very misleading.<br />
<br />
I would check your residuals (the difference between your expected probability and the observed outcome), see which cases you are misclassifying and which ones you are misclassifying really badly, and perhaps then try to profile them.<br />
<br />
Also, remember that statistical significance is boosted by sample size (power): if you have a lot of cases, your predictors can come out significant. I would check the odds ratios to see how much the presence of each predictor changes the odds of falling into one group or the other.<br />
<br />
The only advantage I see in model-fit statistics like this one is for comparing different models.<br />
<br />
PS - don't use backward or forward elimination or stepwise procedures except as a last resort; try instead to compile a rationale for including or excluding each variable. Also see if you can find some freeware to run tree algorithms on your data and compare models (logistic regression is not the be-all and end-all of modelling). Rattle in R is a nice GUI you could look at: it removes the programming learning curve and offers logistic regression, boosting, random forests, SVMs, etc., which makes comparing classification rates easy.
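The point about a misleading overall hit rate is easy to demonstrate with made-up numbers (illustrative, not from the thread): with 95% of cases in one class, a model that never predicts the minority class still scores 95% accuracy while its sensitivity is zero.

```python
import numpy as np

def confusion_rates(y_true, y_pred):
    """Overall accuracy plus per-class rates from a 2x2 confusion matrix."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / y_true.size
    sensitivity = tp / (tp + fn)    # how well events (outcome A) are caught
    specificity = tn / (tn + fp)    # how well non-events (outcome B) are caught
    return accuracy, sensitivity, specificity

# 95 non-events and 5 events; a "model" that always predicts non-event
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.zeros(100, dtype=int)
acc, sens, spec = confusion_rates(y_true, y_pred)
# acc = 0.95 (a "good" hit rate), but sens = 0.0: no event is ever caught
```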
Ralph Winters (2010-09-29):
Also check your residuals to see if they are random. Your model may be missing something.
MANISH NEGI (2010-09-28):
Hi,<br />
I recommend that you read the following web page on logistic regression assumptions; it is written for SPSS, but it will certainly be useful to you:<br />
<a href="http://faculty.chass.ncsu.edu/garson/PA765/logistic.htm" target="_blank">http://faculty.chass.ncsu.edu/garson/PA765/logistic.htm</a>
Hariharan Sunder (2010-09-28):
What would be a good method to detect multicollinearity and correlation among variables? I have lots of categorical variables and a few continuous variables.<br />
I can use the PROC REG VIF option, but I would need to recode the categorical variables into dummy variables first.<br />
Can anybody suggest a better way to detect correlation and multicollinearity?<br />
<br />
P.S. I have access to only Base SAS.
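For reference, the VIF that PROC REG reports is 1 / (1 − R²) from regressing each predictor on all the others, so the dummy-coding route the question describes (dropping one level per categorical so the design matrix is not perfectly collinear) is the standard answer. A minimal NumPy sketch of the computation, with illustrative data:

```python
import numpy as np

def vif(X):
    """VIF for each column of X: 1 / (1 - R^2) from regressing that column
    on all the others (with an intercept). Values above ~10 are the usual
    red flag for multicollinearity."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    result = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        result.append(1.0 / (1.0 - r2))
    return result
```

Feeding the dummy-coded design to PROC REG with the VIF option yields essentially this computation; a nearly redundant column shows up with a very large VIF, while an independent column sits near 1.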
Purnendu Maji (2010-09-28):
Hariharan,<br />
Before using stepwise regression to eliminate independent variables, you should first remove the multicollinearity effect. Multicollinearity makes the coefficient estimates unstable and distorts their p-values, so remove it first to get better performance; after that you can proceed with standardizing the data.<br />
If possible, remove serial correlation as well.<br />
<br />
Please also put some emphasis on the H-L statistic.
Hariharan Sunder (2010-09-28):
Hi Tom,<br />
<br />
I used a stepwise regression procedure to eliminate independent variables, but most of my independent variables are significant (p<0.001), so I eliminated only a few insignificant ones.<br />
Binning: I binned the data based on a preliminary univariate analysis. I binned only the demographic data and kept the continuous variables as they are. Should I try changing my binning?<br />
<br />
Also, Age values are missing for almost 30% of my data, so I created a separate group called Unspecified. Is this the correct way to handle it?
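An "Unspecified" group is one common way to keep rows with missing Age in the model: missingness becomes its own category rather than forcing 30% of the data to be dropped. A small NumPy sketch of the encoding (the cut points and values here are illustrative, not from the thread):

```python
import numpy as np

ages = np.array([34.0, np.nan, 52.0, np.nan, 27.0, 61.0])

edges = [30, 45]  # hypothetical cut points: <30, 30-45, 45+
labels = ["age_lt30", "age_30_45", "age_45plus", "age_unspecified"]

codes = np.digitize(ages, edges)   # 0, 1, 2 for observed ages (NaN sorts last)
codes[np.isnan(ages)] = 3          # route missing values to their own bin

# One-hot design columns, one per label, for the logistic model
dummies = np.eye(len(labels), dtype=int)[codes]
```

In the model one of the four columns is dropped as the reference level. Whether the Unspecified group is adequate depends on why Age is missing: if missingness is related to the outcome, its coefficient absorbs that pattern, which is often acceptable for scoring but biases any substantive interpretation of the Age effect.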