A Data Science Central Community
This list is based on my opinion; you are welcome to discuss. Note that most of these techniques have evolved over time (in the last 10 years) to the point where most drawbacks have been eliminated, making the updated tools far different from, and better than, their original versions. Yet these flawed techniques are still widely used in their original form.
And remember to use sound cross-validation techniques when testing models!
Additional comments:
The reasons why such poor models are still widely used are:
In addition, poor cross-validation allows bad models to make the cut by over-estimating the true lift, accuracy, or ROI to be expected on future data outside the training set. Good cross-validation consists in
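As a toy sketch of the out-of-fold idea (illustrative only; the data, model, and fold count below are made up, not the author's specific recipe), a model should be fit on every fold but one and scored only on the held-out points, so the reported error reflects future data rather than the training set:

```python
import random

def kfold_indices(n, k, seed=0):
    """Shuffle the indices 0..n-1 and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_mse(xs, ys, k=5):
    """Out-of-fold mean squared error for a least-squares line y = a*x + b.

    Each fold is held out once; the line is fit on the remaining folds
    and scored only on the held-out points it never saw during fitting.
    """
    folds = kfold_indices(len(xs), k)
    errors = []
    for held_out in folds:
        train = [i for i in range(len(xs)) if i not in held_out]
        # Fit slope and intercept by ordinary least squares on the training folds.
        n = len(train)
        mx = sum(xs[i] for i in train) / n
        my = sum(ys[i] for i in train) / n
        sxx = sum((xs[i] - mx) ** 2 for i in train)
        sxy = sum((xs[i] - mx) * (ys[i] - my) for i in train)
        a = sxy / sxx
        b = my - a * mx
        # Score on the held-out fold only.
        errors.extend((ys[i] - (a * xs[i] + b)) ** 2 for i in held_out)
    return sum(errors) / len(errors)

# Noisy linear data: y = 2x + 1 + Gaussian noise (sigma = 0.5).
rng = random.Random(42)
xs = [i / 10 for i in range(100)]
ys = [2 * x + 1 + rng.gauss(0, 0.5) for x in xs]
mse = cross_val_mse(xs, ys, k=5)
```

The out-of-fold MSE lands close to the noise variance here; a score computed on the training points themselves would look optimistically lower, which is exactly the over-estimation of lift the paragraph above warns about.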
Conclusion
I described the drawbacks of popular predictive modeling techniques that are used by many practitioners. While these techniques work in particular contexts, they've been applied carelessly to everything, like magic recipes, with disastrous consequences. More robust techniques are described here.
Comments
I understand some of the issues with linear regression, but I've never encountered it having problems with overfitting (unless you are talking about some step-wise variable selection, throwing everything in and seeing which beta sticks); certainly not relative to techniques like trees.
Usually a relatively simple linear regression is going to be one of the models least prone to overfitting, although probably not the most accurate.
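The commenter's point can be illustrated with a toy simulation (hypothetical data, not from the article): on noisy linear data, a 1-nearest-neighbour regressor, which, like a deep tree, can memorise the training set, achieves zero training error but a clearly worse test error than a simple least-squares line:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def nn1_predict(train_x, train_y, x):
    """1-nearest-neighbour: return the label of the closest training point."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

# Noisy linear data: y = 3x - 1 + Gaussian noise (sigma = 1).
rng = random.Random(0)
xs = [i / 20 for i in range(200)]
ys = [3 * x - 1 + rng.gauss(0, 1.0) for x in xs]
train_x, train_y = xs[::2], ys[::2]   # even-index points for training
test_x, test_y = xs[1::2], ys[1::2]   # odd-index points for testing

a, b = fit_line(train_x, train_y)
mse_line = sum((y - (a * x + b)) ** 2
               for x, y in zip(test_x, test_y)) / len(test_x)
mse_nn = sum((y - nn1_predict(train_x, train_y, x)) ** 2
             for x, y in zip(test_x, test_y)) / len(test_x)
# The 1-NN model fits the training data perfectly (each training point is
# its own nearest neighbour), yet its test error is typically about double
# the linear model's: it has memorised the noise.
```

The linear fit cannot chase individual noisy points, which is why, as the comment says, it is among the models least prone to overfitting, even if it is not always the most accurate.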
Myles, these tools have been widely abused. There are much better and simpler tools, more robust and scalable, model-independent, suitable for black-box predictions and bad data, that can be used and understood by non-statisticians without causing problems. Many will be discussed in my upcoming self-published book Data Science 2.0. Just like there are driver-less cars that cause far fewer accidents than the traditional cars we've been driving for decades.
Vincent- I would like your opinion on the rest of the post, not just a comment on the reviews part.
So do we throw these tools out and stop teaching them? (And my review of Kuhn's book is positive. I guess yours is not. But I was only using one book as an example.)
If these 8 tools are as bad as you say- should they be tossed from either teaching, learning or using?
Myles, reviews don't mean anything; most are fake. And if you have truly new, original content, as an author, it scares publishers and you won't get published. That's why the same material gets re-published ad nauseam; it does not mean it still has value.
Vincent- so most of us agree these have their downsides and are used inappropriately. But you also mention outdated textbooks. Are you suggesting we don't even teach these anymore? Also, how do you define an outdated textbook? Your list above is basically the table of contents of Max Kuhn's new Applied Predictive Modeling book, which uses R. That has received pretty good reviews. Just curious on when we throw the baby out with the bathwater?
Vincent - you presented a nice, quick summary of the potential drawbacks of these techniques. However, you essentially claim that these techniques are used out of ignorance and due to a lag in awareness, which is often untrue, and may be a misleading statement. In some cases, some of these techniques are the best tool for the task.
For instance, in psychology, artificial neural networks are often utilized because they provide a rough analogy to biological neural networks. Their supposed "deficiencies" are actually useful traits in some cases - for example, overfitting can be taken advantage of for modeling experimental data because it sometimes matches what subgroups of participants tend to do on some tasks: you can go from modeling one group of study participants to another by tweaking parameters so as to encourage overfitting. The supposed instability of neural networks can be similarly used to one's advantage, or it can be mitigated by tweaking model parameters, or by averaging out a bunch of results.
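The "averaging out a bunch of results" remedy for instability can be shown with a toy simulation (hypothetical numbers, not from any study): if several unstable runs each estimate the same quantity with independent run-to-run noise, the averaged estimate is never worse, on average, than a typical single run:

```python
import random

rng = random.Random(7)
true_value = 0.8   # hypothetical quantity the unstable models try to estimate

# Simulate 20 independent runs of an unstable model (e.g. neural nets with
# different random initialisations): each run's estimate is the true value
# plus run-to-run noise.
runs = [true_value + rng.gauss(0, 0.3) for _ in range(20)]

avg_estimate = sum(runs) / len(runs)
err_avg = (avg_estimate - true_value) ** 2
err_individual = sum((r - true_value) ** 2 for r in runs) / len(runs)
# The squared error of the averaged estimate is always at most the average
# squared error of the individual runs (the gap is the variance across runs),
# so averaging strictly helps whenever the runs disagree.
```

This is the same variance-reduction argument behind ensembling: averaging cancels the run-to-run noise while preserving the shared signal.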
Thus, the above article, while presenting some valid points, also presents a narrow perspective, and is consequently overly dismissive and misleading.
There's no miracle cure. My solution is to
So, Vincent, I respect your opinion, if only because you have so many that there must be some that are right, eh? Only joking, but just a little bit. (I am a very opinionated person too.) I echo the other commenter who asked you which approaches you favor. I am new to the data mining game (was taught, like many economists, that it was a no-no), so am particularly interested in what you might say.
Thanks
Bill Luker
Among these 8 worst techniques are the 5 BEST techniques (including linear regression). Why? They have stood the test of time, and even today they can be used to solve most of the world's statistical problems. If you take the time to learn and master 3-4 of these techniques, you are on your way to understanding the pitfalls that Vincent describes and can overcome them. What makes them the worst is not the techniques themselves, but how they are employed.
-Ralph Winters
© 2021 TechTarget, Inc.