Learn how each ML classifier works: decision boundary vs. assumed true boundary

In the latest post on my own blog, I discussed how to learn visually how each machine learning classifier works. The idea is simple: first I prepare training samples and show their assumed true boundary; then I have the trained classifier predict over a dense grid covering the whole feature space, used as a test dataset; finally I compare the estimated decision boundary with the assumed one.

In the case below, the assumed true boundary is a set of three parallel lines; I think almost everyone would guess that intuitively just by looking at the samples. The important question is whether a given machine learning classifier actually recovers it.
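The original post does not include its sample-generation code, so the following is only a minimal sketch of one way to build such data in Python with NumPy; the slope, line offsets, number of classes, and sample size are my own assumptions. Three parallel lines of slope 1 cut the plane into four stripes, and each stripe becomes a class.

```python
# Sketch: synthetic samples whose assumed true boundary is three parallel lines.
# The exact layout in the original post is unknown; everything below is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_stripes(n=400):
    """Sample 2-D points and label each by which of the four stripes
    cut by the lines x2 = x1 - 2, x2 = x1, and x2 = x1 + 2 it falls in."""
    X = rng.uniform(-4, 4, size=(n, 2))
    offsets = np.array([-2.0, 0.0, 2.0])          # the three parallel lines
    # Class label = how many of the three lines lie below the point (0..3).
    y = ((X[:, 1] - X[:, 0])[:, None] > offsets).sum(axis=1)
    return X, y

X, y = make_stripes()
```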

For example, when a multinomial logit model (a linear classifier) is trained on the samples below, the decision boundary it produces over a grid dataset covering the whole space looks almost the same as the assumed boundary.
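As a hedged illustration of the linear case, the sketch below fits scikit-learn's multinomial logistic regression to such samples and predicts over a dense grid; the hyperparameters, grid resolution, and plotting choices are assumptions, not the author's original code.

```python
# Sketch: multinomial logit evaluated on a dense grid over the feature space.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

# Same kind of synthetic samples as in the sketch above.
rng = np.random.default_rng(0)
X = rng.uniform(-4, 4, size=(400, 2))
y = ((X[:, 1] - X[:, 0])[:, None] > np.array([-2.0, 0.0, 2.0])).sum(axis=1)

# With the default lbfgs solver, multi-class fits are multinomial (softmax).
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Dense grid covering the whole space, used as the test dataset.
xx, yy = np.meshgrid(np.linspace(-4, 4, 300), np.linspace(-4, 4, 300))
zz = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, zz, alpha=0.3)               # estimated decision regions
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k", s=20)
for c in (-2.0, 0.0, 2.0):                        # assumed true boundary
    plt.plot([-4, 4], [-4 + c, 4 + c], "k--")
plt.title("Multinomial logit: decision regions vs. assumed boundary")
plt.show()
```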

On the other hand, when an RBF kernel SVM (a typical non-linear classifier) is trained on the same samples, its decision boundary looks quite strange, as shown below, and hardly follows the assumed boundary at all.
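For comparison, here is a sketch of the non-linear case with scikit-learn's RBF kernel SVM on the same kind of samples and the same dense grid; the default C and gamma are used because the original post's settings are unknown.

```python
# Sketch: RBF-kernel SVM on the same samples and the same dense grid.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC

# Same kind of synthetic samples as in the sketch above.
rng = np.random.default_rng(0)
X = rng.uniform(-4, 4, size=(400, 2))
y = ((X[:, 1] - X[:, 0])[:, None] > np.array([-2.0, 0.0, 2.0])).sum(axis=1)

clf = SVC(kernel="rbf").fit(X, y)                 # non-linear classifier

xx, yy = np.meshgrid(np.linspace(-4, 4, 300), np.linspace(-4, 4, 300))
zz = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, zz, alpha=0.3)               # curved decision regions
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k", s=20)
for c in (-2.0, 0.0, 2.0):                        # assumed true boundary
    plt.plot([-4, 4], [-4 + c, 4 + c], "k--")
plt.title("RBF kernel SVM: decision regions vs. assumed boundary")
plt.show()
```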

I'll explore this issue in a series of posts covering various machine learning classifiers, e.g. decision trees, logistic regression, SVM, a neural network with one hidden layer, random forest, AdaBoost, and so on.

Click here for details
