A Data Science Central Community

Below is the latest post on my blog (and the first in 10 months...).

*What kind of decision boundaries does Deep Learning (Deep Belief Net) draw? Practice with R and {h2o} package*

I once wrote a post about the relationship between the characteristics of machine learning classifiers and the decision boundaries they draw on the same dataset. The results were quite interesting, and many people seemed to enjoy the post and even debated it.

I've been looking for similar experiments with Deep Learning, but so far I haven't found any. I hope this post helps you understand how Deep Learning behaves under various parameter settings.

**Here's the article**:

For a while (at least the several months since many people began implementing it in Python with Theano, PyLearn2, or similar libraries), I had nearly given up on practicing Deep Learning in R, and I felt I was being left further and further behind the cutting edge...

But now we (not only I!) have a great masterpiece: {h2o}, an R interface to the H2O framework. I believe {h2o} is the easiest way to apply Deep Learning to our own datasets, because we barely have to write any code; we only specify some parameters. In other words, {h2o} frees us from complicated scripts, so we can focus on the underlying essence and theory.

Using {h2o} in R, in principle we can implement a "Deep Belief Net", the original form of Deep Learning*1. I know it's no longer the state of the art, but it is still helpful for understanding how Deep Learning behaves on real datasets. If you have read this blog before, please recall the earlier post arguing that decision boundaries reveal how each classifier behaves in terms of overfitting and generalization. :)

It is simple to tell which models overfit and which generalize well on the given dataset, which is generated from 4 fixed 2D normal distributions. My criteria are: 1) if the decision boundaries look smooth, the model generalizes well; 2) if they look overly complicated, the model is overfitting — because the true underlying distributions can be cleanly separated into 4 quadrants by 2 perpendicular axes.
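To make the setup concrete, here is a minimal sketch of how such a dataset could be simulated in R. This is my own illustrative reconstruction, not the exact script behind the GitHub datasets: the cluster means, standard deviation, and sample size are all assumptions.

```r
# Illustrative sketch: 4 clusters drawn from fixed 2D normal distributions,
# one per quadrant, labeled in an XOR pattern so that the true decision
# boundary is exactly the two perpendicular axes.
set.seed(71)
n <- 100
d <- rbind(
  data.frame(x = rnorm(n,  1, 0.5), y = rnorm(n,  1, 0.5), label = 0),  # Q1
  data.frame(x = rnorm(n, -1, 0.5), y = rnorm(n,  1, 0.5), label = 1),  # Q2
  data.frame(x = rnorm(n, -1, 0.5), y = rnorm(n, -1, 0.5), label = 0),  # Q3
  data.frame(x = rnorm(n,  1, 0.5), y = rnorm(n, -1, 0.5), label = 1)   # Q4
)
d$label <- as.factor(d$label)
plot(d$x, d$y, col = d$label, pch = 19)  # visualize the 4 clusters by class
```

Opposite quadrants share a class, so any smooth boundary close to the axes indicates good generalization, while a contorted boundary snaking between individual points indicates overfitting.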

OK, let's run the same trial with Deep Learning via {h2o} in R, to see how it behaves on the given dataset.
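The basic {h2o} workflow can be sketched as below. The function names (`h2o.init`, `h2o.importFile`, `h2o.deeplearning`, `h2o.predict`, `as.h2o`) come from the h2o R package, but argument names have changed across versions (older releases used `data` rather than `training_frame`), and the file name, column names, and parameter values here are illustrative assumptions:

```r
# Hedged sketch of the {h2o} Deep Learning workflow; parameter values
# (activation, hidden layers, epochs) are illustrative, not the post's.
library(h2o)
h2o.init()                                   # start a local H2O instance

train <- h2o.importFile("xor_simple.txt")    # hypothetical dataset file
train$label <- as.factor(train$label)        # classification, not regression

model <- h2o.deeplearning(
  x = c("x", "y"), y = "label",
  training_frame = train,
  activation = "Tanh",                       # also try "Rectifier", "Maxout"
  hidden = c(10, 10),                        # 2 hidden layers of 10 units
  epochs = 100
)

# Predict over a dense grid of points to draw the decision boundary
grid <- expand.grid(x = seq(-4, 4, 0.1), y = seq(-4, 4, 0.1))
pred <- h2o.predict(model, as.h2o(grid))
```

Varying `hidden` and `activation` while redrawing the boundary from the grid predictions is what lets us compare smooth (well-generalized) against overly complicated (overfit) boundaries.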

**Datasets**

Please get the 3 datasets from my repository on GitHub: a simple XOR pattern, a complex XOR pattern, and a grid dataset.

[...]
