
Deep Belief Net with {h2o} on MNIST and its Kaggle competition

In order to evaluate how the Deep Belief Net (deep learning) implementation in {h2o} works on real datasets, I applied it to the MNIST dataset. I obtained the data from a Kaggle competition on MNIST, so I ended up joining the competition as well. :P)

As is well known, classification tasks such as MNIST are better handled by a Convolutional NN (ConvNet) than by a Deep Belief Net, but I think this challenge was fruitful and helped me understand how the Deep Belief Net in {h2o} works and how accurately it can identify 2D images.

Through a lot of trial and error, I reached a conclusion: parameter tuning is everything. Just as more units in a hidden layer don't always improve classification performance in a conventional 3-layer NN, more hidden layers in a Deep Belief Net don't always improve performance. The lesson here is that we have to optimize the set of parameters even for deep learning. (About ConvNets, I'm not sure.)
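The tuning lesson above can be sketched in code. This is not the {h2o} setup I used (that was h2o's own deep learning function); it is a minimal illustrative Python example using scikit-learn's MLPClassifier on the small built-in digits dataset, assuming scikit-learn is available. It simply compares one, two, and three hidden layers to show that depth alone is just another parameter to tune, not a guaranteed improvement.

```python
# Illustrative sketch (not the original {h2o} code): compare nets of
# different depth on the small 8x8 digits dataset and report accuracy.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
# Scale pixel values to [0, 1] and hold out a test set.
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.3, random_state=0)

# One, two, and three hidden layers of 64 units each.
scores = {}
for layers in [(64,), (64, 64), (64, 64, 64)]:
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=1000,
                        random_state=0)
    clf.fit(X_train, y_train)
    scores[layers] = clf.score(X_test, y_test)

# The deepest net is not necessarily on top -- tune, don't just stack.
for layers, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(len(layers), "hidden layer(s):", round(acc, 3))
```

The same grid-search idea applies to {h2o}'s other parameters (units per layer, epochs, regularization): treat depth as one axis of the search rather than assuming deeper is better.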

For more details, click here


