To evaluate how the Deep Belief Net (Deep Learning) implementation in {h2o} works on real datasets, I applied it to the MNIST dataset; since I took the data from a Kaggle competition on MNIST, I ended up joining the competition too. :P
As is well known, image classification tasks such as MNIST are better handled by a Convolutional NN (ConvNet) than by a Deep Belief Net, but I think this challenge was fruitful and helped me understand how the Deep Belief Net in {h2o} works and how accurately it can identify 2D images.
Through a lot of trial and error, I reached a conclusion: parameter tuning is everything. Just as adding more units to the hidden layer doesn't always improve classification performance in a conventional 3-layer NN, adding more hidden layers to a Deep Belief Net doesn't always improve performance. The lesson is that we have to optimize the whole set of parameters even for Deep Learning. (Whether the same holds for ConvNets, I'm not sure.)
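To make the tuning concrete, here is a minimal sketch of how such a network can be trained with {h2o} in R. The file name train.csv, the label column, and the hidden-layer sizes are assumptions based on the Kaggle MNIST layout for illustration, not the settings used in my experiments.

```r
library(h2o)

# Start a local H2O cluster and load the Kaggle MNIST training CSV
# (assumed layout: a "label" column plus 784 pixel columns).
h2o.init(nthreads = -1)
train <- h2o.importFile("train.csv")

# The response must be a factor for classification.
train[, "label"] <- as.factor(train[, "label"])

# `hidden` controls the number and size of hidden layers -- the main
# parameters this post argues must be tuned; the values below are
# placeholders, and more layers/units are not always better.
model <- h2o.deeplearning(
  x = setdiff(colnames(train), "label"),
  y = "label",
  training_frame = train,
  activation = "RectifierWithDropout",
  hidden = c(200, 200),
  epochs = 10
)

# Inspect training performance before touching the Kaggle test set.
h2o.confusionMatrix(model)
```

In practice the `hidden`, `activation`, and `epochs` arguments (along with regularization settings) are the knobs one would sweep over when tuning.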
For more details, click here.