
I found this method by a Netflix Prize participant on using "Aggressive Parametrization". It seems like he is using an optimization technique to adapt the learning rate between iterations of a gradient descent algorithm. Any ideas on methods for doing this step-size optimization?

(Note: the method is described in a PowerPoint presentation on that website.)
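One simple way to adapt the step size between iterations is the "bold driver" heuristic: grow the learning rate after a step that reduces the loss, and shrink it (rejecting the step) after one that doesn't. This is only an illustrative sketch on a least-squares problem; the actual scheme in the linked presentation may differ, and all function names and hyperparameters here are my own assumptions.

```python
import numpy as np

def loss(w, X, y):
    """Mean squared error of a linear model."""
    return np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    """Gradient of the MSE loss with respect to w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def bold_driver_gd(X, y, lr=0.1, grow=1.05, shrink=0.5, steps=200):
    """Gradient descent with a 'bold driver' adaptive step size (illustrative)."""
    w = np.zeros(X.shape[1])
    prev = loss(w, X, y)
    for _ in range(steps):
        w_new = w - lr * grad(w, X, y)
        cur = loss(w_new, X, y)
        if cur < prev:
            # Step helped: accept it and grow the learning rate slightly.
            w, prev, lr = w_new, cur, lr * grow
        else:
            # Step overshot: reject it and cut the learning rate.
            lr *= shrink
    return w

# Toy usage on synthetic linear data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)
w = bold_driver_gd(X, y)
```

The appeal of this kind of heuristic is that it needs no line search: a single extra loss evaluation per iteration decides whether the step size grows or shrinks.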



Replies to This Discussion

Follow-up: apparently stochastic gradient descent is the preferred method, and adding momentum is a popular refinement as well.
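A minimal sketch of stochastic gradient descent with classical momentum, again on a toy least-squares problem. The function name and hyperparameter values are illustrative assumptions, not taken from the thread.

```python
import numpy as np

def sgd_momentum(X, y, lr=0.02, beta=0.9, epochs=50, batch=10, seed=0):
    """SGD with classical momentum on MSE loss (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    v = np.zeros_like(w)  # velocity: exponentially decaying sum of gradients
    n = len(y)
    for _ in range(epochs):
        idx = rng.permutation(n)  # reshuffle each epoch
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            # Minibatch gradient of the MSE loss.
            g = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            v = beta * v + g      # accumulate momentum
            w = w - lr * v        # take the damped step
    return w

# Toy usage on synthetic linear data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = sgd_momentum(X, y)
```

The velocity term smooths out the noise of individual minibatch gradients, which is one reason momentum pairs well with the stochastic setting mentioned above.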



© 2021 TechTarget, Inc.
