
I found this method from a Netflix Prize participant on using "Aggressive Parametrization". It seems he is applying an optimization technique to the learning rates between iterations of a gradient descent algorithm. Any ideas on methods for doing these step-size optimizations?

http://sites.google.com/site/wojtekkulik/  

(Note: there is a PowerPoint presentation on that website.)
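One well-known heuristic for adapting the step size between gradient descent iterations is the "bold driver" rule: grow the learning rate after a step that decreases the loss, and shrink it (rejecting the step) otherwise. This is a generic sketch of that idea on a toy quadratic, not the method from the linked presentation; the function and constants are illustrative:

```python
import numpy as np

def bold_driver_gd(grad, loss, w0, lr=0.1, grow=1.05, shrink=0.5, iters=100):
    """Gradient descent with 'bold driver' step-size adaptation:
    grow the learning rate after a successful step, shrink it and
    reject the step when the loss increases."""
    w = np.asarray(w0, dtype=float)
    prev = loss(w)
    for _ in range(iters):
        trial = w - lr * grad(w)
        cur = loss(trial)
        if cur <= prev:       # step helped: accept it and grow the rate
            w, prev = trial, cur
            lr *= grow
        else:                 # step hurt: reject it and shrink the rate
            lr *= shrink
    return w

# Toy example: minimize f(w) = ||w||^2 (illustrative only)
loss = lambda w: float(np.dot(w, w))
grad = lambda w: 2.0 * w
w_star = bold_driver_gd(grad, loss, [3.0, -4.0])
```

The appeal is that no fixed learning-rate schedule needs to be tuned in advance; the rate adapts to the local curvature as the run proceeds.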


Replies to This Discussion

Follow-up: apparently stochastic gradient descent is the preferred method, and momentum is commonly used as well.

http://www.willamette.edu/~gorr/classes/cs449/momrate.html
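Classical momentum, as discussed on the linked page, accumulates a velocity term so that consistent gradient directions build up speed while oscillating directions damp out. A minimal sketch on a toy quadratic (the objective and hyperparameters are illustrative assumptions, not taken from the thread):

```python
import numpy as np

def momentum_gd(grad, w0, lr=0.01, mu=0.9, iters=500):
    """Gradient descent with classical momentum:
    v <- mu * v - lr * grad(w);  w <- w + v."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(iters):
        v = mu * v - lr * grad(w)   # accumulate velocity
        w = w + v                   # move along the velocity
    return w

# Toy example: minimize f(w) = ||w||^2 (illustrative only)
grad = lambda w: 2.0 * w
w_star = momentum_gd(grad, [3.0, -4.0])
```

The momentum coefficient `mu` (often around 0.9) controls how much past gradient information is retained; `mu = 0` recovers plain gradient descent.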


© 2020   AnalyticBridge.com is a subsidiary and dedicated channel of Data Science Central LLC   Powered by
