A Data Science Central Community
A group for people interested in data mining and in discussing the topic
Latest Activity: Nov 22, 2017
ACM Talk on February 28 Monday at LinkedIn (Mountain View, CA)
Title: Heuristic Design of Experiments with Meta-Gradient Search of Model Training Parameters
http://www.sfbayacm.org/?p=2464
Location: LinkedIn, 2025 Stierlin Ct, Mountain View, CA 94043
Date: Monday, February 28, 2011, 6:30 – 9:00 pm (6:30 – 7:00 networking & snacks; 7:00 – 7:10 announcements; 7:10+ presentation, Q&A)
Cost: Free and open to all who wish to attend, but membership is only $20/year. Anyone may join our mailing list at no charge and receive announcements of upcoming events.
Speaker: Greg Makowski
Key questions discussed include: as a data miner with many algorithms and software packages available, how do you stay organized across all the choices that can vary during a project? Choices frequently searched include a) algorithm parameters, b) cost-profit error bias (related to Type I vs. Type II errors), c) the definition of the target field, d) boosting, bagging, ensemble model combining, or stacking, and e) iterating over data versions in an Agile process. How should you plan, and how can you best learn as you go? Should you constrain your algorithm choices if you will need to explain the resulting data mining system?
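The choices listed above multiply quickly. As a hedged illustration (all parameter names and values here are hypothetical, not from the talk), the choice space can be enumerated as a small factorial Design of Experiments:

```python
from itertools import product

# Illustrative sketch: the project choices the abstract lists,
# expressed as a searchable space. Values are made up for the example.
search_space = {
    "algorithm_params": [{"max_depth": d} for d in (3, 5, 8)],
    "cost_fp_vs_fn": [1.0, 2.0, 5.0],            # Type I vs. Type II error bias
    "target_definition": ["fraud_30d", "fraud_90d"],
    "ensembling": ["none", "bagging", "boosting", "stacking"],
}

# Full factorial design over the listed choices.
experiments = [
    dict(zip(search_space, combo))
    for combo in product(*search_space.values())
]
print(len(experiments))  # 3 * 3 * 2 * 4 = 72 candidate runs
```

Even this tiny example yields 72 runs, which is why the talk argues for an organized notebook and a guided search rather than exhaustive enumeration.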
As an example, SAS Enterprise Miner’s model training parameters can be organized in a “scientific or laboratory notebook” for computational experiments, what I call a “model notebook” data structure, to help plan a Design of Experiments (DOE). A meta-heuristic search process is described for planning and searching the many model parameters and data mining choices. The search process is related to gradient descent, but it operates on model training parameters and project choices instead of on model weights. A brief overview of sensitivity analysis shows how an arbitrarily complex system can be characterized to a reasonable level of detail, both globally and at the record level (useful if you need reason codes for each forecast produced).
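The “gradient descent over training parameters” idea can be sketched as a coordinate-wise hill climb: step each parameter up or down on its grid, keep any move that improves the validation score, and log every run to the model notebook. This is only a toy sketch under assumed names; `evaluate` stands in for training and scoring a real model, and the grid values are invented:

```python
# Hedged sketch of a gradient-descent-like search over training
# parameters (not model weights). All names and values are illustrative.

def evaluate(params):
    # Toy stand-in for "train a model, score on validation data".
    # This surrogate score peaks at depth=5, lr_idx=2.
    return -((params["depth"] - 5) ** 2) - ((params["lr_idx"] - 2) ** 2)

GRID = {"depth": [2, 3, 5, 8, 12], "lr_idx": [0, 1, 2, 3]}

def meta_gradient_search(params, notebook):
    improved = True
    while improved:
        improved = False
        for name, values in GRID.items():
            i = values.index(params[name])
            for j in (i - 1, i + 1):            # step the parameter up/down
                if 0 <= j < len(values):
                    trial = dict(params, **{name: values[j]})
                    score = evaluate(trial)
                    notebook.append((trial, score))   # log every run
                    if score > evaluate(params):
                        params, improved = trial, True
    return params

notebook = []
best = meta_gradient_search({"depth": 2, "lr_idx": 0}, notebook)
print(best)  # converges to {'depth': 5, 'lr_idx': 2} on this toy score
```

Like gradient descent on weights, this follows the locally improving direction; the notebook log is what lets you audit which parameter moves helped, which is the organizational point of the talk.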
Greg Makowski is the Director of Risk Analytics and Policy at CashEdge in Sunnyvale, CA. His data mining group builds models to detect fraud and identity theft in electronic funds transfers. CashEdge integrates as a SaaS with over 700 banks, providing features like Pay Other People (with your cell phone or email), mov
© 2020 AnalyticBridge.com is a subsidiary and dedicated channel of Data Science Central LLC