Guest blog post by Zied HY. Zied is a Senior Data Scientist at Capgemini Consulting. He specializes in building predictive models using both traditional statistical methods (Generalized Linear Models, Mixed Effects Models, Ridge, Lasso, etc.) and modern machine learning techniques (XGBoost, Random Forests, Kernel Methods, neural networks, etc.). Zied runs workshops for university students (ESSEC, HEC, Ecole polytechnique) interested in Data Science and its applications, and he is the co-founder of Global International Trading (GIT), a central purchasing office based in Paris.
I started reading about Deep Learning over a year ago, through various articles and research papers that I came across mainly on LinkedIn, Medium, and arXiv.
When I virtually attended the MIT 6.S191 Deep Learning course over the last few weeks, I decided to start putting some structure into my understanding of Neural Networks through this series of articles.
I will go through the first four courses.
For each course, I will outline the main concepts and add more details and interpretations from my previous readings and my background in statistics and machine learning.
Starting from the second course, I will also add an application on an open-source dataset for each course.
That said, let’s go!
Read the first part here.