A Summary of the Maximum Likelihood Estimator

# Why learn MLE?

The drawback of the least squares estimator

A common workflow for building a predictive model starts with least squares estimation. We then examine the residuals, construct confidence intervals for the parameters, and test how well the model fits the data; all of these steps rest on the assumption that the residuals (or noise) are normally distributed. Unfortunately, that assumption is not guaranteed: quite often the residual plot looks like some other distribution rather than the normal.
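The workflow above can be sketched as follows. This is a minimal illustration, not from the original note: it assumes a simple linear model with simulated heavy-tailed (Laplace) noise, fits it by least squares with `numpy.polyfit`, and then checks the normality assumption on the residuals with `scipy.stats.normaltest`.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data: a linear model whose noise is Laplace, not normal
x = np.linspace(0, 10, 200)
y = 2.0 + 3.0 * x + rng.laplace(scale=1.0, size=x.size)

# Step 1: least squares fit of a degree-1 polynomial
slope, intercept = np.polyfit(x, y, 1)

# Step 2: examine the residuals
residuals = y - (intercept + slope * x)

# Step 3: test the normality assumption on the residuals
stat, p_value = stats.normaltest(residuals)
print(f"slope={slope:.2f}, intercept={intercept:.2f}, "
      f"normality test p-value={p_value:.4f}")
```

A small p-value here is evidence against normally distributed residuals, which is exactly the situation the paragraph above describes.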

At that point you could add another term to your model to filter out the non-normally distributed noise and compute the LSE again, but the same problem may recur. Alternatively, if you can recognize the distribution from the plot (or you otherwise know the pdf of the noise), you can compute the MLE of your model's parameters directly. This time, the work is genuinely finished.
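As a hedged sketch of that second route, suppose we recognize the noise pdf as Laplace (an assumption for illustration; it is not specified in the original note). Then the MLE of the regression parameters maximizes the Laplace log-likelihood, which we can do numerically by minimizing the negative log-likelihood with `scipy.optimize.minimize`:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Linear model with noise whose pdf we take to be Laplace(0, b)
x = np.linspace(0, 10, 200)
y = 2.0 + 3.0 * x + rng.laplace(scale=1.0, size=x.size)

def neg_log_likelihood(params):
    """Negative log-likelihood of the data under Laplace(0, b) noise."""
    intercept, slope, log_b = params
    b = np.exp(log_b)  # optimize log(b) so the scale stays positive
    r = y - (intercept + slope * x)
    # -log L = n*log(2b) + sum(|r|)/b for the Laplace density
    return x.size * np.log(2 * b) + np.sum(np.abs(r)) / b

result = minimize(neg_log_likelihood, x0=[0.0, 1.0, 0.0],
                  method="Nelder-Mead")
intercept_mle, slope_mle, log_b_mle = result.x
print(f"intercept={intercept_mle:.2f}, slope={slope_mle:.2f}, "
      f"scale={np.exp(log_b_mle):.2f}")
```

For Laplace noise the MLE of the line coincides with least absolute deviations regression, which is more robust to the heavy tails than least squares; any other recognized noise pdf would just change the body of `neg_log_likelihood`.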

Due to the difficulty of typesetting mathematical symbols, I can't post the full note here.

For the complete content, please click this link to download the Word document.
