
Linear regression is one of the first things you should try if you’re modeling a linear relationship (actually, non-linear relationships too!). It’s fairly simple, and probably the first thing to learn when tackling machine learning.

At first, linear regression shows up just as a simple equation for a line. In machine learning, the weights are usually represented by a vector θ (in statistics they’re often represented by A and B!).

h_\theta(x) = \theta_0 + \theta_1 x

But then we have to account for more than just one input variable. A more general equation for linear regression multiplies each input feature x_i by its corresponding weight in the weight vector θ. This is equivalent to θ-transpose times the input vector x.

h_\theta(x) = \sum_i \theta_i x_i = \theta^T x
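As a quick sanity check, here's what that dot product looks like in R (a minimal sketch; the numbers for θ and x are made up):

theta <- c(1.5, 0.8, -0.3)            # made-up weights; the first entry is the intercept
x     <- c(1, 2.0, 4.0)               # made-up input, with a leading 1 for the intercept term
sum(theta * x)                        # multiply each feature by its weight and add up
as.numeric(t(theta) %*% x)            # the same thing as theta-transpose times x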

There are two main ways to train a linear regression model. You can use the normal equation (set the derivative of the negative log likelihood, the NLL, to 0 and solve for θ in closed form), or gradient descent.
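To make the normal equation concrete, here's a minimal sketch in R (the design matrix X and response y are made up; in practice lm() handles this for you):

X <- cbind(1, c(1, 2, 3, 4))              # design matrix: a column of ones plus one feature
y <- c(2.1, 3.9, 6.2, 8.1)                # observed responses
theta <- solve(t(X) %*% X, t(X) %*% y)    # solves (X'X) theta = X'y directly
theta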

Sorry for switching notation below. Note – the matrices are i × j; i signifies the rows, or training examples (and j the columns, or features).

Gradient Descent

The cost function is essentially the sum of the squared distances. The “distance” is the vertical distance between the predicted y and the observed y. This is known as the residual.

(Figure: a fitted regression line with the vertical residuals between the observed and predicted y.)

Gradient descent minimizes this cost by stepping down the cost function toward the (hopefully) global minimum.

Here is the cost function –

J(\theta) = \frac{1}{2} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2

The cost is the residual sum of squares. The one-half is just in there to make the derivative prettier. Sometimes you'll also see m (the number of training examples) in the denominator; since it's a constant, it carries through to the derivative as well. That just turns the cost into 'cost per training example', which is perfectly valid.
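In code, the cost function is only a couple of lines of R (a sketch; it assumes a design matrix X, a response vector y, and a weight vector theta like the ones in the normal-equation example above):

cost <- function(theta, X, y) {
  residuals <- X %*% theta - y            # h(x) - y for every training example
  sum(residuals^2) / 2                    # one half times the residual sum of squares
}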

And the gradient descent algorithm –

\theta_j := \theta_j + \alpha \sum_{i=1}^{m} \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)}

This is really the current weight minus the learning rate α times the partial derivative of the cost function. Or, in math:

\frac{\partial}{\partial \theta_j} J(\theta) = \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}

\theta_j := \theta_j - \alpha \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}

The original equation switches the position of h(x) and y, to pull out a negative. This makes the equation prettier.

The learning rate α determines how big each individual hop down the cost function is. If α is too small, gradient descent can take longer than you want to train. If α is too big, it can hop right over the minimum and fail to converge.
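Putting the update rule and the learning rate together, here's a rough sketch of batch gradient descent in R (the data are made up, alpha and the number of iterations would need tuning, and lm() is what you'd actually use in practice):

gradient_descent <- function(X, y, alpha = 0.01, iterations = 1000) {
  theta <- rep(0, ncol(X))                   # start with all weights at zero
  for (step in 1:iterations) {
    gradient <- t(X) %*% (X %*% theta - y)   # sum over examples of (h(x) - y) * x_j
    theta <- theta - alpha * gradient        # step down the cost surface
  }
  theta
}

X <- cbind(1, c(1, 2, 3, 4))                 # same made-up data as the normal-equation sketch
y <- c(2.1, 3.9, 6.2, 8.1)
gradient_descent(X, y)                       # should land close to the normal-equation solution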

The entire goal of gradient descent is to learn the optimal parameters θ. 

Here is a cool resource where Andrew Ng breaks down the derivative of the cost function. And here is a great tutorial on coding gradient descent yourself in Python.

But today, I’m just covering linear regression in R. I believe R is a lot stronger for linear regression, and for most other statistics-flavored work.

Linear Regression in R

I’m using a sample crime dataset that’s loaded into R. You can download it from my GitHub.

First, let’s look at one feature, and throw in a linear model[1].

library(ggplot2)   # needed for ggplot(), aes(), geom_smooth()

a <- ggplot(crime, aes(x = policeFunding, y = reportedCrime)) +
   geom_smooth(method = lm, colour = "#0BB5FF") +   # fit a linear model and draw it with its confidence band
   ylab("Crime") +
   xlab("Police Funding") +
   ggtitle("Predicting Crime Rate")
a   # printing the object renders the plot

(Figure: the "Predicting Crime Rate" plot: reported crime vs. police funding, with the fitted line and its shaded confidence region.)

The shaded area is a confidence region.

Note – remember correlation doesn’t equal causation? Well, this is a good example. More police funding per capita doesn’t cause a higher crime rate. It’s likely the other way around.

Now let’s look at lm() a little more.

lr <- lm(reportedCrime ~ policeFunding, data = crime)
lr # returns the weights (theta).

(Output: the fitted weights, i.e. the intercept and the policeFunding coefficient.)

lr$residuals # residuals
plot(lr) # 4 plots to evaluate your model
anova(lr) # Analysis of Variance Table
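Two more functions worth knowing (a quick sketch; the funding value passed to predict() is made up):

summary(lr)                                            # coefficients, standard errors, R-squared
predict(lr, newdata = data.frame(policeFunding = 75))  # predicted crime rate at a made-up funding level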

Multiple Regression

In this case, we shouldn’t limit ourselves to one input feature; we have several that would all make the model better. This is the line that trains a multiple regression model. All the same functions from before apply.


lr2 <- lm(reportedCrime ~ graduatedHS25 + teensHS + inCollege + graduatedCollege25 + policeFunding, data = crime)
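For example (a sketch), the same inspection functions work on the multiple regression model:

summary(lr2)       # one weight per input feature, plus the intercept
coef(lr2)          # just the fitted weights (theta)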

Here are a few of the features visualized –

(Figure: a few of the input features visualized against reported crime.)

Or if we want to see the residuals…

(Figure: the residuals of the multiple regression model.)

A few more notes about linear regression –

It can model non-linear relationships if you replace the input vector x with a non-linear basis function Φ(x). This is useful, but you have to figure out whether you want Φ(x) to be x-squared, or some other exponential, quadratic, or logarithmic form… if that choice isn’t obvious, I would try another algorithm (perhaps an SVM!).
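For example, you could hand lm() a polynomial basis of the funding feature (a sketch on the same crime data; degree 2 is an arbitrary choice):

lr3 <- lm(reportedCrime ~ poly(policeFunding, 2, raw = TRUE), data = crime)  # Phi(x) = (x, x^2)
summary(lr3)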

This last part is a bit random; I just happened to love the math here.

You can represent linear regression through its relationship to a Gaussian distribution. If you replace the predicted y (ŷ) with the observed y, you have to add an error term ϵ.

y = \theta^T x + \epsilon

However, ϵ is often assumed (probably correctly) to have a normal distribution. So we can rewrite –

\epsilon \sim \mathcal{N}(0, \sigma^2)

y \sim \mathcal{N}(\theta^T x, \sigma^2)

p(y \mid x, \theta) = \mathcal{N}(y \mid \theta^T x, \sigma^2)

This gives us a different way of thinking about y. It’s now the output of a normal distribution whose mean is changing. I really like this representation!
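To see that "normal distribution whose mean is changing" in code, here's a tiny simulation (all the numbers are made up):

set.seed(1)
theta <- c(2, 0.5)                             # made-up intercept and slope
x     <- seq(0, 10, by = 0.1)
mu    <- theta[1] + theta[2] * x               # the mean theta^T x shifts with x
y     <- rnorm(length(x), mean = mu, sd = 1)   # y ~ N(theta^T x, sigma^2) with sigma = 1
plot(x, y)                                     # points scatter around a straight line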

Sources –

1. Winston Chang’s Cookbook for R.

2. Scatterplot3d documentation.

3. Kevin Murphy’s super awesome textbook.

4. Some lecture notes from Georgia Tech.


View the original post, and others from the author here.


