Linear Regression
Jeff Howbert, Introduction to Machine Learning, Winter 2012
[Introductory figures omitted; slides courtesy of Greg Shakhnarovich (CS 195-5, Brown Univ., 2006)]
Loss function
- Suppose target labels come from a set Y:
  - Binary classification: Y = {0, 1}
  - Regression: Y = R (the real numbers)
- A loss function maps decisions to costs: L(y, ŷ) defines the penalty for predicting ŷ when the true value is y.
- Standard choice for classification: 0/1 loss (same as misclassification error): L(y, ŷ) = 0 if ŷ = y, else 1.
- Standard choice for regression: squared loss: L(y, ŷ) = (y - ŷ)².
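As a minimal illustration, here are both losses as a NumPy sketch (the function names are mine, not from the course materials):

    import numpy as np

    def zero_one_loss(y_true, y_pred):
        # 0/1 loss: 1 for each misclassified label, 0 otherwise
        return (y_true != y_pred).astype(float)

    def squared_loss(y_true, y_pred):
        # squared loss: penalty grows quadratically with the error
        return (y_true - y_pred) ** 2

    print(zero_one_loss(np.array([0, 1, 1]), np.array([0, 0, 1])))   # [0. 1. 0.]
    print(squared_loss(np.array([2.0, 3.0]), np.array([2.5, 1.0])))  # [0.25 4.]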
Least squares linear fit to data
- The most popular estimation method is least squares:
  - Determine the linear coefficients w0, w1 that minimize the sum of squared loss (SSL).
  - Use standard (multivariate) differential calculus:
    - differentiate SSL with respect to w0, w1
    - find the zeros of each partial derivative
    - solve for w0, w1
- One dimension: SSL = Σᵢ (yᵢ - (w0 + w1 xᵢ))², minimized by
  w1 = Σᵢ (xᵢ - x̄)(yᵢ - ȳ) / Σᵢ (xᵢ - x̄)²,  w0 = ȳ - w1 x̄
  (x̄ and ȳ are the sample means of the inputs and targets).
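A short runnable sketch of this one-dimensional closed form (variable and function names are my own):

    import numpy as np

    def fit_simple_least_squares(x, y):
        # closed-form minimizer of SSL for the model y ≈ w0 + w1 * x
        x_bar, y_bar = x.mean(), y.mean()
        w1 = ((x - x_bar) * (y - y_bar)).sum() / ((x - x_bar) ** 2).sum()
        w0 = y_bar - w1 * x_bar
        return w0, w1

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.0, 3.0, 5.0, 7.0])
    print(fit_simple_least_squares(x, y))  # (1.0, 2.0): recovers y = 1 + 2x exactly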
Least squares linear fit to data
- Multiple dimensions:
  - To simplify notation and derivation, rename the intercept to w0 and add a new feature x0 = 1 to the feature vector x, so the model becomes ŷ = wᵀx = Σⱼ wⱼ xⱼ.
  - Calculate SSL and determine w:
    SSL = Σᵢ (yᵢ - wᵀxᵢ)² = (y - Xw)ᵀ(y - Xw)
    Setting the gradient to zero gives the normal equations XᵀX w = Xᵀy, so w = (XᵀX)⁻¹ Xᵀy.
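A minimal NumPy sketch of the multivariate solution (using lstsq rather than an explicit matrix inverse, which is more numerically stable; names are mine):

    import numpy as np

    def fit_least_squares(X, y):
        # prepend the constant feature x0 = 1 so w[0] plays the role of the intercept
        Xb = np.hstack([np.ones((X.shape[0], 1)), X])
        # solves the normal equations Xb^T Xb w = Xb^T y
        w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        return w

    X = np.array([[1.0, 2.0], [2.0, 0.0], [3.0, 1.0], [4.0, 3.0]])
    y = 0.5 + 2.0 * X[:, 0] - 1.0 * X[:, 1]
    print(fit_least_squares(X, y))  # approximately [0.5, 2.0, -1.0]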
[Figures: examples of least squares linear fits to data]
Extending application of linear regression
- The inputs X for linear regression can be:
  - original quantitative inputs
  - transformations of quantitative inputs, e.g. log, exp, square root, square, etc.
  - polynomial transformations
    - example: y = w0 + w1 x + w2 x² + w3 x³
  - basis expansions
  - dummy coding of categorical inputs
  - interactions between variables
    - example: x3 = x1 · x2
- This allows linear regression techniques to fit much more complicated, non-linear datasets, as the example below shows.
Example of fitting a polynomial curve with a linear model
[Figure: polynomial curve fit to data by linear least squares]
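A minimal sketch of the idea (synthetic data of my own; note that the polynomial fit is still linear in the weights):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-2.0, 2.0, 50)
    y = 1.0 - 2.0 * x + 0.5 * x ** 3 + rng.normal(scale=0.3, size=x.shape)

    # design matrix with columns 1, x, x^2, x^3: the model is linear in w
    X = np.vander(x, N=4, increasing=True)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(w)  # close to the true coefficients [1.0, -2.0, 0.0, 0.5]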
Prostate cancer dataset
- 97 samples, partitioned into:
  - 67 training samples
  - 30 test samples
- Eight predictors (features):
  - 6 continuous (4 of them log transforms)
  - 1 binary
  - 1 ordinal
- Continuous outcome variable:
  - lpsa: log(prostate specific antigen level)
Correlations of predictors in the prostate cancer dataset
[Figure omitted]
Fit of linear model to the prostate cancer dataset
[Figure omitted]
Regularization
- Complex models (lots of parameters) are often prone to overfitting.
- Overfitting can be reduced by imposing a constraint on the overall magnitude of the parameters.
- Two common types of regularization in linear regression:
  - L2 regularization (a.k.a. ridge regression). Find the w which minimizes:
    Σᵢ (yᵢ - wᵀxᵢ)² + λ Σⱼ wⱼ²
    λ is the regularization parameter: a bigger λ imposes more constraint.
  - L1 regularization (a.k.a. the lasso). Find the w which minimizes:
    Σᵢ (yᵢ - wᵀxᵢ)² + λ Σⱼ |wⱼ|
  (Conventionally the intercept w0 is not penalized, so the sums over j run from 1 to d.)
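A minimal sketch of the ridge solution, which has the closed form w = (XᵀX + λI)⁻¹ Xᵀy (assuming features are standardized and the intercept is handled separately; names are mine):

    import numpy as np

    def fit_ridge(X, y, lam):
        # ridge solution: solve (X^T X + lam * I) w = X^T y
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    # a bigger lam shrinks the coefficients more, e.g.:
    # w_ridge = fit_ridge(X_standardized, y_centered, lam=10.0)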
Example of L2 regularization
[Figure: ridge coefficient paths] L2 regularization shrinks coefficients towards (but not to) zero, and towards each other.
Example of L1 regularization
[Figure: lasso coefficient paths] L1 regularization shrinks coefficients to zero at different rates; different values of λ give models with different subsets of features.
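The lasso has no closed-form solution, but the mechanism that zeroes coefficients is the soft-threshold operator used inside coordinate-descent lasso solvers. A hedged sketch of just that building block (not a full solver):

    import numpy as np

    def soft_threshold(z, lam):
        # shrink z toward zero by lam; anything inside [-lam, lam] becomes
        # exactly zero, which is why L1 regularization yields sparse models
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    print(soft_threshold(np.array([-3.0, -0.5, 0.2, 2.0]), lam=1.0))
    # [-2. -0.  0.  1.] : the two small coefficients are driven exactly to zero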
Example of subset selection
[Figure omitted]
Comparison of various selection and shrinkage methods
[Figure omitted]
L1 regularization gives sparse models; L2 does not
[Figure omitted]
Other types of regression
- In addition to linear regression, there are:
  - many types of non-linear regression:
    - decision trees
    - nearest neighbor
    - neural networks
    - support vector machines
  - locally linear regression
  - etc.
MATLAB interlude
matlab_demo_07.m, Part B