Lesson 12
Machine Learning - Regression
Kush Kulshrestha
Introduction
It has long been known that crickets (an insect species) chirp more frequently on hotter days than on cooler days.
For decades, professional and amateur scientists have catalogued data on chirps-per-minute and temperature.
Using this data, you want to explore this relationship.
Introduction
As expected, the plot shows the temperature rising with the number of chirps. Is this relationship between chirps and temperature linear?
Yes. You could draw a single straight line, like the following, to approximate this relationship. True, the line doesn't pass through every dot, but it clearly shows the relationship between chirps and temperature.
Introduction
Using the equation for a line, you could write down this relationship as follows:

y = mx + b

where,
- y is the temperature in Celsius, the value we're trying to predict.
- m is the slope of the line.
- x is the number of chirps per minute, the value of our input feature.
- b is the y-intercept.
Introduction
By convention in machine learning, you'll write the above equation for a model slightly differently:

y' = b + w1x1

where,
- y' is the predicted label (the desired output).
- b is the bias (the y-intercept), sometimes referred to as w0.
- w1 is the weight of feature 1. Weight is the same concept as the slope m in the traditional equation of a line.
- x1 is the feature.
To infer (predict) the temperature y' for a new chirps-per-minute value x1, just substitute the x1 value into this model.
Although this model uses only one feature, a more sophisticated model might rely on multiple features, each having a separate weight (w1, w2, etc.). For example, a model that relies on three features might look as follows:

y' = b + w1x1 + w2x2 + w3x3
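As a small illustration, here is how that substitution looks in Python. The bias and weight values are hypothetical placeholders, not fitted parameters:

```python
# A minimal sketch of inference with the one-feature model above.
# b and w1 are hypothetical placeholder values, chosen only to
# illustrate the y' = b + w1*x1 form.

def predict_temperature(x1: float, b: float = 3.0, w1: float = 0.28) -> float:
    """Predict the temperature y' (Celsius) from chirps per minute x1."""
    return b + w1 * x1

print(predict_temperature(65.0))  # y' for a new chirps-per-minute value
```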
What exactly is Training?
Training a model simply means learning (determining) good values for all the weights and the bias from labelled examples. In supervised learning, a machine learning algorithm builds a model by examining many examples and attempting to find a model that minimizes loss.
Loss is the penalty for a bad prediction. That is, loss is a number indicating how bad the model's prediction was on a single example. If the model's prediction is perfect, the loss is zero; otherwise, the loss is greater. The goal of training a model is to find a set of weights and biases that have low loss, on average, across all examples.
For example, the figure below shows a high-loss model on the left and a low-loss model on the right.
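A common loss for regression is squared error, averaged over all examples (mean squared error). A minimal sketch, using hypothetical observations and predictions:

```python
import numpy as np

# Squared loss averaged over examples (mean squared error).
# The arrays below are hypothetical observations and predictions.

def mse_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean((y_true - y_pred) ** 2))

y_true = np.array([20.0, 22.0, 25.0])  # observed temperatures
y_pred = np.array([19.5, 23.0, 24.0])  # model predictions
print(mse_loss(y_true, y_pred))        # 0.0 only for a perfect model
```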
Linear Regression
Linear regression is well suited to answering the following questions:
• Is there a relationship between two variables?
• How strong is the relationship?
• Which variable contributes the most?
• How accurately can we estimate the effect of each variable?
• How accurately can we predict the target?
• Is the relationship linear?
• Is there an interaction effect?
Let’s assume we only have one variable and one target. Then, linear regression is expressed as:

y = β0 + β1x + ε

In the equation above, the betas are the coefficients (and ε is the error term). These coefficients are what we need in order to make predictions with our model.
To find the parameters, we need to minimize the sum of squared errors (the least-squares criterion).
Why do we use squared errors?
Linear Regression
In the plot below, the red dots are the true data and the blue line is the linear model. The grey lines illustrate the errors between the predicted and the true values. The blue line is thus the one that minimizes the sum of the squared lengths of the grey lines.
Linear Regression
After some math heavy lifting, you can finally estimate the coefficients with the following equations:

β1 = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²
β0 = ȳ − β1 x̄

where x̄ and ȳ represent the means of the x and y values.
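These closed-form estimates translate directly into code. A sketch with NumPy, on hypothetical chirps-and-temperature data:

```python
import numpy as np

# Closed-form simple least squares, following the equations above.
# The data arrays are hypothetical.

def fit_simple_ols(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    x_bar, y_bar = x.mean(), y.mean()
    beta1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    beta0 = y_bar - beta1 * x_bar
    return float(beta0), float(beta1)

x = np.array([44.0, 55.0, 60.0, 72.0, 80.0])  # chirps per minute (hypothetical)
y = np.array([17.0, 20.0, 21.5, 25.0, 27.5])  # temperature in Celsius
beta0, beta1 = fit_simple_ols(x, y)
print(f"y' = {beta0:.3f} + {beta1:.3f} * x")
```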
Correlation coefficient
β1 can also be written as:

β1 = r (Sy / Sx)

where,
Sx and Sy are the standard deviations of the x and y values respectively, and r is the correlation coefficient, defined as:

r = Σ(xi − x̄)(yi − ȳ) / ((n − 1) Sx Sy)

By examining this second equation for the estimated slope β1, we see that since the sample standard deviations Sx and Sy are positive quantities, the correlation coefficient r, which is always between −1 and 1, measures how much x is related to y and whether the trend is positive or negative.
Correlation coefficient
The figure below illustrates different correlation strengths.
An illustration of correlation strength. Each plot shows data with a particular correlation coefficient r. Values farther from 0 (outer plots) indicate a stronger relationship than values closer to 0 (inner plots). Negative values (left) indicate an inverse relationship, while positive values (right) indicate a direct relationship.
Coefficient of Determination
The square of the correlation coefficient, r², always lies between 0 and 1 and is called the coefficient of determination.
It is also equal to the proportion of the total variability that’s explained by a linear model.
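Both quantities are easy to check numerically. A sketch, reusing the same hypothetical data as the earlier fitting example:

```python
import numpy as np

# The correlation coefficient r and the coefficient of determination r^2,
# computed on hypothetical data.

x = np.array([44.0, 55.0, 60.0, 72.0, 80.0])
y = np.array([17.0, 20.0, 21.5, 25.0, 27.5])

r = np.corrcoef(x, y)[0, 1]     # Pearson correlation coefficient
print(f"r = {r:.4f}, r^2 = {r**2:.4f}")

# Equivalent slope via the identity beta1 = r * (Sy / Sx):
beta1 = r * (y.std(ddof=1) / x.std(ddof=1))
print(f"beta1 = {beta1:.4f}")
```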
As an extremely crucial remark, correlation does not imply causation!
Correlation and Causation
A strong correlation between two variables does not necessarily mean there is a causal relationship between them.
For example, drowning deaths and ice-cream sales are strongly correlated, but that’s because both are affected by
the season (summer vs. winter). In general, there are several possible cases, as illustrated below:
1) Causal Link: Even if there is a causal link between x and y, correlation alone cannot tell us whether y causes x or x
causes y.
Correlation and Causation
2) Hidden Cause: A hidden variable z causes both x and y, creating the correlation.
3) Confounding Factor: A hidden variable z and x both affect y, so the results also depend on the value of z.
4) Coincidence: the correlation just happened by chance (e.g. the strong correlation between sun cycles and the number of Republicans in Congress).
Multiple Linear Regression
This is the case when, instead of a single x value, we have a vector of x values (xi1, xi2, ..., xip) for every data point i.
So, we have n data points (just like before), each with p different predictor variables or features. We’ll then try to predict y for each data point as a linear function of the different x variables:

yi = β0 + β1 xi1 + β2 xi2 + ... + βp xip

Even though it’s still linear, this representation is very versatile; here are just a few of the things we can represent with it:
• Multiple independent variables: for example, suppose we’re trying to predict a medical outcome as a function of several variables such as age, genetic susceptibility, and clinical diagnosis. Then we might say that for each patient, x1 = age, x2 = genetics, x3 = diagnosis, and y = outcome.
• Nonlinearities: suppose we want to fit a quadratic function y = ax² + bx + c. Then for each data point we might say x1 = 1, x2 = x, and x3 = x². This can easily be extended to any nonlinear function we want (see the sketch below).
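Here is a sketch of that trick: the quadratic is recovered with ordinary linear least squares once the feature columns are built. The data is hypothetical, generated from a known quadratic plus noise:

```python
import numpy as np

# Fitting the quadratic y = a*x^2 + b*x + c with *linear* regression
# by building the feature columns x1 = 1, x2 = x, x3 = x^2.

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 30)
y = 2.0 * x**2 - 1.0 * x + 0.5 + rng.normal(scale=0.5, size=x.size)

X = np.column_stack([np.ones_like(x), x, x**2])   # columns [1, x, x^2]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares solve
print(coeffs)  # approximately [0.5, -1.0, 2.0]
```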
One may ask: why not just use multiple linear regression and fit an extremely high-degree polynomial to our data? While the resulting model would be much richer, one runs the risk of overfitting.
Multiple Linear Regression
Using too many features or too complex a model can often lead to overfitting.
Suppose we want to fit a model to the points in Figure 1. If we fit a linear model, it might look like Figure 2. But the fit isn’t perfect. What if we use our newly acquired multiple-regression powers to fit a 6th-order polynomial to these points? The result is shown in Figure 3.
While our errors are definitely smaller than they were with the linear model, the new model is far too complex, and will likely go wrong for values far outside the range of the training data.
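A quick numerical sketch of this failure mode, on hypothetical data that is truly linear: the 6th-order fit has a lower training error but diverges once we leave the training range:

```python
import numpy as np

# Overfitting demo: a 6th-order polynomial drives training error down
# but extrapolates badly compared with a linear fit.

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 8)
y = 1.5 * x + 0.3 + rng.normal(scale=0.1, size=x.size)  # truly linear data

linear = np.polynomial.Polynomial.fit(x, y, deg=1)
poly6 = np.polynomial.Polynomial.fit(x, y, deg=6)

print("train MSE:", np.mean((linear(x) - y) ** 2), np.mean((poly6(x) - y) ** 2))
# Outside the training range the complex model diverges from the linear trend:
print("at x = 2.0:", linear(2.0), poly6(2.0))
```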
Application – Linear Regression in Scikit Learn
Please check out the accompanying Jupyter notebook.
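For reference, a minimal scikit-learn sketch of the same one-feature fit (the notebook is the authoritative walk-through; the data here is hypothetical):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[44.0], [55.0], [60.0], [72.0], [80.0]])  # chirps per minute
y = np.array([17.0, 20.0, 21.5, 25.0, 27.5])            # temperature (Celsius)

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)    # bias b and weight w1
print(model.predict([[65.0]]))          # inference for a new observation
```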
Application – Linear Regression in Statsmodels
Please check out the accompanying Jupyter notebook.
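And the equivalent minimal sketch in statsmodels, again on hypothetical data (see the notebook for the full treatment):

```python
import numpy as np
import statsmodels.api as sm

x = np.array([44.0, 55.0, 60.0, 72.0, 80.0])
y = np.array([17.0, 20.0, 21.5, 25.0, 27.5])

X = sm.add_constant(x)            # prepend the intercept column
results = sm.OLS(y, X).fit()
print(results.summary())          # coefficients, r^2, standard errors, etc.
```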