Machine Learning
Linear Regression
Agenda
• Single Dimension Linear Regression

• Multi Dimension Linear Regression

• Gradient Descent

• Generalisation, Over-fitting & Regularisation

• Categorical Inputs
What is Linear Regression?
• Learning

• A supervised algorithm that learns from a set of training samples.

• Each training sample has one or more input values and a single output value.

• The algorithm learns the line, plane or hyper-plane that best fits the training
samples.

• Prediction

• Use the learned line, plane or hyper-plane to predict the output value for any
input sample.
Single Dimension Linear
Regression
Single Dimension Linear Regression
• Single dimension linear regression
has pairs of x and y values as input
training samples. 

• It uses these training samples to
derive a line that predicts values of y.

• The training samples are used to
derive the values of a and b that
minimise the error between actual
and predicted values of y.

Single Dimension Linear Regression
• We want a line that minimises the
error between the Y values in
training samples and the Y values
that the line passes through.

• Or put another way, we want the
line that “best fits” the training
samples.

• So we define the error function for
our algorithm so we can minimise
that error.
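• The error function itself appears as an image on the original slide; a standard formulation for the line y = ax + b, consistent with the a and b referenced above, is:

E = \sum_{i=1}^{n} \left(y_i - (a x_i + b)\right)^2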
Single Dimension Linear Regression
• To determine the value of a that
minimises the error E, we look for
where the partial differential of E
with respect to a is zero.
Single Dimension Linear Regression
• To determine the value of b that
minimises the error E, we look for
where the partial differential of E
with respect to b is zero.
Single Dimension Linear Regression
• By substituting the final equations
from the previous two slides we
derive equations for a and b that
minimise the error
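• The derived equations appear as images on the original slides; a minimal NumPy sketch of the standard closed-form result (slope a and intercept b of the least-squares line y = ax + b) is:

```python
import numpy as np

def fit_line(x, y):
    """Least-squares fit of y = a*x + b for 1-D arrays x and y."""
    x_mean, y_mean = x.mean(), y.mean()
    # Slope: covariance of x and y divided by the variance of x.
    a = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)
    # Intercept: the fitted line passes through the point of means.
    b = y_mean - a * x_mean
    return a, b
```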
Single Dimension Linear Regression
• We also define a function which we can
use to score how well the derived line fits.

• A value of 1 indicates a perfect fit. 

• A value of 0 indicates a fit that is no
better than simply predicting the mean
of the input y values. 

• A negative value indicates a fit that is
even worse than just predicting the
mean of the input y values.
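• The scoring function described here is the coefficient of determination, R²; its formula appears as an image on the slide, but a sketch of the usual definition is:

```python
import numpy as np

def r_squared(y_actual, y_predicted):
    """Coefficient of determination: 1 - (residual sum of squares / total sum of squares)."""
    ss_residual = np.sum((y_actual - y_predicted) ** 2)
    ss_total = np.sum((y_actual - y_actual.mean()) ** 2)
    return 1.0 - ss_residual / ss_total
```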
Multi Dimension Linear
Regression
Multi Dimension Linear Regression
• Each training sample has an x made
up of multiple input values and a
corresponding y with a single value. 

• The inputs can be represented as
an X matrix in which each row is a
sample and each column is a
dimension.

• The outputs can be represented as
a y matrix in which each row is a
sample.
Multi Dimension Linear Regression
• Our predicted y values are
calculated by multiplying the X matrix
by a vector of weights, w.

• If there are 2 dimensions, then this
equation defines a plane. If there are
more dimensions then it defines a
hyper-plane.
Multi Dimension Linear Regression
• We want a plane or hyper-plane
that minimises the error between
the y values in training samples
and the y values that the plane or
hyper-plane passes through.

• Or put another way, we want the
plane/hyper-plane that “best fits”
the training samples.

• So we define the error function for
our algorithm so we can minimise
that error.
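• The error function appears as an image on the slide; in matrix form, the sum of squared errors it describes can be written as:

E = \sum_{i=1}^{n} \left(y_i - \mathbf{x}_i \mathbf{w}\right)^2 = (X\mathbf{w} - \mathbf{y})^\top (X\mathbf{w} - \mathbf{y})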
Multi Dimension Linear Regression
• To determine the value of w that
minimises the error E, we look for
where the differential of E with
respect to w is zero.

• We use the Matrix Cookbook to
help with the differentiation!
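• The resulting normal equations appear as images on the slides; setting the gradient to zero gives XᵀX w = Xᵀy, which can be solved as in this sketch:

```python
import numpy as np

def fit_weights(X, y):
    """Solve the normal equations (X^T X) w = X^T y for the weight vector w."""
    return np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))
```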
Multi Dimension Linear Regression
• We also define a function which we can
use to score how well the derived plane
or hyper-plane fits.

• A value of 1 indicates a perfect fit. 

• A value of 0 indicates a fit that is no
better than simply predicting the mean
of the input y values. 

• A negative value indicates a fit that is
even worse than just predicting the
mean of the input y values.
Multi Dimension Linear Regression
• In addition to using the X matrix to represent the basic features of our training
data, we can also introduce additional dimensions (i.e. columns in
our X matrix) that are derived from those basic feature values.

• If we introduce derived features whose values are powers of basic
features, our multi-dimensional linear regression can then derive
polynomial curves, planes and hyper-planes.
Multi Dimension Linear Regression
• For example, if we have just one
basic feature in each sample of X, we
can include a range of powers of that
value into our X matrix like this:

• In non-matrix form our multi-
dimensional linear equation is: 

• Inserting the powers of the basic
feature that we have introduced, this
becomes a polynomial:
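• The example matrix on the slide is shown as an image; a sketch of building such a feature matrix from a single basic feature x (the degree parameter is an assumption) is:

```python
import numpy as np

def polynomial_features(x, degree):
    """Stack powers x^0 .. x^degree as the columns of the X matrix.
    The x^0 column of ones provides the intercept term."""
    return np.column_stack([x ** d for d in range(degree + 1)])

# e.g. polynomial_features(x, 3) gives columns [1, x, x**2, x**3]
```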
Gradient Descent
Singular Matrices
• As we have seen, we can use
numpy’s linalg.solve() function to
determine the value of the weights
that result in the lowest possible error.

• But this doesn’t work if np.dot(X.T, X)
is a singular matrix.

• It results in the matrix equivalent of a
divide by zero.

• Gradient descent is an alternative
approach to determining the optimal
weights that works in all cases,
including this singular matrix case.
Gradient Descent
• Gradient descent is a technique we can use to find the minimum of
arbitrarily complex error functions.

• In gradient descent we pick a random set of weights for our algorithm and
iteratively adjust those weights in the direction opposite to the gradient of the
error with respect to each weight.

• As we iterate, the gradient approaches zero and we approach the
minimum error.

• In machine learning we often use gradient descent with our error function
to find the weights that give the lowest errors.
Gradient Descent
• Here is an example with a very
simple function:

• The gradient of this function is
given by:

• We choose a random initial
value for x and a learning rate of
0.1 and then start the descent.

• On each iteration our x value is
decreasing and the gradient (2x)
is converging towards 0.
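• The function on the slide is shown as an image; from the stated gradient of 2x it is f(x) = x², and a minimal sketch of the descent described above (the starting value is an arbitrary assumption) is:

```python
x = 3.0              # arbitrary random starting value
learning_rate = 0.1

for i in range(25):
    gradient = 2 * x                    # derivative of f(x) = x**2
    x = x - learning_rate * gradient    # step against the gradient
    # x shrinks towards 0 on each iteration, and so does the gradient 2x
```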
Gradient Descent
• The learning rate is what is known as a hyper-parameter.

• If the learning rate is too small then convergence may take a very long
time.

• If the learning rate is too large then convergence may never happen
because our iterations bounce from one side of the minimum to the other.

• Choosing suitable values for hyper-parameters is an art, so try different
values and plot the results until you find values that work well.
Multi Dimension Linear Regression
with Gradient Descent
• For multi dimension linear
regression our error function
is:

• Differentiating this with
respect to the weights vector
gives:

• We can iteratively reduce the
error by adjusting the weights
against the direction of these
gradients.
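• The error and gradient formulas appear as images on the slide; using the sum of squared errors from earlier, the gradient is 2·Xᵀ(Xw − y), and a minimal descent loop (the learning rate and iteration count are assumed values) looks like:

```python
import numpy as np

def fit_weights_gd(X, y, learning_rate=0.01, iterations=1000):
    """Gradient descent on the sum of squared errors E = (Xw - y)^T (Xw - y)."""
    w = np.random.randn(X.shape[1])           # random initial weights
    for _ in range(iterations):
        gradient = 2 * np.dot(X.T, np.dot(X, w) - y)
        w = w - learning_rate * gradient      # step against the gradient
    return w
```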
Generalisation, Over-fitting &
Regularisation
Generalisation & Over-fitting
• As we train our model more and more, it may start to fit the training data more and
more accurately, but become worse at handling test data that we feed to it later.

• This is known as “over-fitting” and results in an increased generalisation error.

• To minimise the generalisation error we should 

• Collect as much sample data as possible. 

• Use a random subset of our sample data for training.

• Use the remaining sample data to test how well our model copes with data it was
not trained with (a minimal split sketch follows this list).

• Also, take care when adding higher degrees of polynomials (x², x³, etc.) as this can
increase over-fitting.
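• A minimal sketch of the random train/test split described above (the 80/20 ratio and helper name are assumptions):

```python
import numpy as np

def train_test_split(X, y, train_fraction=0.8):
    """Randomly split the samples into a training set and a held-out test set."""
    indices = np.random.permutation(len(X))
    n_train = int(train_fraction * len(X))
    train, test = indices[:n_train], indices[n_train:]
    return X[train], y[train], X[test], y[test]
```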
L1 Regularisation (Lasso)
• Having a large number of samples (n) with respect to the number of
dimensions (d) increases the quality of our model.

• One way to reduce the effective number of dimensions is to use those that
most contribute to the signal and ignore those that mostly act as noise.

• L1 regularisation achieves this by adding a penalty that results in the
weights for the dimensions that act as noise becoming 0.

• L1 regularisation encourages a sparse vector of weights in which few are
non-zero and many are zero.
L1 Regularisation (Lasso)
• In L1 regularisation we add a penalty to
the error function: 

• Expanding this we get: 

• Take the derivative with respect to w to
find our gradient:

• Where sign(w) is -1 if w < 0, 0 if w = 0
and +1 if w > 0

• Note that because sign(w) has no
inverse function we cannot solve for w
and so must use gradient descent.
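• The penalty and gradient appear as images on the slide; a sketch of gradient descent with an L1 penalty on the weights (the penalty strength, learning rate and iteration count are assumed values) is:

```python
import numpy as np

def fit_weights_lasso(X, y, l1=0.1, learning_rate=0.01, iterations=1000):
    """Gradient descent on E = (Xw - y)^T (Xw - y) + l1 * sum(|w|)."""
    w = np.random.randn(X.shape[1])
    for _ in range(iterations):
        # np.sign(w) is -1, 0 or +1 per weight, matching sign(w) above.
        gradient = 2 * np.dot(X.T, np.dot(X, w) - y) + l1 * np.sign(w)
        w = w - learning_rate * gradient
    return w
```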
L2 Regularisation (Ridge)
• Another way to reduce the complexity of our model and prevent overfitting
to outliers is L2 regularisation, which is also known as ridge regression.

• In L2 Regularisation we introduce an additional term to the cost function
that has the effect of penalising large weights, thereby reducing the skew
that outliers can cause.
L2 Regularisation (Ridge)
• In L2 regularisation we add the sum
of the squares of the weights to the
error function.

• Expanding this we get: 

• Take the derivative with respect to
w to find our gradient:
L2 Regularisation (Ridge)
• Solving for the values of w that give
minimal error:
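• The closed-form solution appears as an image on the slide; setting the gradient of the L2-penalised error to zero gives w = (XᵀX + λI)⁻¹ Xᵀy, sketched here (the penalty strength l2 is an assumed parameter):

```python
import numpy as np

def fit_weights_ridge(X, y, l2=0.1):
    """Closed-form ridge solution: solve (X^T X + l2*I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(np.dot(X.T, X) + l2 * np.eye(d), np.dot(X.T, y))
```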
L1 & L2 Regularisation (Elastic Net)
• L1 Regularisation minimises the impact of dimensions that have low
weights and are thus largely “noise”.

• L2 Regularisation minimises the impact of outliers in our training data.

• L1 & L2 Regularisation can be used together and the combination is
referred to as Elastic Net regularisation.

• Because the differential of the error function contains the sign function, which
has no inverse, we cannot solve for w and must use gradient descent.
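• Combining the two penalties, the gradient contains both the L1 sign term and the L2 term; a sketch (the penalty strengths, learning rate and iteration count are assumed values):

```python
import numpy as np

def fit_weights_elastic_net(X, y, l1=0.1, l2=0.1, learning_rate=0.01, iterations=1000):
    """Gradient descent with both L1 and L2 penalties on the weights."""
    w = np.random.randn(X.shape[1])
    for _ in range(iterations):
        gradient = (2 * np.dot(X.T, np.dot(X, w) - y)
                    + l1 * np.sign(w)
                    + 2 * l2 * w)
        w = w - learning_rate * gradient
    return w
```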
Categorical Inputs
One-hot Encoding
• When some inputs are categories (e.g. gender) rather than numbers (e.g.
age) we need to represent the category values as numbers so they can be
used in our linear regression equations.

• In one-hot encoding we allocate each category value its own dimension in
the inputs. So, for example, we allocate X1 to Audi, X2 to BMW & X3 to
Mercedes.

• For Audi X = [1,0,0]

• For BMW X = [0,1,0]

• For Mercedes X = [0,0,1]
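• A minimal sketch of this encoding (the helper name and category order are illustrative assumptions):

```python
import numpy as np

def one_hot(value, categories=("Audi", "BMW", "Mercedes")):
    """Return a vector with a 1 in the position of the matching category."""
    encoding = np.zeros(len(categories))
    encoding[categories.index(value)] = 1.0
    return encoding

# one_hot("BMW") -> array([0., 1., 0.])
```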
Summary
• Single Dimension Linear Regression

• Multi Dimension Linear Regression

• Gradient Descent

• Generalisation, Over-fitting & Regularisation

• Categorical Inputs
