Proprietary Information created by Parth Khare
Machine Learning
Classification & Decision Trees
04/01/2013
2
Contents
 Recursive Partitioning
 Classification
 Regression/Decision
 Bagging
 Random Forest
 Boosting
 Gradient Boosting
 Questions
3
Detail and flow
 What is the difference between supervised and unsupervised learning?
 What is ML? How is it different from classical statistics?
 Supervised learning by a machine -> an application: Trees
 Most elementary analysis: CART
 Tree
4
Basics
 Supervised Learning:
 Called “supervised” because of the presence of the outcome variable to guide the learning process
 Building a learner (model) to predict the outcome for new, unseen objects
 Alternatively,
 Unsupervised Learning:
 observe only the features and have no measurements of the outcome
 task is rather to describe how the data are organized or clustered
5
Machine Learning vis-à-vis Statistics
‘learning’ vis-à-vis ‘fitting’
 Machine learning, a branch of artificial intelligence, is about the construction and study of systems that can learn from data.
 Statistics bases everything on probability models:
 assuming your data are samples from a random variable with some distribution, then making inferences about the parameters of that distribution
 Machine learning may use probability models, and when it does, it overlaps with statistics.
 It isn't as committed to probability
 It may use other approaches to problem solving that are not based on probability
 The basic optimization concept for trees is the same as for parametric techniques: minimize an error metric. Instead of a squared-error function or MLE, machine learning optimizes criteria such as entropy or node impurity
 An application -> Trees
6
Decision Tree Approach: Parlance
 A decision tree represents a hierarchical segmentation of the data
 The original segment is called the root node and is the entire data set
 The root node is partitioned into two or more segments by applying a series of simple rules over an input variable
 For example, risk = low, risk = not low
 Each rule assigns the observations to a segment based on its input value
 Each resulting segment can be further partitioned into sub-segments, and so on
 For example risk = low can be partitioned into income = low and income = not low
 The segments are also called nodes, and the final segments are called leaf nodes or
leaves
 A final node that survives all further partitioning is called a terminal node
7
Decision Tree Example: Risk Assessment (Loan)
Income < $30k:
  Age < 25   -> not on-time
  Age >= 25  -> on-time
Income >= $30k:
  Credit Score < 600   -> not on-time
  Credit Score >= 600  -> on-time
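The tree above can be read as nested if/else rules. A minimal sketch in base R, using hypothetical variable names (income, age, credit_score) purely to mirror the diagram:

```r
# Hand-coded version of the tree above (illustrative only).
predict_on_time <- function(income, age, credit_score) {
  if (income < 30000) {
    if (age < 25) "not on-time" else "on-time"
  } else {
    if (credit_score < 600) "not on-time" else "on-time"
  }
}

predict_on_time(income = 25000, age = 30, credit_score = 550)  # "on-time"
```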
8
CART: Heuristic and Visual
 Generic supervised learning problem:
 given data (x1, y1), (x2, y2), …, (xn, yn) and a new point ‘x’, the supervised learning objective is to associate a ‘y’ with this new ‘x’
 Main idea: form a binary tree and minimize the error in each leaf
 Given a dataset, a decision tree chooses a sequence of binary splits of the data
9
Growing the tree
 Growing the tree involves successively partitioning the data – recursive partitioning
 If an input variable is binary, then the two categories can be used to split the data
(relative concentration of ‘0’’s and ‘1’’s)
 If an input variable is interval, a splitting value is used to classify the data into two
segments
 For example, if household income is interval and there are 100 possible incomes in the
data set, then there are 100 possible splitting values
 For example, income < $30k, and income >= $30k
10
Classification Tree: again (reference)
 Represented by a series of binary splits.
 Each internal node represents a value
query on one of the variables — e.g. “Is
X3 > 0.4”. If the answer is “Yes”, go right,
else go left.
 The terminal nodes are the decision
nodes. Typically each terminal node is
dominated by one of the classes.
 The tree is grown using training data, by
recursive splitting.
 The tree is often pruned to an optimal
size, evaluated by cross-validation.
 New observations are classified by
passing their X down to a terminal node of
the tree, and then using majority vote.
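A minimal sketch of growing, cross-validating and pruning such a tree with the rpart package, assuming a hypothetical data frame `loans` with a factor target `on_time` and predictors income, age and credit_score:

```r
library(rpart)

fit <- rpart(on_time ~ income + age + credit_score,
             data = loans, method = "class",
             control = rpart.control(cp = 0.001, xval = 10))  # 10-fold CV

printcp(fit)                                             # CV error by tree size
best_cp <- fit$cptable[which.min(fit$cptable[, "xerror"]), "CP"]
pruned  <- prune(fit, cp = best_cp)                      # prune to the CV-optimal size

predict(pruned, newdata = loans[1:5, ], type = "class")  # majority-vote leaf labels
```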
11
Evaluating the partitions
 When the target is categorical, for each partition of an input variable a chi-square
statistic is computed
 A contingency table is formed that maps responders and non-responders against the
partitioned input variable
 For example, the null hypothesis might be that there is no difference between people
with income <$30k and those with income >=$30k in making an on-time loan payment
 The lower the p-value, the stronger the evidence for rejecting this hypothesis, meaning that the income split is a discriminating factor
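A sketch of that chi-square test in base R; the contingency-table counts below are invented purely for illustration:

```r
tab <- matrix(c(180, 120,    # income <  $30k: on-time, not on-time
                260,  60),   # income >= $30k: on-time, not on-time
              nrow = 2, byrow = TRUE,
              dimnames = list(income  = c("<30k", ">=30k"),
                              payment = c("on-time", "not on-time")))

chisq.test(tab)  # a small p-value suggests the income split discriminates on-time payment
```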
12
Splitting Criteria: Categorical
 Information Gain -> Entropy
 The rarity of an event is defined as: -log2(pi)
 Impurity Measure:
- Pr(Y=0) × log2[Pr(Y=0)] - Pr(Y=1) × log2[Pr(Y=1)]
e.g. at Pr(Y=0) = 0.5 this equals 1, its maximum (see the sketch below)
 Entropy sums up the rarity of response and non-response over all observations
 Entropy ranges from the best case of 0 (all responders or all non-responders) to 1
(equal mix of responders and non-responders)
link
http://www.youtube.com/watch?v=p17C9q2M00Q
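A small base-R sketch of this entropy measure, treating 0·log2(0) as 0 so that pure nodes get entropy 0:

```r
entropy <- function(p) {            # p = Pr(Y = 0) in a binary node
  terms <- c(p, 1 - p) * log2(c(p, 1 - p))
  -sum(terms[is.finite(terms)])
}

entropy(0.5)  # 1   -- worst case: equal mix of responders and non-responders
entropy(0)    # 0   -- pure node
entropy(0.9)  # ~0.47
```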
13
Splitting Criteria: Continuous
 An F-statistic is used to measure the degree of separation of a split for an interval
target, such as revenue
 Similar to the sum-of-squares discussion under multiple regression, the F-statistic is based on the ratio of the sum of squares between groups to the sum of squares within groups, both adjusted for their degrees of freedom
 The null hypothesis is that there is no difference in the target mean between the two
groups
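A minimal sketch of this F-test in base R, with a simulated interval target standing in for revenue:

```r
set.seed(1)
revenue <- c(rnorm(100, mean = 50), rnorm(100, mean = 55))         # simulated target
grp     <- factor(rep(c("income<30k", "income>=30k"), each = 100))

anova(lm(revenue ~ grp))  # F = (between-group SS / df) / (within-group SS / df)
```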
14
Contents
 Recursive Partitioning
 Classification
 Regression/Decision
 Bagging
 Random Forest
 Boosting
 Gradient Boosting
15
Bagging
 Ensemble models: combine the results from different models
 An ensemble classifier using many decision tree models
 Bagging: Bootstrapped Samples of data
Working: Random Forest
 A different bootstrap sample of the training data (covering roughly 2/3 of the distinct observations) is drawn, with replacement, to train each tree
 Remaining training data (OOB) are used to estimate error and variable importance
 Class assignment is made by the number of votes from all of the trees and for
regression the average of the results is used
 A randomly selected subset of variables is used to split each node
 The number of variables used is decided by the user (mtry parameter in R)
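A minimal random forest sketch with the randomForest package, again assuming the hypothetical `loans` data frame; ntree and mtry are the user-chosen number of trees and variables tried per split:

```r
library(randomForest)

rf <- randomForest(on_time ~ income + age + credit_score,
                   data = loans,
                   ntree = 500,        # number of bootstrapped trees
                   mtry  = 2,          # variables tried at each split
                   importance = TRUE)

rf                                   # prints the OOB error estimate
importance(rf)                       # OOB-based variable importance
predict(rf, newdata = loans[1:5, ])  # class assigned by majority vote across trees
```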
16
Bagging: Stanford
 Suppose
 C(S, x) is a classifier, such as a tree, based
on our training data S, producing a
predicted class label at input point x.
 To bag C, we draw bootstrap samples S∗1, …, S∗B, each of size N, with replacement from the training data.
 Then
 Ĉbag(x) = Majority Vote{ C(S∗b, x) : b = 1, …, B }.
 Bagging can dramatically reduce the
variance of unstable procedures (like
trees), leading to improved prediction.
 However, any simple structure in C (e.g. a tree) is lost.
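A hand-rolled sketch of exactly this recipe (bootstrap B trees, then take a majority vote), assuming the same hypothetical `loans` data:

```r
library(rpart)

bag_trees <- function(data, B = 25) {
  lapply(seq_len(B), function(b) {
    boot <- data[sample(nrow(data), replace = TRUE), ]   # bootstrap sample S*b
    rpart(on_time ~ income + age + credit_score, data = boot, method = "class")
  })
}

predict_bag <- function(trees, newdata) {                # Cbag(x): majority vote
  votes <- sapply(trees, function(t)
    as.character(predict(t, newdata = newdata, type = "class")))
  apply(votes, 1, function(v) names(which.max(table(v))))
}

trees <- bag_trees(loans)
predict_bag(trees, loans[1:5, ])
```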
17
Bootstrapped samples
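The original slide showed bootstrap samples graphically. As a quick numeric stand-in: each bootstrap sample of size N, drawn with replacement, contains on average about 63.2% (roughly 1 - 1/e, the "~2/3" quoted earlier) of the distinct observations; the remainder are the out-of-bag cases.

```r
set.seed(1)
N   <- 10000
idx <- sample(N, replace = TRUE)   # one bootstrap sample of row indices
length(unique(idx)) / N            # ~0.632 of the original rows appear at least once
```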
18
Contents
 Recursive Partitioning
 Classification
 Regression/Decision
 Bagging
 Random Forest
 Boosting
 Gradient Boosting
19
Boosting
Make Copies of Data
 Boosting idea: based on the "strength of weak learnability" principle
 Example:
IF Gender=MALE AND Age<=25 THEN claim_freq.=‘high’
A combination of such weak learners increases accuracy
 Simple or “weak” learners are not perfect!
 Every “boosting” algorithm can be interpreted as optimizing the loss function in a “greedy stage-
wise” manner
Working: Gradient Descent
 A first tree is created and its residuals are observed
 A second tree is then fitted to the residuals of the first, and so on (sketched below)
 In this way, boosting grows trees in series, with later trees dependent on the results of previous trees
 Shrinkage, CV folds, Interaction Depth
 Adaboost, DirectBoost, Laplace Loss(Gaussian Boost)
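A minimal sketch of this residual-fitting loop for a regression target, with small rpart trees as the weak learners and a shrinkage factor on each update; the data are simulated purely for illustration:

```r
library(rpart)

set.seed(1)
x  <- runif(500)
y  <- sin(2 * pi * x) + rnorm(500, sd = 0.3)
df <- data.frame(x = x, y = y)

shrinkage <- 0.1
pred <- rep(mean(y), nrow(df))            # start from a constant model
for (m in 1:100) {
  df$resid <- y - pred                    # residuals of the current ensemble
  tree <- rpart(resid ~ x, data = df,
                control = rpart.control(maxdepth = 2))   # a small, "weak" tree
  pred <- pred + shrinkage * predict(tree, df)           # greedy stage-wise update
}
mean((y - pred)^2)                        # training error falls as trees are added
```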
20
GBM
 Gradient Tree Boosting is a generalization of boosting to arbitrary differentiable loss functions.
GBRT is an accurate and effective off-the-shelf procedure that can be used for both regression and
classification problems.
 What it does essentially
 By sequentially learning from the errors of the previous trees, Gradient Boosting in a way tries to ‘learn’ the unconditional distribution of the target variable. So, analogous to how we use different types of distributions in GLM modeling, GBM replicates the distribution in the given data as closely as possible.
 This comes with an additional risk of over-fitting, mitigated by methods such as internal cross-validation, a minimum number of observations per node, etc.
 Parameters working: OOB data/error
 We know that the first tree of a GBM is built on the training data and the subsequent trees are developed on the errors from the first tree. This process carries on.
 For OOB, the training data is also split into two parts: on one part the trees are developed, and on the other part the trees developed on the first part are tested. This second part is called the OOB data, and the error obtained is known as the OOB error (see the sketch below).
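A sketch of these knobs with the gbm package, assuming a hypothetical 0/1 target on_time01 in `loans`; bag.fraction < 1 is what leaves an out-of-bag portion at each iteration:

```r
library(gbm)

gb <- gbm(on_time01 ~ income + age + credit_score,
          data = loans,
          distribution = "bernoulli",
          n.trees = 2000,
          shrinkage = 0.01,          # learning rate
          interaction.depth = 3,     # depth of each tree
          n.minobsinnode = 10,       # minimum observations per node
          bag.fraction = 0.5,        # each tree sees half the data; the rest is OOB
          cv.folds = 5)

gbm.perf(gb, method = "OOB")  # number of trees chosen from the OOB estimate
gbm.perf(gb, method = "cv")   # number of trees chosen by cross-validation
```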
21
Summary: RF and GBM
Main similarities:
 Both derive many benefits from ensembling, with few disadvantages
 Both can be applied to ensembling decision trees
Main differences:
 Boosting performs an exhaustive search for the best predictor to split on; RF searches only a small subset
 Boosting grows trees in series, with later trees dependent on the results of previous trees
 RF grows trees in parallel, independently of one another
 RF cannot work with missing values; GBM can
22
More differences between RF and GBM
 Algorithmic difference:
 Random Forests are trained on random samples of the data (with further randomization available, such as feature randomization) and trust randomization to give better generalization performance outside the training set.
 At the other end of the spectrum, the Gradient Boosted Trees algorithm additionally tries to find an optimal linear combination of trees (the final model is a weighted sum of the predictions of the individual trees) with respect to the given training data. This extra tuning may be deemed the key difference. Note that there are many variations of these algorithms as well.
 On the practical side, owing to this tuning stage:
 Gradient Boosted Trees are more susceptible to noisy data. The tuning stage makes GBT more likely to overfit; if the test cases differ markedly from the training cases, the algorithm starts to fall behind.
 By contrast, Random Forests are better at resisting overfitting, although they lag in the other respects.
23
Questions
 Concept/ Interpretation
 Application
For further details contact:
Parth Khare
https://www.linkedin.com/profile/view?id=43877647&trk=nav_responsive_tab_profile