Classification
Based Machine
Learning
Algorithms
Md Main Uddin Rony,
Software Engineer
1
What is Classification?
Classification is a data mining task of predicting the value of a
categorical variable (target or class)
This is done by building a model based on one or more numerical
and/or categorical variables (predictors, attributes, or features)
Considered an instance of supervised learning
Corresponding unsupervised procedure is known as clustering
2
Classification
Based Algorithms
Four main groups of classification
algorithms are:
● Frequency Table
- ZeroR
- OneR
- Naive Bayesian
- Decision Tree
● Covariance Matrix
- Linear Discriminant Analysis
- Logistic Regression
● Similarity Functions
- K Nearest Neighbours
● Others
- Artificial Neural Network
- Support Vector Machine
3
4
Naive Bayes Classifier
● Works based on Bayes’ theorem
● Why is it called Naive?
- Because it assumes that the presence of a particular feature
in a class is unrelated to the presence of any other feature
● Easy to build
● Useful for very large data sets
Bayes’ Theorem
The theorem can be stated mathematically as follows:
P(A | B) = P(B | A) · P(A) / P(B)
P(A) and P(B) are the probabilities of observing A and B without regard
to each other, also known as prior probabilities.
P(A | B), a conditional (posterior) probability, is the probability of
observing event A given that B is true.
P(B | A) is the conditional (posterior) probability of observing event B
given that A is true.
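As a quick illustration with made-up numbers: if P(A) = 0.1, P(B) = 0.2, and P(B | A) = 0.5, then
P(A | B) = P(B | A) · P(A) / P(B) = (0.5 × 0.1) / 0.2 = 0.25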
So, how does the Naive Bayes classifier work based on this?
5
How Does Naive Bayes Work?
● Let D be a training set of tuples, where each tuple is represented by an n-dimensional
attribute vector, X = (x1, x2, ..., xn)
● Suppose that there are m classes, C1, C2, ..., Cm. Given a tuple X, the classifier will
predict that X belongs to the class having the highest posterior probability, conditioned
on X. That is, the Naive Bayesian classifier predicts that tuple X belongs to the class Ci
if and only if P(Ci | X) > P(Cj | X) for 1 ≤ j ≤ m, j ≠ i
● By Bayes’ theorem, P(Ci | X) = P(X | Ci) · P(Ci) / P(X)
● Since P(X) is constant for all classes, only P(X | Ci) · P(Ci) needs to be maximized
6
How Does Naive Bayes Work? (Contd.)
● To reduce computation in evaluating P(X | Ci), the naive assumption of
class-conditional independence is made. This presumes that the attributes’ values are
conditionally independent of one another, given the class label of the tuple (i.e., that
there are no dependence relationships among the attributes). This assumption is
called class-conditional independence.
● Thus, P(X | Ci) = P(x1 | Ci) · P(x2 | Ci) · ... · P(xn | Ci)
7
How Naive
Bayes
Works?
(Hands on
Calculation)
Given all the previous patients’ symptoms and
diagnoses, does the patient with the following
symptoms have the flu?
8
chills runny nose headache fever flu?
Y N Mild Y N
Y Y No N Y
Y N Strong Y Y
N Y Mild Y Y
N N No N N
N Y Strong Y Y
N Y Strong N N
Y Y Mild Y Y
chills runny nose headache fever flu?
Y N Mild N ?
How Naive
Bayes
Works?
(Hands on
Calculation)
Contd.
First, we compute all possible individual
probabilities conditioned on the target attribute
(flu).
9
P(Flu=Y) 0.625 P(Flu=N) 0.375
P(chills=Y|flu=Y) 0.6 P(chills=Y|flu=N) 0.333
P(chills=N|flu=Y) 0.4 P(chills=N|flu=N) 0.666
P(runny nose=Y|flu=Y) 0.8 P(runny nose=Y|flu=N) 0.333
P(runny nose=N|flu=Y) 0.2 P(runny nose=N|flu=N) 0.666
P(headache=Mild|flu=Y) 0.4 P(headache=Mild|flu=N) 0.333
P(headache=No|flu=Y) 0.2 P(headache=No|flu=N) 0.333
P(headache=Strong|flu=Y) 0.4 P(headache=Strong|flu=N) 0.333
P(fever=Y|flu=Y) 0.8 P(fever=Y|flu=N) 0.333
P(fever=N|flu=Y) 0.2 P(fever=N|flu=N) 0.666
How Naive
Bayes
Works?
(Hands on
Calculation)
Contd.
And then decide by comparing the two (unnormalized) posteriors:
P(flu=Y | X) ∝ P(chills=Y | flu=Y) · P(runny nose=N | flu=Y) · P(headache=Mild | flu=Y) · P(fever=N | flu=Y) · P(flu=Y)
= 0.6 * 0.2 * 0.4 * 0.2 * 0.625
= 0.006
VS
P(flu=N | X) ∝ P(chills=Y | flu=N) · P(runny nose=N | flu=N) · P(headache=Mild | flu=N) · P(fever=N | flu=N) · P(flu=N)
= 0.333 * 0.666 * 0.333 * 0.666 * 0.375
= 0.0184
Since 0.0184 > 0.006, the Naive Bayes classifier predicts that the patient
doesn’t have the flu.
10
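The same calculation can be scripted. Below is a minimal Python sketch (not part of the original slides) that uses plain frequency counts, exactly as in the hand calculation above; all names are illustrative:

```python
from collections import Counter, defaultdict

# Training rows from the slide: (chills, runny nose, headache, fever) -> flu
rows = [
    ("Y", "N", "Mild",   "Y", "N"),
    ("Y", "Y", "No",     "N", "Y"),
    ("Y", "N", "Strong", "Y", "Y"),
    ("N", "Y", "Mild",   "Y", "Y"),
    ("N", "N", "No",     "N", "N"),
    ("N", "Y", "Strong", "Y", "Y"),
    ("N", "Y", "Strong", "N", "N"),
    ("Y", "Y", "Mild",   "Y", "Y"),
]

# Class counts and per-feature conditional counts
class_counts = Counter(r[-1] for r in rows)
cond_counts = defaultdict(Counter)          # key: (feature index, class) -> Counter of values
for r in rows:
    flu = r[-1]
    for i, value in enumerate(r[:-1]):
        cond_counts[(i, flu)][value] += 1

def score(x, flu):
    """Unnormalized posterior: P(flu) * product of P(x_i | flu)."""
    p = class_counts[flu] / len(rows)
    for i, value in enumerate(x):
        p *= cond_counts[(i, flu)][value] / class_counts[flu]
    return p

query = ("Y", "N", "Mild", "N")             # chills=Y, runny nose=N, headache=Mild, fever=N
scores = {flu: score(query, flu) for flu in class_counts}
print(scores)                               # {'N': ~0.0185, 'Y': 0.006}
print(max(scores, key=scores.get))          # 'N' -> the patient is predicted not to have the flu
```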
The Decision
Tree Classifier
11
Decision Tree
● A decision tree builds classification or regression models in the form of a
tree structure
● It breaks down a dataset into smaller and smaller subsets while at the
same time an associated decision tree is incrementally developed.
● The final result is a tree with decision nodes and leaf nodes.
- A decision node has two or more branches
- Leaf node represents a classification or decision
● The topmost decision node in a tree, which corresponds to the best
predictor, is called the root node
● Decision trees can handle both categorical and numerical data
12
Example Set
we will work
on...
13
Outlook Temp Humidity Windy Play Golf
Rainy Hot High False No
Rainy Hot High True No
Overcast Hot High False Yes
Sunny Mild High False Yes
Sunny Cool Normal False Yes
Sunny Cool Normal True No
Overcast Cool Normal True Yes
Rainy Mild High False No
Rainy Cool Normal False Yes
Sunny Mild Normal False Yes
Rainy Mild Normal True Yes
Overcast Mild High True Yes
Overcast Hot Normal False Yes
Sunny Mild High True No
So, our
tree looks
like this...
14
How it works
● The core algorithm for building decision trees, called ID3, was developed
by J. R. Quinlan
● ID3 uses Entropy and Information Gain to construct a
decision tree
15
Entropy
● A decision tree is built top-down from a root node and
involves partitioning the data into subsets that contain
instances with similar values (homogeneous)
● ID3 algorithm uses entropy to calculate the homogeneity
of a sample
● If the sample is completely homogeneous, the entropy is
zero; if the sample is equally divided, it has an entropy
of one
16
Compute Two
Types of
Entropy
● To build a decision tree, we need to calculate
two types of entropy using frequency tables
as follows:
● a) Entropy using the frequency table of one
attribute (entropy of the target):
E(S) = − Σi pi log2(pi), summed over the class proportions pi
● b) Entropy using the
frequency table of two
attributes:
E(T, X) = Σc P(c) · E(c), summed over the values c of attribute X
18
Information
Gain
● The information gain is based on the decrease
in entropy after a dataset is split on an attribute:
Gain(T, X) = Entropy(T) − Entropy(T, X)
● Constructing a decision tree is all about finding the
attribute that returns the highest information
gain (i.e., the most homogeneous branches)
19
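As a concrete sketch, the following Python snippet (not part of the original slides) computes the target entropy and the information gain of each attribute for the Play Golf table shown earlier:

```python
from math import log2
from collections import Counter

# Play Golf table from the slides: (Outlook, Temp, Humidity, Windy) -> Play Golf
data = [
    ("Rainy", "Hot", "High", False, "No"),       ("Rainy", "Hot", "High", True, "No"),
    ("Overcast", "Hot", "High", False, "Yes"),   ("Sunny", "Mild", "High", False, "Yes"),
    ("Sunny", "Cool", "Normal", False, "Yes"),   ("Sunny", "Cool", "Normal", True, "No"),
    ("Overcast", "Cool", "Normal", True, "Yes"), ("Rainy", "Mild", "High", False, "No"),
    ("Rainy", "Cool", "Normal", False, "Yes"),   ("Sunny", "Mild", "Normal", False, "Yes"),
    ("Rainy", "Mild", "Normal", True, "Yes"),    ("Overcast", "Mild", "High", True, "Yes"),
    ("Overcast", "Hot", "Normal", False, "Yes"), ("Sunny", "Mild", "High", True, "No"),
]
attributes = ["Outlook", "Temp", "Humidity", "Windy"]

def entropy(labels):
    """E(S) = -sum(p_i * log2(p_i)) over the class proportions."""
    counts, total = Counter(labels), len(labels)
    return -sum(c / total * log2(c / total) for c in counts.values())

def info_gain(data, attr_index):
    """Gain(T, X) = Entropy(T) - sum_v P(v) * Entropy(subset where X = v)."""
    split_entropy = 0.0
    for value in {row[attr_index] for row in data}:
        subset = [row[-1] for row in data if row[attr_index] == value]
        split_entropy += len(subset) / len(data) * entropy(subset)
    return entropy([row[-1] for row in data]) - split_entropy

print(round(entropy([row[-1] for row in data]), 3))   # 0.94 (entropy of the target)
for i, name in enumerate(attributes):
    print(name, round(info_gain(data, i), 3))         # Outlook ~0.247 is the largest gain
```

Outlook returns the largest gain here, so it is the attribute chosen as the root decision node in Step 3 below.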
Example
● Step 1: Calculate entropy of the target
20
Example
● Step 2: The dataset is then split on
the different attributes.
The entropy for each branch is
calculated.
● Then it is added proportionally, to
get total entropy for the split.
● The resulting entropy is subtracted
from the entropy before the split.
● The result is the Information Gain,
or decrease in entropy
21
Example
22
Example
● Step 3: Choose the attribute with the largest information gain as the
decision node
23
Example
● Step 4a: A branch with entropy of 0 is a leaf node.
24
Example
● Step 4b: A branch with entropy more than 0 needs further splitting.
25
Example
● Step 5: The ID3 algorithm is run
recursively on the non-leaf
branches, until all data is classified.
26
Decision
Tree to
Decision
Rules
● A decision tree can easily be transformed into a
set of rules by tracing the path from the root node
to each leaf node
27
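As an illustration, assuming the tree built from the Play Golf data has Outlook at the root (the Overcast branch is pure, the Sunny branch splits on Windy, and the Rainy branch splits on Humidity), the extracted rules read like this hypothetical sketch:

```python
def play_golf(outlook, humidity, windy):
    """Decision rules read off an assumed ID3 tree for the slides' Play Golf data."""
    if outlook == "Overcast":
        return "Yes"                                   # R1: Outlook=Overcast -> Play=Yes
    if outlook == "Sunny":
        return "No" if windy else "Yes"                # R2/R3: Sunny branch splits on Windy
    return "Yes" if humidity == "Normal" else "No"     # R4/R5: Rainy branch splits on Humidity
```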
Any idea about
Random Forest??
After all, Forests are made of trees….
28
K Nearest Neighbors
Classification
29
k-NN Algorithm
● K nearest neighbors is a simple algorithm that stores all available cases
and classifies new cases based on a similarity measure (e.g., distance
functions)
● KNN has been used in statistical estimation and pattern recognition
since the early 1970s
● A case is classified by a majority vote of its neighbors, with the case being
assigned to the class most common amongst its K nearest neighbors
measured by a distance function
● If k = 1, then what will it do?
30
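A minimal Python sketch of this procedure (Euclidean distance plus a majority vote); the training points below are made up purely for illustration:

```python
from math import dist                      # Euclidean distance (Python 3.8+)
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training cases.

    `train` is a list of (feature_vector, label) pairs.
    """
    neighbors = sorted(train, key=lambda case: dist(case[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical (age, loan) cases in the spirit of the credit-default example that follows
train = [((25, 40000), "N"), ((35, 60000), "N"), ((45, 80000), "Y"),
         ((33, 150000), "Y"), ((52, 18000), "N"), ((48, 220000), "Y")]
print(knn_predict(train, (48, 142000), k=1))   # with k=1, the single nearest case decides
print(knn_predict(train, (48, 142000), k=3))   # with k=3, the majority of the 3 nearest decides
```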
Diagram
31
Distance measures for continuous variables (e.g., Euclidean, Manhattan, Minkowski)
32
How many
neighbors?
● Choosing the optimal value for K is best
done by first inspecting the data
● In general, a larger K value is more
precise as it reduces the overall noise,
but there is no guarantee
● Cross-validation is another way to
retrospectively determine a good K
value by using an independent dataset
to validate the K value
● Historically, the optimal K for most
datasets has been between 3 and 10, which
produces much better results than 1NN
33
Example
● Consider the following data concerning credit default. Age and Loan are
two numerical variables (predictors) and Default is the target
34
Example
● We can now use the training set to classify an
unknown case (Age=48 and Loan=$142,000)
using Euclidean distance.
● If K=1 then the nearest neighbor is the last
case in the training set with Default=Y
● D = Sqrt[(48-33)^2 + (142000-150000)^2] =
8000.01 >> Default=Y
● With K=3, there are two Default=Y and one
Default=N out of three closest neighbors. The
prediction for the unknown case is again
Default=Y
35
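That distance can be checked numerically (a throwaway calculation, assuming the last training case is Age=33 and Loan=$150,000 as stated above):

```python
from math import dist
print(round(dist((48, 142000), (33, 150000)), 2))   # 8000.01
```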
Standardized
Distance
● One major drawback in calculating distance
measures directly from the training set is in the
case where variables have different
measurement scales or there is a mixture of
numerical and categorical variables.
● For example, if one variable is based on annual
income in dollars, and the other is based on
age in years then income will have a much
higher influence on the distance calculated.
● One solution is to standardize the training set
36
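One common way to do that is min-max scaling, sketched below; the choice of scaler (and the sample values) is an assumption, and z-score standardization works in the same spirit:

```python
def min_max_scale(values):
    """Rescale a list of numbers to the [0, 1] range: (x - min) / (max - min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

ages = [25, 35, 45, 33, 52, 48]                         # hypothetical ages
loans = [40000, 60000, 80000, 150000, 18000, 220000]    # hypothetical loan amounts
# After scaling, age and loan contribute comparably to any distance calculation.
scaled_cases = list(zip(min_max_scale(ages), min_max_scale(loans)))
print(scaled_cases)
```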
Standardized
Distance
Using the standardized distance on the same
training set, the unknown case returns a different
nearest neighbor, which is not a good sign of robustness.
37
Some
Confusions...
What will happen if k is a multiple of the
number of categories (labels)?
What will happen if k = 1?
What will happen if we set k equal to the
dataset size?
38
Acknowledgements...
Contents are borrowed from…
1. Data Mining: Concepts and Techniques by Jiawei Han, Micheline Kamber,
and Jian Pei
2. Naive Bayes Example (YouTube video) by Francisco Iacobelli
(https://www.youtube.com/watch?v=ZAfarappAO0)
3. Predicting the Future: Classification,
presented by Dr. Noureddin Sadawi
(https://github.com/nsadawi/DataMiningSlides/blob/master/Slides.pdf)
39
Questions??
40
Then Thanks...
41
