DATA ANALYTICS
Evaluation Metrics for Supervised Learning
Models of Machine Learning
Md. Main Uddin Rony
Software Developer, Infolytx, Inc.
Machine Learning Evaluation Metrics
ML Evaluation Metrics Are…
● tied to the Machine Learning task at hand
● methods that quantify an algorithm’s performance and behavior
● helpful for deciding which model best meets the target performance
● helpful for parameterizing the model so that it yields the best-performing
algorithm
Evaluation Metrics Types...
● There are various types of ML algorithms (classification, regression, ranking,
clustering)
● Different types of algorithms call for different types of evaluation metrics
● Some metrics are useful for more than one type of algorithm
(e.g., Precision - Recall)
● We will cover evaluation metrics for supervised learning models only
(Classification, Regression, Ranking)
Classification Metrics
Classification Model Does...
Predict class labels given input data
In binary classification, there are two possible output classes (0 or 1, True
or False, Positive or Negative, Yes or No, etc.)
Spam detection for email is a good example of binary classification.
Some Popular Classification Metrics...
Accuracy
Confusion Matrix
Log-Loss
AUC
Accuracy
● The ratio of the number of correct predictions to the total number of
predictions
● Example: Suppose we have 100 examples in the positive class and 200
examples in the negative class. Our model declares 80 out of 100
positives as positive correctly and 195 out of 200 negatives as negative
correctly.
● So, accuracy is = (80 + 195)/(100 + 200) = 91.7%
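A minimal sketch of this calculation, assuming NumPy and scikit-learn are available; the label arrays are made up to mirror the 100-positive / 200-negative example above:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Hypothetical labels mirroring the slide's example:
# 100 positives (80 classified correctly), 200 negatives (195 classified correctly).
y_true = np.array([1] * 100 + [0] * 200)
y_pred = np.array([1] * 80 + [0] * 20 + [0] * 195 + [1] * 5)

print(accuracy_score(y_true, y_pred))  # 0.9166... ≈ 91.7%
```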
Confusion Matrix
● Shows a more detailed breakdown of correct and incorrect classifications for each
class.
● For our previous example, the confusion matrix looks like this:
● What is the accuracy of the positive class? And of the negative class?
● Clearly, the positive class has lower accuracy than the negative class
● That information is lost if we calculate only the overall accuracy.
                      Predicted as positive   Predicted as negative
Labeled as positive            80                      20
Labeled as negative             5                     195
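The same breakdown can be produced with scikit-learn's confusion_matrix; a small sketch reusing the hypothetical label arrays from the accuracy example:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Same hypothetical labels as in the accuracy sketch above.
y_true = np.array([1] * 100 + [0] * 200)
y_pred = np.array([1] * 80 + [0] * 20 + [0] * 195 + [1] * 5)

# Rows are actual classes, columns are predicted classes.
# labels=[1, 0] orders the matrix as [positive, negative] to match the slide.
print(confusion_matrix(y_true, y_pred, labels=[1, 0]))
# [[ 80  20]
#  [  5 195]]
```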
Per-Class Accuracy
● Average per-class accuracy for the previous example:
(80% + 97.5%) / 2 = 88.75%, which differs from the overall accuracy
Why is it important?
- It shows a different picture when the classes have different numbers of
examples
- The class with more examples dominates the overall accuracy statistic,
producing a distorted picture
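A minimal NumPy sketch of per-class accuracy computed from the confusion matrix above (the matrix entries are the slide's example numbers):

```python
import numpy as np

# Confusion matrix from the example (rows: actual class, columns: predicted class).
cm = np.array([[80, 20],
               [5, 195]])

# Per-class accuracy = correct predictions for a class / all examples of that class.
per_class = cm.diagonal() / cm.sum(axis=1)
print(per_class)         # [0.8   0.975]
print(per_class.mean())  # 0.8875 -> 88.75%, versus 91.7% overall accuracy
```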
Log-Loss
Very useful when the raw output of the classifier is a numeric probability
instead of a hard class label of 0 or 1
Mathematically, the log-loss for a binary classifier is:
log-loss = -(1/N) Σ_i [ y_i log(p_i) + (1 - y_i) log(1 - p_i) ]
The minimum is 0, reached when the predictions and the true labels match up
Exercise: calculate the log-loss for a data point whose true label is 1 when the
classifier predicts class 1 with probability 0.51, and when it predicts it with
probability 1
Minimizing this value tends to maximize the accuracy of the classifier
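A small sketch of the binary log-loss with NumPy, using the natural logarithm; the helper name and the clipping constant are illustrative, not from the slides:

```python
import numpy as np

def binary_log_loss(y_true, p_pred, eps=1e-15):
    """-(1/N) * sum( y*log(p) + (1-y)*log(1-p) ), with probabilities clipped away from 0 and 1."""
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# The slide's exercise: a data point whose true label is 1,
# predicted to belong to class 1 with probability 0.51 vs. probability 1.
print(binary_log_loss([1], [0.51]))  # ~0.67
print(binary_log_loss([1], [1.0]))   # ~0 (exactly 0 without the numerical clip)
```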
AUC (Area Under Curve)
● The curve is the receiver operating
characteristic (ROC) curve
● Provides nuanced details about the
behavior of the classifier
● Bad ROC curve covers very little area
● Good ROC curve has a lot of space
under it
● But, how?
AUC (contd..)
● So, what’s the advantage of using an ROC curve over a simpler metric?
The ROC curve visualizes all possible classification thresholds, whereas
other metrics only represent the error rate at a single threshold
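A minimal sketch of how the ROC curve and AUC are typically computed with scikit-learn; the labels and predicted probabilities below are made-up toy data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical true labels and predicted probabilities for class 1.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.05, 0.6, 0.7, 0.3])

# roc_curve sweeps over the classification thresholds; each threshold
# yields one (false positive rate, true positive rate) point on the curve.
fpr, tpr, thresholds = roc_curve(y_true, y_score)

print(roc_auc_score(y_true, y_score))  # 0.96 for this toy data
```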
Ranking Metrics
Ranking ...
Is related to binary classification
Internet search is a good example of a system that acts as a ranker.
Given a query, it returns a ranked list of web pages relevant to that query
So, ranking involves a binary classification of each result as “relevant”
or “irrelevant” to the query
It also orders the results so that the most relevant ones appear at the top
So, what can an underlying implementation do to handle both aspects?
Can we predict what ranking metrics will evaluate, and how?
Some Ranking Metrics..
Precision - Recall
Precision - Recall Curve and F1 Score
NDCG
Precision - Recall
Considering the scenario of a web search result, Precision answers this
question:
“Out of the items that the ranker/classifier predicted to be relevant, how many are
truly relevant?”
Whereas, Recall answers this:
“Out of all the items that are truly relevant, how many are found by the
ranker/classifier?”
Precision - Recall (Contd..)
Calculation Example of Precision - Recall
Total Negative = 9760 + 140 = 9900
Total Positive = 40 + 60 = 100
Total Negative predictions = 9760 + 40 = 9800
Total Positive predictions = 140 + 60 = 200

Precision = TP / (TP + FP) = 60 / (60 + 140) = 30%
Recall = TP / (TP + FN) = 60 / (60 + 40) = 60%

                    Predicted as Negative   Predicted as Positive
Actual Negative          9760 (TN)               140 (FP)
Actual Positive            40 (FN)                60 (TP)
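The same arithmetic as a tiny Python sketch (the variable names are just illustrative):

```python
# Counts taken from the confusion matrix above.
TP, FP, FN, TN = 60, 140, 40, 9760

precision = TP / (TP + FP)  # 60 / 200 = 0.30 -> 30%
recall = TP / (TP + FN)     # 60 / 100 = 0.60 -> 60%
print(precision, recall)
```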
Precision - Recall Curve
When the number of answers returned by
the ranker changes, the precision and
recall scores also change
By plotting precision versus recall over a
range of values of k, where k denotes the
number of results returned, we get the
precision - recall curve
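A minimal sketch of precision@k and recall@k over a range of k, from which the precision - recall curve can be plotted; the 0/1 relevance list and the helper name are illustrative:

```python
import numpy as np

def precision_recall_at_k(relevance, k):
    """Precision and recall when the ranker returns only the top-k results.

    `relevance` holds 0/1 relevance judgments over the ranked result list,
    ordered from the top result downward.
    """
    relevance = np.asarray(relevance)
    retrieved_relevant = relevance[:k].sum()
    total_relevant = relevance.sum()
    return retrieved_relevant / k, retrieved_relevant / total_relevant

ranked = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]  # made-up relevance of a ranked result list
for k in range(1, len(ranked) + 1):
    p, r = precision_recall_at_k(ranked, k)
    print(f"k={k:2d}  precision={p:.2f}  recall={r:.2f}")
# Plotting precision against recall over these k values traces the curve.
```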
Computing Precision-Recall Point
Interpolating a Recall/Precision Curve
Trade-off between Recall and Precision
F-Measure
A single measure of performance that takes both recall and precision into
account
It is the harmonic mean of recall and precision:
F1 = 2 * (Precision * Recall) / (Precision + Recall)
Compared to the arithmetic mean, both values need to be high for the
harmonic mean to be high
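A tiny sketch using the precision and recall from the earlier example (30% and 60%), showing how the harmonic mean penalizes the imbalance more than the arithmetic mean does:

```python
# Harmonic mean (F1) vs. arithmetic mean, with the earlier example's values.
precision, recall = 0.30, 0.60

f1 = 2 * precision * recall / (precision + recall)
arithmetic = (precision + recall) / 2

print(f1)          # ≈ 0.40 (dragged down by the low precision)
print(arithmetic)  # 0.45   (hides how low the precision is)
```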
NDCG
● Precision and recall treat all retrieved items equally.
● But do a relevant item in position 1 and a relevant item in position 5 carry
the same significance?
● Think about a web search result
● NDCG takes this scenario into account.
What?
● NDCG stands for Normalized Discounted Cumulative Gain
● First just focus on DCG (Discounted Cumulative Gain)
Discounted Cumulative Gain
● A popular measure for evaluating web search and related tasks.
● Discounts items that appear further down the search result list
● Two assumptions:
- Highly relevant documents are more useful than marginally relevant
documents
- The lower the ranked position of a relevant document, the less useful it is
to the user, since it is less likely to be examined
Discounted Cumulative Gain
● Uses graded relevance as a measure of the usefulness, or gain, from
examining a document
● Gain is accumulated starting at the top of the ranking and may be
reduced, or discounted, at lower ranks
● Typical discount is 1/log (rank)
- With base 2, the discount at rank 4 is ½, and at rank 8 it is 1/3
Discounted Cumulative Gain
● DCG is the total gain accumulated at a particular rank p:
DCG_p = rel_1 + Σ_{i=2..p} rel_i / log2(i)
● Alternative formulation:
DCG_p = Σ_{i=1..p} (2^rel_i - 1) / log2(1 + i)
- used by some web search companies
- puts emphasis on retrieving highly relevant documents
* Equations used from Addison Wesley’s presentation
DCG Example
● 10 ranked documents judged on 0-3 relevance scale:
3, 2, 3, 0, 0, 1, 2, 2, 3, 0
● Discounted gain:
3, 2/1, 3/1.59, 0, 0, 1/2.59, 2/2.81, 2/3, 3/3.17, 0
= 3, 2, 1.89, 0, 0, 0.39, 0.71, 0.67, 0.95, 0
● DCG:
3, 5, 6.89, 6.89, 6.89, 7.28, 7.99, 8.66, 9.61, 9.61
* Example used from Addison Wesley’s
presentation
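A small sketch that reproduces the slide's cumulative DCG numbers, assuming the first formulation (rel_1 plus discounted gains from rank 2 onward); the function name is illustrative:

```python
import math

def dcg_at_each_rank(rels):
    """Cumulative DCG at every rank, using DCG_p = rel_1 + sum_{i=2..p} rel_i / log2(i)."""
    gains = [rels[0]] + [rels[i] / math.log2(i + 1) for i in range(1, len(rels))]
    dcg, total = [], 0.0
    for gain in gains:
        total += gain
        dcg.append(round(total, 2))
    return dcg

rels = [3, 2, 3, 0, 0, 1, 2, 2, 3, 0]  # relevance judgments from the example
print(dcg_at_each_rank(rels))
# [3.0, 5.0, 6.89, 6.89, 6.89, 7.28, 7.99, 8.66, 9.61, 9.61]
```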
Normalized DCG
● Normalized version of discounted cumulative gain
● Often normalized by comparing the DCG at each rank with the DCG value
for the perfect ranking
● Normalized score always lies between 0.0 and 1.0
NDCG Example
● Let’s look back at the list of ranked documents judged on the relevance scale:
3, 2, 3, 0, 0, 1, 2, 2, 3, 0
● Perfect ranking:
3, 3, 3, 2, 2, 2, 1, 0, 0, 0
● Perfect discounted gain:
3, 3/1, 3/1.59, 2/2, 2/2.32, 2/2.59, 1/2.81, 0, 0, 0
= 3, 3, 1.89, 1, 0.86, 0.77, 0.36, 0, 0, 0
NDCG Example
● Ideal DCG values:
3, 6, 7.89, 8.89, 9.75, 10.52, 10.88, 10.88, 10.88, 10.88
● NDCG values (divide actual DCG by ideal DCG):
3/3, 5/6, 6.89/7.89, 6.89/8.89, 6.89/9.75, 7.28/10.52,
7.99/10.88, 8.66/10.88, 9.61/10.88, 9.61/10.88
= 1, 0.83, 0.87, 0.76, 0.71, 0.69, 0.73, 0.8, 0.88, 0.88
(for the actual relevance list 3, 2, 3, 0, 0, 1, 2, 2, 3, 0)
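A minimal sketch of NDCG under the same DCG formulation; the printed values closely match the slide's, with a small difference at rank 4 (0.78 here vs. 0.76 on the slide):

```python
import math

def dcg(rels, p):
    """DCG at rank p: rel_1 + sum_{i=2..p} rel_i / log2(i)."""
    return rels[0] + sum(rels[i] / math.log2(i + 1) for i in range(1, p))

actual = [3, 2, 3, 0, 0, 1, 2, 2, 3, 0]
ideal = sorted(actual, reverse=True)  # perfect ranking: 3, 3, 3, 2, 2, 2, 1, 0, 0, 0

# NDCG at each rank = actual DCG divided by the DCG of the perfect ranking.
ndcg = [dcg(actual, p) / dcg(ideal, p) for p in range(1, len(actual) + 1)]
print([round(v, 2) for v in ndcg])
# [1.0, 0.83, 0.87, 0.78, 0.71, 0.69, 0.73, 0.8, 0.88, 0.88]
```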
Regression Metrics
What Regression Tasks do?
The model learns to predict numeric scores.
For example, we might try to predict the price of a stock on future days given
its past price history and other useful information
Some Regression Metrics..
RMSE (Root Mean Square Error)
Quantiles of Errors
RMSE
The most commonly used metric for regression tasks
Also known as RMSD (root-mean-square deviation)
It is defined as the square root of the average squared distance between
the actual scores and the predicted scores:
RMSE = sqrt( (1/n) Σ_i (y_i - ŷ_i)² )
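A minimal RMSE sketch in NumPy; the actual and predicted stock prices are made up:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error: sqrt(mean((y_true - y_pred)^2))."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

actual = [102.0, 105.5, 101.2, 99.8, 103.4]      # hypothetical stock prices
predicted = [101.0, 106.0, 100.0, 101.0, 103.0]  # hypothetical model output
print(rmse(actual, predicted))
```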
Quantiles of Errors
RMSE is an average, so it is sensitive to large outliers.
If the regressor performs really badly on even a single data point, the average
error can become large, so RMSE is not robust
Quantiles (or percentiles) of the error are much more robust,
because they are not affected by large outliers
It is often useful to look at the median absolute percentage error:
MAPE = median( |(y_i - ŷ_i) / y_i| )
It gives us a relative measure of the typical error.
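A minimal sketch contrasting RMSE with error quantiles on made-up data containing one wild miss (all numbers are illustrative):

```python
import numpy as np

actual = np.array([102.0, 105.5, 101.2, 99.8, 103.4, 100.0])
predicted = np.array([101.0, 106.0, 100.0, 101.0, 103.0, 150.0])  # last point is a wild miss

abs_pct_error = np.abs((actual - predicted) / actual)

# RMSE is dragged up by the single outlier; the median absolute percentage
# error stays close to the typical error.
print(np.sqrt(np.mean((actual - predicted) ** 2)))     # large, dominated by the outlier
print(np.median(abs_pct_error))                        # robust "typical" relative error
print(np.percentile(abs_pct_error, [25, 50, 75, 90]))  # other quantiles of the error
```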
Acknowledgement
Evaluating Machine Learning Models by Alice Zheng
Many slides in this section are adapted from Prof. Joydeep Ghosh (UT ECE)
who in turn adapted them from Prof. Dik Lee (Univ. of Science and Tech,
Hong Kong)
Tutorial of Data School on ROC Curves and AUC by Kevin Markham
Questions???
Thank You
  • 45. Acknowledgement Evaluating Machine Learning Models by Alice Zheng Many slides in this section are adapted from Prof. Joydeep Ghosh (UT ECE) who in turn adapted them from Prof. Dik Lee (Univ. of Science and Tech, Hong Kong) Tutorial of Data School on ROC Curves and AUC by Kevin Markham