Clustering in Data Mining
 Atta Ul Mustafa (4425)
 Armgan Ali (4424)
 Ali Raza (4427)
 Atif Ali (4407)
 Abdul Rehman (4403)
 Introduction
 Clustering
 Why Clustering?
 Types of clustering
 Methods of clustering
 Applications of clustering
 Clustering is the process of grouping a set of abstract objects into classes of similar objects.
Points to Remember
 A cluster of data objects can be treated as one group.
 In cluster analysis, we first partition the set of data into groups based on data similarity and then assign labels to the groups (a minimal workflow is sketched below).
 The main advantage of clustering over classification is that it is adaptable to changes and helps single out useful features that distinguish different groups.
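As a minimal sketch of that partition-then-label workflow (assuming scikit-learn and NumPy are available; the synthetic blob data stands in for a real dataset):

```python
# Minimal sketch: partition objects by similarity, then attach group labels.
# Assumes scikit-learn/NumPy; the synthetic blobs stand in for a real dataset.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)   # toy data

model = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = model.fit_predict(X)            # step 1: partition by similarity

for cluster_id in np.unique(labels):     # step 2: inspect and label each group
    members = X[labels == cluster_id]
    print(f"cluster {cluster_id}: {len(members)} objects, "
          f"centroid = {members.mean(axis=0).round(2)}")
```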
 High dimensionality - The clustering algorithm should be able to handle not only low-dimensional data but also high-dimensional data.
 Ability to deal with noisy data - Databases contain noisy, missing, or erroneous data. Some algorithms are sensitive to such data and may produce poor-quality clusters.
 Interpretability - The clustering results should be interpretable, comprehensible, and usable.
 Scalability - We need highly scalable clustering algorithms to deal with large databases.
 Ability to deal with different kinds of attributes - Algorithms should be applicable to any kind of data, such as interval-based (numerical), categorical, and binary data.
 Discovery of clusters with arbitrary shape - The clustering algorithm should be capable of detecting clusters of arbitrary shape. It should not be restricted to distance measures that tend to find small spherical clusters (see the sketch below).
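To make the arbitrary-shape requirement concrete, the sketch below (illustrative only; it assumes scikit-learn, and the two-moons data and parameter values are chosen just for demonstration) compares a centroid-based method with a density-based one on non-spherical clusters:

```python
# Sketch: distance-to-centroid methods favor compact spherical clusters,
# while a density-based method can recover arbitrarily shaped ones.
# Assumes scikit-learn; the data and eps/min_samples values are illustrative.
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import adjusted_rand_score

X, y_true = make_moons(n_samples=400, noise=0.05, random_state=0)

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
dbscan_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

print("k-means agreement with true moons:", round(adjusted_rand_score(y_true, kmeans_labels), 2))
print("DBSCAN  agreement with true moons:", round(adjusted_rand_score(y_true, dbscan_labels), 2))
```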
 Clustering can be divided into different categories based on different criteria:
 1. Hard clustering: A given data point in n-dimensional space belongs to only one cluster. This is also known as exclusive clustering. The K-Means clustering algorithm is an example of hard clustering.
 2. Soft clustering: A given data point can belong to more than one cluster. This is also known as overlapping clustering. The Fuzzy K-Means algorithm is a good example of soft clustering (both kinds of assignment are illustrated in the sketch after this list).
 3. Hierarchical clustering: A hierarchy of clusters is built using a top-down (divisive) or bottom-up (agglomerative) approach.
 4. Flat clustering: A simple technique in which no hierarchy is present.
 5. Model-based clustering: Data is modeled using a standard statistical model so that different distributions can be handled. The idea is to find the model that best fits the data.
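The difference between hard and soft assignment shows up directly in code. Fuzzy K-Means is not part of scikit-learn, so the sketch below substitutes a Gaussian mixture's membership probabilities to illustrate soft assignment next to K-Means' exclusive labels (synthetic data, illustrative parameters):

```python
# Sketch: hard (exclusive) vs. soft (overlapping) cluster assignment.
# Fuzzy K-Means is not in scikit-learn, so Gaussian-mixture membership
# probabilities stand in for soft assignment; the data is synthetic.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=200, centers=3, cluster_std=2.0, random_state=1)

hard_labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)
soft_member = GaussianMixture(n_components=3, random_state=1).fit(X).predict_proba(X)

print("hard assignment of first object :", hard_labels[0])            # exactly one cluster id
print("soft memberships of first object:", soft_member[0].round(2))   # degrees summing to 1
```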
 Cluster analysis is broadly used in many applications such as market research, pattern recognition, data analysis, and image processing.
 Clustering can help marketers discover distinct groups in their customer base and characterize those groups based on purchasing patterns.
 In the field of biology, it can be used to derive plant and animal taxonomies, categorize genes with similar functionality, and gain insight into structures inherent to populations.
 Clustering also helps in identifying areas of similar land use in an earth observation database, and in identifying groups of houses in a city according to house type, value, and geographic location.
 Clustering helps in classifying documents on the web for information discovery.
 Clustering is also used in outlier detection applications such as detecting credit card fraud.
 As a data mining function, cluster analysis serves as a tool to gain insight into the distribution of data and to observe the characteristics of each cluster.
 The following points highlight why clustering is required in data mining −
 Scalability − We need highly scalable clustering algorithms to deal with large databases.
 Ability to deal with different kinds of attributes − Algorithms should be applicable to any kind of data, such as interval-based (numerical), categorical, and binary data.
 Discovery of clusters with arbitrary shape − The clustering algorithm should be capable of detecting clusters of arbitrary shape. It should not be restricted to distance measures that tend to find small spherical clusters.
 High dimensionality − The clustering algorithm should be able to handle not only low-dimensional data but also high-dimensional data.
 Ability to deal with noisy data − Databases contain noisy, missing, or erroneous data. Some algorithms are sensitive to such data and may produce poor-quality clusters.
 Interpretability − The clustering results should be interpretable, comprehensible, and usable.
Clustering methods can be classified into the
following categories −
 Partitioning Method
 Hierarchical Method
 Density-Based Method
 Grid-Based Method
 Model-Based Method
 Constraint-Based Method
 Suppose we are given a database of ‘n’ objects, and the partitioning method constructs ‘k’ partitions of the data. Each partition represents a cluster, with k ≤ n. That is, the method classifies the data into k groups, which satisfy the following requirements −
 Each group contains at least one object.
 Each object must belong to exactly one group.
 Points to remember −
 For a given number of partitions (say k), the partitioning method creates an initial partitioning.
 It then uses an iterative relocation technique to improve the partitioning by moving objects from one group to another (a bare-bones version is sketched below).
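The "initial partition plus iterative relocation" idea can be sketched in a few lines. This is a bare-bones k-means-style loop assuming only NumPy, not a production implementation:

```python
# Sketch of a partitioning method: build an initial k-way partition, then
# iteratively relocate objects to the group with the closest representative.
# Bare-bones k-means-style loop assuming NumPy; not tuned for real use.
import numpy as np

def partition(X, k, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]      # initial partitioning
    for _ in range(n_iters):
        # relocation: assign each object to the nearest group representative
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        # update each group's representative (keep the old one if a group empties)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, size=(50, 2)) for c in ([0, 0], [5, 5], [0, 5])])
labels, _ = partition(X, k=3)
print("objects per group:", np.bincount(labels))   # sizes of the k groups
```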
 This method creates a hierarchical
decomposition of the given set of data
objects. We can classify hierarchical methods
on the basis of how the hierarchical
decomposition is formed. There are two
approaches here −
 Agglomerative Approach
 Divisive Approach
Agglomerative Approach
 This approach is also known as the bottom-up approach. We start with each object forming a separate group, and keep merging the objects or groups that are close to one another. This continues until all of the groups are merged into one or until the termination condition holds (a short example is sketched below).
Divisive Approach
 This approach is also known as the top-down approach. We start with all of the objects in the same cluster. In each successive iteration, a cluster is split into smaller clusters. This continues until each object is in its own cluster or the termination condition holds. Hierarchical methods are rigid: once a merging or splitting is done, it can never be undone.
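The agglomerative (bottom-up) variant is easy to try out. A brief sketch assuming scikit-learn; the synthetic data and the choice of stopping at three clusters are illustrative:

```python
# Sketch of the agglomerative approach: every object starts as its own group,
# and the closest groups are merged until only n_clusters remain.
# Assumes scikit-learn; the data and the 3-cluster stopping point are illustrative.
from sklearn.datasets import make_blobs
from sklearn.cluster import AgglomerativeClustering

X, _ = make_blobs(n_samples=150, centers=3, random_state=7)

agglo = AgglomerativeClustering(n_clusters=3, linkage="average")
labels = agglo.fit_predict(X)
print("objects per merged group:", [int((labels == j).sum()) for j in range(3)])
```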
 This method is based on the notion of density. The basic idea is to keep growing a given cluster as long as the density in its neighborhood exceeds some threshold; that is, the neighborhood of a given radius around each data point in the cluster must contain at least a minimum number of points (see the sketch below).
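DBSCAN is a well-known example of this idea: its eps parameter plays the role of the neighborhood radius and min_samples the minimum point count. A sketch assuming scikit-learn, with illustrative parameter values:

```python
# Sketch of a density-based method: a cluster keeps growing while each point's
# eps-neighborhood holds at least min_samples points; sparse points become noise.
# Assumes scikit-learn; eps and min_samples are illustrative values.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import DBSCAN

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.6, random_state=3)

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)   # radius and density threshold
print("clusters found:", len(set(labels) - {-1}))
print("noise points  :", int(np.sum(labels == -1)))      # DBSCAN labels noise as -1
```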
 In this method, the object space is quantized into a finite number of cells that form a grid structure, and the clustering operations are performed on this grid.
Advantages
 The major advantage of this method is fast processing time.
 The processing time depends only on the number of cells in each dimension of the quantized space, not on the number of data objects.
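A toy version of the grid idea is sketched below with NumPy only; real grid-based methods such as STING or CLIQUE are considerably more elaborate, and the grid size and density threshold here are arbitrary:

```python
# Toy sketch of grid-based clustering: quantize the object space into cells,
# count objects per cell, and treat sufficiently dense cells as cluster material.
# NumPy only; the grid size and density threshold are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2)) for c in ([0, 0], [3, 3])])

counts, x_edges, y_edges = np.histogram2d(X[:, 0], X[:, 1], bins=10)   # 10 x 10 grid
dense_cells = np.argwhere(counts >= 5)    # cells holding at least 5 objects

print("grid cells total:", counts.size)
print("dense cells     :", len(dense_cells))
# Once the objects are binned, the work depends on the number of cells per
# dimension rather than on the number of objects - hence the fast processing time.
```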
 In this method, a model is hypothesized for each cluster and the algorithm finds the best fit of the data to the given model. The method locates clusters by clustering the density function, which reflects the spatial distribution of the data points.
 This method also provides a way to automatically determine the number of clusters based on standard statistics, taking outliers or noise into account. It therefore yields robust clustering methods (see the sketch below).
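A common concrete instance is a Gaussian mixture model, where an information criterion such as BIC can choose the number of clusters automatically. A sketch assuming scikit-learn; the candidate range of 1 to 6 components is arbitrary:

```python
# Sketch of model-based clustering: fit a Gaussian mixture for several candidate
# cluster counts and let BIC (a standard statistic) pick the best-fitting model.
# Assumes scikit-learn; the candidate range 1..6 is an arbitrary illustration.
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=400, centers=3, random_state=5)

best_k, best_bic = None, float("inf")
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=5).fit(X)
    bic = gmm.bic(X)                 # lower BIC = better fit/complexity trade-off
    if bic < best_bic:
        best_k, best_bic = k, bic

print("number of clusters chosen by BIC:", best_k)
```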
 In this method, clustering is performed by incorporating user- or application-oriented constraints. A constraint refers to the user's expectations or the properties of the desired clustering results. Constraints give the user an interactive way of communicating with the clustering process; they can be specified by the user or by the application requirements (a rough sketch of one simple constraint type is given below).
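One simple form of constraint is a must-link pair ("these two objects belong together"). The rough sketch below is only an illustration of the idea, not a full constrained-clustering algorithm such as COP-KMeans; it collapses must-linked objects into a single representative before clustering so that they always share a label:

```python
# Rough sketch of constraint-based clustering: user-supplied must-link pairs
# are collapsed into single representative objects before clustering, so that
# linked objects always receive the same cluster label. Illustration only,
# not a full constrained-clustering algorithm (e.g. COP-KMeans).
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9], [0.1, 5.0], [4.9, 0.2]])
must_link = [(0, 5)]        # user constraint: objects 0 and 5 belong together

# Merge must-linked objects into groups (here: {0, 5} plus singletons).
group_of = list(range(len(X)))
for a, b in must_link:
    group_of = [group_of[a] if g == group_of[b] else g for g in group_of]

groups = sorted(set(group_of))
reps = np.array([X[[i for i, g in enumerate(group_of) if g == gid]].mean(axis=0)
                 for gid in groups])

rep_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reps)
labels = [int(rep_labels[groups.index(g)]) for g in group_of]

print("labels:", labels)
print("constraint satisfied (0 and 5 share a label):", labels[0] == labels[5])
```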