Machine Learning With Python - Concise Tutorial
Clustering Algorithms - Overview
Introduction to Clustering
Clustering methods are among the most useful unsupervised ML methods. These methods are used to find similarity as well as relationship patterns among data samples, and then cluster those samples into groups based on the similarity of their features.
Clustering is important because it determines the intrinsic grouping among the present unlabeled data. Clustering algorithms make some assumptions about data points in order to define their similarity, and each assumption will construct different but equally valid clusters.
For example, the diagram below shows a clustering system that has grouped similar kinds of data together into different clusters −
Cluster Formation Methods
It is not necessary for clusters to be spherical in form. The following are some other cluster formation methods −
Density-based
In these methods, the clusters are formed as dense regions. The advantage of these methods is that they have good accuracy as well as a good ability to merge two clusters. Examples: Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Ordering Points To Identify the Clustering Structure (OPTICS), etc.
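As a rough illustration of the density-based idea, here is a minimal pure-Python sketch of DBSCAN (the function name and parameters are ours, chosen for illustration, not taken from any library). It labels each point with a cluster id and marks low-density points as noise (-1):

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id; -1 marks noise."""
    labels = [None] * len(points)              # None = not yet visited
    cluster_id = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        # all points within eps of points[i], including itself
        neighbors = [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]
        if len(neighbors) < min_pts:
            labels[i] = -1                     # noise (may later become a border point)
            continue
        cluster_id += 1                        # points[i] is a core point: start a new cluster
        labels[i] = cluster_id
        seeds = list(neighbors)
        while seeds:                           # grow the cluster from its core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster_id         # noise reached by a core point becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            j_neighbors = [k for k in range(len(points)) if dist(points[j], points[k]) <= eps]
            if len(j_neighbors) >= min_pts:    # only core points expand the cluster further
                seeds.extend(j_neighbors)
    return labels
```

In practice one would use an optimized implementation such as sklearn.cluster.DBSCAN, which exposes the same eps / min_samples notions.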
Hierarchical-based
In these methods, the clusters are formed as a tree-type structure based on the hierarchy. They have two categories, namely Agglomerative (bottom-up approach) and Divisive (top-down approach). Examples: Clustering Using Representatives (CURE), Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH), etc.
Partitioning
In these methods, the clusters are formed by partitioning the objects into k clusters, so the number of clusters equals the number of partitions. Examples: K-means, Clustering Large Applications based upon RANdomized Search (CLARANS).
Grid
In these methods, the clusters are formed as a grid-like structure. The advantage of these methods is that all the clustering operations done on these grids are fast and independent of the number of data objects. Examples: Statistical Information Grid (STING), Clustering In QUEst (CLIQUE).
Measuring Clustering Performance
One of the most important considerations regarding an ML model is assessing its performance, or you can say the model's quality. In the case of supervised learning algorithms, assessing the quality of our model is easy because we already have labels for every example.
On the other hand, in the case of unsupervised learning algorithms we are not that fortunate, because we deal with unlabeled data. But we still have some metrics that give the practitioner insight into how the clusters change depending on the algorithm.
Before we dive deep into such metrics, we must understand that these metrics only evaluate the comparative performance of models against each other rather than measuring the validity of a model's predictions. The following are some of the metrics that we can deploy on clustering algorithms to measure the quality of the model −
Silhouette Analysis
Silhouette analysis is used to check the quality of a clustering model by measuring the distance between clusters. It basically provides a way to assess parameters like the number of clusters with the help of the Silhouette score. This score measures how close each point in one cluster is to points in the neighboring clusters.
Analysis of Silhouette Score
The range of the Silhouette score is [-1, 1]. Its analysis is as follows −

- +1 score − A Silhouette score near +1 indicates that the sample is far away from its neighboring cluster.

- 0 score − A Silhouette score of 0 indicates that the sample is on or very close to the decision boundary separating two neighboring clusters.

- -1 score − A Silhouette score near -1 indicates that the sample has been assigned to the wrong cluster.
The Silhouette score can be calculated using the following formula −

silhouette score = (p − q) / max(p, q)

Here, p = mean distance to the points in the nearest cluster

And, q = mean intra-cluster distance to all the points.
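To make the formula concrete, here is a minimal pure-Python sketch (the function name is ours, for illustration) that computes the Silhouette score of a single sample from p and q exactly as defined above:

```python
from math import dist

def silhouette_sample(x, own_members, other_clusters):
    """Silhouette score (p - q) / max(p, q) for one point x.

    own_members: the other points of x's cluster (x itself excluded).
    other_clusters: the remaining clusters, each a list of points.
    """
    # q: mean intra-cluster distance from x to its own cluster's points
    q = sum(dist(x, m) for m in own_members) / len(own_members)
    # p: mean distance from x to the points of the nearest other cluster
    p = min(sum(dist(x, m) for m in c) / len(c) for c in other_clusters)
    return (p - q) / max(p, q)
```

For whole datasets, sklearn.metrics.silhouette_score computes and averages this per-sample score.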
Davies-Bouldin Index
The DB index is another good metric for analyzing clustering algorithms. With the help of the DB index, we can understand the following points about a clustering model −
- Whether the clusters are well-separated from each other or not.

- How dense the clusters are.
We can calculate the DB index with the help of the following formula −

DB = (1/n) Σi max(j ≠ i) [(σi + σj) / d(ci, cj)]

Here, n = number of clusters

σi = average distance of all points in cluster i from the cluster centroid ci

And, d(ci, cj) = distance between the centroids of clusters i and j.
The lower the DB index is, the better the clustering model.
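As a rough sketch in pure Python (the function name is ours), the DB index can be computed from each cluster's centroid and its mean scatter σi:

```python
from math import dist

def davies_bouldin(clusters):
    """Davies-Bouldin index for a list of clusters, each a list of points."""
    # centroid of each cluster: coordinate-wise mean of its points
    centroids = [tuple(sum(coord) / len(pts) for coord in zip(*pts)) for pts in clusters]
    # sigma_i: mean distance of cluster i's points to its centroid
    sigmas = [sum(dist(p, cen) for p in pts) / len(pts)
              for pts, cen in zip(clusters, centroids)]
    n = len(clusters)
    total = 0.0
    for i in range(n):
        # worst-case (largest) similarity ratio of cluster i against any other cluster
        total += max((sigmas[i] + sigmas[j]) / dist(centroids[i], centroids[j])
                     for j in range(n) if j != i)
    return total / n
```

scikit-learn provides the same metric for labeled arrays as sklearn.metrics.davies_bouldin_score.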
Dunn Index
It works similarly to the DB index, but the two differ in the following points −
- The Dunn index considers only the worst case, i.e. the clusters that are close together, while the DB index considers the dispersion and separation of all the clusters in the clustering model.

- The Dunn index increases as performance improves, while the DB index gets better (decreases) when clusters are well-separated and dense.
We can calculate the Dunn index with the help of the following formula −

D = min(i ≠ j) p(i, j) / max(k) q(k)

Here, i, j, k = indices of the clusters

p = inter-cluster distance

And, q = intra-cluster distance.
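Under the common reading of p as the minimum distance between points of different clusters and q as the maximum distance within a cluster (the cluster diameter), a small pure-Python sketch (the function name is ours) looks like this:

```python
from math import dist

def dunn_index(clusters):
    """Dunn index: min inter-cluster distance / max intra-cluster distance."""
    # p: smallest distance between two points that lie in different clusters
    inter = min(dist(a, b)
                for i, ci in enumerate(clusters)
                for cj in clusters[i + 1:]
                for a in ci for b in cj)
    # q: largest distance between two points of the same cluster (diameter)
    intra = max(dist(a, b) for c in clusters for a in c for b in c)
    return inter / intra
```

A higher Dunn index (well-separated, compact clusters) indicates a better clustering.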
Types of ML Clustering Algorithms
The following are the most important and useful ML clustering algorithms −
K-means Clustering
This clustering algorithm computes the centroids and iterates until it finds the optimal centroids. It assumes that the number of clusters is already known. It is also called a flat clustering algorithm. The number of clusters the algorithm identifies from the data is represented by 'K' in K-means.
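The assign-and-update loop can be sketched in a few lines of pure Python (a toy illustration with naive random initialization, not a production implementation; the function name and parameters are ours):

```python
import random
from math import dist

def kmeans(points, k, iters=100, seed=0):
    """Plain K-means: random initial centroids, then assign/update until stable."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # naive initialization: k random points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: every point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        # update step: move each centroid to the mean of its assigned points
        new_centroids = [
            tuple(sum(coord) / len(pts) for coord in zip(*pts)) if pts else centroids[i]
            for i, pts in enumerate(clusters)
        ]
        if new_centroids == centroids:         # no centroid moved: converged
            break
        centroids = new_centroids
    return centroids, clusters
```

In practice, sklearn.cluster.KMeans adds smarter initialization (k-means++) and handles large datasets far more efficiently than this sketch.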
Applications of Clustering
We can find clustering useful in the following areas −
Data summarization and compression − Clustering is widely used in areas where we require data summarization, compression and reduction. Examples are image processing and vector quantization.
Collaborative systems and customer segmentation − Since clustering can be used to find similar products and similar kinds of users, it can be applied in the area of collaborative systems and customer segmentation.
Serve as a key intermediate step for other data mining tasks − Cluster analysis can generate a compact summary of data for classification, testing, and hypothesis generation; hence, it also serves as a key intermediate step for other data mining tasks.
Trend detection in dynamic data − Clustering can also be used for trend detection in dynamic data by building clusters of similar trends.
Social network analysis − Clustering can be used in social network analysis. Examples are generating sequences in images, videos or audio.
Biological data analysis − Clustering can also be used to build clusters of images and videos, and hence it can be used successfully in biological data analysis.