python-3.x Optimizing the Dunn index calculation?

jm81lzqq  posted 2023-08-08 in Python
Follow (0) | Answers (2) | Views (115)

The Dunn index is a way to evaluate clustering; higher values are better. It is computed as the lowest inter-cluster distance (i.e., the smallest distance between any two cluster centroids) divided by the highest intra-cluster distance (i.e., the largest distance between any two points in any cluster).
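In formula form (this just restates the definition above, with centroids c_i and clusters C_k):

    D = min_{i ≠ j} d(c_i, c_j) / max_k diam(C_k),    where diam(C_k) = max_{x, y ∈ C_k} d(x, y)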
I have a code snippet that computes the Dunn index:

from math import inf

def dunn_index(pf, cf):
    """
    pf -- all data points
    cf -- cluster centroids
    """
    numerator = inf
    for c in cf: # for each cluster
        for t in cf: # for each cluster
            if t is c: continue # if same cluster, ignore
            numerator = min(numerator, distance(t, c)) # find distance between centroids
    denominator = 0
    for c in cf: # for each cluster
        for p in pf: # for each point
            if p.get_cluster() is not c: continue # if point not in cluster, ignore
            for t in pf: # for each point
                if t.get_cluster() is not c: continue # if point not in cluster, ignore
                if t is p: continue # if same point, ignore
                denominator = max(denominator, distance(t, p))
    return numerator/denominator
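
For reference, the snippet assumes a distance helper and point objects exposing get_cluster(), neither of which is shown in the question. A minimal sketch of what they might look like (the Euclidean distance and the Point class below are my assumptions, not the asker's actual code):

import math

def distance(a, b):
    # assumed helper: Euclidean distance between two 2D points/centroids
    return math.hypot(a.x - b.x, a.y - b.y)

class Point:
    # hypothetical point class matching the snippet's interface
    def __init__(self, x, y, cluster):
        self.x, self.y, self.cluster = x, y, cluster

    def get_cluster(self):
        return self.cluster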

The problem is that this is exceptionally slow: for a sample data set of 5000 instances and 15 clusters, the function above needs to perform just over 375 million distance calculations in the worst case (the inner loops visit 15 × 5000 × 5000 point pairs). Realistically it is much lower, but even in the best case, where the data is already ordered by cluster, it is around 25 million distance calculations. I want to shave time off it; I have already tried rectilinear distance vs. Euclidean, and that is no good.
How can I improve this algorithm?


zfciruhq 1#

TL;DR: Importantly, the problem is set up in two dimensions. For higher dimensions, these techniques may be ineffective.

In 2D, we can compute the diameter (maximum intra-cluster distance) of each cluster in O(n log n) time, where n is the cluster size, using the convex hull. Vectorization is used to speed up the remaining operations. Two possible asymptotic improvements are mentioned at the end of the post; contributions welcome ;)
Setup and fake data:

import numpy as np
from scipy import spatial
from matplotlib import pyplot as plt

# set up fake data
np.random.seed(0)
n_centroids = 1000
centroids = np.random.rand(n_centroids, 2)
cluster_sizes = np.random.randint(1, 1000, size=n_centroids)
# labels from 1 to n_centroids inclusive
labels = np.repeat(np.arange(n_centroids), cluster_sizes) + 1
points = np.zeros((cluster_sizes.sum(), 2))
points[:,0] = np.repeat(centroids[:,0], cluster_sizes)
points[:,1] = np.repeat(centroids[:,1], cluster_sizes)
points += 0.05 * np.random.randn(cluster_sizes.sum(), 2)

It looks something like this:
[figure: scatter plot of the generated clusters]
Next, we define a diameter function that computes the largest intra-cluster distance, based on this approach of using the convex hull.

# compute the diameter (largest pairwise distance) based on the convex hull
def diameter(pts):
  # need at least 3 points to construct the convex hull
  if pts.shape[0] <= 1:
    return 0
  if pts.shape[0] == 2:
    return np.sqrt(((pts[0] - pts[1])**2).sum())
  # two points which are furthest apart will occur as vertices of the convex hull
  hull = spatial.ConvexHull(pts)
  candidates = pts[hull.vertices]
  return spatial.distance_matrix(candidates, candidates).max()
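
As a quick sanity check (my addition, not part of the original answer), the hull-based diameter can be compared against the brute-force O(n^2) maximum pairwise distance on small random inputs, reusing the imports and the diameter function above:

# sanity check: hull-based result matches brute-force max pairwise distance
for _ in range(5):
  test_pts = np.random.rand(50, 2)
  brute_force = spatial.distance_matrix(test_pts, test_pts).max()
  assert np.isclose(diameter(test_pts), brute_force)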


For the Dunn index calculation, I assume that the points, the cluster labels, and the cluster centroids have already been computed.
If the number of clusters is large, the following pandas-based solution may perform well:

import pandas as pd
def dunn_index_pandas(pts, labels, centroids):
  # O(k n log(n)) with k clusters and n points; better performance with more even clusters
  # group points by label and apply the hull-based diameter to each group
  max_intracluster_dist = pd.DataFrame(pts).groupby(labels).apply(lambda g: diameter(g.values)).max()
  # O(k^2) with k clusters; can be reduced to O(k log(k))
  # get pairwise distances between centroids
  cluster_dmat = spatial.distance_matrix(centroids, centroids)
  # fill diagonal with +inf: ignore zero distance to self in "min" computation
  np.fill_diagonal(cluster_dmat, np.inf)
  min_intercluster_dist = cluster_dmat.min()
  return min_intercluster_dist / max_intracluster_dist


Otherwise, we can proceed with a pure numpy solution.

def dunn_index(pts, labels, centroids):
  # O(k n log(n)) with k clusters and n points; better performance with more even clusters
  max_intracluster_dist = max(diameter(pts[labels==i]) for i in np.unique(labels))
  # O(k^2) with k clusters; can be reduced to O(k log(k))
  # get pairwise distances between centroids
  cluster_dmat = spatial.distance_matrix(centroids, centroids)
  # fill diagonal with +inf: ignore zero distance to self in "min" computation
  np.fill_diagonal(cluster_dmat, np.inf)
  min_intercluster_dist = cluster_dmat.min()
  return min_intercluster_dist / max_intracluster_dist

%time dunn_index(points, labels, centroids)
# returned value 2.15
# in 2.2 seconds
%time dunn_index_pandas(points, labels, centroids)
# returned 2.15
# in 885 ms


For 1000 clusters with cluster sizes i.i.d. ~U[1, 1000], this takes 2.2 seconds on my machine. For this example (many small clusters), the pandas approach drops that figure to 0.8 seconds.
When the number of clusters is large, there are two further relevant optimization opportunities:

  • First, I compute the minimal inter-cluster distance with a brute-force O(k^2) approach, where k is the number of clusters. This can be reduced to O(k log(k)), as discussed here; see the sketch after this list.
  • Second, max(diameter(pts[labels==i]) for i in np.unique(labels)) requires k passes over an array of size n. For many clusters this can become a bottleneck (as it does in this example). The pandas approach mitigates this somewhat, but I expect it can be optimized further; see the grouping sketch below. For the current parameters, roughly a third of the compute time is spent outside of computing intra-cluster distances.
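
A sketch of the first improvement (my addition, not part of the original answer): the minimum inter-centroid distance is a nearest-neighbor problem, so a k-d tree gives it in roughly O(k log(k)) instead of O(k^2). The function name is mine; it reuses scipy.spatial as imported above:

# O(k log(k)) minimum inter-centroid distance via a k-d tree
def min_intercluster_dist_kdtree(centroids):
  tree = spatial.cKDTree(centroids)
  # k=2: for each centroid, find the nearest centroid other than itself
  dists, _ = tree.query(centroids, k=2)
  return dists[:, 1].min()

For the second point, a sketch of replacing the k boolean-mask passes with a single sort by label followed by splitting into contiguous groups (same assumptions as above):

# one O(n log(n)) sort instead of k passes over the labels array
def max_intracluster_dist_sorted(pts, labels):
  order = np.argsort(labels)
  sorted_pts = pts[order]
  # group sizes in sorted label order, used as split boundaries
  _, counts = np.unique(labels, return_counts=True)
  groups = np.split(sorted_pts, np.cumsum(counts)[:-1])
  return max(diameter(g) for g in groups)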

9wbgstp7 2#

This is not about optimizing the algorithm itself, but I think one of the following suggestions can improve performance:
1. Use a worker pool from multiprocessing (a sketch follows below).
2. Extract the Python code to C/C++. See the official documentation.
There are also Performance Tips at https://www.python.org.
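
A minimal sketch of the first suggestion, assuming the diameter function from the answer above is defined at module level (the function name and process count here are my own):

from multiprocessing import Pool
import numpy as np

def max_intracluster_dist_parallel(pts, labels, processes=4):
    # compute per-cluster diameters in parallel worker processes;
    # diameter must be picklable, i.e. defined at module level
    groups = [pts[labels == i] for i in np.unique(labels)]
    with Pool(processes) as pool:
        return max(pool.map(diameter, groups))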
