Applying KNN from a small supervised dataset to a large unsupervised dataset in Python



I have trained and tested a KNN model in Python on a small supervised dataset of about 200 samples. I would like to apply these results to a much larger unsupervised dataset of several thousand samples.

My question is: is there a way to fit the KNN model using the small supervised dataset and then change the value of K for the larger unsupervised dataset? I don't want to overfit the model by carrying over a low K value from the smaller dataset, but I'm not sure how to fit the model and then change the K value in Python.

Is this possible with KNN? Is there another way of applying KNN to the larger unsupervised dataset?

I would suggest actually fitting a model on the larger dataset several different times, each time with a different value of k. For each of those models, you can then compute the silhouette score.

Compare the various silhouette scores and, for your final value of k (the number of clusters), pick the value used by the highest-scoring model.

As an example, here is some code I wrote for myself last year to do exactly that:

from sklearn import mixture
from sklearn.metrics import silhouette_score
import matplotlib.pyplot as plt
import numpy as np

## A list of the different numbers of clusters (the 'n_components' parameter) with 
## which we will run GMM.
number_of_clusters = [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
## Graph plotting method
def makePlot(number_of_clusters, silhouette_scores):
    # Plot each value of 'number of clusters' vs. the silhouette score at that value
    fig, ax = plt.subplots(figsize=(16, 6))
    ax.set_xlabel('GMM - number of clusters')
    ax.set_ylabel('Silhouette Score (higher is better)')
    ax.plot(number_of_clusters, silhouette_scores)
    # Ticks and grid
    xticks = np.arange(min(number_of_clusters), max(number_of_clusters)+1, 1.0)
    ax.set_xticks(xticks, minor=False)
    ax.set_xticks(xticks, minor=True)
    ax.xaxis.grid(True, which='both')
    yticks = np.arange(round(min(silhouette_scores), 2), max(silhouette_scores), .02)
    ax.set_yticks(yticks, minor=False)
    ax.set_yticks(yticks, minor=True)
    ax.yaxis.grid(True, which='both')
    plt.show()
## Graph the mean silhouette score for each number of clusters.
## Print out the number of clusters that results in the highest
## silhouette score for GMM.
def findBestClusterer(number_of_clusters):
    silhouette_scores = []
    for i in number_of_clusters:
        clusterer = mixture.GaussianMixture(n_components=i) # Use the clustering model of your choice here
        clusterer.fit(<your data set>) # enter your data set's variable name here
        preds = clusterer.predict(<your data set>)
        score = silhouette_score(<your data set>, preds)
        silhouette_scores.append(score)
    ## Print a table of all the silhouette scores
    print("")
    print("| Number of clusters | Silhouette score |")
    print("| ------------------ | ---------------- |")
    for i in range(len(number_of_clusters)):
        ## Ensure printed table is properly formatted, taking into account
        ## amount of digits (either one or two) in the value for number of clusters.
        if number_of_clusters[i] <= 9:
            print("| {number}                  | {score:.4f}           |".format(number=number_of_clusters[i], 
                                                                        score=round(silhouette_scores[i], 4)))
        else:
            print("| {number}                 | {score:.4f}           |".format(number=number_of_clusters[i], 
                                                                        score=round(silhouette_scores[i], 4)))

    ## Plot the silhouette score for each number of clusters
    makePlot(number_of_clusters, silhouette_scores)
    ## Find and print out the cluster amount that gives the highest 
    ## silhouette score.
    best_silhouette_score = max(silhouette_scores)
    index_of_best_score = silhouette_scores.index(best_silhouette_score)
    ideal_number_of_clusters = number_of_clusters[index_of_best_score]
    print("")
    print("Having {} clusters gives the highest silhouette score of {}.".format(ideal_number_of_clusters,
                                                                                round(best_silhouette_score, 4)))
findBestClusterer(number_of_clusters) 

Note that in my example I used a GMM model rather than KNN, but you should be able to modify the findBestClusterer() method slightly to use whatever clustering algorithm you want. Inside that method you will also plug in your own dataset.
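For instance, a k-means version of that loop might look like the sketch below (a minimal sketch, not my original code; X_large and find_best_kmeans are placeholder names, with X_large standing in for your unlabeled dataset):

from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def find_best_kmeans(X_large, cluster_counts=range(2, 21)):
    ## Fit k-means once per candidate cluster count and score each clustering.
    scores = {}
    for k in cluster_counts:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_large)
        scores[k] = silhouette_score(X_large, labels)
    ## Return the cluster count with the highest silhouette score, plus all scores.
    best_k = max(scores, key=scores.get)
    return best_k, scores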

In machine learning there are two broad types of learners: eager learners (decision trees, neural networks, SVMs, ...) and lazy learners such as KNN. In fact, KNN does no learning at all. It simply stores the "labeled" data you have and then uses it at inference time to compute how similar a new (unlabeled) sample is to every sample in the stored (labeled) data. Then, based on a majority vote among the K instances closest to the new sample (its K nearest neighbors, hence the name), it infers that sample's class/value.

Now, on to your question: "training" a KNN has nothing to do with K itself, so when you perform inference, feel free to use whatever value of K gives you the best results.
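To make that concrete, here is a minimal sketch (the data arrays below are random placeholders; swap in your own ~200 labeled samples and the larger unlabeled set). Because fitting a KNN classifier essentially just stores the training data, you can fit it once on the small labeled set and then try different values of K at prediction time, for example via set_params:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

## Placeholder data: replace with your own small labeled set and large unlabeled set.
rng = np.random.default_rng(0)
X_small, y_small = rng.normal(size=(200, 4)), rng.integers(0, 2, size=200)
X_large = rng.normal(size=(5000, 4))

## Fitting mostly just indexes/stores the 200 labeled samples.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_small, y_small)

## K is only used at query time, so it can be changed after fitting.
for k in (3, 5, 11, 21):
    knn.set_params(n_neighbors=k)
    predictions = knn.predict(X_large)
    ## ...compare/inspect the predictions for each K here...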
