Index out of range in Spark MLlib K-means with TF-IDF text clustering



I am trying to run K-means with Spark MLlib, but I am getting an index out of range error.

I tokenized my small sample input file, and the output looks like this:

['hello', 'world', 'this', 'is', 'earth']
['what', 'are', 'you', 'trying', 'to', 'do']
['trying', 'to', 'learn', 'something']
['I', 'am', 'new', 'at', 'this', 'thing']
['what', 'about', 'you']

Now I use the TF-IDF code provided by Spark to build the sparse representation. The output is:

(1048576,[50570,432125,629096,921177,928731],  [1.09861228867,1.09861228867,0.69314718056,1.09861228867,1.09861228867])
(1048576,[110522,521365,697409,725041,749730,962395],[0.69314718056,1.09861228867,1.09861228867,0.69314718056,0.69314718056,0.69314718056])
(1048576,[4471,725041,850325,962395],[1.09861228867,0.69314718056,1.09861228867,0.69314718056])
(1048576,[36748,36757,84721,167368,629096,704697],[1.09861228867,1.09861228867,1.09861228867,1.09861228867,0.69314718056,1.09861228867])
(1048576,[110522,220898,749730],[0.69314718056,1.09861228867,0.69314718056])
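For reference, vectors like these are typically produced with HashingTF and IDF from pyspark.mllib; a minimal sketch is below (the file name and variable names are assumptions, and the default HashingTF dimensionality of 2**20 = 1048576 matches the first field of each vector above):

from pyspark.mllib.feature import HashingTF, IDF

# Tokenized documents, one list of words per line (input path assumed)
documents = sc.textFile("sample.txt").map(lambda line: line.split(" "))

hashingTF = HashingTF()             # default numFeatures = 2**20 = 1048576
tf = hashingTF.transform(documents)
tf.cache()                          # IDF makes two passes over the data

idf = IDF().fit(tf)
tfidf_vectors = idf.transform(tf)   # RDD of SparseVectors, as printed above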

Now I run the K-means algorithm provided by Spark MLlib:

from math import sqrt
from pyspark.mllib.clustering import KMeans, KMeansModel

clusters = KMeans.train(tfidf_vectors, 2, maxIterations=10)

# Distance from each point to its assigned cluster center
def error(point):
    center = clusters.centers[clusters.predict(point)]
    return sqrt(sum([x**2 for x in (point - center)]))

WSSSE = tfidf_vectors.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))
clusters.save(sc, "myModelPath")
sameModel = KMeansModel.load(sc, "myModelPath")

But I get an index out of range error at the WSSSE step. What am I doing wrong?

I ran into a similar problem just today, and it looks like a bug. TF-IDF creates SparseVectors like this:

>>> from pyspark.mllib.linalg import Vectors
>>> sv = Vectors.sparse(5, {1: 3})

and accessing a value at an index greater than the last non-zero index raises an exception:

>>> sv[0]
0.0
>>> sv[1]
3.0
>>> sv[2]
Traceback (most recent call last):
...
IndexError: index out of bounds

A quick, although not particularly efficient, workaround is to convert the SparseVector to a NumPy array:

# Converting the SparseVector to a dense NumPy array avoids the broken indexing
def error(point):
    center = clusters.centers[clusters.predict(point)]
    return sqrt(sum([x**2 for x in (point.toArray() - center)]))
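As a side note, newer Spark versions (1.4+, if I remember correctly) also expose KMeansModel.computeCost, which returns the sum of squared distances to the nearest center directly and never indexes into the SparseVectors. Note that it sums squared distances, while the error() above sums Euclidean distances, so the two numbers will differ:

# Assuming a Spark version where KMeansModel.computeCost is available
cost = clusters.computeCost(tfidf_vectors)
print("Within Set Sum of Squared Error = " + str(cost))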
