"could not convert integer scalar" error when clustering with DBSCAN



I'm trying to use Scikit-Learn's DBSCAN implementation to cluster a bunch of documents. First, I create the TF-IDF matrix using Scikit-Learn's TfidfVectorizer (it's a 163405x13029 sparse matrix of dtype numpy.float64). Then I try to cluster specific subsets of this matrix. Things work fine when the subset is small (say, up to a few thousand rows). But for large subsets (with tens of thousands of rows) I get ValueError: could not convert integer scalar.
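
For context, the setup is roughly the following (docs here is just a placeholder for the actual document collection; the DBSCAN call matches the code shown in the traceback below):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

# docs: placeholder for the raw document strings
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)  # 163405x13029 sparse matrix, dtype float64

# idxs: list of row indices selecting the subset to cluster
ncm_clusterizer = DBSCAN()
ncm_clusterizer.fit_predict(tfidf[idxs])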

Here's the full traceback (idxs is a list of indices):


ValueError                        Traceback (most recent call last)
<ipython-input-1-73ee366d8de5> in <module>()
    193     # use descriptions to clusterize items
    194     ncm_clusterizer = DBSCAN()
--> 195     ncm_clusterizer.fit_predict(tfidf[idxs])
    196     idxs_clusters = list(zip(idxs, ncm_clusterizer.labels_))
    197     for e in idxs_clusters:
/usr/local/lib/python3.4/site-packages/sklearn/cluster/dbscan_.py in fit_predict(self, X, y, sample_weight)
    294             cluster labels
    295         """
--> 296         self.fit(X, sample_weight=sample_weight)
    297         return self.labels_
/usr/local/lib/python3.4/site-packages/sklearn/cluster/dbscan_.py in fit(self, X, y, sample_weight)
    264         X = check_array(X, accept_sparse='csr')
    265         clust = dbscan(X, sample_weight=sample_weight,
--> 266                        **self.get_params())
    267         self.core_sample_indices_, self.labels_ = clust
    268         if len(self.core_sample_indices_):
/usr/local/lib/python3.4/site-packages/sklearn/cluster/dbscan_.py in dbscan(X, eps, min_samples, metric, algorithm, leaf_size, p, sample_weight, n_jobs)
    136         # This has worst case O(n^2) memory complexity
    137         neighborhoods = neighbors_model.radius_neighbors(X, eps,
--> 138                                                          return_distance=False)
    139 
    140     if sample_weight is None:
/usr/local/lib/python3.4/site-packages/sklearn/neighbors/base.py in radius_neighbors(self, X, radius, return_distance)
    584             if self.effective_metric_ == 'euclidean':
    585                 dist = pairwise_distances(X, self._fit_X, 'euclidean',
--> 586                                           n_jobs=self.n_jobs, squared=True)
    587                 radius *= radius
    588             else:
/usr/local/lib/python3.4/site-packages/sklearn/metrics/pairwise.py in pairwise_distances(X, Y, metric, n_jobs, **kwds)
   1238         func = partial(distance.cdist, metric=metric, **kwds)
   1239 
-> 1240     return _parallel_pairwise(X, Y, func, n_jobs, **kwds)
   1241 
   1242 
/usr/local/lib/python3.4/site-packages/sklearn/metrics/pairwise.py in _parallel_pairwise(X, Y, func, n_jobs, **kwds)
   1081     if n_jobs == 1:
   1082         # Special case to avoid picklability checks in delayed
-> 1083         return func(X, Y, **kwds)
   1084 
   1085     # TODO: in some cases, backend='threading' may be appropriate
/usr/local/lib/python3.4/site-packages/sklearn/metrics/pairwise.py in euclidean_distances(X, Y, Y_norm_squared, squared, X_norm_squared)
    243         YY = row_norms(Y, squared=True)[np.newaxis, :]
    244 
--> 245     distances = safe_sparse_dot(X, Y.T, dense_output=True)
    246     distances *= -2
    247     distances += XX
/usr/local/lib/python3.4/site-packages/sklearn/utils/extmath.py in safe_sparse_dot(a, b, dense_output)
    184         ret = a * b
    185         if dense_output and hasattr(ret, "toarray"):
--> 186             ret = ret.toarray()
    187         return ret
    188     else:
/usr/local/lib/python3.4/site-packages/scipy/sparse/compressed.py in toarray(self, order, out)
    918     def toarray(self, order=None, out=None):
    919         """See the docstring for `spmatrix.toarray`."""
--> 920         return self.tocoo(copy=False).toarray(order=order, out=out)
    921 
    922     ##############################################################
/usr/local/lib/python3.4/site-packages/scipy/sparse/coo.py in toarray(self, order, out)
    256         M,N = self.shape
    257         coo_todense(M, N, self.nnz, self.row, self.col, self.data,
--> 258                     B.ravel('A'), fortran)
    259         return B
    260 
ValueError: could not convert integer scalar

I'm using Python 3.4.3 (on Red Hat), SciPy 0.18.1, and scikit-learn 0.18.1.

I tried the monkey patch suggested here, but it didn't work.

Googling around, I found a filed bug whose fix apparently solved the same problem for other kinds of sparse matrices (e.g. CSR), but not for COO.

I tried feeding DBSCAN a sparse radius-neighborhood graph (instead of the feature matrix), as suggested here, but the same error occurred. A rough sketch of that attempt follows below.
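
The attempt looked roughly like this (the eps value is just a placeholder; I pass the precomputed sparse graph to DBSCAN with metric='precomputed'):

from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

eps = 0.5  # placeholder value

# Precompute a sparse radius-neighborhood graph with pairwise distances...
nn = NearestNeighbors(radius=eps).fit(tfidf[idxs])
graph = nn.radius_neighbors_graph(tfidf[idxs], mode='distance')

# ...and feed it to DBSCAN instead of the raw feature matrix
DBSCAN(eps=eps, metric='precomputed').fit_predict(graph)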

I tried HDBSCAN, but the same error occurred.

How can I fix this or work around it?

Even if the implementation allowed it, DBSCAN would probably give poor results on data of such high dimensionality (statistically speaking, it suffers from the curse of dimensionality).

Instead, I'd recommend using the TruncatedSVD class to reduce the dimensionality of your TF-IDF feature vectors to 50 or 100 components, and then applying DBSCAN on the result.
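
A minimal sketch of that approach, reusing tfidf and idxs from the question (n_components and the default DBSCAN parameters are placeholders you would still need to tune):

from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import DBSCAN

# Reduce the 13029-dimensional TF-IDF vectors to ~100 dense components
svd = TruncatedSVD(n_components=100)
reduced = svd.fit_transform(tfidf[idxs])

# Cluster the reduced representation; eps/min_samples still need tuning
labels = DBSCAN().fit_predict(reduced)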
