I am trying to implement grid search with scikit-learn to select the best parameters for KNN regression. In particular, here is what I want to do:
parameters = [{'weights': ['uniform', 'distance'], 'n_neighbors': [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]}]
clf = GridSearchCV(neighbors.KNeighborsRegressor(), parameters)
clf.fit(features, rewards)
Unfortunately, I get ValueError: Array contains NaN or infinity.
/Users/zikesjan/anaconda/lib/python2.7/site-packages/sklearn/grid_search.pyc in fit(self, X, y, **params)
705 " The params argument will be removed in 0.15.",
706 DeprecationWarning)
--> 707 return self._fit(X, y, ParameterGrid(self.param_grid))
708
709
/Users/zikesjan/anaconda/lib/python2.7/site-packages/sklearn/grid_search.pyc in _fit(self, X, y, parameter_iterable)
491 X, y, base_estimator, parameters, train, test,
492 self.scorer_, self.verbose, **self.fit_params)
--> 493 for parameters in parameter_iterable
494 for train, test in cv)
495
/Users/zikesjan/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __call__(self, iterable)
515 try:
516 for function, args, kwargs in iterable:
--> 517 self.dispatch(function, args, kwargs)
518
519 self.retrieve()
/Users/zikesjan/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in dispatch(self, func, args, kwargs)
310 """
311 if self._pool is None:
--> 312 job = ImmediateApply(func, args, kwargs)
313 index = len(self._jobs)
314 if not _verbosity_filter(index, self.verbose):
/Users/zikesjan/anaconda/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.pyc in __init__(self, func, args, kwargs)
134 # Don't delay the application, to avoid keeping the input
135 # arguments in memory
--> 136 self.results = func(*args, **kwargs)
137
138 def get(self):
/Users/zikesjan/anaconda/lib/python2.7/site-packages/sklearn/grid_search.pyc in fit_grid_point(X, y, base_estimator, parameters, train, test, scorer, verbose, loss_func, **fit_params)
309 this_score = scorer(clf, X_test, y_test)
310 else:
--> 311 this_score = clf.score(X_test, y_test)
312 else:
313 clf.fit(X_train, **fit_params)
/Users/zikesjan/anaconda/lib/python2.7/site-packages/sklearn/base.pyc in score(self, X, y)
320
321 from .metrics import r2_score
--> 322 return r2_score(y, self.predict(X))
323
324
/Users/zikesjan/anaconda/lib/python2.7/site-packages/sklearn/metrics/metrics.pyc in r2_score(y_true, y_pred)
2181
2182 """
-> 2183 y_type, y_true, y_pred = _check_reg_targets(y_true, y_pred)
2184
2185 if len(y_true) == 1:
/Users/zikesjan/anaconda/lib/python2.7/site-packages/sklearn/metrics/metrics.pyc in _check_reg_targets(y_true, y_pred)
59 Estimated target values.
60 """
---> 61 y_true, y_pred = check_arrays(y_true, y_pred)
62
63 if y_true.ndim == 1:
/Users/zikesjan/anaconda/lib/python2.7/site-packages/sklearn/utils/validation.pyc in check_arrays(*arrays, **options)
231 else:
232 array = np.asarray(array, dtype=dtype)
--> 233 _assert_all_finite(array)
234
235 if copy and array is array_orig:
/Users/zikesjan/anaconda/lib/python2.7/site-packages/sklearn/utils/validation.pyc in _assert_all_finite(X)
25 if (X.dtype.char in np.typecodes['AllFloat'] and not np.isfinite(X.sum())
26 and not np.isfinite(X).all()):
---> 27 raise ValueError("Array contains NaN or infinity.")
28
29
ValueError: Array contains NaN or infinity.
Based on this post, I already tried using the following line instead of the one above:
clf.fit(np.asarray(features).astype(float), np.asarray(rewards).astype(float))
Then, following this post, I even tried this:
scaler = preprocessing.StandardScaler().fit(np.asarray(features).astype(float))
transformed_features = scaler.transform(np.asarray(features).astype(float))
clf.fit(transformed_features, rewards)
But unfortunately without any success. So I would like to ask whether anyone has an idea what the problem might be and how I can make my code work.
Thanks in advance.
EDIT: I found that I do not get this error when I use only the following parameters:
parameters = [{'weights': ['uniform'], 'n_neighbors': [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]}]
So the problem seems to occur with weights='distance'. Does anyone know why?
There is another related question on this here.
EDIT 2:
If I run my code with logging set to DEBUG, I get the following warning:
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/neighbors/regression.py:160: RuntimeWarning: invalid value encountered in divide
y_pred[:, j] = num / denom
Apparently there is a problem with division by zero. So my question is: why does scikit-learn divide by zero on line 160 of regression.py?
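The warning can be reproduced with plain NumPy arithmetic: with weights='distance', a neighbor at distance 0 gets weight 1/0 = inf, and the weighted average then evaluates inf/inf, which is NaN. This is only a minimal sketch of the arithmetic involved, not scikit-learn's actual code:

```python
import numpy as np

# distances from a query point to its 3 nearest neighbors;
# the first neighbor coincides with the query point (distance 0)
dists = np.array([0.0, 1.0, 2.0])
targets = np.array([3.0, 5.0, 7.0])

with np.errstate(divide='ignore', invalid='ignore'):
    weights = 1.0 / dists                             # inverse-distance weights: [inf, 1.0, 0.5]
    pred = (weights * targets).sum() / weights.sum()  # inf / inf -> nan

print(pred)  # nan
```

The NaN then propagates into the predictions, which is exactly what r2_score later rejects with "Array contains NaN or infinity."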
In addition to what you have tried, you can also look at
import numpy as np
features = np.nan_to_num(features)
rewards = np.nan_to_num(rewards)
This sets all non-numeric values in the array to 0 and should at least make the algorithm run, unless the error occurs somewhere inside the algorithm itself. Make sure there are not too many non-numeric entries in your data, because setting them all to 0 may bias your estimates in strange ways.
If that is not the case and you are using weights='distance', then check whether any of the training samples are identical. That leads to a division by zero in the inverse distance.
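One quick way to check for identical rows is np.unique with axis=0, which keeps only distinct rows. The toy matrix below stands in for your actual features array:

```python
import numpy as np

# toy feature matrix standing in for your `features`; rows 0 and 2 are identical
features = np.array([[0.0, 1.0],
                     [2.0, 3.0],
                     [0.0, 1.0]])

n_rows = features.shape[0]
n_unique = np.unique(features, axis=0).shape[0]
has_duplicates = n_unique < n_rows
print(has_duplicates)  # True
```

If has_duplicates is True, a query point can sit at distance 0 from a training point during cross-validation, which triggers the inverse-distance division by zero.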
If the inverse distance is the cause of the division by zero, you can circumvent the problem by using your own distance function, e.g.
def better_inv_dist(dist):
    c = 1.
    return 1. / (c + dist)
and then use 'weights': better_inv_dist. You may have to adapt the constant c to an appropriate scale. In any case, it avoids division by zero as long as c > 0.
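As a quick sanity check, the shifted inverse distance stays finite even at distance 0, so the weighted average no longer produces NaN. This is a NumPy-only sketch of the weighting arithmetic; the choice c=1.0 is arbitrary:

```python
import numpy as np

def better_inv_dist(dist, c=1.0):
    # shifted inverse distance: bounded by 1/c, finite even at dist == 0
    return 1.0 / (c + dist)

# same neighbor distances as before, including an exact match (distance 0)
dists = np.array([0.0, 1.0, 2.0])
targets = np.array([3.0, 5.0, 7.0])

w = better_inv_dist(dists)            # [1.0, 0.5, 0.333...], all finite
pred = (w * targets).sum() / w.sum()  # finite weighted average
print(np.isfinite(pred))  # True
```

The same function can be passed as the value of the 'weights' entry in the parameter grid, since KNeighborsRegressor accepts a callable for weights.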
I hit the same problem with KNN regression on scikit-learn. I was using weights='distance', which led to infinite values while computing the predictions (but not while fitting the KNN model, i.e. while building the appropriate KD-tree or ball tree). I switched to weights='uniform' and the program ran to completion correctly, which indicates that the supplied weight function was the problem. If you want to use distance-based weights, supply a custom weight function that does not explode to infinity at zero distance, as shown in eickenberg's answer.