Inheriting from scikit-learn's LassoCV model

I tried to use inheritance to extend scikit-learn's RidgeCV model:

from sklearn.linear_model import RidgeCV, LassoCV
class Extended(RidgeCV):
    def __init__(self, *args, **kwargs):
        super(Extended, self).__init__(*args, **kwargs)
    def example(self):
        print 'Foo'

x = [[1,0],[2,0],[3,0],[4,0], [30, 1]]
y = [2,4,6,8, 60]
model = Extended(alphas = [float(a)/1000.0 for a in range(1, 10000)])
model.fit(x,y)
print model.predict([[5,1]])

It works fine, but when I try to inherit from LassoCV instead, it produces the following traceback:

Traceback (most recent call last):
  File "C:/Python27/so.py", line 14, in <module>
    model.fit(x,y)
  File "C:Python27libsite-packagessklearnlinear_modelcoordinate_descent.py", line 1098, in fit
    path_params = self.get_params()
  File "C:Python27libsite-packagessklearnbase.py", line 214, in get_params
    for key in self._get_param_names():
  File "C:Python27libsite-packagessklearnbase.py", line 195, in _get_param_names
    % (cls, init_signature))
RuntimeError: scikit-learn estimators should always specify their parameters in the signature of their __init__ (no varargs). <class '__main__.Extended'> with constructor (<self>, *args, **kwargs) doesn't  follow this convention.

Can someone explain how to fix this?

You presumably want your model to remain scikit-learn compatible, so that it can be used with the rest of scikit-learn's functionality. If so, you should first read this: http://scikit-learn.org/stable/developers/contributing.html#rolling-your-own-estimator

In short: scikit-learn provides many features such as estimator cloning (the clone() function), meta-algorithms (like GridSearch and Pipeline), and cross-validation. All of these need to be able to read the values of an estimator's internal fields and to change them (for example, GridSearch must change the parameters inside the estimator before each evaluation), like the alpha parameter in SGDClassifier. To change a parameter's value, it has to know the parameter's name. To obtain the names of all fields, the get_params method of every estimator (inherited from the BaseEstimator class, which you are inheriting implicitly) requires that all parameters be specified in the class's __init__ method, because it is easy to introspect all the parameter names of the __init__ method (take a look at BaseEstimator; that is the class that raises this error).
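The introspection described above can be sketched with the standard inspect module (a simplified illustration of the idea, not scikit-learn's actual code; the class names Good and Bad are made up for the example):

```python
import inspect

class Good:
    # All parameters are named explicitly in __init__.
    def __init__(self, alpha=1.0, fit_intercept=True):
        self.alpha = alpha
        self.fit_intercept = fit_intercept

class Bad:
    # Varargs hide the parameter names from introspection.
    def __init__(self, *args, **kwargs):
        pass

def param_names(cls):
    # Roughly what BaseEstimator._get_param_names() does:
    # read the __init__ signature and reject varargs.
    sig = inspect.signature(cls.__init__)
    params = [p for name, p in sig.parameters.items() if name != 'self']
    if any(p.kind == p.VAR_POSITIONAL for p in params):
        raise RuntimeError('no varargs allowed in __init__')
    return sorted(p.name for p in params if p.kind != p.VAR_KEYWORD)

print(param_names(Good))  # ['alpha', 'fit_intercept']
```

With a `Good`-style signature the parameter names can be recovered; with a `Bad`-style signature there is nothing to introspect, which is exactly why the error above is raised.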

So it simply wants you to remove all varargs, i.e.

*args, **kwargs

from the __init__ signature. You have to list all of your model's parameters in the __init__ signature and initialize all the corresponding internal fields of the object.

Here is an example: the __init__ method of SGDClassifier, which inherits from BaseSGDClassifier:

def __init__(self, loss="hinge", penalty='l2', alpha=0.0001, l1_ratio=0.15,
             fit_intercept=True, n_iter=5, shuffle=True, verbose=0,
             epsilon=DEFAULT_EPSILON, n_jobs=1, random_state=None,
             learning_rate="optimal", eta0=0.0, power_t=0.5,
             class_weight=None, warm_start=False, average=False):
    super(SGDClassifier, self).__init__(
        loss=loss, penalty=penalty, alpha=alpha, l1_ratio=l1_ratio,
        fit_intercept=fit_intercept, n_iter=n_iter, shuffle=shuffle,
        verbose=verbose, epsilon=epsilon, n_jobs=n_jobs,
        random_state=random_state, learning_rate=learning_rate, eta0=eta0,
        power_t=power_t, class_weight=class_weight, warm_start=warm_start, average=average)
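With parameters declared this way, the machinery mentioned above just works; for instance, clone() can rebuild an estimator purely from get_params() (a small illustration, assuming scikit-learn is installed):

```python
from sklearn.base import clone
from sklearn.linear_model import SGDClassifier

est = SGDClassifier(alpha=0.01)
# clone() reads get_params() and calls __init__ with those values --
# which is exactly why every parameter must appear in the signature.
copy = clone(est)
print(copy.get_params()['alpha'])  # 0.01
```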
