I am trying to use the sklearn LinearRegression model to predict multiple independent time series simultaneously, but I can't seem to get the prediction right.
My data is organized as follows: Xn is a matrix where each row contains a prediction window of 4 observations, and yn holds the target value for each row of Xn.
import numpy as np
# training data
X1=np.array([[-0.31994,-0.32648,-0.33264,-0.33844],[-0.32648,-0.33264,-0.33844,-0.34393],[-0.33264,-0.33844,-0.34393,-0.34913],[-0.33844,-0.34393,-0.34913,-0.35406],[-0.34393,-0.34913,-.35406,-0.35873],[-0.34913,-0.35406,-0.35873,-0.36318],[-0.35406,-0.35873,-0.36318,-0.36741],[-0.35873,-0.36318,-0.36741,-0.37144],[-0.36318,-0.36741,-0.37144,-0.37529],[-0.36741,-.37144,-0.37529,-0.37896],[-0.37144,-0.37529,-0.37896,-0.38069],[-0.37529,-0.37896,-0.38069,-0.38214],[-0.37896,-0.38069,-0.38214,-0.38349],[-0.38069,-0.38214,-0.38349,-0.38475],[-.38214,-0.38349,-0.38475,-0.38593],[-0.38349,-0.38475,-0.38593,-0.38887]])
X2=np.array([[-0.39265,-0.3929,-0.39326,-0.39361],[-0.3929,-0.39326,-0.39361,-0.3931],[-0.39326,-0.39361,-0.3931,-0.39265],[-0.39361,-0.3931,-0.39265,-0.39226],[-0.3931,-0.39265,-0.39226,-0.39193],[-0.39265,-0.39226,-0.39193,-0.39165],[-0.39226,-0.39193,-0.39165,-0.39143],[-0.39193,-0.39165,-0.39143,-0.39127],[-0.39165,-0.39143,-0.39127,-0.39116],[-0.39143,-0.39127,-0.39116,-0.39051],[-0.39127,-0.39116,-0.39051,-0.3893],[-0.39116,-0.39051,-0.3893,-0.39163],[-0.39051,-0.3893,-0.39163,-0.39407],[-0.3893,-0.39163,-0.39407,-0.39662],[-0.39163,-0.39407,-0.39662,-0.39929],[-0.39407,-0.39662,-0.39929,-0.4021]])
# target values
y1=np.array([-0.34393,-0.34913,-0.35406,-0.35873,-0.36318,-0.36741,-0.37144,-0.37529,-0.37896,-0.38069,-0.38214,-0.38349,-0.38475,-0.38593,-0.38887,-0.39184])
y2=np.array([-0.3931,-0.39265,-0.39226,-0.39193,-0.39165,-0.39143,-0.39127,-0.39116,-0.39051,-0.3893,-0.39163,-0.39407,-0.39662,-0.39929,-0.4021,-0.40506])
The usual procedure for a single time series is as follows:
from sklearn.linear_model import LinearRegression
# train the 1st half, predict the 2nd half
half = len(y1) // 2  # or y2, as they have the same length
LR = LinearRegression()
LR.fit(X1[:half], y1[:half])
pred = LR.predict(X1[half:])
r_2 = LR.score(X1[half:],y1[half:])
But how can the linear regression model be applied to multiple independent time series at once? I tried the following:
y_stack = np.vstack((y1[None],y2[None]))
X_stack = np.vstack((X1[None],X2[None]))
print('y1 shape:', y1.shape, 'X1 shape:', X1.shape)
print('y_stack shape:', y_stack.shape, 'X_stack:', X_stack.shape)
y1 shape: (16,) X1 shape: (16, 4)
y_stack shape: (2, 16) X_stack: (2, 16, 4)
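As an aside, the same stacking can be written with `np.stack`, which inserts the new leading axis directly; a minimal sketch with placeholder arrays of the same shapes as above:

```python
import numpy as np

# placeholder arrays with the same shapes as X1/X2 and y1/y2 above
X1 = np.zeros((16, 4))
X2 = np.zeros((16, 4))
y1 = np.zeros(16)
y2 = np.zeros(16)

# np.stack((a, b)) is equivalent to np.vstack((a[None], b[None]))
X_stack = np.stack((X1, X2))  # shape (2, 16, 4)
y_stack = np.stack((y1, y2))  # shape (2, 16)
```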
But then fitting the linear model fails as follows:
LR.fit(X_stack[:,:half], y_stack[:,:half])
complaining that the number of dimensions is higher than expected:
C:\Python27\lib\site-packages\sklearn\utils\validation.pyc in check_array(array, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
394 if not allow_nd and array.ndim >= 3:
395 raise ValueError("Found array with dim %d. %s expected <= 2."
--> 396 % (array.ndim, estimator_name))
397 if force_all_finite:
398 _assert_all_finite(array)
ValueError: Found array with dim 3. Estimator expected <= 2.
Any suggestions or hints would be greatly appreciated.
UPDATE
I could use a for loop, but since n is in practice around 10000 or more, I was hoping to find a solution involving array operations, since these are exactly what numpy, scipy and sklearn are built for.
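For reference, the plain for-loop baseline I would like to avoid looks like the sketch below, fitting one sklearn model per series (with small synthetic stand-in data, since the real stack has thousands of series):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# synthetic stand-in for the stacked data: (n_series, n_samples, n_features)
rng = np.random.RandomState(0)
X_stack = rng.rand(2, 16, 4)
y_stack = X_stack.sum(axis=-1) + 0.01 * rng.rand(2, 16)

half = X_stack.shape[1] // 2
scores = []
for Xi, yi in zip(X_stack, y_stack):
    LR = LinearRegression()
    LR.fit(Xi[:half], yi[:half])                    # train on the first half
    scores.append(LR.score(Xi[half:], yi[half:]))   # R^2 on the second half
```

This is correct but runs one `fit` per series, which is exactly the per-series Python overhead the array-based solution below the fold avoids.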
@ali_m I don't think this is a duplicate question, although they are partially related. And of course it is possible to apply and predict time series simultaneously with a linear regression model similar to sklearn's:
I created a new class LinearRegression_Multi:
class LinearRegression_Multi:
    def stacked_lstsq(self, L, b, rcond=1e-10):
        """
        Solve L x = b via SVD least squares, cutting off small singular values.
        L is an array of shape (..., M, N) and b of shape (..., M).
        Returns x of shape (..., N).
        """
        u, s, v = np.linalg.svd(L, full_matrices=False)
        s_max = s.max(axis=-1, keepdims=True)
        s_min = rcond * s_max
        inv_s = np.zeros_like(s)
        inv_s[s >= s_min] = 1 / s[s >= s_min]
        x = np.einsum('...ji,...j->...i', v,
                      inv_s * np.einsum('...ji,...j->...i', u, b.conj()))
        return np.conj(x, x)

    def center_data(self, X, y):
        """Center data to have mean zero along axis 1 (the sample axis)."""
        # center X
        X_mean = np.average(X, axis=1)
        X_std = np.ones(X.shape[0::2])
        X = X - X_mean[:, None, :]
        # center y
        y_mean = np.average(y, axis=1)
        y = y - y_mean[:, None]
        return X, y, X_mean, y_mean, X_std

    def set_intercept(self, X_mean, y_mean, X_std):
        """Calculate the intercept_."""
        self.coef_ = self.coef_ / X_std  # not really necessary
        self.intercept_ = y_mean - np.einsum('ij,ij->i', X_mean, self.coef_)

    def scores(self, y_pred, y_true):
        """
        The coefficient R^2 is defined as (1 - u/v), where u is the residual
        sum of squares ((y_true - y_pred) ** 2).sum() and v is the total
        sum of squares ((y_true - y_true.mean()) ** 2).sum().
        """
        u = ((y_true - y_pred) ** 2).sum(axis=-1)
        v = ((y_true - y_true.mean(axis=-1)[None].T) ** 2).sum(axis=-1)
        r_2 = 1 - u / v
        return r_2

    def fit(self, X, y):
        """Fit the linear model to every series in the stack at once."""
        # get coefficients by applying linear regression on the stack
        X_, y, X_mean, y_mean, X_std = self.center_data(X, y)
        self.coef_ = self.stacked_lstsq(X_, y)
        self.set_intercept(X_mean, y_mean, X_std)

    def predict(self, X):
        """Predict using the linear model."""
        return np.einsum('ijk,ik->ij', X, self.coef_) + self.intercept_[None].T
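The key trick is that `np.linalg.svd` broadcasts over leading axes, so `stacked_lstsq` solves all systems in one call. A quick sanity check against `np.linalg.lstsq` applied slice by slice (toy data, not the series from the question):

```python
import numpy as np

rng = np.random.RandomState(0)
L = rng.rand(3, 8, 4)   # 3 stacked systems, each with 8 equations and 4 unknowns
b = rng.rand(3, 8)

# batched SVD least squares, the same idea stacked_lstsq uses
u, s, v = np.linalg.svd(L, full_matrices=False)
x = np.einsum('...ji,...j->...i', v,
              np.einsum('...ji,...j->...i', u, b) / s)

# per-slice reference solution via np.linalg.lstsq
x_ref = np.array([np.linalg.lstsq(Li, bi, rcond=None)[0]
                  for Li, bi in zip(L, b)])
assert np.allclose(x, x_ref)
```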
It can be applied as follows, using the same variables declared in the question:
LR_Multi = LinearRegression_Multi()
LR_Multi.fit(X_stack[:,:half], y_stack[:,:half])
y_stack_pred = LR_Multi.predict(X_stack[:,half:])
R2 = LR_Multi.scores(y_stack_pred, y_stack[:,half:])
where the R^2 values for the multiple time series are:
array([ 0.91262442, 0.67247516])
which indeed matches the predictions of the standard sklearn linear regression:
from sklearn.linear_model import LinearRegression
LR = LinearRegression()
LR.fit(X1[:half], y1[:half])
R2_1 = LR.score(X1[half:],y1[half:])
LR.fit(X2[:half], y2[:half])
R2_2 = LR.score(X2[half:],y2[half:])
print(R2_1, R2_2)
0.912624422097 0.67247516054
If you need to build separate models, there is no way to use numpy's capabilities to increase performance, because you have many distinct tasks. The only thing you can do is run them simultaneously in different threads (using the multiple cores of your CPU), or even split the computation across different computers.
If you believe all the data fits the same model, then the obvious solution is to merge all the Xn and yn and learn on the combined set. This will certainly be faster than computing the models separately.
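The merge is just a concatenation along the sample axis; a minimal sketch with synthetic stand-ins for the question's arrays (real X1, X2, y1, y2 would be substituted):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# synthetic stand-ins with the shapes from the question
rng = np.random.RandomState(0)
X1, X2 = rng.rand(16, 4), rng.rand(16, 4)
y1, y2 = X1 @ np.ones(4), X2 @ np.ones(4)

# merge along the sample axis and fit one shared model
X_all = np.vstack((X1, X2))       # shape (32, 4)
y_all = np.concatenate((y1, y2))  # shape (32,)

LR = LinearRegression().fit(X_all, y_all)
```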
But in fact the issue is not computational performance, it is the result you want. If you need different models, you have no choice but to compute them separately. If one model is enough, just merge the data. Otherwise, if you compute separate models, you face the problem of how to obtain final parameters from all of them.