sklearn PLSRegression: fraction of variance in X explained by each latent vector



I performed a partial least squares regression using Python's sklearn.cross_decomposition.PLSRegression.

Is there a way to retrieve the fraction of explained variance in X, i.e. R²(X), for each PLS component? I'm looking for something similar to the explvar() function in the R pls package. However, I would also appreciate any suggestions on how to compute it myself.

There is also a similar question with an answer explaining how to obtain the explained variance of Y. I think that in that case "variance in Y" was what was asked for, which is why I'm opening a new question; I hope that's OK.

I managed to find a solution to the problem. The following gives the proportion of variance in X explained by each latent vector after a PLS regression:

import numpy as np
from sklearn import cross_decomposition

# X is a numpy ndarray with samples in rows and predictor variables in columns
# y is a one-dimensional ndarray containing the response variable

# total variance in X is the sum of the per-column variances
total_variance_in_x = np.sum(np.var(X, axis = 0))

pls1 = cross_decomposition.PLSRegression(n_components = 5)
pls1.fit(X, y)

# variance of the transformed X data (scores) for each latent vector:
variance_in_x = np.var(pls1.x_scores_, axis = 0)

# normalize by the total variance to get one fraction per component:
fractions_of_explained_variance = variance_in_x / total_variance_in_x

I'm not certain about this, so if someone can contribute something, I'd appreciate it.

Following along the lines of:

https://ro-che.info/articles/2017-12-11-pca-expain-variance

https://www.ibm.com/docs/de/spss-statistics/24.0.0?topic=reduction-total-variance-explained

# normalize by the variance captured by the extracted components,
# so the fractions sum to 1:
variance_in_x = np.var(pls1.x_scores_, axis = 0)
fractions_of_explained_variance = variance_in_x / np.sum(variance_in_x)
