Bug in scikit-learn's PCA or bug in NumPy's eigendecomposition?



I have a dataset with 400 features.

What I did:

# approach 1
d_cov = np.cov(d_train.transpose())
eigens, mypca = LA.eig(d_cov)  # assume these are also sorted by eigenvalue; LA = numpy.linalg
# approach 2
pca = PCA(n_components=300)
d_fit = pca.fit_transform(d_train)
pc = pca.components_

Now, these two should be the same, right? Since PCA is just an eigendecomposition of the covariance matrix.

But the results I get are very different.

How can that be? What am I doing wrong above?

Comparing the variances:

import numpy as np
LA = np.linalg
d_train = np.random.randn(100, 10)
d_cov = np.cov(d_train.transpose())
eigens, mypca = LA.eig(d_cov)
import matplotlib.pyplot as plt

from sklearn.decomposition import PCA
pca =  PCA(n_components=10)
d_fit = pca.fit_transform(d_train)
pc = pca.components_
ve = pca.explained_variance_
#mypca[0,:], pc[0,:] pc.transpose()[0,:]
plt.plot(list(range(len(eigens))), [ x.transpose().dot(d_cov).dot(x) for x,y  in zip(mypca, eigens) ])
plt.plot(list(range(len(ve))), ve)
plt.show()
print(mypca, '\n---\n', pc)

You need to read the documentation more carefully. NumPy's documentation is excellent and very thorough; very often you can find the solution to a problem just by reading it.

Here is a modified version of your code (imports at the top of the snippet, `.T` instead of `.transpose()`, PEP 8):

import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA
from numpy import linalg as LA
d_train = np.random.randn(100, 10)
d_cov = np.cov(d_train.T)
eigens, mypca = LA.eig(d_cov)
pca = PCA(n_components=10)
d_fit = pca.fit_transform(d_train)
pc = pca.components_
explained = pca.explained_variance_
my_explained = np.sort([x.T.dot(d_cov).dot(x) for x in mypca.T])[::-1]
plt.close('all')
plt.figure()
plt.plot(my_explained, label='me')
plt.plot(explained, label='sklearn')
plt.legend()
plt.show(block=False)
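Beyond eyeballing the plot, you can assert the match numerically. A small check under the same setup (the `np.allclose` comparison is my addition, not part of the original answer):

# Numerical check (a sketch): the eigenvalues of the sample covariance,
# sorted in descending order, should equal sklearn's explained_variance_.
# np.cov defaults to the unbiased N-1 normalization, and sklearn's
# explained_variance_ also divides by N - 1, so the values are directly comparable.
print(np.allclose(np.sort(eigens)[::-1], explained))  # expected: True
print(np.allclose(my_explained, explained))           # same check via the quadratic forms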

The two curves are exactly the same. The important point is that I iterate over `mypca.T`, not `mypca`.
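As a minimal illustration of why the transpose matters (the toy array `M` below is just an example of mine): iterating a 2-D NumPy array yields its rows, so you have to transpose to iterate over columns.

import numpy as np

M = np.arange(6).reshape(2, 3)
print([row for row in M])    # rows:    [array([0, 1, 2]), array([3, 4, 5])]
print([col for col in M.T])  # columns: [array([0, 3]), array([1, 4]), array([2, 5])]

And here is the relevant part of the `np.linalg.eig` docstring: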

Signature: np.linalg.eig(a)
Docstring:
Compute the eigenvalues and right eigenvectors of a square array.
Parameters
----------
a : (..., M, M) array
Matrices for which the eigenvalues and right eigenvectors will
be computed
Returns
-------
w : (..., M) array
# not important for you
v : (..., M, M) array
The normalized (unit "length") eigenvectors, such that the
column ``v[:,i]`` is the eigenvector corresponding to the
eigenvalue ``w[i]``.

The eigenvectors are returned as the columns of `mypca`, not the rows, and `for x in mypca` iterates over the rows.
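To see the column convention in action, here is a self-contained sketch with a hand-picked non-symmetric matrix (my own example, not from the question):

import numpy as np

# np.linalg.eig returns eigenvectors as COLUMNS of v:
# A @ v[:, i] == w[i] * v[:, i]
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # non-symmetric, so rows and columns differ
w, v = np.linalg.eig(A)
print(np.allclose(A @ v[:, 0], w[0] * v[:, 0]))  # True:  the column is an eigenvector
print(np.allclose(A @ v[0, :], w[0] * v[0, :]))  # False: the row is not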

I'm not an expert on PCA, but it seems I get similar values if I transpose one of the matrices.

>>> import numpy as np
>>> LA = np.linalg
>>> d_train = np.random.randn(100, 10)
>>> d_cov = np.cov(d_train.transpose())
>>> eigens, mypca = LA.eig(d_cov)
>>> from sklearn.decomposition import PCA
>>> pca =  PCA(n_components=10)
>>> d_fit = pca.fit_transform(d_train)
>>> pc = pca.components_
>>> mypca[0,:]
array([-0.44255435, -0.77430549, -0.14479638, -0.06459874,  0.24772212,
        0.20780185,  0.22388151, -0.05069543, -0.14515676, -0.03385801])
>>> pc[0,:]
array([-0.44255435, -0.24050535, -0.17313927,  0.07182494,  0.09748632,
        0.17910516,  0.26125107,  0.71309764,  0.17276004,  0.25095447])
>>> pc.transpose()[0,:]
array([-0.44255435,  0.77430549,  0.14479638, -0.06459874,  0.24772212,
       -0.20780185,  0.22388151, -0.03385801,  0.14515676,  0.05069543])
>>> list(zip(pc.transpose()[:,0], mypca[:,0]))
[(-0.44255435328718207, -0.44255435328718096),
 (-0.24050535133912765, -0.2405053513391287),
 (-0.17313926714559819, -0.17313926714559785),
 (0.07182494253930383, 0.0718249425393035),
 (0.09748631534772645, 0.09748631534772684),
 (0.17910516453826955, 0.17910516453826758),
 (0.2612510722861703, 0.2612510722861689),
 (0.7130976419217306, 0.7130976419217326),
 (0.17276004381786172, 0.17276004381786136),
 (0.25095447415020183, 0.2509544741502009)]
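The remaining element-wise sign differences (visible in `mypca[0,:]` vs. `pc.transpose()[0,:]` above) are expected: an eigenvector v and -v are equally valid, and NumPy and scikit-learn may pick opposite signs for individual vectors. A minimal sketch for aligning the signs before comparing, assuming `eig` happened to return the eigenvalues already in descending order, as in the transcript above (otherwise reorder the columns of `mypca` with `np.argsort(eigens)[::-1]` first):

# Flip each eigenvector's sign to match the corresponding sklearn component.
# pc has components as rows; mypca has eigenvectors as columns.
signs = np.sign(np.sum(mypca * pc.T, axis=0))  # +1 or -1 per eigenvector
print(np.allclose(mypca * signs, pc.T))        # expected: True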
