I have two questions about PCA when using scikit-learn.
Suppose I have the following data:
fullmatrix = [[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0],
              [2.3, 2.7],
              [2.0, 1.6],
              [1.0, 1.1],
              [1.5, 1.6],
              [1.1, 0.9]]
Now I run the PCA computation:
from sklearn.decomposition import PCA

sklearn_pca = PCA()
Y_sklearn = sklearn_pca.fit_transform(fullmatrix)
print(Y_sklearn)  # the data transformed onto all (here, 2) eigenvectors
print(sklearn_pca.explained_variance_ratio_)  # fraction of variance explained by each eigenvector
print(sklearn_pca.components_)  # eigenvectors, ordered by decreasing eigenvalue
First question: how do I project Y_sklearn back onto the original scale? (I know we should get the same data as fullmatrix, since I used all the eigenvectors; this is just a check that it was done correctly.) Second question: how can I set a threshold for the minimum acceptable total explained variance from sklearn_pca.explained_variance_ratio_? For example, say I want to keep adding eigenvectors until the total explained variance ratio exceeds 95%. In this case it is easy: we keep only the first eigenvector, since it already explains 0.96318131 of the variance. But how can we do this in a more automated way?
For the first question: use sklearn_pca.inverse_transform(Y_sklearn).
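As a quick sanity check (a minimal sketch; assuming the default whiten=False, where inverse_transform amounts to multiplying by the components and adding back the mean), the reconstruction should match fullmatrix when all eigenvectors are kept:

import numpy as np

X_back = sklearn_pca.inverse_transform(Y_sklearn)
# With every component kept, the reconstruction recovers the original data
print(np.allclose(X_back, fullmatrix))  # True

# Equivalent manual projection back onto the original scale (whiten=False)
X_manual = Y_sklearn @ sklearn_pca.components_ + sklearn_pca.mean_
print(np.allclose(X_back, X_manual))  # True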
For the second question, take the cumulative sum of the explained variance ratios and find where it first reaches the threshold:

import numpy as np

thr = 0.95
# Where the cumulative explained variance reaches the threshold
is_exceeds = np.cumsum(sklearn_pca.explained_variance_ratio_) >= thr
# The smallest such index; add 1 to get the minimum number of
# eigenvectors needed to retain that much variance
k = np.min(np.where(is_exceeds)) + 1
# Or you can simply initialize the model with thr as the n_components parameter
sklearn_pca = PCA(n_components=thr)
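For example (a brief usage sketch; the printed component count follows from the ~0.963 ratio quoted in the question), either slice the full transform with k or refit with the float threshold:

# Option 1: keep the first k columns of the full transform
Y_reduced = Y_sklearn[:, :k]

# Option 2: let PCA choose the number of components from the threshold
sklearn_pca = PCA(n_components=thr)
Y_reduced = sklearn_pca.fit_transform(fullmatrix)
print(sklearn_pca.n_components_)  # 1 here, since the first eigenvector already explains ~96.3%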