Scikit-Learn Pipeline: Normalising after PCA produces undesired random results



I'm running a pipeline that normalises the range of my inputs, runs PCA, and then normalises the PCA factors before finally running a logistic regression.

However, I'm getting variable results in the confusion matrix it produces.

I've found that if I remove the third step ("normalise_pca"), my results are constant.

I've set random_state=0 on all pipeline steps. Any idea why I'm getting variable results?

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler, PowerTransformer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def exp2_classifier(X_train, y_train):
    estimators = [('robust_scaler', RobustScaler()),
                  ('reduce_dim', PCA(random_state=0)),
                  ('normalise_pca', PowerTransformer()),  # I applied this as the distribution of the PCA factors was skewed
                  ('clf', LogisticRegression(random_state=0, solver="liblinear"))]
                  # solver specified here to suppress warnings; it doesn't seem to affect GridSearch
    pipe = Pipeline(estimators)
    return pipe

exp2_eval = Evaluation().print_confusion_matrix
logit_grid = Experiment().run_experiment(asdp.data, "heavy_drinker", exp2_classifier, exp2_eval)

I cannot reproduce your error. I tried another sample dataset from sklearn and got consistent results across multiple runs, so the variability is probably not caused by normalise_pca:
from sklearn import datasets
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler, PowerTransformer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

cancer = datasets.load_breast_cancer()
X = cancer.data
y = cancer.target
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.2, random_state=42)

estimators = [('robust_scaler', RobustScaler()),
              ('reduce_dim', PCA(random_state=0)),
              ('normalise_pca', PowerTransformer()),  # applied as the distribution of the PCA factors was skewed
              ('clf', LogisticRegression(random_state=0, solver="liblinear"))]
              # solver specified here to suppress warnings; it doesn't seem to affect GridSearch

pipe = Pipeline(estimators)
pipe.fit(X_train, y_train)

print('train data :')
print(confusion_matrix(y_train, pipe.predict(X_train)))
print('test data :')
print(confusion_matrix(y_eval, pipe.predict(X_eval)))

Output:

train data :
[[166   3]
 [  4 282]]
test data :
[[40  3]
 [ 3 68]]
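
As far as I can tell, none of these steps is stochastic given fixed input: RobustScaler and PowerTransformer fit deterministic statistics, PCA with n_components left at its default resolves to a full SVD, and liblinear is seeded by random_state. If you want to rule out the pipeline on your own data, a minimal sketch is to freeze the split once and fit the pipeline twice from scratch, then compare the predictions (np.array_equal here just checks the two prediction vectors are identical):

import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler, PowerTransformer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Freeze the split so the pipeline is the only remaining candidate
# source of run-to-run variation.
cancer = datasets.load_breast_cancer()
X_train, X_eval, y_train, y_eval = train_test_split(
    cancer.data, cancer.target, test_size=0.2, random_state=42)

pipe = Pipeline([('robust_scaler', RobustScaler()),
                 ('reduce_dim', PCA(random_state=0)),
                 ('normalise_pca', PowerTransformer()),
                 ('clf', LogisticRegression(random_state=0, solver='liblinear'))])

# Fit from scratch twice and compare the resulting predictions.
# Identical outputs mean the pipeline itself is deterministic, so any
# variation you see must come from outside it (e.g. how run_experiment
# splits or shuffles the data on each call).
preds_a = pipe.fit(X_train, y_train).predict(X_eval)
preds_b = pipe.fit(X_train, y_train).predict(X_eval)
print('pipeline deterministic:', np.array_equal(preds_a, preds_b))

If that prints True, I'd look at how your Experiment().run_experiment splits the data rather than at normalise_pca.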
