What makes a voting regressor score lower than its voters?



Regarding ensemble learning, I have often heard that "estimators that are more diverse in some sense tend to yield better ensemble performance" [tmrlv's answer]. To evaluate this claim, I used just a DecisionTreeRegressor and a Keras Sequential model to solve a regression problem with a voting approach, namely VotingRegressor. As far as I know, decision trees and neural networks are quite different estimators, so I expected my ensemble model to outperform its voters. Below is a simple test that suggests otherwise.

from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import VotingRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_squared_error
from sklearn.datasets import fetch_california_housing
import tensorflow as tf
from scikeras.wrappers import KerasRegressor

housing = fetch_california_housing()
seed1 = 7
seed2 = 5
seed3 = 42

X_train_full, X_test, y_train_full, y_test = train_test_split(housing.data, housing.target, random_state=seed1)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full, random_state=seed2)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_valid = scaler.transform(X_valid)  # transform only: reuse the scaling fitted on the training set
X_test = scaler.transform(X_test)

def build_dnn_reg():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.InputLayer(input_shape=X_train.shape[1:]))
    model.add(tf.keras.layers.Dense(24, tf.keras.activations.relu))
    model.add(tf.keras.layers.Dense(24, tf.keras.activations.relu))
    model.add(tf.keras.layers.Dense(1))
    model.compile(loss=tf.keras.losses.mean_squared_error,
                  optimizer=tf.keras.optimizers.SGD(learning_rate=0.0003),
                  metrics=[tf.keras.metrics.MSE])
    return model

dnn_reg = KerasRegressor(build_dnn_reg)
rnd_reg = DecisionTreeRegressor(max_depth=5,
                                min_samples_leaf=1,
                                max_leaf_nodes=9, random_state=seed1)
vot_reg = VotingRegressor(estimators=[("rnd", rnd_reg), ("dnn", dnn_reg)])

rnd_reg.fit(X_train, y_train)
dnn_reg.fit(X_train, y_train, epochs=90, validation_data=(X_valid, y_valid))
vot_reg.fit(X_train, y_train)

rnd_reg_scr = cross_val_score(estimator=rnd_reg, X=X_valid, y=y_valid, cv=10, n_jobs=-1)
vot_reg_scr = cross_val_score(estimator=vot_reg, X=X_valid, y=y_valid, cv=10, n_jobs=-1)

yhat_rnd_reg = rnd_reg.predict(X_test)
yhat_dnn_reg = dnn_reg.predict(X_test)
yhat_vot_reg = vot_reg.predict(X_test)

print("rnd_reg_mse: {0}".format(mean_squared_error(y_test, yhat_rnd_reg)))
print("dnn_reg_mse: {0}".format(mean_squared_error(y_test, yhat_dnn_reg)))
print("vot_reg_mse: {0}".format(mean_squared_error(y_test, yhat_vot_reg)))
print("rnd_reg_scr: {0}".format(rnd_reg.score(X_test, y_test)))
print("dnn_reg_scr: {0}".format(dnn_reg.score(X_test, y_test)))
print("vot_reg_scr: {0}".format(vot_reg.score(X_test, y_test)))

In particular, the code above reports the following metrics.

rnd_reg_mse: 0.7501101252790635
dnn_reg_mse: 0.45553422440199703
vot_reg_mse: 0.8858773948030302
rnd_reg_scr: 0.4371867731255331
dnn_reg_scr: 0.6582092706826692
vot_reg_scr: 0.3353195772437395

As one can observe, my voting regressor scores lower than its voters. How can these results be explained?

First, a caveat: this kind of improvement is never guaranteed, so an ensemble can underperform even with "diverse" voters.
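To see why, note that VotingRegressor simply averages its voters' predictions, and the average can easily score worse than the best individual voter when one voter is much weaker. A minimal sketch with purely illustrative numbers (not the question's models):

```python
import numpy as np

# Hypothetical targets and two voters with independent errors:
# voter A is accurate, voter B is strongly biased.
rng = np.random.default_rng(0)
y = np.zeros(1000)
pred_a = rng.normal(0.0, 0.3, size=1000)  # good voter, MSE around 0.09
pred_b = rng.normal(1.0, 0.3, size=1000)  # biased voter, MSE around 1.09
pred_avg = (pred_a + pred_b) / 2          # what a uniform voting ensemble outputs

def mse(pred):
    return np.mean((pred - y) ** 2)

# Averaging halves the bias but cannot remove it, so the ensemble lands
# between the two voters rather than beating both.
print(mse(pred_a), mse(pred_b), mse(pred_avg))
```

The ensemble only wins when the voters' errors are comparable in size and at least partly cancel; averaging a strong and a weak voter typically lands between them.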

That said, the specific issue here is that your voting ensemble cannot apply the fit parameters you passed when fitting the individual Keras regressor. So internally, the Keras voter is fit for only one epoch (the default). I don't see a simple way to overcome this within VotingRegressor, but for comparison you could check how rnd_reg performs when the Keras model is trained with the defaults.
