Very low accuracy with LSTM



I don't know why my configuration gets such low accuracy (always 0.1508). Data shape: (1476, 1000, 1)

from sklearn.preprocessing import MinMaxScaler
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout, BatchNormalization

# scale the raw training data to [0, 1]
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_X = scaler.fit_transform(train_Data)
....
# stacked LSTM classifier: four LSTM blocks, each followed by Dropout + BatchNormalization
myModel = Sequential()
myModel.add(LSTM(128, input_shape=myData.shape[1:], activation='relu', return_sequences=True))
myModel.add(Dropout(0.2))
myModel.add(BatchNormalization())
myModel.add(LSTM(128, activation='relu', return_sequences=True))
myModel.add(Dropout(0.2))
myModel.add(BatchNormalization())
myModel.add(LSTM(64, activation='relu', return_sequences=True))
myModel.add(Dropout(0.2))
myModel.add(BatchNormalization())
myModel.add(LSTM(32, activation='relu'))
myModel.add(Dropout(0.2))
myModel.add(BatchNormalization())
# classification head with 8 output classes
myModel.add(Dense(16, activation='relu'))
myModel.add(Dropout(0.2))
myModel.add(Dense(8, activation='softmax'))
#myModel.add(Dropout(0.2))
opt = tf.keras.optimizers.SGD(lr=0.001, decay=1e-6)
ls  = tf.keras.losses.categorical_crossentropy

It also sometimes emits the following warning:

W1014 21:02:57.125363  6600 ag_logging.py:146] Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x00000188C58C3E18> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: 
WARNING: Entity <function Function._initialize_uninitialized_variables.<locals>.initialize_variables at 0x00000188C58C3E18> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause:

The two main culprits are the Dropout layers and the data preprocessing. In detail, and more:

  • Dropout on stacked LSTMs is known to yield poor performance, as it introduces too much noise for stable time-dependent feature extraction. Fix: use recurrent_dropout (a rebuilt model is sketched further below)
  • If you are working with signal data, or data with (1) outliers, (2) phase information, or (3) frequency information, MinMaxScaler will destroy the latter two, plus the amplitude information in case (1). Fix: use StandardScaler or QuantileTransformer (see the scaling sketch after this list)
  • Consider using the Nadam optimizer instead of SGD; it has proven dominant in my LSTM applications and is overall more hyperparameter-robust than SGD
  • Consider using CuDNNLSTM; it can run 10x faster
  • Make sure your data is shaped correctly for LSTMs: (batch_size, timesteps, features), or equivalently, (samples, timesteps, channels)
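As a rough illustration of the scaling point, here is a minimal sketch of swapping MinMaxScaler for StandardScaler on 3D sequence data; the (1476, 1000, 1) shape and the train_Data name are taken from the question, and the random array is only a stand-in for the real data. scikit-learn scalers expect 2D input, so the array is flattened along the time axis, transformed, and reshaped back:

import numpy as np
from sklearn.preprocessing import StandardScaler  # or QuantileTransformer

train_Data = np.random.randn(1476, 1000, 1)   # stand-in for the real (samples, timesteps, features) array
samples, timesteps, features = train_Data.shape

# scikit-learn scalers operate on 2D arrays: flatten time into the sample axis,
# fit/transform per feature, then restore the 3D shape the LSTM expects
scaler = StandardScaler()
scaled_X = scaler.fit_transform(train_Data.reshape(-1, features))
scaled_X = scaled_X.reshape(samples, timesteps, features)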

Heed the warning: if you do use recurrent_dropout, use activation='tanh', as 'relu' is unstable.
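Putting these points together, below is a minimal sketch of a rebuilt model, not the poster's exact setup but one way to apply the suggestions: the Dropout layers between LSTMs are replaced by recurrent_dropout, relu by tanh, and SGD by Nadam; the (1000, 1) input shape and the 8-class softmax come from the question. Note that CuDNNLSTM, if used instead, supports neither recurrent_dropout nor a non-tanh activation.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout, BatchNormalization

model = Sequential()
# recurrent_dropout inside the LSTM cells instead of Dropout layers between them; tanh instead of relu
model.add(LSTM(128, input_shape=(1000, 1), activation='tanh',
               recurrent_dropout=0.2, return_sequences=True))
model.add(BatchNormalization())
model.add(LSTM(128, activation='tanh', recurrent_dropout=0.2, return_sequences=True))
model.add(BatchNormalization())
model.add(LSTM(64, activation='tanh', recurrent_dropout=0.2, return_sequences=True))
model.add(BatchNormalization())
model.add(LSTM(32, activation='tanh', recurrent_dropout=0.2))
model.add(BatchNormalization())
model.add(Dense(16, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(8, activation='softmax'))

# Nadam instead of SGD
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=0.001),
              loss='categorical_crossentropy',
              metrics=['accuracy'])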


UPDATE: the real culprit turned out to be insufficient data; see here for details.
