Should LSTMBlockFusedCell have a 6% higher (test) error than LSTMCell, or have I made a mistake with dropout?



I built a simple stacked dynamic bidirectional LSTM for a regression problem, using LSTMCell, DropoutWrapper, MultiRNNCell, and bidirectional_dynamic_rnn (Model_Orig). The test absolute error after 20 epochs is 2.89, and training took 14.5 hours.

I then tried an alternative implementation (Model_blockfused) with the same structure but built from the block-fused components (i.e. tf.layers.dropout, LSTMBlockFusedCell, and TimeReversedFusedRNN). Model_blockfused trains much faster (3.6 hours), but its test absolute error after 20 epochs is about 6% higher (3.06).

So, should I expect a 6% performance gap between LSTMBlockFusedCell and LSTMCell? Or did I make any mistakes when building Model_blockfused (especially with the dropout)?

Here is the simplified code for Model_Orig:

LSTM_CELL_SIZE = 200
keep_prob = 0.90
parallel_iterations = 512

dropcells = []
for iiLyr in range(3):
    cell_iiLyr = tf.nn.rnn_cell.LSTMCell(num_units=LSTM_CELL_SIZE, state_is_tuple=True)
    dropcells.append(tf.nn.rnn_cell.DropoutWrapper(cell=cell_iiLyr, output_keep_prob=keep_prob))
MultiLyr_cell = tf.nn.rnn_cell.MultiRNNCell(cells=dropcells, state_is_tuple=True)

outputs, states = tf.nn.bidirectional_dynamic_rnn(
    cell_fw=MultiLyr_cell,
    cell_bw=MultiLyr_cell,
    inputs=Orig_input_TSs,  # shape of Orig_input_TSs: [#batches, time_len, #input_features]
    dtype=tf.float32,
    sequence_length=length,  # shape of length: [#batches, 1]
    parallel_iterations=parallel_iterations,  # default: 32; ops without temporal dependency run in parallel
    scope="BiLSTM"
)

states_fw, states_bw = states
# get the states (c and h, both directions) from the top LSTM layer for the final fully connected layers
c_fw_lstLyr, h_fw_lstLyr = states_fw[-1]
c_bw_lstLyr, h_bw_lstLyr = states_bw[-1]
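
For completeness, here is a minimal sketch of how these four state tensors might feed the final fully connected layers. The head itself is omitted from my simplified code, so the single-output dense layer below is only illustrative:

# concatenate c and h from both directions of the top layer: [#batches, 4*LSTM_CELL_SIZE]
final_state = tf.concat([c_fw_lstLyr, h_fw_lstLyr, c_bw_lstLyr, h_bw_lstLyr], axis=1)
# illustrative single-unit dense layer producing the regression output
prediction = tf.layers.dense(final_state, units=1)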

Here is the simplified code for Model_blockfused:

LSTM_CELL_SIZE = 200
keep_prob = 0.90
Flg_training = True  # True: training

# convert the input sequences (Orig_input_TSs) to the time-major format
# shape of input_TSs_TimeMajor: [time_len, #batches, #input_features]
input_TSs_TimeMajor = tf.transpose(Orig_input_TSs, perm=[1, 0, 2])

# apply the dropout
# shape of dropout_input_TSs_TimeMajor: [time_len, #batches, #input_features]
dropout_input_TSs_TimeMajor = tf.layers.dropout(
    input_TSs_TimeMajor,
    rate=1.0 - keep_prob,  # dropout rate
    training=Flg_training
)

# build the stacked dynamic bidirectional LSTM
for iiLyr in range(3):
    cur_fw_BFcell_obj = tf.contrib.rnn.LSTMBlockFusedCell(num_units=LSTM_CELL_SIZE)
    cur_bw_BFcell_obj = tf.contrib.rnn.TimeReversedFusedRNN(cur_fw_BFcell_obj)
    if iiLyr == 0:
        # first layer (different inputs)
        # shape of fw_out_TM (or bw_out_TM): [time_len, #batches, LSTM_CELL_SIZE]
        # fw_state (or bw_state): LSTMStateTuple(c, h)
        fw_out_TM, fw_state = cur_fw_BFcell_obj(dropout_input_TSs_TimeMajor, dtype=tf.float32, sequence_length=length)
        bw_out_TM, bw_state = cur_bw_BFcell_obj(dropout_input_TSs_TimeMajor, dtype=tf.float32, sequence_length=length)
    else:
        # shape of fw_out_TM (or bw_out_TM): [time_len, #batches, LSTM_CELL_SIZE]
        # fw_state (or bw_state): LSTMStateTuple(c, h)
        fw_out_TM, fw_state = cur_fw_BFcell_obj(fw_out_TM, dtype=tf.float32, sequence_length=length)
        bw_out_TM, bw_state = cur_bw_BFcell_obj(bw_out_TM, dtype=tf.float32, sequence_length=length)

# get the LSTM states (c and h, both directions) from the top LSTM layer for the final fully connected layers
c_fw_lstLyr, h_fw_lstLyr = fw_state
c_bw_lstLyr, h_bw_lstLyr = bw_state

Thanks.

First, you should use two separate tf.contrib.rnn.LSTMBlockFusedCell instances for fw and bw; reusing the same cell object makes the two directions share one set of weights. Change the code below

cur_fw_BFcell_obj = tf.contrib.rnn.LSTMBlockFusedCell(num_units=LSTM_CELL_SIZE)
cur_bw_BFcell_obj = tf.contrib.rnn.TimeReversedFusedRNN(cur_fw_BFcell_obj)

to this:

cur_fw_BFcell_obj = tf.contrib.rnn.LSTMBlockFusedCell(num_units=LSTM_CELL_SIZE)
cur_bw_BFcell_obj_cell = tf.contrib.rnn.LSTMBlockFusedCell(num_units=LSTM_CELL_SIZE)
cur_bw_BFcell_obj = tf.contrib.rnn.TimeReversedFusedRNN(cur_bw_BFcell_obj_cell)

Second, the documentation for TF's tf.contrib.rnn.stack_bidirectional_dynamic_rnn API says:

The combined forward and backward layer outputs are used as input of the next layer.

So the code below

fw_out_TM, fw_state = cur_fw_BFcell_obj(fw_out_TM, dtype=tf.float32, sequence_length=length)
bw_out_TM, bw_state = cur_bw_BFcell_obj(bw_out_TM, dtype=tf.float32, sequence_length=length)

should be changed to:

next_layer_input = tf.concat([fw_out_TM, bw_out_TM], axis=2)
fw_out_TM, fw_state = cur_fw_BFcell_obj(next_layer_input, dtype=tf.float32, sequence_length=length)
bw_out_TM, bw_state = cur_bw_BFcell_obj(next_layer_input, dtype=tf.float32, sequence_length=length)
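
Putting both fixes together, the stacking loop would look roughly like this (a sketch only; the per-layer variable scopes are my own addition so that each fused cell creates its own set of weights):

next_layer_input = dropout_input_TSs_TimeMajor
for iiLyr in range(3):
    cur_fw_BFcell_obj = tf.contrib.rnn.LSTMBlockFusedCell(num_units=LSTM_CELL_SIZE)
    cur_bw_BFcell_obj_cell = tf.contrib.rnn.LSTMBlockFusedCell(num_units=LSTM_CELL_SIZE)
    cur_bw_BFcell_obj = tf.contrib.rnn.TimeReversedFusedRNN(cur_bw_BFcell_obj_cell)
    # distinct scopes per layer/direction, so each call builds its own variables
    with tf.variable_scope("fw_lyr%d" % iiLyr):
        fw_out_TM, fw_state = cur_fw_BFcell_obj(next_layer_input, dtype=tf.float32, sequence_length=length)
    with tf.variable_scope("bw_lyr%d" % iiLyr):
        bw_out_TM, bw_state = cur_bw_BFcell_obj(next_layer_input, dtype=tf.float32, sequence_length=length)
    # combined fw/bw outputs feed the next layer: [time_len, #batches, 2*LSTM_CELL_SIZE]
    next_layer_input = tf.concat([fw_out_TM, bw_out_TM], axis=2)

Note that with this structure every layer after the first receives an input of size 2 * LSTM_CELL_SIZE rather than LSTM_CELL_SIZE.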
