How to get the hidden node representations of an LSTM in Keras



I have implemented a model in Keras using an LSTM. I am trying to get the representation of the hidden nodes of the LSTM layer. Is this the correct way to get the hidden node representation (stored in the activations variable)?

model = Sequential()
model.add(LSTM(50, input_dim=sample_index))
activations = model.predict(testX)
model.add(Dense(no_of_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy',  optimizer='adagrad', metrics=['accuracy'])
hist=model.fit(trainX, trainY, validation_split=0.15, nb_epoch=5, batch_size=20, shuffle=True, verbose=1)

Edit: Your way of getting the hidden representation is also correct. Reference: https://github.com/fchollet/keras/issues/41
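
The approach discussed in that issue amounts to building a backend function from the model input to the output of the layer you are interested in. Here is a minimal sketch, assuming the trained model from the question, where model.layers[0] is the LSTM layer and testX is your test data:

from keras import backend as K

# build a function mapping the model input to the LSTM layer's output
get_lstm_output = K.function([model.layers[0].input],
                             [model.layers[0].output])
activations = get_lstm_output([testX])[0]  # hidden representations, shape (num_samples, 50)

Because the LSTM was created without return_sequences=True, this gives one 50-dimensional vector per sample, i.e. the hidden state at the last time step.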

After training the model, you can save the model and its weights, like this:

from keras.models import model_from_json
json_model = yourModel.to_json()
open('yourModel.json', 'w').write(json_model)
yourModel.save_weights('yourModel.h5', overwrite=True)

Then you can visualize the weights of the LSTM layer, like this:

from keras.models import model_from_json
import matplotlib.pyplot as plt
model = model_from_json(open('yourModel.json').read())
model.load_weights('yourModel.h5')
layer = model.layers[1]  # find the LSTM layer you want to visualize, [1] is just an example
weights = layer.get_weights()  # an LSTM layer returns a list of several weight arrays, not just (weights, bias)
plt.matshow(weights[0], fignum=100, cmap=plt.cm.gray)  # plot the first weight matrix
plt.show()
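
If you want to look at the hidden activations themselves rather than the weights, you can plot them the same way. A sketch, assuming the get_lstm_output backend function from the snippet above and the same testX:

activations = get_lstm_output([testX])[0]  # hidden representations, shape (num_samples, 50)
plt.matshow(activations, fignum=101, cmap=plt.cm.gray)  # rows are samples, columns are hidden nodes
plt.show()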
