Bayesian model does not learn with TensorFlow Probability and Keras



I want to estimate the epistemic uncertainty of my model, so I converted all layers to TensorFlow Probability layers. The model does not raise any errors, but it also does not learn anything: the losses of its two outputs do not change at all. On the other hand, the overall loss of the model is shrinking, but it seems to be completely unrelated to the other losses, which I cannot explain.

import numpy as np
from tensorflow import keras
import tensorflow_probability as tfp
import tensorflow as tf
from plot.plot_utils import plot_model_metrics
from Custom_Keras_layers.ProbSqueezeExcite import squeeze_excite_block
inp = keras.layers.Input(shape=[self.timesteps, self.features])
# left side
# 1 Conv1D block
l = tfp.layers.Convolution1DFlipout(filters=2*self.features, kernel_size=2, padding='same', activation=tf.nn.relu)(inp)
l = keras.layers.BatchNormalization()(l)
if squeeze_excite == 1:
    l = squeeze_excite_block(l)
l = keras.layers.Dropout(dropout_rate)(l, training=True)
# 1 Conv1D block
l = tfp.layers.Convolution1DFlipout(filters=4 * self.features, kernel_size=4, padding='same', activation=tf.nn.relu)(l)
l = keras.layers.BatchNormalization()(l)
if squeeze_excite == 1:
    l = squeeze_excite_block(l)
l = keras.layers.Dropout(dropout_rate)(l, training=True)
# 1 LSTM block
l = keras.layers.LSTM(32, recurrent_dropout=dropout_rate, dropout=dropout_rate)(l, training=True)
# left output layer
l = tfp.layers.DenseFlipout(self.classes, activation=tf.nn.softmax, name='left')(l)
# right side
# 1 Conv1D block
r = tfp.layers.Convolution1DFlipout(filters=2 * self.features, kernel_size=2, padding='same', activation=tf.nn.relu)(inp)
r = keras.layers.BatchNormalization()(r)
if squeeze_excite == 1:
    r = squeeze_excite_block(r)
r = keras.layers.Dropout(dropout_rate)(r, training=True)
# 1 Conv1D block
r = tfp.layers.Convolution1DFlipout(filters=4 * self.features, kernel_size=4, padding='same', activation=tf.nn.relu)(r)
r = keras.layers.BatchNormalization()(r)
if squeeze_excite == 1:
    r = squeeze_excite_block(r)
r = keras.layers.Dropout(dropout_rate)(r, training=True)
# 1 LSTM block
r = keras.layers.LSTM(32, recurrent_dropout=dropout_rate, dropout=dropout_rate)(r, training=True)
# right output layer
r = tfp.layers.DenseFlipout(self.classes, activation=tf.nn.softmax, name='right')(r)
model = keras.models.Model(inputs=inp, outputs=[l, r])
optimizer = tf.train.AdamOptimizer(learning_rate=lr)
losses = {
    "left": self._neg_log_likelihood_bayesian,
    "right": self._neg_log_likelihood_bayesian}
model.compile(optimizer=optimizer, loss=losses, metrics=['accuracy'])
self.model = model

The loss function is defined as follows:

def _neg_log_likelihood_bayesian(self, y_true, y_pred):
    labels_distribution = tfp.distributions.Categorical(logits=y_pred)
    neg_log_likelihood = -tf.reduce_mean(labels_distribution.log_prob(tf.argmax(y_true, axis=-1)))
    # KL sum over all tfp layers, scaled by the number of training examples
    kl = sum(self.model.losses) / self.trainNUM
    loss = neg_log_likelihood + kl
    return loss

Any help would be greatly appreciated. The total loss starts at 45000, while the two output losses are around 1.3, which seems strange to me.

Thanks to this post on the TensorFlow Probability GitHub issue tracker, I found out how to fix it: https://github.com/tensorflow/probability/issues/282 You have to scale the KL sum inside each tfp layer:

kernel_divergence_fn=lambda q, p, _: tfp.distributions.kl_divergence(q, p) / tf.to_float(train.num_examples)
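
For example, applied to the first Conv1D block above (a sketch only; train.num_examples is assumed to hold the size of the training set, as in the linked issue):

# sketch: scale each layer's KL term by the number of training examples
kl_fn = lambda q, p, _: tfp.distributions.kl_divergence(q, p) / tf.to_float(train.num_examples)
l = tfp.layers.Convolution1DFlipout(filters=2 * self.features, kernel_size=2, padding='same',
                                    activation=tf.nn.relu, kernel_divergence_fn=kl_fn)(inp)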

In addition, I changed the loss function to:

neg_log_likelihood = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_true, logits=y_pred)

That did it for me; my model is now training properly.
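
Putting both changes together, the revised loss function would look roughly like this (a sketch; the manual KL term is dropped because each layer now adds its own scaled KL divergence, which Keras collects in model.losses and includes in the total loss):

def _neg_log_likelihood_bayesian(self, y_true, y_pred):
    # data-fit term only; the per-layer KL terms are handled by the layers themselves
    neg_log_likelihood = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_true, logits=y_pred)
    return tf.reduce_mean(neg_log_likelihood)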

The total loss may already include the prior loss KL(sampled weights || prior), so could it be counted twice? (I am not sure how Keras handles this.) Another idea would be to try reduce_sum instead of reduce_mean.
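
One way to check the double-counting suspicion (a sketch): the Flipout layers register their KL divergences as layer losses, so inspecting model.losses shows what Keras adds to the compiled total loss on top of the custom loss function:

# each tfp Flipout layer registers its KL term as a Keras layer loss
print(len(model.losses))  # one entry per probabilistic kernel (and bias, if probabilistic)
for kl in model.losses:
    print(kl)  # tensors in graph mode; evaluate in a session to see the actual values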
