How do I reference the ground truth variable in TensorFlow Keras?

I am trying to add a custom loss to my model and need a reference to the target variable to do so. For example:

model = Model(inputs=[x1, x2, x3], outputs=[y1,y2,y3])
mse = tf.keras.losses.MeanSquaredError()
model.add_loss(mse(x1, dec_x1))  # dec_x1: the decoded version of x1, defined elsewhere in the model

Here I am adding a loss between one of the inputs and the decoded version of that same variable after encoding. But I would also like to add a loss that depends on the ground truth of the y variables (not the predictions y1, y2, y3), i.e. imagine a y1_true, and then add a loss like:

# Code to make Y1 which depends on x1, x2, x3
model.add_loss(mse(Y1, y1_true))

But how do I access y1_true in Keras?

Unfortunately, the add_loss function does not have access to the y1_true labels. That method is really meant for regularization-style losses inside the model:

When writing the call method of a custom layer or a subclassed model, you may want to compute scalar quantities that you want to minimize during training (e.g. regularization losses). You can use the add_loss() layer method to keep track of such loss terms.
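For instance, this is the kind of loss add_loss is designed for: a term computed purely from tensors inside the model, with no labels involved. A minimal sketch (the layer name and the 0.01 penalty weight are made up for illustration):

import tensorflow as tf

class ActivityRegularized(tf.keras.layers.Layer):
    """Adds an illustrative L2 activity penalty via add_loss()."""
    def call(self, inputs):
        # The loss depends only on tensors inside the model -- no ground
        # truth labels are needed, which is why add_loss works here.
        self.add_loss(0.01 * tf.reduce_sum(tf.square(inputs)))
        return inputs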

See the documentation for more information. I think you are better off with a custom training loop, where you have direct access to everything you need. Here is a simple example:

import tensorflow as tf

class Autoencoder(tf.keras.Model):
    def __init__(self, latent_dim):
        super(Autoencoder, self).__init__()
        self.latent_dim = latent_dim
        self.dense1 = tf.keras.layers.Dense(self.latent_dim, activation='relu')
        self.dense2 = tf.keras.layers.Dense(5, activation='relu')

        # Encoder: one Dense layer shared across the three inputs.
        e_input1 = tf.keras.Input(shape=(5,))
        e_input2 = tf.keras.Input(shape=(5,))
        e_input3 = tf.keras.Input(shape=(5,))
        e_output1 = self.dense1(e_input1)
        e_output2 = self.dense1(e_input2)
        e_output3 = self.dense1(e_input3)
        self.encoder = tf.keras.Model([e_input1, e_input2, e_input3],
                                      [e_output1, e_output2, e_output3])

        # Decoder: likewise, one Dense layer shared across the three latents.
        d_input1 = tf.keras.Input(shape=(self.latent_dim,))
        d_input2 = tf.keras.Input(shape=(self.latent_dim,))
        d_input3 = tf.keras.Input(shape=(self.latent_dim,))
        d_output1 = self.dense2(d_input1)
        d_output2 = self.dense2(d_input2)
        d_output3 = self.dense2(d_input3)
        self.decoder = tf.keras.Model([d_input1, d_input2, d_input3],
                                      [d_output1, d_output2, d_output3])

    def encode(self, inputs):
        x1, x2, x3 = inputs
        return self.encoder([x1, x2, x3])

    def decode(self, inputs):
        x1, x2, x3 = inputs
        return self.decoder([x1, x2, x3])

latent_dim = 5
autoencoder = Autoencoder(latent_dim)
optimizer = tf.keras.optimizers.Adam()
mse = tf.keras.losses.MeanSquaredError()

# Dummy dataset: three inputs (x1, x2, x3) and three targets (y1, y2, y3).
your_train_dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((4, 5)),
     tf.random.normal((4, 5)),
     tf.random.normal((4, 5)),
     tf.random.normal((4, 5)),
     tf.random.normal((4, 5)),
     tf.random.normal((4, 5)))).batch(2)

epochs = 2
for epoch in range(epochs):
    for batch in your_train_dataset:
        (x1_batch_train, x2_batch_train, x3_batch_train,
         y1_batch_train, y2_batch_train, y3_batch_train) = batch
        with tf.GradientTape() as tape:
            enc_x1, enc_x2, enc_x3 = autoencoder.encode(
                [x1_batch_train, x2_batch_train, x3_batch_train])
            dec_x1, dec_x2, dec_x3 = autoencoder.decode([enc_x1, enc_x2, enc_x3])
            # Inside the loop you can combine inputs, intermediate tensors,
            # and the ground truth labels in any loss you like.
            loss1 = mse(x1_batch_train, enc_x1)
            loss2 = mse(x1_batch_train, dec_x1)
            loss3 = mse(dec_x1, y1_batch_train)
            # ..... and so on.
            losses = loss1 + loss2 + loss3
        tf.print(losses)
        grads = tape.gradient(losses, autoencoder.trainable_weights)
        optimizer.apply_gradients(zip(grads, autoencoder.trainable_weights))
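
As a side note, once the loop works you can usually speed it up by compiling the inner step with tf.function. A minimal sketch, reusing autoencoder, optimizer, and mse from above and keeping only the first three loss terms (y2 and y3 are accepted so further "and so on" losses can use them):

@tf.function
def train_step(x1, x2, x3, y1, y2, y3):
    with tf.GradientTape() as tape:
        enc_x1, enc_x2, enc_x3 = autoencoder.encode([x1, x2, x3])
        dec_x1, dec_x2, dec_x3 = autoencoder.decode([enc_x1, enc_x2, enc_x3])
        losses = mse(x1, enc_x1) + mse(x1, dec_x1) + mse(dec_x1, y1)
    grads = tape.gradient(losses, autoencoder.trainable_weights)
    optimizer.apply_gradients(zip(grads, autoencoder.trainable_weights))
    return losses

Inside the epoch loop you would then call train_step(*batch) instead of running the tape inline.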
