Variational Autoencoders: implementing warm-up in Keras



I recently read this paper, which introduces a procedure called "warm-up" (WU). It consists of multiplying the KL-divergence term of the loss by a variable whose value depends on the epoch number (it evolves linearly from 0 to 1).
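Concretely, I understand the schedule as something like this (a minimal sketch; the function name and the 10-epoch ramp are just my reading of the paper):

def beta_schedule(epoch, n_warmup=10):
    # linear warm-up: beta goes from 0 to 1 over the first n_warmup epochs,
    # then stays at 1
    return min(epoch / float(n_warmup), 1.0)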

I was wondering whether this is a good way to implement it:

import tensorflow as tf
from keras import backend as K
from keras import objectives
from keras.callbacks import LambdaCallback

beta = K.variable(value=0.0)

def vae_loss(x, x_decoded_mean):
    # reconstruction term: categorical cross-entropy
    xent_loss = K.mean(objectives.categorical_crossentropy(x, x_decoded_mean))
    # KL divergence, estimated with a Monte Carlo approximation over n_sample draws
    loss = 0.0
    for k in range(n_sample):
        epsilon = K.random_normal(shape=(batch_size, latent_dim), mean=0.,
                                  std=1.0)  # one noise draw, reused for every z_i
        # sample each layer of latent variables via the reparameterisation trick
        zs = []
        for mean, var in zip(means, variances):
            z_ = mean + K.exp(K.log(var) / 2) * epsilon
            zs.append(z_)
            # accumulate log q(z|x) for this layer
            loss += K.sum(log_normal2(z_, mean, K.log(var)), -1)
        # build z by concatenating the per-layer samples
        z = tf.concat(zs, -1)
        print("z", z)
        # subtract log p(z) under the standard-normal prior
        loss -= K.sum(log_stdnormal(z), -1)
    kl_loss = loss / n_sample
    print('kl loss:', kl_loss)
    # warm-up: weight the KL term by beta
    result = beta * kl_loss + xent_loss
    return result

# define callback to change the value of beta at each epoch
def warmup(epoch):
    value = (epoch / 10.0) * (epoch <= 10.0) + 1.0 * (epoch > 10.0)
    print("beta:", value)
    beta = K.variable(value=value)

wu_cb = LambdaCallback(on_epoch_end=lambda epoch, log: warmup(epoch))

# train model
vae.fit(
    padded_X_train[:last_train, :, :],
    padded_X_train[:last_train, :, :],
    batch_size=batch_size,
    nb_epoch=nb_epoch,
    verbose=0,
    callbacks=[tb, wu_cb],
    validation_data=(padded_X_test[:last_test, :, :],
                     padded_X_test[:last_test, :, :])
)

This won't work. I tested it to figure out exactly why. The key thing to remember is that Keras builds a static graph once, at the start of training.

Consequently, the vae_loss function is called only once to create the loss tensor, which means that the reference to the beta variable stays the same every time the loss is computed. Your warmup function, however, rebinds the name beta to a brand-new K.variable. So the beta used to compute the loss is not the same beta that gets updated, and its value stays at 0.
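You can reproduce the problem with a minimal sketch outside the VAE (illustrative code, not your model):

from keras import backend as K

beta = K.variable(0.0)
loss_tensor = beta * 2.0    # the graph captures this particular variable

beta = K.variable(1.0)      # this only rebinds the Python name
print(K.eval(loss_tensor))  # still prints 0.0: the loss reads the old variable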

This is easy to fix. Just change this line in your warmup callback:

beta = K.variable(value=value)

to:

K.set_value(beta, value)

This way, the actual value held by beta is updated in place rather than a new variable being created, and the loss is correctly recomputed.
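Putting it together, the corrected callback would look like this (same schedule as yours):

beta = K.variable(value=0.0)

def warmup(epoch):
    value = (epoch / 10.0) * (epoch <= 10.0) + 1.0 * (epoch > 10.0)
    print("beta:", value)
    K.set_value(beta, value)  # update the existing variable in place

wu_cb = LambdaCallback(on_epoch_end=lambda epoch, log: warmup(epoch))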
