Element is not part of the graph when using VGG for data generation and loss calculation



I have a VGG19 encoder that takes an input image y of shape (256, 256, 3) and returns a tensor of dimension (32, 32, 512) from VGG's block4_conv1 layer. I need to turn it into an array so I can apply some transformations to it and then reconstruct the image with my decoder.

In short, I am trying to train the decoder network like this:

x = vgg_encoder(y)  # generate features from image y
x = do_extra_transformation(x) # for example, reshape and apply K means to shift features towards their cluster centres
y_pred = decoder(x) # try to reconstruct the image y from features
loss = calculate_loss(y, y_pred) # calculate reconstruction loss using VGG loss

However, when I run the code I get the error: ValueError: Tensor Tensor("block4_conv1/Relu:0", shape=(?, 32, 32, 512), dtype=float32) is not an element of this graph.

I assume the error comes from TensorFlow being disconnected where I call predict on VGG to generate the features. I don't understand why this is a problem, since it is technically only used for data generation and is not part of the computation graph used for training!


Full code, which you can run with python example.py:
import tensorflow as tf
import numpy as np 
from tensorflow.keras.applications import VGG19
from tensorflow.keras.layers import Input, UpSampling2D, Conv2D
from tensorflow.keras.models import Model
import tensorflow.keras.backend as K
from tensorflow.keras.optimizers import Adam
class CustomModel:
    def __init__(self, im_h, im_w, im_c):
        self.im_shape = (im_h, im_w, im_c)
        self.vgg_features_shape = (None, None, 512)
        self.vgg_loss_model = self.build_vgg_loss()
        self.kernel_size = (3,3)
        self.decoder = self.build_decoder()

    def build_vgg_loss(self):
        vgg = VGG19(weights="imagenet", include_top=False, input_shape=self.im_shape)
        vgg.outputs = vgg.get_layer('block4_conv1').output
        model = Model(inputs=vgg.inputs, outputs=vgg.outputs)
        model.trainable = False
        return model
    def build_decoder(self):
        """
        Mirrors the VGG network with max-pooling layers replaces by UpScaling Layers
        """
        i = Input((None, None, 512))
        x = Conv2D(filters=512, kernel_size=self.kernel_size, padding='same')(i)
        x = UpSampling2D()(x)
        for _ in range(4):
            x = Conv2D(filters=256, kernel_size=self.kernel_size, padding='same')(x)
        x = UpSampling2D()(x)
        for _ in range(2):
            x = Conv2D(filters=128, kernel_size=self.kernel_size, padding='same')(x)
        x = UpSampling2D()(x)
        for _ in range(2):
            x = Conv2D(filters=64, kernel_size=self.kernel_size, padding='same')(x)
        x = Conv2D(filters=3, kernel_size=self.kernel_size, padding='same')(x)
        model = Model(inputs=i, outputs=x)
        return model

    def get_loss(self, y_pred, y):
        vgg_model = self.vgg_loss_model
        def content_loss(y_pred, y):
            dif = vgg_model(y) - vgg_model(y_pred)
            sq = K.square(dif)
            s = K.sum(sq, axis=-1)
            sqrt = K.sqrt(s)
            loss = K.sum(sqrt)
            return loss
        return content_loss(y_pred, y)

class DataLoader:
    def __init__(self, vgg):
        self.vgg = vgg
    def gen(self):
        while True:
            y = np.random.randn(256, 256,3)
            x = self.vgg.predict(np.expand_dims(y, 0)).reshape((32,32,512)) # if this is turned into a np.array, everything works as expected
            yield x, np.random.randn(256, 256,3) 

model = CustomModel(256,256,3)
# dl = DataLoader(datapath='./trainer/data/', mst=mst)
output_types=(
    tf.float32,
    tf.float32
 )
output_shapes=(
    tf.TensorShape([None, None, None]),
    tf.TensorShape([None, None, None])
)
ds = tf.data.Dataset.from_generator(DataLoader(model.vgg_loss_model).gen,
                                   output_types=output_types, 
                                   output_shapes=output_shapes)
ds = ds.repeat().batch(1)
iterator = ds.make_one_shot_iterator()
x, y = iterator.get_next()
y_pred = model.decoder(x)

loss = model.get_loss(y_pred, y)
opt = tf.train.AdamOptimizer(0.01)
train_opt = opt.minimize(loss)
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    opt = tf.train.GradientDescentOptimizer(0.01)
    for i in range(5):
        sess.run(train_opt)

Don't forget that in the task you describe, the input is an image and the output is that same image. Therefore, the model you construct must contain all the parts, i.e. both the encoder and the decoder. Of course, you may choose not to train either of them (as you have chosen not to train the encoder). So, these are the changes you need to apply:

The following is wrong, because y and y_pred are the ground-truth and predicted outputs of the decoder, so applying vgg_model (i.e. encoding them again) on them does not make sense:

dif = vgg_model(y) - vgg_model(y_pred)

You only want to compare the reconstructed image with the original image, so simply change it to:

dif = y - y_pred

(Additionally, you no longer need vgg_model = self.vgg_loss_model in get_loss; in fact, get_loss could be defined as a static method of the CustomModel class, without that inner content_loss function.)
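
As a minimal sketch (assuming you keep the same reduction as in the question, i.e. an L2 norm over the channel axis followed by a sum), the simplified loss could look like this:

import tensorflow.keras.backend as K

class CustomModel:
    # ... other methods unchanged ...

    @staticmethod
    def get_loss(y_pred, y):
        # compare the reconstruction directly with the target image;
        # no VGG encoding of either tensor is needed here
        dif = y - y_pred
        s = K.sum(K.square(dif), axis=-1)
        return K.sum(K.sqrt(s))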


def gen(self):
    while True:
        y = np.random.randn(256, 256,3)
        x = self.vgg.predict(np.expand_dims(y, 0)).reshape((32,32,512))
        yield x, np.random.randn(256, 256,3)

As we mentioned, the input and the output of the model are the same thing (further, by using self.vgg.predict you are effectively removing the encoder from the model's computation graph altogether). Just change it to:

def gen(self):
    while True:
        x = np.random.randn(256, 256,3)
        yield x, x  # same input and output
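
Since the generator no longer touches the VGG model, DataLoader does not need to be given one any more; a minimal sketch, assuming the from_generator pipeline from the question stays otherwise unchanged:

class DataLoader:
    def gen(self):
        while True:
            x = np.random.randn(256, 256, 3)
            yield x, x  # same input and output

# both elements are now (256, 256, 3) images, so the original
# output_types/output_shapes of (None, None, None) still match
ds = tf.data.Dataset.from_generator(DataLoader().gen,
                                    output_types=(tf.float32, tf.float32),
                                    output_shapes=(tf.TensorShape([None, None, None]),
                                                   tf.TensorShape([None, None, None])))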

Finally, the line:

y_pred = model.decoder(x)

should first apply the encoder and then apply the decoder on the encoder's output in order to reconstruct the image. So, implementing what you described:

y_pred = model.decoder(model.vgg_loss_model(x))
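
Putting the three changes together, the training part of the script would look roughly like this (a sketch that assumes the rest of the posted code is unchanged):

x, y = iterator.get_next()                 # both are now image batches
features = model.vgg_loss_model(x)         # frozen VGG encoder, kept inside the graph
y_pred = model.decoder(features)           # decoder reconstructs the image
loss = model.get_loss(y_pred, y)
train_opt = tf.train.AdamOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(5):
        _, l = sess.run([train_opt, loss])
        print(l)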

Final note: I think that in cases like this, drawing a big diagram of the whole computation graph on paper, ideally before starting the implementation, really helps to understand the problem better and saves a lot of time and effort.

You can check a similar question: https://stackoverflow.com/a/56178638/7454706

Create an additional function:

def load_model():
    global model
    model = yourmodel(weights=xx111122)
    # this is key: save the graph after loading the model
    global graph
    graph = tf.get_default_graph()

Then, before predicting:

with graph.as_default():
    preds = model.predict(image)
    # ... etc.
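
Applied to the DataLoader from the question, the same pattern would mean saving the graph right after the VGG model is built and running predict inside it; a hypothetical sketch (whether it resolves the error depends on where the model was actually built):

class DataLoader:
    def __init__(self, vgg):
        self.vgg = vgg
        # save the graph the VGG model belongs to
        self.graph = tf.get_default_graph()

    def gen(self):
        while True:
            y = np.random.randn(256, 256, 3)
            # run predict inside the saved graph
            with self.graph.as_default():
                x = self.vgg.predict(np.expand_dims(y, 0)).reshape((32, 32, 512))
            yield x, np.random.randn(256, 256, 3)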

There is a discussion about this issue here: https://github.com/tensorflow/tensorflow/issues/14356. I think it comes from this part of ops.py:

   3034     with self._lock:
-> 3035       return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
   3036 

If you could provide the full error traceback for the whole code, it would be easier to solve.
