Keras Lambda layer has no output tensor shape, error when compiling the model



I am trying to build a character-level autoencoder with Keras. When I compile the model, I get an error about tensor shapes, as you can see below. I printed the layer specifications to check whether the tensor shapes match, and I found that the problem is probably that the last Lambda layer does not have a correctly specified output tensor shape, but I don't know why, or how to specify it, and I haven't found anything about it in the Keras documentation or on Google.

The error output is below, as is the part of the code where I define the model. If needed for clarification, the whole script is here: pastebin.

Error and layer output

(It is mainly about the last layer.)

0        <keras.engine.topology.InputLayer object at 0x7f5d290eb588> Input shape (None, 80) Output shape (None, 80)
1        <keras.layers.core.Lambda object at 0x7f5d35f25a20> Input shape (None, 80) Output shape (None, 80, 99)
2        <keras.layers.core.Dense object at 0x7f5d2dda52e8> Input shape (None, 80, 99) Output shape (None, 80, 256)
3        <keras.layers.core.Dropout object at 0x7f5d25004da0> Input shape (None, 80, 256) Output shape (None, 80, 256)
4        <keras.layers.core.Dense object at 0x7f5d2501ac18> Input shape (None, 80, 256) Output shape (None, 80, 128)
5        <keras.layers.core.Dense object at 0x7f5d24dc6cc0> Input shape (None, 80, 128) Output shape (None, 80, 64)
6        <keras.layers.core.Dense object at 0x7f5d24de1fd0> Input shape (None, 80, 64) Output shape (None, 80, 128)
7        <keras.layers.core.Dropout object at 0x7f5d24df4a20> Input shape (None, 80, 128) Output shape (None, 80, 128)
8        <keras.layers.core.Dense object at 0x7f5d24dfeb38> Input shape (None, 80, 128) Output shape (None, 80, 256)
9        <keras.layers.core.Lambda object at 0x7f5d24da6a20> Input shape (None, 80, 256) Output shape (None, 80)
----------------
0        Input Tensor("input_1:0", shape=(?, 80), dtype=int64) Output Tensor("input_1:0", shape=(?, 80), dtype=int64)
1        Input Tensor("input_1:0", shape=(?, 80), dtype=int64) Output Tensor("ToFloat:0", shape=(?, 80, 99), dtype=float32)
2        Input Tensor("ToFloat:0", shape=(?, 80, 99), dtype=float32) Output Tensor("Relu:0", shape=(?, 80, 256), dtype=float32)
3        Input Tensor("Relu:0", shape=(?, 80, 256), dtype=float32) Output Tensor("cond/Merge:0", shape=(?, 80, 256), dtype=float32)
4        Input Tensor("cond/Merge:0", shape=(?, 80, 256), dtype=float32) Output Tensor("Relu_1:0", shape=(?, 80, 128), dtype=float32)
5        Input Tensor("Relu_1:0", shape=(?, 80, 128), dtype=float32) Output Tensor("Relu_2:0", shape=(?, 80, 64), dtype=float32)
6        Input Tensor("Relu_2:0", shape=(?, 80, 64), dtype=float32) Output Tensor("Relu_3:0", shape=(?, 80, 128), dtype=float32)
7        Input Tensor("Relu_3:0", shape=(?, 80, 128), dtype=float32) Output Tensor("cond_1/Merge:0", shape=(?, 80, 128), dtype=float32)
8        Input Tensor("cond_1/Merge:0", shape=(?, 80, 128), dtype=float32) Output Tensor("truediv:0", shape=(?, 80, 256), dtype=float32)
9        Input Tensor("truediv:0", shape=(?, 80, 256), dtype=float32) Output Tensor("ToFloat_1:0", shape=(), dtype=float32)
----------------
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_shape.py", line 578, in merge_with
    self.assert_same_rank(other)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_shape.py", line 624, in assert_same_rank
    "Shapes %s and %s must have the same rank" % (self, other))
ValueError: Shapes (?, ?) and () must have the same rank
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/ops/nn_impl.py", line 153, in sigmoid_cross_entropy_with_logits
    labels.get_shape().merge_with(logits.get_shape())
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_shape.py", line 585, in merge_with
    (self, other))
ValueError: Shapes (?, ?) and () are not compatible
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "lstm.py", line 97, in <module>
    autoencoder.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
  File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 667, in compile
    sample_weight, mask)
  File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 318, in weighted
    score_array = fn(y_true, y_pred)
  File "/usr/local/lib/python3.4/dist-packages/keras/objectives.py", line 45, in binary_crossentropy
    return K.mean(K.binary_crossentropy(y_pred, y_true), axis=-1)
  File "/usr/local/lib/python3.4/dist-packages/keras/backend/tensorflow_backend.py", line 2449, in binary_crossentropy
    logits=output)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/ops/nn_impl.py", line 156, in sigmoid_cross_entropy_with_logits
    % (logits.get_shape(), labels.get_shape()))
ValueError: logits and labels must have the same shape (() vs (?, ?))

Code

I built the model with this code:

def binarize(x, sz):
    return tf.to_float(tf.one_hot(x, sz, on_value=1, off_value=0, axis=-1))

def binarize_outputshape(in_shape):
    return in_shape[0], in_shape[1], len(chars)

def debinarize(x):
    return tf.to_float(np.argmax(x))  # get the character with most probability

def debinarize_outputshape(in_shape):
    return in_shape[0], in_shape[1]

input_sentence = Input(shape=(max_title_len,), dtype='int64')
# make one-hot vectors out of sentences
one_hot = Lambda(binarize, output_shape=binarize_outputshape, arguments={'sz': len(chars)})(input_sentence)
# shape: max_title_len * chars = 80 * 55 = 4400
encoder = Dense(256, activation='relu')(one_hot)
encoder = Dropout(0.1)(encoder)
encoder = Dense(128, activation='relu')(encoder)
encoder = Dense(64, activation='relu')(encoder)
decoder = Dense(128, activation='relu')(encoder)
encoder = Dropout(0.1)(encoder)
decoder = Dense(256, activation='softmax')(decoder)
# transform back from one-hot vectors
decoder = Lambda(debinarize, output_shape=debinarize_outputshape)(decoder)
autoencoder = Model(input=input_sentence, output=decoder)

First I feed in text sequences of at most 80 characters, and the Lambda layer turns each character into a one-hot vector. At the end, I want to transform the one-hot vectors back, taking the argmax as the decoded character.
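
For reference, here is a minimal standalone sketch (my own check, not part of the original script, assuming TensorFlow 1.x; the vocabulary size of 99 is taken from the layer printout above) of what the two Lambda functions should do to the tensor shapes:

import tensorflow as tf

chars_sz = 99                                    # vocabulary size, from the layer printout above
x = tf.placeholder(tf.int64, shape=(None, 80))   # a batch of 80-character sequences

# encoding direction: (?, 80) int64 -> (?, 80, 99) float32
one_hot = tf.to_float(tf.one_hot(x, chars_sz, on_value=1, off_value=0, axis=-1))
print(one_hot.get_shape())                       # (?, 80, 99)

# decoding direction: (?, 80, 256) float32 -> (?, 80) float32
probs = tf.placeholder(tf.float32, shape=(None, 80, 256))
decoded = tf.to_float(tf.argmax(probs, axis=2))
print(decoded.get_shape())                       # (?, 80)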

Questions

  1. Am I missing something in the model construction code?
  2. How can I make the Lambda layer aware of the output tensor shape?

Edit

As Nassim Ben pointed out, the problem is the debinarize function. Changing it to:

def debinarize(x):
    return tf.to_float(tf.argmax(x, axis=0))

at least gives the output tensor some kind of shape. The value is a bit odd, though: it is (80, 256), which differs from the layer's output shape of (None, 80). All the other output tensor shapes and layer output shapes match up (I suppose '?' and None mean more or less the same thing...). More specifically, the Lambda layer now looks like this:

 <keras.layers.core.Lambda object at 0x7fafcc5a59b0> Input shape (None, 80, 256) Output shape (None, 80)
...
...
Input Tensor("truediv:0", shape=(?, 80, 256), dtype=float32) Output Tensor("ToFloat_1:0", shape=(80, 256), dtype=float32)

The problem is that I would like the output tensor shape to be (?, 80), just like the input to the first layer. I have not changed any other code, only the argmax.

The error it gives now is:

Traceback (most recent call last):
  File "lstm.py", line 122, in <module>
    callbacks=[earlystop_cb, check_cb, keras.callbacks.TensorBoard(log_dir='/tmp/autoencoder')])
  File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 1168, in fit
    self._make_train_function()
  File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 760, in _make_train_function
    self.total_loss)
  File "/usr/local/lib/python3.4/dist-packages/keras/optimizers.py", line 433, in get_updates
    m_t = (self.beta_1 * m) + (1. - self.beta_1) * g
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/ops/math_ops.py", line 883, in binary_op_wrapper
    y = ops.convert_to_tensor(y, dtype=x.dtype.base_dtype, name="y")
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 651, in convert_to_tensor
    as_ref=False)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 716, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/constant_op.py", line 165, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_util.py", line 360, in make_tensor_proto
    raise ValueError("None values not supported.")
ValueError: None values not supported.

I think it comes from using a numpy function on a tensor. Try using the tf argmax function instead (I believe the axis you want to reduce over is 2, not sure):

def debinarize(x):
    return tf.to_float(tf.argmax(x, axis=2))  # get the character with most probability

Does this work?
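
For completeness, a minimal sketch of how the decoding end of the question's code could look with this fix applied (variable names follow the question's code; axis=2 is the character dimension of the (?, 80, 256) tensor):

def debinarize(x):
    # index of the most probable character, taken along the character axis
    return tf.to_float(tf.argmax(x, axis=2))

def debinarize_outputshape(in_shape):
    # (batch, max_title_len, n_chars) -> (batch, max_title_len)
    return in_shape[0], in_shape[1]

# the Lambda call itself stays the same as in the question
decoder = Lambda(debinarize, output_shape=debinarize_outputshape)(decoder)

With the character axis reduced, the actual output tensor shape becomes (?, 80), which matches the (None, 80) output shape Keras reports for the layer and the (None, 80) input of the model.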
