TensorFlow Addons R2 ValueError: Dimension 0 in both shapes must be equal, but are 1 and 5



I have been trying to add a tfa metric to my model's compile step so I can track it throughout training. However, when I add the R2 metric, I get the error below. I thought y_shape=(1,) would fix it, but it didn't.

ValueError: Dimension 0 in both shapes must be equal, but are 1 and 5. Shapes are [1] and [5]. for '{{node AssignAddVariableOp_8}} = AssignAddVariableOp[dtype=DT_FLOAT](AssignAddVariableOp_8/resource, Sum_6)' with input shapes: [], [5].

My code looks like this:

import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Normalization, Dense
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import MeanSquaredError, MeanAbsoluteError

model = Sequential()
model.add(Input(shape=(4,)))
model.add(Normalization())
model.add(Dense(5, activation="relu", kernel_regularizer=l2(l2=1e-2)))
print(model.summary())
opt = Adam(learning_rate=1e-2)
model.compile(loss="mean_squared_error",
              optimizer=opt,
              metrics=[MeanSquaredError(name="mse"),
                       MeanAbsoluteError(name="mae"),
                       tfa.metrics.RSquare(name="R2", y_shape=(1,))])
history = model.fit(x=training_x,
                    y=training_y,
                    epochs=10,
                    batch_size=64,
                    validation_data=(validation_x, validation_y))

Any help is greatly appreciated! Note that I also tried changing y_shape to (5,), but then the error said the dimensions were unequal the other way around, 5 and 1…

Your model's last layer is Dense(5), so its predictions have shape (None, 5), while RSquare with y_shape=(1,) expects a single output per sample. You need to add an output layer to your model, like this:

model.add(Dense(1))

Your model then becomes:

model = Sequential()
model.add(Input(shape=(4,)))
model.add(Normalization())
model.add(Dense(5, activation="relu", kernel_regularizer=regularizers.l2(l2=1e-2)))
model.add(Dense(1))
print(model.summary())

Output:

Model: "sequential_10"
_________________________________________________________________
Layer (type)                Output Shape              Param #   
=================================================================
normalization_10 (Normaliza  (None, 4)                9         
tion)                                                           

dense_12 (Dense)            (None, 5)                 25        

dense_13 (Dense)            (None, 1)                 6         

=================================================================
Total params: 40
Trainable params: 31
Non-trainable params: 9
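As a sanity check on what RSquare measures once the shapes match, R² can be computed by hand with NumPy. This is a minimal sketch (not the tfa implementation) assuming targets and predictions both have shape (n, 1), matching the Dense(1) output above:

```python
import numpy as np

def r_squared(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Both arrays have shape (4, 1), one value per sample,
# which is what y_shape=(1,) corresponds to.
y_true = np.array([[1.0], [2.0], [3.0], [4.0]])
y_pred = np.array([[1.1], [1.9], [3.2], [3.8]])
print(r_squared(y_true, y_pred))  # → 0.98
```

With the extra Dense(1) layer, the metric receives one prediction per sample and no longer raises the shape mismatch.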
