What causes "ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (30, 1, 166)"?



I am trying to adapt digit-recognition code for MNIST into a text-generation task. I am getting this value error:

ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (30, 1, 166)

How can I make the final layer fit this output shape?

I have split some text data into sentences. `x_train` and `x_test` are messy sentences produced by OCR software, while `y_train` and `y_test` are the same sentences after manual proofreading and correction. I want to train the model to spot common errors and correct them.

I have looked for solutions to this problem here and on other sites. The most common fix that seems to work for people is to use `loss='sparse_categorical_crossentropy'`, but I am already using it.

Here is the code I am using:

import pickle
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense
from keras.optimizers import Adam
from keras.preprocessing.sequence import pad_sequences

# Import test and train sets
test_in = open("test.pkl", "rb")
test_set = pickle.load(test_in)
train_in = open("train.pkl", "rb")
train_set = pickle.load(train_in)
x_test, y_test = test_set[0], test_set[1]
x_train, y_train = train_set[0], train_set[1]
# Map all characters in both sets
chars = sorted(list(set("".join(x_train + y_train + x_test + y_test))))
chardict = dict((c, i + 1) for i, c in enumerate(chars))
rchardict = dict((i + 1, c) for i, c in enumerate(chars))
# Encode lists using mapping
temp_list = list()
for gloss in x_test:
    encoded_gloss = [chardict[char] for char in gloss]
    temp_list.append(encoded_gloss)
x_test = temp_list
temp_list = list()
for gloss in y_test:
    encoded_gloss = [chardict[char] for char in gloss]
    temp_list.append(encoded_gloss)
y_test = temp_list
temp_list = list()
for gloss in x_train:
    encoded_gloss = [chardict[char] for char in gloss]
    temp_list.append(encoded_gloss)
x_train = temp_list
temp_list = list()
for gloss in y_train:
    encoded_gloss = [chardict[char] for char in gloss]
    temp_list.append(encoded_gloss)
y_train = temp_list
# Pad all sentences
max_len = max([len(x) for x in x_train + y_train + x_test + y_test])
x_test = np.array(pad_sequences(x_test, maxlen=max_len, padding='post'))
x_test = np.reshape(x_test, (x_test.shape[0], 1, x_test.shape[1]))
y_test = np.array(pad_sequences(y_test, maxlen=max_len, padding='post'))
y_test = np.reshape(y_test, (y_test.shape[0], 1, y_test.shape[1]))
x_train = np.array(pad_sequences(x_train, maxlen=max_len, padding='post'))
x_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1]))
y_train = np.array(pad_sequences(y_train, maxlen=max_len, padding='post'))
y_train = np.reshape(y_train, (y_train.shape[0], 1, y_train.shape[1]))
# Normalise to improve training speed
x_test = x_test/37.0
x_train = x_train/37.0
# Define the model
model = Sequential()
model.add(LSTM(128, input_shape=(x_test.shape[1:]), activation='relu', return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
opt = Adam(lr=1e-3, decay=1e-5)
# Compile and fit the model
model.compile(loss='sparse_categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(x_test, y_test, epochs=5, validation_data=(x_train, y_train))

I would like to get the model training so that I can try it on unseen sentences and see whether it overfits, but I cannot get past this obstacle.

Edit to include the full traceback:

Traceback (most recent call last):
File "/Users/adrian/PycharmProjects/WurzburgGlossParser/Rough Work.py", line 80, in <module>
model.fit(x_test[:30], y_test[:30], epochs=5, validation_data=(x_test[30:40], y_test[30:40]))
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/training.py", line 952, in fit
batch_size=batch_size)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/training.py", line 789, in _standardize_user_data
exception_prefix='target')
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/training_utils.py", line 128, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking target: expected dense_2 to have 2 dimensions, but got array with shape (30, 1, 166)

You need to remove the size-1 dimension from your labels:

y_test = np.squeeze(y_test, axis=1)
y_train = np.squeeze(y_train, axis=1)
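As a quick sanity check (a minimal sketch using a dummy zero array in place of the real `y_test`), squeezing out axis 1 turns the `(30, 1, 166)` target from the traceback into the 2-D shape a `Dense` layer expects:

```python
import numpy as np

# Dummy labels with the same shape reported in the traceback: (samples, 1, features)
y_test = np.zeros((30, 1, 166))

# Drop the size-1 axis so the targets become 2-D
y_test = np.squeeze(y_test, axis=1)

print(y_test.shape)  # (30, 166)
```

Note that `np.squeeze` with an explicit `axis` raises an error if that axis is not of size 1, which makes the intent explicit and guards against squeezing the wrong dimension.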
