InvalidArgumentError while building a deep CNN



I'm new to TensorFlow and Python... I'm trying to build a deep CNN for cell image classification on the HEp-2 dataset. The dataset consists of 13,596 images, and I'm using 8,701 of them as training data for the CNN. I also have a .csv file containing the image IDs and their cell types. I extract the contents of that file and use the image IDs as my labels. Both the training data and the image IDs have been converted with .astype('float32'). However, somehow I get an InvalidArgumentError, and I don't know what's going on. I've posted my code and the error below; any hints or help would be greatly appreciated. Thank you in advance :)

I'm also new to Stack Overflow. Sorry for my messy formatting.

My code:

from PIL import Image
import glob
import os
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Dropout, Flatten, MaxPooling2D
from keras.optimizers import SGD
def extract_labels(image_names, Original_Labels):
    temp = np.array([image.split('.')[0] for image in image_names])
    temp2 = np.array([j[0] for i in temp for j in Original_Labels if(int(i) == int(j[0]))])
    return temp2

def get_Labels():
    df = pd.read_csv('gt_training.csv', sep=',')
    labels = np.asarray(df)
    path = 'path..../training/'
    image_names_train = [f for f in os.listdir(path) if os.path.splitext(f)[-1] == '.png']
    return labels, image_names_train

Train_images = glob.glob('path.../training/*.png')
train_data = np.array([np.array(Image.open(fname)) for fname in Train_images])
train_data = train_data.astype('float32')
train_data /= 255
#getting labels from the .csv file for the training data
labels, image_names_train = get_Labels()
train_labels = extract_labels(image_names_train, labels)
train_labels = train_labels.astype('float32')
print(train_labels.shape)
train_data = train_data.reshape(train_data.shape[0],78,78,1) #reshaping into 4-Dim
input_shape = (78, 78, 1) #1 because the provided dataset is in grey scale 

#Adding pooling and dense layers to a non-optimized empty CNN
model = Sequential()
model.add(Conv2D(6, kernel_size=(7,7),activation = tf.nn.tanh, input_shape = input_shape))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Conv2D(16, kernel_size=(4,4),activation = tf.nn.tanh))
model.add(MaxPooling2D(pool_size = (3, 3)))
model.add(Conv2D(32, kernel_size=(3,3),activation = tf.nn.tanh))
model.add(MaxPooling2D(pool_size = (3, 3)))
model.add(Flatten())
model.add(Dense(150, activation = tf.nn.tanh, kernel_regularizer = keras.regularizers.l2(0.00005)))
model.add(Dropout(0.5))
model.add(Dense(6, activation = tf.nn.softmax))
#setting an optimizer with a given loss function
opt = SGD(lr = 0.01, momentum = 0.9)
model.compile(optimizer = opt, loss = 'sparse_categorical_crossentropy', metrics = ['accuracy'])
model.fit(x = train_data, y = train_labels, epochs = 10, batch_size = 77)

The error message I receive:

six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
InvalidArgumentError:  Received a label value of 13269 which is outside the valid range of [0, 6).  Label values: 8823 3208 9410 5223 8817 3799 6588 1779 1371 5017 9788 9886 3345 1815 5943 37 675 2396 4485 9528 11082 12457 13269 5488 3250 12896 13251 1854 10942 6287 6232 2944
  [[node loss_24/dense_55_loss/sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits (defined at C:\Users\vardh\Anaconda3\envs\tf\lib\site-packages\keras\backend\tensorflow_backend.py:3009) ]] [Op:__inference_keras_scratch_graph_676176]
Function call stack:
keras_scratch_graph

Somehow I realized that my problem is related to this question: Invalid argument error: Received a label value of 8825.....

The solution from that post, by @shaili:

In the last layer, for example, you used model.add(Dense(1, activation='softmax')). Here 1 restricts the labels to the range [0, 1); change it to the maximum output label. For example, if your output labels come from the range [0, 7), then use model.add(Dense(7, activation='softmax')).
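That rule can be sketched with a hypothetical toy model (the data shapes and layer sizes below are made up purely for illustration): with sparse_categorical_crossentropy, every integer label must fall inside [0, units) of the final Dense layer.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Hypothetical toy data: 8 samples, 4 features, integer labels in [0, 7)
x = np.random.rand(8, 4).astype('float32')
y = np.random.randint(0, 7, size=(8,))

model = Sequential([
    Dense(16, activation='relu', input_shape=(4,)),
    # With sparse_categorical_crossentropy, the final Dense layer needs one
    # unit per class: Dense(7) accepts labels 0..6, while Dense(1) would
    # raise InvalidArgumentError for any label >= 1.
    Dense(7, activation='softmax'),
])
model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy')
model.fit(x, y, epochs=1, verbose=0)  # trains without InvalidArgumentError
```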

input_text = Input(shape=(max_len,), dtype=tf.string)
embedding = Lambda(ElmoEmbedding, output_shape=(max_len, 1024))(input_text)
x = Bidirectional(LSTM(units=512, return_sequences=True,
                       recurrent_dropout=0.2, dropout=0.2))(embedding)
x_rnn = Bidirectional(LSTM(units=512, return_sequences=True,
                           recurrent_dropout=0.2, dropout=0.2))(x)
x = add([x, x_rnn])  # residual connection to the first biLSTM
out = TimeDistributed(Dense(n_tags, activation="softmax"))(x)
Here, in the TimeDistributed layer, n_tags is the number of tags I want to classify among.

If I predict some other quantity, say q_tag, whose length differs from n_tags (suppose q_tag has length 10 while n_tags has length 7), and 8 is received as an output label, it will give an InvalidArgumentError: Received a label value of 8 which is outside the valid range of [0, 7).

In my experience, this error is usually generated because the number of output classes doesn't match the labels being classified. In my code, in model.add(Dense(6, activation = tf.nn.softmax)), I specified 6 as the number of classes, while my label values ranged up to 13596. This is not fully valid code, but at least it got my code running.
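A cleaner fix than enlarging the output layer is to map each cell-type name to a contiguous integer index in [0, 6), which is what sparse_categorical_crossentropy expects, instead of feeding it raw image IDs. A minimal sketch, using hypothetical class names (the actual names and their order come from gt_training.csv):

```python
import numpy as np

# Hypothetical HEp-2 cell-type labels as they might appear in the .csv file
raw_labels = np.array(['homogeneous', 'speckled', 'nucleolar', 'centromere',
                       'golgi', 'numem', 'speckled'])

# Map each distinct class name to an integer index 0..n_classes-1 so every
# label falls inside [0, 6), matching Dense(6) in the model above.
classes = np.unique(raw_labels)                    # sorted distinct names
class_to_index = {c: i for i, c in enumerate(classes)}
train_labels = np.array([class_to_index[c] for c in raw_labels],
                        dtype='float32')

print(len(classes))        # number of units the final Dense layer needs
print(train_labels.max())  # always strictly less than len(classes)
```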
