Low accuracy with transfer learning using Inception ResNet v2 (breast cancer)



I want to perform binary classification of the breast cancer histopathology images in the BreakHis dataset (https://www.kaggle.com/ambarish/breakhis) using transfer learning with Inception ResNet v2. The goal is to freeze all the layers and train only the fully connected part added on top of the model. In particular, to start with I want to consider only the images at 40X magnification (benign: 625, malignant: 1370). Here is a summary of what I did:

  • I read the images and resized them to 150x150
  • I split the dataset into training, validation, and test sets
  • I loaded the pre-trained Inception ResNet v2 network
  • I froze all the layers and added a single sigmoid neuron for binary classification (1 = "benign", 0 = "malignant")
  • I compiled the model with the Adam optimizer
  • I trained the model
  • I made predictions
  • I computed the accuracy

Here is the code:

data = dataset[dataset["Magnificant"]=="40X"]

def preprocessing(dataset, img_size):
    # images
    X = []
    # labels
    y = []

    i = 0
    for image in list(dataset["Path"]):
        # Resize and read the images
        X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR),
                            (img_size, img_size), interpolation=cv2.INTER_CUBIC))
        basename = os.path.basename(image)

        # Get labels
        if dataset.loc[i][2] == "benign":
            y.append(1)
        else:
            y.append(0)
        i = i + 1
    return X, y

X, y = preprocessing(data, 150)
X = np.array(X)
y = np.array(y)
# Splitting
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, shuffle=True, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, stratify=y_train, shuffle=True, random_state=1)
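For reference, the two-stage split above allocates roughly 52.5% of the images to training, 17.5% to validation, and 30% to testing (0.7 × 0.75 = 0.525 and 0.7 × 0.25 = 0.175). A minimal sketch of that arithmetic on the 40X image counts stated above (625 benign + 1370 malignant; exact counts from `train_test_split` may differ by ±1 due to its internal rounding):

```python
# Approximate sizes produced by the two-stage 70/30 then 75/25 split
n_total = 625 + 1370              # benign + malignant images at 40X

n_test = round(0.3 * n_total)     # first split: 30% held out for test
n_trainval = n_total - n_test     # 70% remains
n_val = round(0.25 * n_trainval)  # second split: 25% of the remainder
n_train = n_trainval - n_val

print(n_train, n_val, n_test)
```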
conv_base = InceptionResNetV2(weights='imagenet', include_top=False, input_shape=[150, 150, 3])
# Freezing
for layer in conv_base.layers:
    layer.trainable = False

model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1, activation='sigmoid'))

opt = tf.keras.optimizers.Adam(learning_rate=0.0002)
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
model.compile(loss=loss, optimizer=opt, metrics=["accuracy", tf.metrics.AUC()])

batch_size = 32
train_datagen = ImageDataGenerator(rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow(X_train, y_train, batch_size=batch_size)
val_generator = val_datagen.flow(X_val, y_val, batch_size=batch_size)

callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
ntrain = len(X_train)
nval = len(X_val)
epochs = 70
history = model.fit_generator(train_generator,
                              steps_per_epoch=ntrain // batch_size,
                              epochs=epochs,
                              validation_data=val_generator,
                              validation_steps=nval // batch_size,
                              callbacks=[callback])

Here is the training output for the last epoch:

Epoch 70/70
32/32 [==============================] - 3s 84ms/step - loss: 0.0499 - accuracy: 0.9903 - auc_5: 0.9996 - val_loss: 0.5661 - val_accuracy: 0.8250 - val_auc_5: 0.8521

Then I made predictions:

test_datagen = ImageDataGenerator(rescale=1./255)
x = X_test
y_pred = model.predict(test_datagen.flow(x))
y_p = []
for i in range(len(y_pred)):
    if y_pred[i] > 0.5:
        y_p.append(1)
    else:
        y_p.append(0)
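As an aside, the thresholding loop above can be collapsed into a single vectorized NumPy expression; a sketch (assuming `y_pred` is the (n, 1) array of sigmoid outputs returned by `model.predict`, here replaced by dummy values):

```python
import numpy as np

# Example sigmoid outputs standing in for model.predict(...)
y_pred = np.array([[0.91], [0.12], [0.50], [0.73]])

# Threshold at 0.5: strictly greater than 0.5 maps to class 1 ("benign")
y_p = (y_pred > 0.5).astype(int).ravel()
print(y_p)  # -> [1 0 0 1]
```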

And computed the accuracy:

from sklearn.metrics import accuracy_score
accuracy =  accuracy_score(y_test, y_p)
print(accuracy)

Here is the accuracy value I get: 0.5459098497495827

Why is my accuracy so low? I have run the experiment several times and always get similar results. (Help me!)

When doing transfer learning, and especially when using frozen weights, it is extremely important to apply the same preprocessing that was used when the network was originally trained.

For the InceptionResNetV2 network, the preprocessing mode in the tensorflow/keras library is "tf", which corresponds to dividing the pixel values by 127.5 and then subtracting 1 (scaling them to [-1, 1], as was done when the ImageNet weights were trained). You divided by 255 instead.

Fortunately, you don't have to dig through the code to find out which function was used, because these functions are exposed in the API. Just do

train_datagen = ImageDataGenerator(preprocessing_function=tf.keras.applications.inception_resnet_v2.preprocess_input)

and the same for validation and testing.
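The "tf"-mode scaling that `preprocess_input` applies for this model can be reproduced in plain NumPy; a sketch showing how it differs from the `rescale=1./255` used in the question (assumption: pixel values start in the uint8 range [0, 255]):

```python
import numpy as np

# A dummy "image" covering the full uint8 range
img = np.array([0.0, 127.5, 255.0], dtype=np.float32)

# What the question's code does: scale to [0, 1]
rescaled = img / 255.0

# What InceptionResNetV2's preprocess_input does ("tf" mode): scale to [-1, 1]
tf_mode = img / 127.5 - 1.0

print(rescaled)
print(tf_mode)
```

With frozen ImageNet weights the network only ever saw inputs in [-1, 1], so feeding it [0, 1] inputs shifts every activation and degrades accuracy.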
