Classifying 3D greyscale MRI data with a 2D CNN — possible data/label problem



I am trying to perform binary classification on 3D greyscale MRI data. Since the greyscale data lacks a channel dimension, I added one to line up the dimensions; essentially, the depth of each scan acts as the batch dimension. I am working with a subsample of the data, 20 files, each 189 by 233 by 197 — just as quick background.

I have a CSV file containing a bunch of information, including the label for each file, which I try to extract as shown in the code below.

import numpy as np
import glob
import os
import tensorflow as tf
import pandas as pd
import SimpleITK as sitk
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from keras.preprocessing.image import ImageDataGenerator

from keras.utils import plot_model
from keras.utils import to_categorical
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from google.colab import drive
drive.mount('/content/gdrive')
datapath = ('/content/gdrive/My Drive/DirectoryTest/All Data/')
patients = os.listdir(datapath)
labels_df = pd.read_csv('/content/Data_Index.csv', index_col = 0 )
labelset = []
for i in patients:
    label = labels_df.loc[i, 'Group']
    if label is 'AD':
        np.char.replace(label, ['AD'], [0])
    if label is 'CN':
        np.char.replace(label, ['CN'], [1])
    labelset.append(label)
label_encoder = LabelEncoder()
labelset = label_encoder.fit_transform(labelset)
labelset = np_utils.to_categorical(labelset, num_classes= 2)
FullDataSet = []
for i in patients:
    a = sitk.ReadImage(datapath + i)
    b = sitk.GetArrayFromImage(a)
    c = np.reshape(b, (189,233,197, 1))
    FullDataSet.append(c)
training_data, testing_data, training_labels, testing_labels = train_test_split(FullDataSet, labelset, train_size=0.70,test_size=0.30)
dataset_train = tf.data.Dataset.from_tensor_slices((training_data, training_labels))
dataset_test = tf.data.Dataset.from_tensor_slices((testing_data, testing_labels))
CNN_model = tf.keras.Sequential([
    #tf.keras.layers.Input(shape=(189, 233, 197, 1), batch_size=2),
    #tf.keras.layers.Reshape((197, 233, 189, 1)),

    tf.keras.layers.Conv2D(kernel_size=(7, 7), data_format='channels_last', filters=64, activation='relu',
                           padding='same', strides=(3, 3), input_shape=(233, 197, 1)),
    #tf.keras.layers.BatchNormalization(center=True, scale=False),
    tf.keras.layers.MaxPool2D(pool_size=(3, 3), padding='same'),
    tf.keras.layers.Dropout(0.20),

    tf.keras.layers.Conv2D(kernel_size=(7, 7), filters=128, activation='relu', padding='same', strides=(3, 3)),
    #tf.keras.layers.BatchNormalization(center=True, scale=False),
    tf.keras.layers.MaxPool2D(pool_size=(3, 3), padding='same'),
    tf.keras.layers.Dropout(0.20),

    tf.keras.layers.Conv2D(kernel_size=(7, 7), filters=256, activation='relu', padding='same', strides=(3, 3)),
    #tf.keras.layers.BatchNormalization(center=True, scale=False),
    tf.keras.layers.MaxPool2D(pool_size=(3, 3), padding='same'),
    tf.keras.layers.Dropout(0.20),

    # last activation could be either sigmoid or softmax, need to look into this more.
    # Sigmoid for binary output, softmax for multi-class output.
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dropout(0.20),
    tf.keras.layers.Dense(2, activation='softmax')
])
# Compile the model
CNN_model.compile(optimizer=tf.keras.optimizers.Adam(lr=0.00001), loss='binary_crossentropy', metrics=['accuracy'])
# print model layers
CNN_model.summary()
CNN_history = CNN_model.fit(dataset_train, epochs=10, validation_data=dataset_test)

When I go to fit the model, I get the following error:

Epoch 1/10
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-35-a8b210ec2e72> in <module>()
1 #running of the model
2 #CNN_history = CNN_model.fit(dataset_train, epochs=100, validation_data =dataset_test, validation_steps=1)
----> 3 CNN_history = CNN_model.fit(dataset_train, epochs=10, validation_data=dataset_test)
4 
5 
10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971           except Exception as e:  # pylint:disable=broad-except
972             if hasattr(e, "ag_error_metadata"):
--> 973               raise e.ag_error_metadata.to_exception(e)
974             else:
975               raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function  *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function  **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step  **
outputs = model.train_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:749 train_step
y, y_pred, sample_weight, regularization_losses=self.losses)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:204 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:149 __call__
losses = ag_call(y_true, y_pred)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:253 call  **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:1605 binary_crossentropy
K.binary_crossentropy(y_true, y_pred, from_logits=from_logits), axis=-1)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py:4829 binary_crossentropy
bce = target * math_ops.log(output + epsilon())
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:1141 binary_op_wrapper
raise e
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:1125 binary_op_wrapper
return func(x, y, name=name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:1457 _mul_dispatch
return multiply(x, y, name=name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:509 multiply
return gen_math_ops.mul(x, y, name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py:6176 mul
"Mul", x=x, y=y, name=name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:744 _apply_op_helper
attrs=attr_protos, op_def=op_def)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py:593 _create_op_internal
compute_device)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:3485 _create_op_internal
op_def=op_def)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:1975 __init__
control_input_ops, op_def)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py:1815 _create_c_op
raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 2 and 189 for '{{node binary_crossentropy/mul}} = Mul[T=DT_FLOAT](ExpandDims, binary_crossentropy/Log)' with input shapes: [2,1], [189,2].

I understand that the 2 in [189,2] relates to the final softmax layer, but I don't know what to do with that information or where to start. Any help would be greatly appreciated, thanks!

Here are some comments about the code that I hope you will find helpful.

Use Conv3D and MaxPool3D

If you are working with 3D images, then you should almost certainly be using Conv3D instead of Conv2D, and MaxPool3D instead of MaxPool2D. Here is an example (using random data) that I just tested and that seems to work fine:

import numpy as np
import tensorflow as tf
from tensorflow import keras
train_size = 20
val_size = 5
X_train = np.random.random([train_size, 189, 233, 197]).astype(np.float32)
X_valid = np.random.random([val_size, 189, 233, 197]).astype(np.float32)
y_train = np.random.randint(2, size=train_size).astype(np.float32)
y_valid = np.random.randint(2, size=val_size).astype(np.float32)
CNN_model = keras.Sequential([
    keras.layers.Reshape([189, 233, 197, 1], input_shape=[189, 233, 197]),
    keras.layers.Conv3D(kernel_size=(7, 7, 7), filters=32, activation='relu',
                        padding='same', strides=(3, 3, 3)),
    #keras.layers.BatchNormalization(),
    keras.layers.MaxPool3D(pool_size=(3, 3, 3), padding='same'),
    keras.layers.Dropout(0.20),

    keras.layers.Conv3D(kernel_size=(5, 5, 5), filters=64, activation='relu',
                        padding='same', strides=(3, 3, 3)),
    #keras.layers.BatchNormalization(),
    keras.layers.MaxPool3D(pool_size=(2, 2, 2), padding='same'),
    keras.layers.Dropout(0.20),

    keras.layers.Conv3D(kernel_size=(3, 3, 3), filters=128, activation='relu',
                        padding='same', strides=(1, 1, 1)),
    #keras.layers.BatchNormalization(),
    keras.layers.MaxPool3D(pool_size=(2, 2, 2), padding='same'),
    keras.layers.Dropout(0.20),

    keras.layers.Flatten(),
    keras.layers.Dense(256, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dropout(0.20),
    keras.layers.Dense(1, activation='sigmoid')
])
# Compile the model
CNN_model.compile(optimizer=keras.optimizers.Adam(lr=0.00001),
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
# print model layers
CNN_model.summary()
CNN_history = CNN_model.fit(X_train, y_train, epochs=10,
                            validation_data=[X_valid, y_valid])

Don't reshape to permute dimensions

Regarding these two commented-out lines:

#tf.keras.layers.Input(shape=(189, 233, 197, 1), batch_size=2),
#tf.keras.layers.Reshape((197, 233, 189, 1)),   

Reshaping a 189x233x197x1 image to 197x233x189x1 will not work the way you expect: it will completely scramble the pixels, making the task much harder. It is analogous to reshaping a 2x3 image into a 3x2 image:

>>> img = np.array([[1,2,3],[4,5,6]])
>>> np.reshape(img, [3, 2])
array([[1, 2],
[3, 4],
[5, 6]])

Note that this is not the same as rotating the image: the pixels get completely mixed up.
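By contrast, transposing swaps the axes while keeping each pixel's neighbourhood intact. Running both operations on the same toy image makes the difference concrete:

```python
import numpy as np

img = np.array([[1, 2, 3],
                [4, 5, 6]])

# reshape just re-reads the buffer row by row into the new shape,
# so rows get split and pixels end up next to unrelated pixels:
print(np.reshape(img, [3, 2]))   # [[1 2], [3 4], [5 6]]

# transpose actually swaps the axes, like flipping the image
# along its diagonal; each row becomes a column:
print(np.transpose(img))         # [[1 4], [2 5], [3 6]]
```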

What you want is to use tf.keras.layers.Permute(), like this:

CNN_model = tf.keras.Sequential([
    tf.keras.layers.Permute((3, 2, 1, 4), input_shape=(189, 233, 197, 1)),
    ...
])

Since those commented-out lines are wrong, I suspect the following line may be wrong too:

c = np.reshape(b, (189,233,197, 1))

I don't know the shape of b, so make sure it is compatible with this np.reshape() operation. For example, if its shape is [189, 233, 197], then it's fine. But if it is, say, [197, 233, 189], then you need to permute the dimensions before reshaping:

b_permuted = np.transpose(b, [2, 1, 0]) # permute dims
c = np.reshape(b_permuted, [189, 233, 197, 1]) # then add the channels dim

The np.transpose() function is similar to using Permute(), except that the dimensions are 0-indexed instead of 1-indexed (Permute() does not count the batch dimension).
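As a quick sanity check of that indexing difference, the transpose above can be verified on a dummy array (the shapes are the ones from this question):

```python
import numpy as np

b = np.zeros([197, 233, 189], dtype=np.float32)   # wrongly ordered volume
b_permuted = np.transpose(b, [2, 1, 0])           # NumPy axes are 0-indexed
print(b_permuted.shape)                           # (189, 233, 197)

# The equivalent Keras layer would be Permute((3, 2, 1)): the same
# permutation, but 1-indexed because Permute() never touches the
# (implicit) batch axis, which is axis 0.
```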

It could be more complicated than that. For example, if the 3D images are stored as large 2D images containing the smaller 2D slices side by side, then b's shape could be something like [189*197, 233]. In that case, you would need to do something like this:

b_reshaped = np.reshape(b, [189, 197, 233, 1])
c = np.transpose(b_reshaped, [0, 2, 1, 3])
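To make the slices-side-by-side case concrete, here is a toy round-trip check with small stand-in dimensions. It assumes the same layout as above (slices stacked depth-major into one 2D array); the real data may of course be laid out differently:

```python
import numpy as np

d, h, w = 2, 3, 4                                  # stand-ins for 189, 233, 197
vol = np.arange(d * h * w).reshape(d, h, w)        # the true 3D volume

# Simulate the on-disk layout: slices flattened into a single 2D array
flat = vol.transpose(0, 2, 1).reshape(d * w, h)

# Recover the volume: reshape first, then transpose the swapped axes back
recovered = flat.reshape(d, w, h).transpose(0, 2, 1)
print(np.array_equal(recovered, vol))              # True
```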

I hope these examples are clear enough.

Use tf.keras, not keras

There are several implementations of the Keras API. One is the keras package, i.e. "multi-backend" Keras (installed via pip install keras). The other is tf.keras, which ships with TensorFlow. Your program seems to use both at once. You should definitely avoid that; it causes strange problems:

from keras.utils import plot_model # this is multibackend Keras
...
CNN_model = tf.keras.Sequential(...) # this is tf.keras

I strongly recommend you uninstall multi-backend Keras to avoid this type of bug: pip uninstall keras. Then fix the imports by prefixing them with tensorflow., for example:

from tensorflow.keras.models import Sequential
from tensorflow.keras.utils import to_categorical # note: not from np_utils
...

Don't use to_categorical() for binary classification

For binary classification, the labels should simply be a 1D array containing 0s and 1s, such as np.array([1., 0., 0., 1., 1.]). The code can be simplified considerably:

labelset = []
for i in patients:
    label = labels_df.loc[i, 'Group']
    if label == 'AD':  # use `==` instead of `is` to compare strings
        labelset.append(0.)
    elif label == 'CN':
        labelset.append(1.)
    else:
        raise ValueError("Oops, unknown label")  # I recommend testing possible failure cases
labelset = np.array(labelset)

Importantly, for binary classification you should use a single neuron in the output layer with the "sigmoid" activation function (not "softmax", which is for multi-class classification):

CNN_model = tf.keras.Sequential([
    ...
    tf.keras.layers.Dense(1, activation='sigmoid')
])
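With a single sigmoid unit, model.predict() returns one probability per sample; a common way to turn these into hard 0/1 class predictions is to threshold at 0.5 (the probabilities below are illustrative values, not real model output):

```python
import numpy as np

probs = np.array([[0.91], [0.08], [0.56]])     # shape (n_samples, 1)
preds = (probs > 0.5).astype(int).ravel()      # threshold at 0.5
print(preds)                                   # [1 0 1]
```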

Minor comments

  • When calling train_test_split(), you don't need to specify both train_size and test_size.
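For example, with your 20 samples, train_size=0.70 alone already implies a 14/6 split, since test_size defaults to the complement:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(20, 1)               # 20 dummy samples
y = np.arange(20)

# test_size is omitted; it defaults to 1 - train_size = 0.30
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.70, random_state=0)
print(len(X_tr), len(X_te))                    # 14 6
```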

Good luck!
