3D autoencoder reaches a low loss but produces poor results when plotted



I am building an autoencoder to reduce the dimensionality of lung CT scans (3D).

The input is 176 (patients) x 30 (slices) x 256 x 256 x 1. Although it reaches a loss of 0.1233 (binary cross-entropy), the predictions on the training set look poor when I plot them. Do you have any suggestions on how to fix this? (Is it just a matter of training for longer?)

Thanks :) (I'm new to ML)

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Conv3D, Conv3DTranspose, MaxPooling3D, UpSampling3D
from tensorflow.keras.models import Model
import matplotlib.pyplot as plt
from tensorflow.keras.callbacks import TensorBoard
from sys import getsizeof
np.set_printoptions(threshold=np.inf)
import os
from tensorflow.keras.optimizers import RMSprop

# Creating the model
input_img = Input(shape=(32, 256, 256, 1))

# Encoder
x = Conv3D(filters=16, kernel_size=(5, 5, 5), activation='relu', padding='same')(input_img)
x = MaxPooling3D(pool_size=(4, 4, 4), padding='same')(x)
x = Conv3D(filters=64, kernel_size=(3, 3, 3), activation='relu', padding='same')(x)
x = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(x)
x = Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling3D(pool_size=(4, 4, 4), padding='same')(x)

# Decoder
x = Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu', padding='same')(encoded)
x = UpSampling3D((4, 4, 4))(x)
x = Conv3D(filters=64, kernel_size=(3, 3, 3), activation='relu', padding='same')(x)
x = UpSampling3D((2, 2, 2))(x)
x = Conv3D(filters=16, kernel_size=(5, 5, 5), activation='relu', padding='same')(x)
x = UpSampling3D((4, 4, 4))(x)
decoded = Conv3D(filters=1, kernel_size=(5, 5, 5), activation='sigmoid', padding='same')(x)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

# Load and normalise the training volumes (raw string so the backslashes are not treated as escapes)
x_train = np.load(r'D:\nparr\npstack_l.npy')
x_train = x_train.astype('float16') / 255
# Keep the channel dimension so the array matches the model's (32, 256, 256, 1) input
x_train = np.reshape(x_train, (len(x_train), 32, 256, 256, 1))
#print(x_train.nbytes)

autoencoder.fit(x_train, x_train,
                epochs=15, batch_size=1)

x_train = x_train.astype('float32')

# Predict from training set and plot
decoded_imgs = autoencoder.predict(x_train[0:4])  # 4 samples, matching n below
n = 4
plt.figure(figsize=(20, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_train[i][15].reshape(256, 256))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i][15].reshape(256, 256))
    plt.gray()
    #print(decoded_imgs[0][15])
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

Output: autoencoder output on a training-set slice

When using binary cross-entropy, a better way to tell whether training is actually working is to track accuracy. Try changing:

autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

to:

autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy', metrics=['binary_accuracy'])
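
For reference, here is a minimal sketch of how that metric would show up during training, assuming the same autoencoder and x_train as in the question; the validation_split argument and the History printout are illustrative additions, not part of the original code:

# Report binary accuracy alongside the loss at every epoch.
autoencoder.compile(optimizer='adadelta',
                    loss='binary_crossentropy',
                    metrics=['binary_accuracy'])

# validation_split is an assumption added here so accuracy is also
# measured on volumes the model does not train on.
history = autoencoder.fit(x_train, x_train,
                          epochs=15,
                          batch_size=1,
                          validation_split=0.1)

# Per-epoch values are kept in the History object:
print(history.history['binary_accuracy'])
print(history.history['val_binary_accuracy'])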
