I'm having trouble with this autoencoder I'm building using Keras. The input's shape depends on the screen's size, and the output will be a prediction of the next screen's size... However, there seems to be an error that I can't figure out... Please forgive my awful formatting on this site...
Code:
def model_build():
    input_img = InputLayer(shape=(1, env_size()[1], env_size()[0]))
    x = Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)
    encoded = MaxPooling2D((2, 2), padding='same')(x)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
    x = UpSampling2D((2, 2))(x)
    x = Conv2D(32, (3, 3), activation='relu')(x)
    x = UpSampling2D((2, 2))(x)
    decoded = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
    model = Model(input_img, decoded)
    return model
if __name__ == '__main__':
    model = model_build()
    model.compile('adam', 'mean_squared_error')
    y = np.array([env()])
    print(y.shape)
    print(y.ndim)
    debug = model.fit(np.array([[env()]]), np.array([[env()]]))
Error:
Traceback (most recent call last):
  File "/home/ai/desktop/algernon-test/rewarders.py", line 46, in <module>
    debug = model.fit(np.array([[env()]]), np.array([[env()]]))
  File "/home/ai/.local/lib/python3.6/site-packages/keras/engine/training.py", line 952, in fit
    batch_size=batch_size)
  File "/home/ai/.local/lib/python3.6/site-packages/keras/engine/training.py", line 789, in _standardize_user_data
    exception_prefix='target')
  File "/home/ai/.local/lib/python3.6/site-packages/keras/engine/training_utils.py", line 138, in standardize_input_data
    str(data_shape))
ValueError: Error when checking target: expected conv2d_7 to have shape (4, 268, 1) but got array with shape (1, 270, 480)
EDIT:
Code for get_screen, imported as env():
def get_screen():
    img = screen.grab()
    img = img.resize(screen_size())
    img = img.convert('L')
    img = np.array(img)
    return img
You have three 2x downsampling steps and three 2x upsampling steps. These steps have no knowledge of the original image size, so they will round the size up to the nearest multiple of 8 = 2^3.
cropX = 7 - ((size[0]+7) % 8)
cropY = 7 - ((size[1]+7) % 8)
It ought to work if you add a new final layer:
decoded = layers.Cropping2D(((0,cropY),(0,cropX)))(x)
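As a quick sanity check on that crop formula in plain Python (the 480x270 screen size is taken from the question's traceback):

```python
def crop_amount(size):
    # After three ceil-halving pooling steps and three doubling
    # upsampling steps, a dimension comes out rounded up to the next
    # multiple of 8 = 2**3; this is how much must be cropped off
    # to recover the original size.
    return 7 - ((size + 7) % 8)

print(crop_amount(480))  # 480 is already a multiple of 8 -> 0
print(crop_amount(270))  # the network emits 272 rows -> crop 2
```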
It looks like env_size() and env() mess up the image dimensions somehow. Consider this example:
image1 = np.random.rand(1, 1, 270, 480) #First dimension is batch size for test purpose
image2 = np.random.rand(1, 4, 268, 1) #Or any other arbitrary dimensions
input_img = layers.Input(shape=image1[0].shape)
x = layers.Conv2D(32, (3, 3), activation='relu', padding='same')(input_img)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(x)
encoded = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(8, (3, 3), activation='relu', padding='same')(encoded)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(32, (3, 3), activation='relu')(x)
x = layers.UpSampling2D((2, 2))(x)
decoded = layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
model = tf.keras.Model(input_img, decoded)
model.compile('adam', 'mean_squared_error')
model.summary()
This call will work:
model.fit(image1, image1, epochs=1, batch_size=1)
but this will not:
model.fit(image2, image2, epochs=1, batch_size=1)
EDIT: in order to get output of the same size as the input, you need to calculate the convolution kernel size carefully.
image1 = np.random.rand(1, 1920, 1080, 1)
input_img = layers.Input(shape=image1[0].shape)
x = layers.Conv2D(32, 3, activation='relu', padding='same')(input_img)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(16, 3, activation='relu', padding='same')(x)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(8, 3, activation='relu', padding='same')(x)
encoded = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Conv2D(8, 3, activation='relu', padding='same')(encoded)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(16, 3, activation='relu', padding='same')(x)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(32, 1, activation='relu')(x) # set kernel size to 1 for example
x = layers.UpSampling2D((2, 2))(x)
decoded = layers.Conv2D(1, 3, activation='sigmoid', padding='same')(x)
model = tf.keras.Model(input_img, decoded)
model.compile('adam', 'mean_squared_error')
model.summary()
This will output identical dimensions.
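A plain-Python trace of the spatial sizes (a sketch of the arithmetic only, no TensorFlow required) shows why 1920 and 1080 survive this round trip while 270 does not:

```python
import math

def round_trip(size):
    # MaxPooling2D((2, 2), padding='same') halves with ceiling;
    # UpSampling2D((2, 2)) doubles; every Conv2D in the fixed model
    # preserves spatial size ('same' padding, or a 1x1 'valid' kernel).
    for _ in range(3):
        size = math.ceil(size / 2)
    for _ in range(3):
        size *= 2
    return size

print(round_trip(1920))  # -> 1920
print(round_trip(1080))  # -> 1080
print(round_trip(270))   # -> 272: the rounding that broke the original model
```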
According to this guide, http://cs231n.github.io/convolutional-networks/:
We can compute the spatial size of the output volume as a function of the input volume size (W), the receptive field size of the Conv Layer neurons (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. You can convince yourself that the correct formula for calculating how many neurons "fit" is given by (W - F + 2P)/S + 1. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
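The quoted formula can be checked with a few lines of plain Python (conv_output_size is a hypothetical helper for illustration, not part of Keras):

```python
def conv_output_size(w, f, s, p):
    # (W - F + 2P)/S + 1: the number of filter positions that fit
    # along one spatial dimension.
    return (w - f + 2 * p) // s + 1

print(conv_output_size(7, 3, 1, 0))  # 7x7 input, 3x3 filter, stride 1, pad 0 -> 5
print(conv_output_size(7, 3, 2, 0))  # same, but stride 2 -> 3
```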