Identical Keras script works in WSL (Ubuntu) but not in Conda?



I'm trying to train a CNN to classify images. I have a Keras script that runs successfully in both a CPU and a GPU environment, but the network only learns in the CPU environment. In the GPU environment, the loss decreases after the first epoch but stays constant after that. Why?

Using Python 3.6, I have a tensorflow environment running in WSL (Ubuntu) and a tensorflow-gpu environment running in Conda. I've tried different architectures and different optimizers, but no matter what I do, the problem follows the GPU environment.

Edit: I created a CPU-only conda environment and hit the same problem, so this seems to be a Conda vs. WSL issue rather than GPU vs. CPU. Also, a CPU epoch in Conda takes twice as long as a CPU epoch in WSL.

I commented out model.fit() to reduce the verbose output.

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense, MaxPooling2D
from tensorflow.keras.models import Model
import utils
(x_train, y_train), _, (x_test, y_test) = utils.load_data(limit=36)
input_image = Input(shape=(256, 256, 3))    
x = Conv2D(32, (3, 3), padding='same', activation='relu')(input_image)
x = Conv2D(32, (3, 3), activation='relu')(x)
x = MaxPooling2D()(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
output = Dense(9, activation='softmax')(x)
model = Model(inputs=input_image, outputs=output)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['acc'])
# model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
for i in range(25):
    print(str(i) + ': ' + str(model.train_on_batch(x_train, y_train)))
model.evaluate(x_train, y_train)
model.evaluate(x_test, y_test)

CPU training results:

0: [17.486359, 0.6875]
1: [61761.203, 0.28125]
2: [2228.4707, 0.71875]
3: [4440.558, 0.28125]
4: [1062.5581, 0.71875]
5: [481.29315, 0.71875]
6: [234.01581, 0.4375]
7: [170.98215, 0.71875]
8: [38.968575, 0.6875]
9: [8.086919, 0.75]
10: [5.7502546, 0.375]
11: [72.89319, 0.71875]
12: [13.203195, 0.6875]
13: [1.4184309, 0.875]
14: [9.258236, 0.46875]
15: [23.165062, 0.71875]
16: [8.963888, 0.78125]
17: [3.1053305, 0.84375]
18: [1.0664859, 0.96875]
19: [0.039992813, 1.0]
20: [0.023323938, 1.0]
21: [0.019487603, 1.0]
22: [0.01734325, 1.0]
23: [0.015670585, 1.0]
24: [0.014209943, 1.0]
32/32 [==============================] - 1s 19ms/sample - loss: 0.0129 - acc: 1.0000
4/4 [==============================] - 0s 20ms/sample - loss: 2.3463 - acc: 0.7500

I expected to see something similar to the above, but instead I got this strange result from GPU training:

0: [8.630159, 0.1875]
1: [4.5332146, 0.71875]
2: [4.5332146, 0.71875]
3: [4.5332146, 0.71875]
4: [4.5332146, 0.71875]
5: [4.5332146, 0.71875]
6: [4.5332146, 0.71875]
7: [4.5332146, 0.71875]
8: [4.5332146, 0.71875]
9: [4.5332146, 0.71875]
10: [4.5332146, 0.71875]
11: [4.5332146, 0.71875]
12: [4.5332146, 0.71875]
13: [4.5332146, 0.71875]
14: [4.5332146, 0.71875]
15: [4.5332146, 0.71875]
16: [4.5332146, 0.71875]
17: [4.5332146, 0.71875]
18: [4.5332146, 0.71875]
19: [4.5332146, 0.71875]
20: [4.5332146, 0.71875]
21: [4.5332146, 0.71875]
22: [4.5332146, 0.71875]
23: [4.5332146, 0.71875]
24: [4.5332146, 0.71875]
32/32 [==============================] - 0s 4ms/sample - loss: 4.5332 - acc: 0.7188
4/4 [==============================] - 0s 19ms/sample - loss: 4.0295 - acc: 0.7500

I can't wait to find out what silly mistake I made, haha.

I don't know what the actual problem was, but I do know that updating tensorflow-gpu from 1.13.1 fixed it.

In the Conda shell I ran pip install tf-nightly-gpu, and now the network trains as expected. I'd bet I didn't need a nightly build and could have pinned 1.14.0 (the tensorflow-gpu build I use in WSL), but either way, it works.
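For anyone hitting the same issue, the fix above can be sketched as the following commands, run inside the activated Conda environment. This is a sketch, not a verified recipe: it assumes a pinned tensorflow-gpu 1.14.0 would also fix the problem (only the nightly build was actually tested in the answer).

```shell
# Inside the activated Conda environment: replace the broken 1.13.1 install.
# Either the nightly build that was confirmed to work...
pip install tf-nightly-gpu

# ...or (untested assumption) pin the same release used in WSL:
# pip install --upgrade tensorflow-gpu==1.14.0

# Sanity-check which version is now active
python -c "import tensorflow as tf; print(tf.__version__)"
```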
