ResNet model training takes too long



I am following this tutorial to learn transfer learning for my model. As you can see, a single epoch there averages about 1 second.

Epoch 1/100
1080/1080 [==============================] - 10s 10ms/step - loss: 3.6862 - acc: 0.2000
Epoch 2/100
1080/1080 [==============================] - 1s 1ms/step - loss: 3.0746 - acc: 0.2574
Epoch 3/100
1080/1080 [==============================] - 1s 1ms/step - loss: 2.6839 - acc: 0.3185
Epoch 4/100
1080/1080 [==============================] - 1s 1ms/step - loss: 2.3929 - acc: 0.3583
Epoch 5/100
1080/1080 [==============================] - 1s 1ms/step - loss: 2.1382 - acc: 0.3870
Epoch 6/100
1080/1080 [==============================] - 1s 1ms/step - loss: 1.7810 - acc: 0.4593

But when I use almost the same code for my CIFAR model, a single epoch takes roughly an hour to run.

Train on 50000 samples
3744/50000 [=>............................] - ETA: 43:38 - loss: 3.3223 - acc: 0.1760

My code is

from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras import Model
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train / 255.0
x_test = x_test / 255.0
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
base_model = ResNet50(weights= None, include_top=False, input_shape= (32,32,3))
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dropout(0.4)(x)
predictions = Dense(10 , activation= 'softmax')(x)
model = Model(inputs = base_model.input, outputs = predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
hist = model.fit(x_train, y_train)

Note that I am using the cifar10 dataset for this model. Is there something wrong with my code or my data? How can I improve this? One hour per epoch is far too long. I also have an NVIDIA MX-110 with 2 GB, which TensorFlow is of course using.

I copied and ran your code, but to get it to run I had to make the changes shown below.

import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras import Model
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
print (len(x_train))
x_train = x_train / 255.0
x_test = x_test / 255.0
y_train = tf.keras.utils.to_categorical(y_train)
y_test = tf.keras.utils.to_categorical(y_test)
base_model = ResNet50(weights= None, include_top=False, input_shape= (32,32,3))
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dropout(0.4)(x)
predictions = Dense(10 , activation= 'softmax')(x)
model = Model(inputs = base_model.input, outputs = predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
hist = model.fit(x_train, y_train)
# the result for 2 epochs is shown below
50000
Epoch 1/2
1563/1563 [==============================] - 58s 37ms/step - loss: 2.8654 - acc: 0.2537
Epoch 2/2
1563/1563 [==============================] - 51s 33ms/step - loss: 2.5331 - acc: 0.2748

According to the model.fit documentation, if no batch size is specified it defaults to 32, so 50000 samples / 32 = 1563 steps, which is what I got. For some reason, in your code the batch size defaulted to 1; I don't know why. So set batch_size to 50 and you will only need 1000 steps. To speed things up further, I would set weights="imagenet" and freeze the layers in the base model (a consolidated sketch of these changes follows after the results below):

for layer in base_model.layers:
layer.trainable = False
# if you set batch_size=50 and weights="imagenet" with the base model frozen, you get
50000
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5
94773248/94765736 [==============================] - 5s 0us/step
Epoch 1/2
1000/1000 [==============================] - 16s 16ms/step - loss: 2.5101 - acc: 0.1487
Epoch 2/2
1000/1000 [==============================] - 10s 10ms/step - loss: 2.1159 - acc: 0.2249
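
Putting these suggestions together, a minimal sketch of the modified script (imagenet weights, frozen base model, an explicit batch_size) could look like the version below. The batch_size=50 and epochs=2 values simply mirror the run shown above; they are illustrative, not required.

# Sketch only: pretrained weights, frozen backbone, explicit batch size
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Dropout
from tensorflow.keras import Model

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_train = tf.keras.utils.to_categorical(y_train)
y_test = tf.keras.utils.to_categorical(y_test)

# load ImageNet weights and freeze the backbone so only the new head is trained
base_model = ResNet50(weights="imagenet", include_top=False, input_shape=(32, 32, 3))
for layer in base_model.layers:
    layer.trainable = False

x = GlobalAveragePooling2D()(base_model.output)
x = Dropout(0.4)(x)
predictions = Dense(10, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
# 50000 samples / batch_size 50 = 1000 steps per epoch
hist = model.fit(x_train, y_train, batch_size=50, epochs=2)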

It looks like the data is not being batched, so each forward pass of the model sees only a single training example, which is very inefficient.

Try setting a batch size in your model.fit() call:

hist = model.fit(x_train, y_train, batch_size=16, epochs=num_epochs,
                 validation_data=(x_test, y_test), shuffle=True)

Tune the batch size to the largest value that fits in your GPU memory; try a few different values before settling on one.
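
As a rough sketch (my own suggestion, not part of the answer above), one way to find that maximum is to try a few candidate batch sizes on a small slice of the data and catch the out-of-memory error TensorFlow raises; the candidate values below are only examples.

# probe for the largest batch size that fits on the GPU (illustrative values)
for bs in (256, 128, 64, 32, 16):
    try:
        # a single short epoch on a small slice is enough to trigger an OOM if bs is too big
        model.fit(x_train[:bs * 4], y_train[:bs * 4], batch_size=bs, epochs=1, verbose=0)
        print(f"batch_size={bs} fits in GPU memory")
        break
    except tf.errors.ResourceExhaustedError:
        print(f"batch_size={bs} is too large, trying a smaller value")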
