Transfer learning - trying to retrain EfficientNet-B7 on an RTX 2070, running out of memory



This is the training code I'm trying to squeeze onto an RTX 2070, on a machine with 64 GB of CPU RAM:

import tensorflow as tf
import efficientnet.keras as efn
from keras.layers import Dense
from keras.models import Model
from keras.preprocessing.image import ImageDataGenerator

# cap the GPU at 70% of its memory (TF 1.x API)
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.7
tf.keras.backend.set_session(tf.Session(config=config))

model = efn.EfficientNetB7()
model.summary()

# create new output layer
output_layer = Dense(5, activation='sigmoid', name="retrain_output")(model.get_layer('top_dropout').output)
new_model = Model(model.input, outputs=output_layer)
new_model.summary()

# lock previous weights
for i, l in enumerate(new_model.layers):
    if i < 228:
        l.trainable = False

new_model.compile(loss='mean_squared_error', optimizer='adam')

batch_size = 5
steps_per_epoch = 30
epochs = 20

# generate train data (train_data_input_folder, validation_data_input_folder,
# input_dim and model_output_path are defined elsewhere)
train_datagen = ImageDataGenerator(
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    validation_split=0)

train_generator = train_datagen.flow_from_directory(
    train_data_input_folder,
    target_size=(input_dim, input_dim),
    batch_size=batch_size,
    class_mode='categorical',
    seed=2019,
    subset='training')

validation_generator = train_datagen.flow_from_directory(
    validation_data_input_folder,
    target_size=(input_dim, input_dim),
    batch_size=batch_size,
    class_mode='categorical',
    seed=2019,
    subset='validation')

new_model.fit_generator(
    train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=epochs,
    validation_steps=20,
    validation_data=validation_generator,
    workers=24)

new_model.save(model_output_path)

exception:

2019-11-17 08:52:52.903583: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally
...
2019-11-17 08:53:24.713020: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 110 Chunks of size 27724800 totalling 2.84GiB
2019-11-17 08:53:24.713024: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 6 Chunks of size 38814720 totalling 222.10MiB
2019-11-17 08:53:24.713027: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 23 Chunks of size 54000128 totalling 1.16GiB
2019-11-17 08:53:24.713031: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 73760000 totalling 70.34MiB
2019-11-17 08:53:24.713034: I tensorflow/core/common_runtime/bfc_allocator.cc:645] Sum Total of in-use chunks: 5.45GiB
2019-11-17 08:53:24.713040: I tensorflow/core/common_runtime/bfc_allocator.cc:647] Stats:
Limit:        5856749158
InUse:        5848048896
MaxInUse:     5848061440
NumAllocs:    6140
MaxAllocSize: 3259170816

2019-11-17 08:53:24.713214: W tensorflow/core/common_runtime/bfc_allocator.cc:271] ****************************************************************************************************
2019-11-17 08:53:24.713222: W tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at cwise_ops_common.cc:70 : Resource exhausted: OOM when allocating tensor with shape[5,1344,38,38] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Traceback (most recent call last):
  File ".../model_retrain.py", line 76, in <module>
    nb_worker=24)
  File "/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1732, in fit_generator
    initial_epoch=initial_epoch)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py", line 220, in fit_generator
    reset_metrics=False)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1514, in train_on_batch
    outputs = self.train_function(ins)
  File "/home/naort/.local/lib/python3.6/site-packages/tensorflow/python/keras/backend.py", line 3076, in __call__
    run_metadata=self.run_metadata)
  File "/home/naort/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1439, in __call__
    run_metadata_ptr)
  File "/home/naort/.local/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 528, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[5,1344,38,38] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
  [[{{node training/Adam/gradients/AddN_387-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

Although EfficientNet models have lower parameter counts than comparable ResNe(X)t models, they still consume large amounts of GPU memory. What you're seeing is an out-of-memory error on the GPU (8 GB on an RTX 2070), not in system memory (64 GB).
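You can sanity-check this against the log itself: the tensor the allocator fails on has shape [5, 1344, 38, 38] (your batch of 5), and a single float32 copy of it matches the 38814720-byte chunks in the BFC allocator dump. A quick back-of-the-envelope check (plain Python; nothing assumed beyond the numbers in the log):

```python
# One activation tensor from the OOM message: shape [5, 1344, 38, 38], float32
batch, channels, h, w = 5, 1344, 38, 38
bytes_per_float32 = 4

tensor_bytes = batch * channels * h * w * bytes_per_float32
print(tensor_bytes)          # 38814720 -- matches the allocator's chunk size exactly
print(tensor_bytes / 2**20)  # ~37 MiB for ONE intermediate tensor

# Hundreds of such activations, plus their gradients and Adam's moment buffers,
# must all fit inside the limit the session was granted on the 8 GB card:
print(5856749158 / 2**30)    # ~5.45 GiB usable by TensorFlow (the logged Limit)
```

Parameters are a small fraction of this; it is the activations (and their gradients, which are kept even for frozen layers below the last trainable one) that exhaust the card.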

The B7 model, especially at full resolution, is beyond what you'd want to train with a single RTX 2070, even with many of the layers frozen.
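One practical way out is a smaller EfficientNet variant at its native resolution: activation memory grows roughly with batch_size × resolution², so stepping down from B7 (600×600 native input) to, say, B3 (300×300) cuts the activation footprint by about 4× before you touch the batch size. A rough, illustrative estimate (the native resolutions below are the published EfficientNet defaults; the scaling rule ignores per-variant channel growth, so treat it as an approximation, not an exact memory model):

```python
# Published native input resolutions for the EfficientNet family
native_res = {'B0': 224, 'B1': 240, 'B2': 260, 'B3': 300,
              'B4': 380, 'B5': 456, 'B6': 528, 'B7': 600}

def relative_activation_cost(variant, batch_size, baseline='B7'):
    """Rough activation-memory cost relative to the baseline at batch size 1.

    Activations scale ~linearly with batch size and ~quadratically with
    input resolution; channel-count differences between variants are ignored.
    """
    return batch_size * native_res[variant] ** 2 / native_res[baseline] ** 2

print(relative_activation_cost('B3', batch_size=5))   # 1.25 -> ~4x cheaper than B7 at batch 5
print(relative_activation_cost('B0', batch_size=5))   # ~0.70 -> ~7x cheaper
```

Dropping the batch size to 1 or 2 scales the same way, at the cost of noisier gradients and slower epochs.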

What may help is running the model in FP16, which will also make use of the TensorCores on your RTX card. Following https://medium.com/@noel_kennedy/how-to-use-half-precision-float16-when-training-on-rtx-cards-with-tensorflow-keras-d4033d59f9e4, try this:

import keras.backend as K
dtype='float16'
K.set_floatx(dtype)
# default is 1e-7 which is too small for float16.  Without adjusting the epsilon, we will get NaN predictions because of divide by zero problems
K.set_epsilon(1e-4) 
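On top of FP16, it can also help to let TensorFlow allocate GPU memory on demand instead of reserving a fixed 70% slice up front, so the allocator can use whatever the card has free. This is a sketch against the same TF 1.x session API the question already uses (a configuration fragment, not a full training script):

```python
import tensorflow as tf
import keras.backend as K

# TF 1.x: grow GPU memory usage on demand rather than grabbing a fixed fraction
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))
```

Note that `allow_growth` changes when memory is claimed, not how much the model ultimately needs; for B7 at full resolution you will most likely still need FP16 plus a smaller batch or a smaller variant.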
