I'm running into a problem when training my U-Net model on multiple GPUs.
The model is a simple U-Net implementation; I know it works, because it trains fine on my test set when it is not wrapped in multi_gpu_model:
train_generator = zip(image_generator, mask_generator)

with tf.device("/cpu:0"):
    # initialize the model
    model = unet((512, 512, 3))

# make the model parallel
model = multi_gpu_model(model, gpus=2)
model.compile(optimizer='adam', loss="mean_squared_error")
model.fit_generator(train_generator, steps_per_epoch=250, epochs=10)
Output:

  File "C:/Users/PycharmProjects/U-net/U-net.py", line 29, in <module>
    model = multi_gpu_model(model, gpus=2)
  File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\utils\multi_gpu_utils.py", line 150, in multi_gpu_model
    available_devices = _get_available_devices()
  File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\utils\multi_gpu_utils.py", line 16, in _get_available_devices
    return K.tensorflow_backend._get_available_gpus() + ['/cpu:0']
  File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\backend\tensorflow_backend.py", line 506, in _get_available_gpus
    _LOCAL_DEVICES = tf.config.experimental_list_devices()
AttributeError: module 'tensorflow_core._api.v2.config' has no attribute 'experimental_list_devices'
I also tried tf.distribute.MirroredStrategy(), but had no luck with that either.
Any help would be greatly appreciated.
tf.config.experimental_list_devices was removed in TF 2.1; replace it with tf.config.list_logical_devices. In keras/backend/tensorflow_backend.py, change _get_available_gpus to:
def _get_available_gpus():
    """Get a list of available GPU devices (formatted as strings).

    # Returns
        A list of available GPU devices.
    """
    global _LOCAL_DEVICES
    if _LOCAL_DEVICES is None:
        if _is_tf_1():
            devices = get_session().list_devices()
            _LOCAL_DEVICES = [x.name for x in devices]
        else:
            devices = tf.config.list_logical_devices()
            _LOCAL_DEVICES = [x.name for x in devices]
    return [x for x in _LOCAL_DEVICES if 'device:gpu' in x.lower()]
This link may help resolve your issue.
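As an alternative to patching keras at all, tf.distribute.MirroredStrategy (which you already tried) replaces multi_gpu_model in TF 2.x when used with tf.keras rather than standalone keras. A minimal sketch, using a small stand-in model instead of your unet:

```python
import tensorflow as tf

# MirroredStrategy replicates the model across all visible GPUs;
# with no GPU present it falls back to a single CPU replica.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Build and compile the model inside the strategy scope so that the
# variables are created as mirrored variables.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
        tf.keras.layers.Conv2D(3, 3, padding="same"),
    ])
    model.compile(optimizer="adam", loss="mean_squared_error")

# model.fit(train_dataset, epochs=10) would then train on all replicas.
```

Note that the with tf.device("/cpu:0") wrapper from the question is not needed here; the strategy scope handles device placement itself.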