I have a server with 4 GPUs and I want to use exactly 2 of them for multi-GPU training.
The Keras documentation gives some insight into how to use multiple GPUs, but I want to select specific GPUs. Is there a way to achieve this?
from keras import backend as K
import tensorflow as tf

# Session config: allow_soft_placement lets TF fall back to another
# device if an op cannot run on the one requested.
config = tf.ConfigProto(intra_op_parallelism_threads=4,
                        inter_op_parallelism_threads=4,
                        allow_soft_placement=True,
                        device_count={'CPU': 1, 'GPU': 2})

c = []
for d in ['/device:GPU:2', '/device:GPU:3']:
    with tf.device(d):  # pin the ops below to this GPU
        a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3])
        b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2])
        c.append(tf.matmul(a, b))

with tf.device('/cpu:0'):
    total = tf.add_n(c)  # combine the per-GPU results on the CPU

session = tf.Session(config=config)
K.set_session(session)  # make Keras use this session
I think this should work. You just need the numbers (indices) of the GPU devices you want to use; in this case they are 2 and 3. Relevant links:
1) https://github.com/carla-simulator/carla/issues/116
2) https://www.tensorflow.org/guide/using_gpu#using_multiple_gpus
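If you are not sure which indices correspond to which cards, a quick check (a minimal sketch, assuming TF 1.x as in the snippet above) is to list the devices TensorFlow can see:

from tensorflow.python.client import device_lib

# Print every device TensorFlow has detected, e.g. '/device:GPU:2'
for dev in device_lib.list_local_devices():
    print(dev.name)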
The best way is to compile the Keras model with a tf.distribute strategy, by creating and compiling the model inside the strategy's scope. For example:
import contextlib
import tensorflow as tf

def model_scope(devices):
    """Return a MirroredStrategy scope when more than one device is
    requested, otherwise a no-op context manager."""
    if 1 < len(devices):
        strategy = tf.distribute.MirroredStrategy(devices)
        scope = strategy.scope()
    else:
        scope = contextlib.suppress()  # Python 3.4+; acts as a no-op context manager
    return scope

devices = ['/device:GPU:2', '/device:GPU:3']

with model_scope(devices):
    # create and compile your model
    model = get_model()
    model.compile(optimizer=optimizer, loss=loss)
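A minimal usage sketch follows; the get_model helper, the optimizer/loss choices, and the random training data are placeholders I am assuming for illustration, not part of the original answer:

import numpy as np
import tensorflow as tf

def get_model():
    # placeholder model: a single dense layer
    return tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])

devices = ['/device:GPU:2', '/device:GPU:3']
with model_scope(devices):
    model = get_model()
    model.compile(optimizer='adam', loss='mse')

# fit() can be called outside the scope; the strategy captured when the
# model was built handles replication across the two GPUs
x = np.random.rand(64, 8).astype('float32')
y = np.random.rand(64, 1).astype('float32')
model.fit(x, y, epochs=1, batch_size=16)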