Multi GPU/Tower Setup with the TensorFlow 1.2 Estimator



I want to turn my Estimator's _model_fn into a multi-GPU solution.

Is there a way to do this within the Estimator API, or do I have to explicitly code device placement and synchronization myself?

I know I can use tf.device('gpu:X') to place my model on GPU X. I also know that I can loop over the available GPU names to replicate my model across multiple GPUs, and that I can feed several GPUs from a single input queue.
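A minimal sketch of the replication pattern I mean, assuming two visible GPUs and a placeholder build_tower function standing in for my real model:

import tensorflow as tf

def build_tower(x):
    # Placeholder for my real model: one linear layer built with tf.get_variable
    # so that the variables are shared between towers via the variable scope.
    w = tf.get_variable("weights", shape=[4, 10])
    b = tf.get_variable("biases", shape=[10], initializer=tf.zeros_initializer())
    return tf.matmul(x, w) + b

features = tf.placeholder(tf.float32, [None, 4])
# One slice per GPU, all fed from the same input batch/queue.
feature_splits = tf.split(features, num_or_size_splits=2, axis=0)

tower_outputs = []
with tf.variable_scope("model"):
    for i, x in enumerate(feature_splits):
        with tf.device('/gpu:%d' % i):
            tower_outputs.append(build_tower(x))
        # Reuse the same variables for every tower after the first one.
        tf.get_variable_scope().reuse_variables()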

What I don't know is which parts (optimizer, loss computation) I can actually move to a GPU and where I have to synchronize the computation.

From the Cifar10 example, I think I only have to synchronize the gradients.

Especially when using

train_op = tf.contrib.layers.optimize_loss(
    loss=loss,
    global_step=tf.contrib.framework.get_global_step(),
    learning_rate=learning_rate,
    learning_rate_decay_fn=_learning_rate_decay_fn,
    optimizer=optimizer)

I can no longer call optimizer.compute_gradients() and optimizer.apply_gradients() manually, because these calls are handled internally by optimize_loss(..).

I would like to know how to average the gradients as it is done in the cifar10 example (Cifar10-MultiGPU), or whether this is even the right approach with an Estimator.
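To make the question concrete, this is the split that optimize_loss hides from me. A minimal, self-contained sketch with a toy loss (not my actual model), showing where per-tower gradients could be collected before the update:

import tensorflow as tf

# Toy variable and loss so the sketch runs on its own.
w = tf.Variable(1.0)
loss = tf.square(w - 3.0)
global_step = tf.Variable(0, trainable=False, name="global_step")

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)

# compute_gradients returns a list of (gradient, variable) pairs; this is the
# point where gradients from several towers could be gathered and averaged.
grads_and_vars = optimizer.compute_gradients(loss)

# apply_gradients performs the actual variable update and increments global_step.
train_op = optimizer.apply_gradients(grads_and_vars, global_step=global_step)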

Actually, you can implement multi-GPU inside model_fn, and the function works just as before.
You can find the complete code here. It supports multi-threaded queue readers and multi-GPU training for very high training speed when used with an Estimator.

Code snippet (get the full code):

def model_fn(features, labels, mode, params):
    # network
    network_fn = nets_factory.get_network_fn(
        FLAGS.model_name,
        num_classes=params['num_classes'],
        weight_decay=0.00004,
        is_training=(mode == tf.estimator.ModeKeys.TRAIN))
    # if predict. Provide an estimator spec for `ModeKeys.PREDICT`.
    if mode == tf.estimator.ModeKeys.PREDICT:
        logits, end_points = network_fn(features)
        return tf.estimator.EstimatorSpec(mode=mode, predictions={"output": logits})
    # Create global_step and lr
    global_step = tf.train.get_global_step()
    learning_rate = get_learning_rate("exponential", FLAGS.base_lr,
                                      global_step, decay_steps=10000)
    # Create optimizer
    optimizer = get_optimizer(FLAGS.optimizer, learning_rate)
    # Multi GPU support - need to make sure that the splits sum up to
    # the batch size (in case the batch size is not divisible by
    # the number of gpus). This code will put remaining samples in the
    # last gpu. E.g. for a batch size of 15 with 2 gpus, the splits
    # will be [7, 8].
    batch_size = tf.shape(features)[0]
    split_size = batch_size // len(params['gpus_list'])
    splits = [split_size, ] * (len(params['gpus_list']) - 1)
    splits.append(batch_size - split_size * (len(params['gpus_list']) - 1))
    # Split the features and labels
    features_split = tf.split(features, splits, axis=0)
    labels_split = tf.split(labels, splits, axis=0)
    tower_grads = []
    eval_logits = []
    with tf.variable_scope(tf.get_variable_scope()):
        for i in xrange(len(params['gpus_list'])):
            with tf.device('/gpu:%d' % i):
                with tf.name_scope('%s_%d' % ("classification", i)) as scope:
                    # model and loss
                    logits, end_points = network_fn(features_split[i])
                    tf.losses.softmax_cross_entropy(labels_split[i], logits)
                    update_ops = tf.get_collection(
                        tf.GraphKeys.UPDATE_OPS, scope)
                    updates_op = tf.group(*update_ops)
                    with tf.control_dependencies([updates_op]):
                        losses = tf.get_collection(tf.GraphKeys.LOSSES, scope)
                        total_loss = tf.add_n(losses, name='total_loss')
                    # reuse var
                    tf.get_variable_scope().reuse_variables()
                    # grad compute
                    grads = optimizer.compute_gradients(total_loss)
                    tower_grads.append(grads)
                    # for eval metric ops
                    eval_logits.append(logits)
    # We must calculate the mean of each gradient. Note that this is the
    # synchronization point across all towers.
    grads = average_gradients(tower_grads)
    # Apply the gradients to adjust the shared variables.
    apply_gradient_op = optimizer.apply_gradients(
        grads, global_step=global_step)
    # Track the moving averages of all trainable variables.
    variable_averages = tf.train.ExponentialMovingAverage(0.9999, global_step)
    variables_averages_op = variable_averages.apply(tf.trainable_variables())
    # Group all updates into a single train op.
    train_op = tf.group(apply_gradient_op, variables_averages_op)
    # Create eval metric ops
    _predictions = tf.argmax(tf.concat(eval_logits, 0), 1)
    _labels = tf.argmax(labels, 1)
    eval_metric_ops = {
        "acc": slim.metrics.streaming_accuracy(_predictions, _labels)}
    # Provide an estimator spec for `ModeKeys.EVAL` and `ModeKeys.TRAIN` modes.
    return tf.estimator.EstimatorSpec(
        mode=mode,
        loss=total_loss,
        train_op=train_op,
        eval_metric_ops=eval_metric_ops)
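The snippet calls an average_gradients helper that lives in the full code rather than in the excerpt above. A minimal sketch in the spirit of the CIFAR-10 multi-GPU tutorial, assuming tower_grads holds one list of (gradient, variable) pairs per tower, over the same variables in the same order and with no None gradients:

import tensorflow as tf

def average_gradients(tower_grads):
    """Average the gradients of all towers; this is the synchronization point."""
    average_grads = []
    # zip(*tower_grads) groups the (grad, var) pairs that belong to the same variable.
    for grad_and_vars in zip(*tower_grads):
        grads = [tf.expand_dims(g, 0) for g, _ in grad_and_vars]
        grad = tf.reduce_mean(tf.concat(grads, axis=0), axis=0)
        # The variable is shared across towers, so take it from the first tower.
        average_grads.append((grad, grad_and_vars[0][1]))
    return average_grads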
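Hooking the model_fn up to an Estimator then follows the usual pattern. A hedged sketch, where model_dir, num_classes, gpus_list and train_input_fn are placeholders for whatever the full code actually uses:

model = tf.estimator.Estimator(
    model_fn=model_fn,
    model_dir="/tmp/multi_gpu_model",  # placeholder checkpoint directory
    params={"num_classes": 10,         # placeholder values
            "gpus_list": ["/gpu:0", "/gpu:1"]})
model.train(input_fn=train_input_fn, steps=10000)  # train_input_fn supplied elsewhere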
