Multi-GPU training slower than single GPU on TensorFlow



I have created 3 virtual GPUs (on a machine with 1 GPU) and am trying to speed up the vectorization of images. However, using the code provided below, with manual device placement taken from the docs (here), I get a strange result: running on all GPUs is twice as slow as on a single GPU. I also checked this code on a machine with 3 physical GPUs (removing the virtual device initialization); it behaves the same way.

Environment: Python 3.6, Ubuntu 18.04.3, tensorflow-gpu 1.14.0.

Code (this example creates 3 virtual devices, so you can test it on a PC with a single GPU):

import os
import time

import numpy as np
import tensorflow as tf
from PIL import Image  # needed for Image.open below

start = time.time()


def load_graph(frozen_graph_filename):
    # We load the protobuf file from the disk and parse it to retrieve the
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    # Then, we import the graph_def into a new Graph and return it
    with tf.Graph().as_default() as graph:
        # The name var will prefix every op/node in your graph
        # Since we load everything in a new graph, this is not needed
        tf.import_graph_def(graph_def, name="")
    return graph


path_to_graph = '/imagenet/'  # Path to imagenet folder where graph file is placed
GRAPH = load_graph(os.path.join(path_to_graph, 'classify_image_graph_def.pb'))

# Create Session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9
config.gpu_options.allow_growth = True
session = tf.Session(graph=GRAPH, config=config)

output_dir = '/vectors/'  # where to save vectors from images
image_list = ['1.jpg', '2.jpg', '3.jpg']  # list with images to vectorize (tested on 100 and 1000 examples)

# Single GPU vectorization
for image_index, image in enumerate(image_list):
    with Image.open(image) as f:
        image_data = f.convert('RGB')
    feature_tensor = session.graph.get_tensor_by_name('pool_3:0')
    feature_vector = session.run(feature_tensor, {'DecodeJpeg:0': image_data})
    feature_vector = np.squeeze(feature_vector)
    outfile_name = os.path.basename(image) + ".vc"
    out_path = os.path.join(output_dir, outfile_name)
    # Save vector
    np.savetxt(out_path, feature_vector, delimiter=',')
print(f"Single GPU: {time.time() - start}")

start = time.time()
print("Start calculation on multiple GPU")
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Create 3 virtual GPUs with 1GB memory each
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
             tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
             tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)

print("Create prepared ops")
start1 = time.time()
gpus = logical_gpus  # comment this line to use physical GPU devices for calculations
# Assign a chunk of the list to each GPU
# chunk = len(image_list) // 3
# image_list1, image_list2, image_list3 = (image_list[:chunk],
#                                          image_list[chunk:2 * chunk],
#                                          image_list[2 * chunk:])
selected_list = image_list  # comment this line if you want to assign a chunk of the list to each GPU manually
output_vectors = []
if gpus:
    # Replicate your computation on multiple GPUs
    feature_vectors = []
    for gpu in gpus:  # iterating over virtual GPU devices, not physical
        with tf.device(gpu.name):
            print(f"Assign list of images to {gpu.name.split(':', 4)[-1]}")
            # Try to assign a chunk of the image list to each GPU - takes the same time as a single GPU
            # if gpu.name.split(':', 4)[-1] == "GPU:0":
            #     selected_list = image_list1
            # if gpu.name.split(':', 4)[-1] == "GPU:1":
            #     selected_list = image_list2
            # if gpu.name.split(':', 4)[-1] == "GPU:2":
            #     selected_list = image_list3
            for image_index, image in enumerate(selected_list):
                with Image.open(image) as f:
                    image_data = f.convert('RGB')
                feature_tensor = session.graph.get_tensor_by_name('pool_3:0')
                feature_vector = session.run(feature_tensor, {'DecodeJpeg:0': image_data})
                feature_vectors.append(feature_vector)
print("All images has been assigned to GPU's")
print(f"Time spend on prep ops: {time.time() - start1}")

print("Start calculation on multiple GPU")
start1 = time.time()
for image_index, image in enumerate(image_list):
    feature_vector = np.squeeze(feature_vectors[image_index])
    outfile_name = os.path.basename(image) + ".vc"
    out_path = os.path.join(output_dir, outfile_name)
    # Save vector
    np.savetxt(out_path, feature_vector, delimiter=',')
# Close session
session.close()
print(f"Calc on GPU's spend: {time.time() - start1}")
print(f"All time, spend on multiple GPU: {time.time() - start}")

Output (from a run over a list of 100 images):

1 Physical GPU, 3 Logical GPUs
Single GPU: 18.76301646232605
Start calculation on multiple GPU
Create prepared ops
Assign list of images to GPU:0
Assign list of images to GPU:1
Assign list of images to GPU:2
All images has been assigned to GPU's
Time spend on prep ops: 18.263537883758545
Start calculation on multiple GPU
Calc on GPU's spend: 11.697082042694092
All time, spend on multiple GPU: 29.960679531097412

What I tried: splitting the list of images into 3 chunks and assigning a chunk to each GPU (see the commented-out lines in the code). This reduces the multi-GPU time to 17 seconds, slightly faster (~5%) than the single-GPU run of 18 seconds.

Expected result: the multi-GPU version is faster than the single-GPU version (at least a 1.5x speedup).

My idea of why this happens: I have written the computation the wrong way.

There are two basic misunderstandings causing your trouble:

  1. with tf.device(...): applies to the graph nodes created within the scope, not to Session.run calls (see the minimal sketch after this list).

  2. Session.run is a blocking call. Session.run calls don't run in parallel; TensorFlow can only parallelize the contents of a single Session.run.
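
For illustration, a minimal sketch of point 1 (TF 1.x style, to match your code, with a trivial made-up graph): the device scope matters when an op is created, not when it is run.

import tensorflow as tf

g = tf.Graph()
with g.as_default():
    with tf.device('/GPU:0'):
        a = tf.constant([1.0, 2.0])
        b = a * 2  # `b` is pinned to GPU:0 here, at graph-construction time

with tf.Session(graph=g) as sess:
    with tf.device('/GPU:1'):  # no effect: no graph nodes are created inside
        print(sess.run(b))     # `b` still executes on GPU:0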

Modern TF (>= 2.0) can make this much easier.

Mainly, you can stop using tf.Session and tf.Graph. Use @tf.function instead; I believe this basic structure will work:

@tf.function
def my_function(inputs, gpus, model):
    results = []
    for input, gpu in zip(inputs, gpus):
        with tf.device(gpu):
            results.append(model(input))
    return results
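
As a hypothetical usage sketch (the model choice, batch shapes, and tf.unstack split are my illustrative assumptions, not from your code), you could split a batch across the logical GPUs created earlier:

import tensorflow as tf

gpus = [g.name for g in tf.config.experimental.list_logical_devices('GPU')]
model = tf.keras.applications.MobileNetV2(weights=None)  # any callable works

# One chunk of 8 random images per logical GPU, just for timing
batch = tf.random.uniform([len(gpus), 8, 224, 224, 3])
outputs = my_function(tf.unstack(batch), gpus, model)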

But you'll want to try a more realistic test. With only 3 images you can't measure actual performance at all.

Also note:

  1. The tf.distribute.Strategy classes can help simplify some of this by separating the device specification from the @tf.function being run: strategy.experimental_run_v2(my_function, args=(dataset_inputs,)) (see the sketch after this list).
  2. Using a tf.data.Dataset input pipeline will help you overlap loading/preprocessing with model execution.
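
A rough sketch combining both notes, assuming TF 2.x; the file pattern, model, image size, and batch size are placeholders:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # one replica per (logical) GPU
with strategy.scope():
    model = tf.keras.applications.MobileNetV2(weights=None)

@tf.function
def step(images):
    return model(images)

def load(path):  # decode/resize runs on the CPU, overlapped with the GPUs
    image = tf.image.decode_jpeg(tf.io.read_file(path), channels=3)
    return tf.image.resize(image, [224, 224])

files = tf.data.Dataset.list_files('/images/*.jpg')  # placeholder pattern
ds = (files.map(load, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(32)
           .prefetch(tf.data.experimental.AUTOTUNE))

for batch in strategy.experimental_distribute_dataset(ds):
    per_replica = strategy.experimental_run_v2(step, args=(batch,))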

But if you really intend to do this with tf.Graph and tf.Session, I think you basically need to reorganize your code, from this:

# Your code:
# Builds a graph
graph = build_graph()
for gpu in gpus:
    with tf.device(gpu):
        # Calls `session.run` once in each device scope.
        session.run(...)

to this:

g = tf.Graph()
with g.as_default():
    results = []
    for gpu in gpus:
        # Build the graph on each device
        input = iterator.get_next()
        with tf.device(gpu):
            results.append(my_function(input))

# Use a single `Session.run` call
np_result = session.run(results, feed_dict={inputs: my_inputs})
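
Applied to your frozen-graph feature extractor, that reorganization could look roughly like the sketch below. To be clear about the assumptions: `graph_def` and `logical_gpus` are taken from your code above, `image_chunk` (one decoded image per GPU) is a hypothetical variable, and the tower naming is illustrative. The key point is that tf.import_graph_def is called once per device with input_map rewiring each copy's input, and all the pool_3 tensors are fetched in one Session.run.

import tensorflow as tf

g = tf.Graph()
with g.as_default():
    feeds, fetches = [], []
    for i, gpu in enumerate(logical_gpus):
        with tf.device(gpu.name):
            image_ph = tf.placeholder(tf.uint8, [None, None, 3], name=f'image_{i}')
            # Re-import the frozen graph on this device, rewiring its input
            (pool_3,) = tf.import_graph_def(
                graph_def,
                input_map={'DecodeJpeg:0': image_ph},
                return_elements=['pool_3:0'],
                name=f'tower_{i}')
            feeds.append(image_ph)
            fetches.append(pool_3)

with tf.Session(graph=g) as sess:
    # One blocking call; the three towers can now run in parallel
    vectors = sess.run(fetches, feed_dict=dict(zip(feeds, image_chunk)))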
