TensorFlow performance drop on the second computation



I am new to TensorFlow 2.0 and I am considering using its GPU processing capabilities for some matrix computations. So I tried it on some large matrix multiplications while measuring the performance. When I run it on one large matrix it is very fast, but when I then run it on other matrices it becomes slow. Even initializing very small tensors is slow. Is this because the matrices take up too much memory? The problem persists even after I delete the variables with Python's del.

My Python code:
import tensorflow as tf
import numpy as np
import time

a = np.ones((9000,4000))
b = np.ones((4000,9000))
a2 = [a,a,a,a,a,a,a]
b2 = [b,b,b,b,b,b,b]
a3 = np.ones((7,9000,4000))
b3 = np.ones((7,4000,9000))
with tf.device('/gpu:0'):
    # first multiplication
    a2 = tf.convert_to_tensor(a)
    b2 = tf.convert_to_tensor(b)
    start = time.time()
    c = tf.matmul([b2, b2, b2, b2, b2, b2, b2], [a2, a2, a2, a2, a2, a2, a2])
    print("first multiplication time: ", time.time() - start)
    del c, a2, b2

    # second multiplication
    a3 = tf.convert_to_tensor(a3)
    b3 = tf.convert_to_tensor(b3)
    start = time.time()
    c = tf.matmul(b3, a3)
    print("second multiplication time: ", time.time() - start)
    del c, a3, b3

    # third multiplication
    start = time.time()
    n = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='n')
    m = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='m')
    print("constant init time: ", time.time() - start)
    c = tf.matmul([n, n], [m, m])
    print("constant init plus third multiplication time: ", time.time() - start)

Output (TensorFlow info messages omitted):

first multiplication time:  0.7032458782196045
2021-02-07 20:40:36.004254: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 2016000000 exceeds 10% of free system memory.
2021-02-07 20:40:36.588404: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 2016000000 exceeds 10% of free system memory.
second multiplication time:  6.460264682769775
constant init time:  6.7629804611206055
constant init plus third multiplication time:  6.76327919960022

When I comment out the first multiplication, the output becomes:

2021-02-07 20:44:29.165061: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 2016000000 exceeds 10% of free system memory.
2021-02-07 20:44:29.763323: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 2016000000 exceeds 10% of free system memory.
second multiplication time:  0.9040727615356445
constant init time:  7.273072242736816
constant init plus third multiplication time:  7.273530006408691

When I run only the third computation:

constant init time:  0.0499725341796875
constant init plus third multiplication time:  0.4284539222717285

I would really like to understand what is going on here, and ideally find a way to improve it.

Thanks for your help!

This is because you are not transferring the tensors back from the GPU to the CPU, so they keep occupying GPU memory. I am not sure about del; technically it should work in eager mode, but there was a bug related to a memory leak (I am not sure whether it has been fixed yet).
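As a rough way to check whether del actually frees the device memory on your setup, you could print the allocator's current usage before and after deleting a tensor. This is only a sketch and assumes a TF 2.x version that provides tf.config.experimental.get_memory_info (older releases have a similar get_memory_usage helper):

import gc
import tensorflow as tf

def gpu_mem_mb(device='GPU:0'):
    # 'current' = bytes currently held by TensorFlow's allocator on this device
    return tf.config.experimental.get_memory_info(device)['current'] / 1e6

with tf.device('/gpu:0'):
    x = tf.ones((9000, 4000), dtype=tf.float64)
    print("after allocation: ", gpu_mem_mb(), "MB")
    del x
    gc.collect()  # make sure the Python reference is really gone
    print("after del:        ", gpu_mem_mb(), "MB")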

If you call an additional function after tf.matmul

c = tf.matmul(b3, a3).numpy()  # calling .numpy() copies the result back to the CPU

you should get the correct timings:

first multiplication time:  8.76913070678711
second multiplication time:  8.516901731491089
constant init time:  0.0011458396911621094
constant init plus third multiplication time:  0.0024268627166748047
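
For reference, here is a minimal sketch of how the question's script could look with the .numpy() call added after each matmul (same shapes as above; .numpy() waits for the GPU result and copies it to the host, which is what makes the measured times meaningful):

import time
import numpy as np
import tensorflow as tf

a = np.ones((9000, 4000))
b = np.ones((4000, 9000))
a3 = np.ones((7, 9000, 4000))
b3 = np.ones((7, 4000, 9000))

with tf.device('/gpu:0'):
    # first multiplication
    a2 = tf.convert_to_tensor(a)
    b2 = tf.convert_to_tensor(b)
    start = time.time()
    c = tf.matmul([b2] * 7, [a2] * 7).numpy()  # .numpy() copies the result back to the CPU
    print("first multiplication time: ", time.time() - start)

    # second multiplication
    a3 = tf.convert_to_tensor(a3)
    b3 = tf.convert_to_tensor(b3)
    start = time.time()
    c = tf.matmul(b3, a3).numpy()
    print("second multiplication time: ", time.time() - start)

    # third multiplication
    start = time.time()
    n = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='n')
    m = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='m')
    print("constant init time: ", time.time() - start)
    c = tf.matmul([n, n], [m, m]).numpy()
    print("constant init plus third multiplication time: ", time.time() - start)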

Let me know if anything is missing…
