Why does my TensorFlow code run slower and slower as training progresses?


import tensorflow as tf

input_data = tf.constant([[1., 1.]])
output_data = tf.constant([[1., 0.]])
weight = tf.Variable([[1., 1.],
                      [1., 1.]])
sess = tf.Session()
sess.run(tf.global_variables_initializer())
optimizer = tf.train.GradientDescentOptimizer(0.1)
for epoch in range(1000):
    y = tf.matmul(input_data, weight)
    loss = (output_data[0][0] - y[0][0])**2 + (output_data[0][1] - y[0][1])**2
    sess.run(optimizer.minimize(loss))
    print(epoch)

The program above gets slower and slower over time. I believe this is because each epoch keeps adding new nodes to the graph. How should I deal with this?

Try this...

import time
import tensorflow as tf

input_data = tf.constant([[1., 1.]])
output_data = tf.constant([[1., 0.]])
weight = tf.Variable([[1., 1.],
                      [1., 1.]])
optimizer = tf.train.GradientDescentOptimizer(0.1)

# Build the graph once, outside the training loop.
y = tf.matmul(input_data, weight)
loss = (output_data[0][0] - y[0][0])**2 + (output_data[0][1] - y[0][1])**2
train = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print('Initial weights: ', sess.run(weight))
    for epoch in range(1000):
        st = time.time()
        sess.run(train)
        print('Epoch %3d : %.3f ms' % (epoch, 1e3 * (time.time() - st)))
    print('Weights: ', sess.run(weight))

The original code re-creates graph nodes every epoch: each call to `tf.matmul` and `optimizer.minimize` inside the loop adds new operations to the default graph, so the graph keeps growing and every `sess.run` gets more expensive. Built this way, the graph is constructed only once, and the only work inside the loop is the gradient computation/update.
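You can confirm this diagnosis by counting the operations in the graph before and after a few loop iterations. A minimal sketch (it uses the `tf.compat.v1` shim so it also runs under TensorFlow 2.x; under TensorFlow 1.x a plain `import tensorflow as tf` behaves the same):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    input_data = tf.constant([[1., 1.]])
    output_data = tf.constant([[1., 0.]])
    weight = tf.Variable([[1., 1.],
                          [1., 1.]])
    optimizer = tf.train.GradientDescentOptimizer(0.1)

    ops_before = len(graph.get_operations())
    # Mimic the original loop body: each iteration adds fresh
    # matmul, slicing, loss, and gradient ops to the graph.
    for _ in range(3):
        y = tf.matmul(input_data, weight)
        loss = (output_data[0][0] - y[0][0])**2 + (output_data[0][1] - y[0][1])**2
        optimizer.minimize(loss)
    ops_after = len(graph.get_operations())

print('ops before loop: %d, after 3 iterations: %d' % (ops_before, ops_after))
```

The op count grows on every iteration, which is exactly why the original program slows down: `sess.run` has to work against an ever-larger graph, and the per-iteration graph-construction cost itself increases over time.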
