Error when running TensorFlow code the second time



I am trying to run the following TensorFlow code, and it works fine the first time. If I try running it again, it keeps throwing an error saying

ValueError: Variable layer1/weights1 already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
File "C:UsersownerAnaconda3envsDeepLearning_NoGPUlibsite-packagestensorflowpythonframeworkops.py", line 1228, in __init__
self._traceback = _extract_stack()
File "C:UsersownerAnaconda3envsDeepLearning_NoGPUlibsite-packagestensorflowpythonframeworkops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:UsersownerAnaconda3envsDeepLearning_NoGPUlibsite-packagestensorflowpythonframeworkop_def_library.py", line 768, in apply_op
op_def=op_def)

If I restart the console and then run it, it runs fine once again.

Below is my implementation of the neural network.

import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
import tensorflow as tf
learning_rate = 0.001
training_epochs = 100
n_input = 9
n_output = 1
n_layer1_node = 100
n_layer2_node = 100
X_train = np.random.rand(100, 9)
y_train = np.random.rand(100, 1)
with tf.variable_scope('input'):
    X = tf.placeholder(tf.float32, shape=(None, n_input))

with tf.variable_scope('output'):
    y = tf.placeholder(tf.float32, shape=(None, 1))

# Layer 1
with tf.variable_scope('layer1'):
    weight_matrix1 = {'weights': tf.get_variable(name='weights1',
                                                 shape=[n_input, n_layer1_node],
                                                 initializer=tf.contrib.layers.xavier_initializer()),
                      'biases': tf.get_variable(name='biases1',
                                                shape=[n_layer1_node],
                                                initializer=tf.zeros_initializer())}
    layer1_output = tf.nn.relu(tf.add(tf.matmul(X, weight_matrix1['weights']), weight_matrix1['biases']))

# Layer 2
with tf.variable_scope('layer2'):
    weight_matrix2 = {'weights': tf.get_variable(name='weights2',
                                                 shape=[n_layer1_node, n_layer2_node],
                                                 initializer=tf.contrib.layers.xavier_initializer()),
                      'biases': tf.get_variable(name='biases2',
                                                shape=[n_layer2_node],
                                                initializer=tf.zeros_initializer())}
    layer2_output = tf.nn.relu(tf.add(tf.matmul(layer1_output, weight_matrix2['weights']), weight_matrix2['biases']))

# Output layer
with tf.variable_scope('layer3'):
    weight_matrix3 = {'weights': tf.get_variable(name='weights3',
                                                 shape=[n_layer2_node, n_output],
                                                 initializer=tf.contrib.layers.xavier_initializer()),
                      'biases': tf.get_variable(name='biases3',
                                                shape=[n_output],
                                                initializer=tf.zeros_initializer())}
    prediction = tf.nn.relu(tf.add(tf.matmul(layer2_output, weight_matrix3['weights']), weight_matrix3['biases']))

cost = tf.reduce_mean(tf.squared_difference(prediction, y))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())

    for epoch in range(training_epochs):
        session.run(optimizer, feed_dict={X: X_train, y: y_train})
        train_cost = session.run(cost, feed_dict={X: X_train, y: y_train})
        print(epoch, " epoch(s) done")

    print("training complete")

As the error suggests, I tried adding reuse=True as an argument to with tf.variable_scope(), but again that did not work.

I am running this in a conda environment, with Python 3.5 on Windows and CUDA 8 (though that should not matter, since this is not configured to run on the GPU).

This comes down to how TF works. You need to understand that TF has a "hidden" state - the graph being built. Most tf functions create ops in this graph (like every tf.Variable call, every arithmetic operation, and so on). The actual "execution", on the other hand, happens in a tf.Session(). Consequently, your code usually looks like this:

build_graph()
with tf.Session() as sess:
    process_something()

Since all the actual variables, results, and so on live only in the session, if you want to "run it twice" you would do

build_graph()
with tf.Session() as sess:
    process_something()
with tf.Session() as sess:
    process_something()
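
For concreteness, here is a minimal runnable version of that pattern (a sketch of mine, not code from the question; the variable v and the assign_add op are illustrative). It also shows that variable values live in the session, not in the graph: each new session starts out uninitialized.

import tensorflow as tf

# Build the graph once: a single variable plus an op that increments it.
v = tf.get_variable('v', shape=[], initializer=tf.zeros_initializer())
increment = tf.assign_add(v, 1.0)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(increment))  # 1.0

with tf.Session() as sess:
    # v's value lived in the previous session, not in the graph, so the
    # new session has to initialize it again before using it.
    sess.run(tf.global_variables_initializer())
    print(sess.run(increment))  # 1.0 again, not 2.0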

Note that I am building the graph only once. The graph is an abstract representation of the computation; it does not hold any computational state. When you try to do

build_graph()
with tf.Session() as sess:
    process_something()
build_graph()
with tf.Session() as sess:
    process_something()

you will likely get an error during the second build_graph(), due to it trying to create variables with the same names (which is what happens in your case), the graph being finalised, and so on. If you really need to run things this way, you simply have to reset the graph in between:

build_graph()
with tf.Session() as sess:
    process_something()
tf.reset_default_graph()
build_graph()
with tf.Session() as sess:
    process_something()

and everything will work fine.
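
Applied to the posted script, the fix is a single line right after the imports, so that re-running the whole thing in the same console always starts from an empty graph (a minimal sketch; everything below the added line stays exactly as in the question):

import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
import tensorflow as tf

tf.reset_default_graph()  # drop any ops/variables left over from a previous run

learning_rate = 0.001
# ... the rest of the script, unchanged

An alternative with the same effect (my suggestion, not part of the answer above) is to build each run into a fresh graph, e.g. wrapping the construction code in with tf.Graph().as_default(): and passing that graph to tf.Session(graph=...), so the default graph never accumulates stale variables.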
