Is it possible to do real-time training and testing in a deep neural network?



My question is about training and testing in real time, rather than training first and then testing separately afterwards.

As far as I know, the deep neural network in the code below trains first, and only after training is complete does it start testing. My question is: what if I want to train at every step, and then, whenever the output is below some threshold, perform a test, and so on?

### imports
import tensorflow as tf

### constant data
x  = [[0.,0.],[1.,1.],[1.,0.],[0.,1.]]
y_ = [[0.],[0.],[1.],[1.]]

### induction
# 1x2 input -> 2x3 hidden sigmoid -> 3x1 sigmoid output

# Layer 0 = the x2 inputs
x0 = tf.constant( x  , dtype=tf.float32 )
y0 = tf.constant( y_ , dtype=tf.float32 )

# Layer 1 = the 2x3 hidden sigmoid
m1 = tf.Variable( tf.random_uniform( [2,3] , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
b1 = tf.Variable( tf.random_uniform( [3]   , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
h1 = tf.sigmoid( tf.matmul( x0, m1 ) + b1 )

# Layer 2 = the 3x1 sigmoid output
m2 = tf.Variable( tf.random_uniform( [3,1] , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
b2 = tf.Variable( tf.random_uniform( [1]   , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
y_out = tf.sigmoid( tf.matmul( h1, m2 ) + b2 )

### loss
# loss : sum of the squares of y0 - y_out
loss = tf.reduce_sum( tf.square( y0 - y_out ) )
# training step : gradient descent (learning rate 1.0) to minimize loss
train = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

### training
# run 500 times using all the X and Y
# print out the loss and any other interesting info
with tf.Session() as sess:
    sess.run( tf.global_variables_initializer() )
    for step in range(500):
        sess.run(train)
    results = sess.run([m1, b1, m2, b2, y_out, loss])
    labels  = "m1,b1,m2,b2,y_out,loss".split(",")
    for label, result in zip(labels, results):
        print("")
        print(label)
        print(result)
    print("")

For example, in the code above, the variable loss should be computed at every step, and whenever it falls below 0.01 a test should be performed.

Is this possible? How can the code above be modified to do this?

Thanks in advance.

Technically, this is easy to achieve.

loss_list = []
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for t in range(100):
        loss_val, _ = sess.run([loss, train])
        loss_list.append(loss_val)

There is no need to change the part of the code that builds the TensorFlow graph; you simply add loss to the fetches of sess.run().
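
Since loss_val is then an ordinary Python number at every step, the threshold condition from the question becomes a plain if statement inside the training loop. A minimal sketch, inside the same session as above (0.01 is the threshold suggested in the question):

for t in range(100):
    loss_val, _ = sess.run([loss, train])
    loss_list.append(loss_val)
    if loss_val < 0.01:
        # loss is low enough: run the test evaluation here
        # (see below for how to feed a separate test set)
        print('loss below threshold at step', t)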

Once you know the real-time loss value, you can use another feed dictionary for the test dataset (so I suggest you use tf.placeholder instead of tf.constant to feed the data), and run another sess.run to compute the loss on the test set.

Edit

I have refactored your code: basically, I replaced tf.constant with tf.placeholder, so that multiple feed dictionaries can interact with a single TensorFlow graph.

from matplotlib import pyplot as plt
import tensorflow as tf

tf.reset_default_graph()

x_train = [[0.,0.],[1.,1.],[1.,0.],[0.,1.]]
y_train = [[0.],[0.],[1.],[1.]]
x_test  = [[0.,0.],[.5,.5],[.5,0.],[0.,.5]]
y_test  = [[0.],[0.],[2.],[2.]]

# use placeholders instead so you can feed different inputs
x = tf.placeholder('float32', [None, 2])
y = tf.placeholder('float32')

# Layer 1 = the 2x3 hidden sigmoid
m1 = tf.Variable(tf.random_uniform([2,3], minval=0.1, maxval=0.9, dtype=tf.float32))
b1 = tf.Variable(tf.random_uniform([3], minval=0.1, maxval=0.9, dtype=tf.float32))
h1 = tf.sigmoid(tf.matmul(x, m1) + b1)

# Layer 2 = the 3x1 sigmoid output
m2 = tf.Variable(tf.random_uniform([3,1], minval=0.1, maxval=0.9, dtype=tf.float32))
b2 = tf.Variable(tf.random_uniform([1], minval=0.1, maxval=0.9, dtype=tf.float32))
y_out = tf.sigmoid(tf.matmul(h1, m2) + b2)

### loss
# loss : sum of the squares of y - y_out
loss = tf.reduce_sum(tf.square(y - y_out))
# training step : gradient descent (learning rate 1.0) to minimize loss
train = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

# the two feed dictionaries
feeddict_train = {x: x_train, y: y_train}
feeddict_test = {x: x_test, y: y_test}

### training
# run 500 times using all the X and Y, recording train/test loss at each step
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    train_loss, test_loss = [], []
    for step in range(500):
        loss_train, _ = sess.run([loss, train], feed_dict=feeddict_train)
        train_loss.append(loss_train)
        # insert whatever logic is needed here;
        # under the same graph (in the same session), use another feed dictionary
        loss_test = sess.run(loss, feed_dict=feeddict_test)
        test_loss.append(loss_test)

plt.plot(train_loss, 'r', label='train_loss')
plt.plot(test_loss, 'b', label='test_loss')
plt.legend(loc='best')
plt.show()

By running the code, you can see both the training loss and the test loss decreasing. You can insert whatever logic you need after the training loss has been evaluated.
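
For example, a minimal sketch of the threshold behaviour described in the question, reusing the graph and feed dictionaries defined above (the 0.01 cut-off is the asker's; adjust as needed):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(500):
        loss_train, _ = sess.run([loss, train], feed_dict=feeddict_train)
        # only evaluate on the test set once the training loss is low enough
        if loss_train < 0.01:
            loss_test = sess.run(loss, feed_dict=feeddict_test)
            print('step', step, 'train loss', loss_train, 'test loss', loss_test)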

One very important thing: when feeddict_test is used in sess.run(loss, feed_dict=feeddict_test), the model is not trained; it only evaluates the loss on the test dataset.
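
If you want to verify this, a quick sanity check you can run inside the same session (a sketch; m1_before and m1_after are just local names I am introducing here) confirms that no variables change when only the loss is fetched:

# evaluating only the loss performs no gradient update
m1_before = sess.run(m1)
_ = sess.run(loss, feed_dict=feeddict_test)  # train op is not fetched
m1_after = sess.run(m1)
assert (m1_before == m1_after).all()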
