I am a beginner with TensorFlow. With some help I built a simple autoencoder, and I want to convert the final decoded
tensor into a numpy array. I tried .eval(),
but I could not get it to work. How do I convert the tensor to numpy?
My input image is 512*512*1, and the data is in raw image format.
Code:
#input
image_size = 512
hidden = 256
input_image = np.fromfile('PATH',np.float32)
# Variables
x_placeholder = tf.placeholder("float", (image_size*image_size))
x = tf.reshape(x_placeholder, [image_size * image_size, 1])
w_enc = tf.Variable(tf.random_normal([hidden, image_size * image_size], mean=0.0, stddev=0.05))
w_dec = tf.Variable(tf.random_normal([image_size * image_size, hidden], mean=0.0, stddev=0.05))
b_enc = tf.Variable(tf.zeros([hidden, 1]))
b_dec = tf.Variable(tf.zeros([image_size * image_size, 1]))
#model
encoded = tf.sigmoid(tf.matmul(w_enc, x) + b_enc)
decoded = tf.sigmoid(tf.matmul(w_dec,encoded) + b_dec)
# Cost Function
cross_entropy = -1. * x * tf.log(decoded) - (1. - x) * tf.log(1. - decoded)
loss = tf.reduce_mean(cross_entropy)
train_step = tf.train.AdagradOptimizer(0.1).minimize(loss)
# Train
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print('Training...')
    for _ in xrange(10):
        loss_val, _ = sess.run([loss, train_step], feed_dict={x_placeholder: input_image})
        print loss_val
You can add decoded to the list of tensors returned by sess.run(), as shown below. decoded_val will be a numpy array, and you can reshape it to recover the original image shape.
Alternatively, you can call sess.run() outside the training loop to get the final decoded image.
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
#load_image
image_size = 16
k = 64
temp = np.zeros((image_size, image_size))
# Variables
x_placeholder = tf.placeholder("float", (image_size, image_size))
x = tf.reshape(x_placeholder, [image_size * image_size, 1])
w_enc = tf.Variable(tf.random_normal([k, image_size * image_size], mean=0.0, stddev=0.05))
w_dec = tf.Variable(tf.random_normal([image_size * image_size, k], mean=0.0, stddev=0.05))
b_enc = tf.Variable(tf.zeros([k, 1]))
b_dec = tf.Variable(tf.zeros([image_size * image_size, 1]))
#model
encoded = tf.sigmoid(tf.matmul(w_enc, x) + b_enc)
decoded = tf.sigmoid(tf.matmul(w_dec,encoded) + b_dec)
# Cost Function
cross_entropy = -1. * x * tf.log(decoded) - (1. - x) * tf.log(1. - decoded)
loss = tf.reduce_mean(cross_entropy)
train_step = tf.train.AdagradOptimizer(0.1).minimize(loss)
# Train
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print('Training...')
    for _ in xrange(10):
        loss_val, decoded_val, _ = sess.run([loss, decoded, train_step], feed_dict={x_placeholder: temp})
        print loss_val
    print('Done!')
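For the reshape step: the values that sess.run() hands back are already plain numpy arrays, so no further conversion is needed. A minimal sketch of recovering the 2-D image, using random data to stand in for the network's actual output:

```python
import numpy as np

image_size = 16

# sess.run() returns numpy arrays, so decoded_val already has shape
# (image_size * image_size, 1); random data stands in for the real
# decoder output here.
decoded_val = np.random.rand(image_size * image_size, 1).astype(np.float32)

# Reshape the flat column vector back to the original 2-D image shape.
decoded_image = decoded_val.reshape(image_size, image_size)

print(decoded_image.shape)   # (16, 16)
print(type(decoded_image))   # <class 'numpy.ndarray'>
```

The same reshape applies to the 512*512 case in the question, with image_size = 512.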