Implementing an autoencoder with TensorFlow



I want to implement my own autoencoder in TensorFlow by modifying the code here: enter link description here. I want to wrap the code in a class. The class I implemented is:

import tensorflow as tf
class AutoEncoder:
    def __init__(self,input,hidden,learning_rate=0.01,training_epochs=50,
                 batch_size = 100, display_step = 10):
        print('hello, world\n')
        self.X = input
        self.hidden = hidden
        self.weights = []
        self.biases = []
        self.inputfeature = input.shape[1]
        self.learning_rate = learning_rate
        self.trainning_epochs = training_epochs
        self.batch_size = batch_size
        self.display_step = display_step
    def initialPara(self):
        weights = {
            'encoder_h1': tf.Variable(tf.random_normal([self.inputfeature,self.hidden])),
            'decoder_h1': tf.Variable(tf.random_normal([self.hidden,self.inputfeature]))
        }
        biases = {
            'encoder_b1': tf.Variable(tf.random_normal([self.hidden])),
            'decoder_b1': tf.Variable(tf.random_normal([self.inputfeature]))
        }
        self.weights = weights
        self.biases = biases
    def encoder(self,X):
        layer = tf.nn.sigmoid(
            tf.add(
                tf.matmul(X, self.weights['encoder_h1']),self.biases['encoder_b1']
            )
        )
        return layer
    def decoder(self,X):
        layer = tf.nn.sigmoid(
            tf.add(
                tf.matmul(X, self.weights['decoder_h1']),self.biases['decoder_b1']
            )
        )
        return layer
    def train(self):
        X = self.X
        batch_size = self.batch_size
        self.initialPara()
        encoder_op = self.encoder(X)
        decoder_op = self.decoder(encoder_op)
        y_pred = decoder_op
        y_true = X
        # define loss and optimizer, minimize the squared error
        cost = tf.reduce_mean(
            tf.pow(y_true-y_pred,2)
        )
        optimizer = tf.train.RMSPropOptimizer(self.learning_rate).minimize(cost)
        init = tf.initialize_all_variables()
        # launch the graph
        with tf.Session() as sess:
            sess.run(init)
            total_batch = int( X.shape[0]/batch_size )
            # training cycle
            for epoch in range(self.trainning_epochs):
                # loop over all batches
                for i in range(total_batch):
                    batch_xs = X[i*batch_size:(i+1)*batch_size]
                    _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
                #display logs per epoch step
                if epoch % self.display_step == 0:
                    print("Epoch:", '%04d'%(epoch+1),
                          "cost=","{:.9f}".foramt(c))
            print("optimization finished!!")
        self.encoderOp = encoder_op
        self.decoderOp = decoder_op

The class is called from a main function:

from AutoEncoder import *
import tensorflow as tf
import tflearn.datasets.mnist as mnist
from tensorflow.examples.tutorials.mnist import input_data
X,Y,testX,testY = mnist.load_data(one_hot=True)
autoencoder1 = AutoEncoder(X,10,learning_rate=0.01)
autoencoder1.train()

I get the following error:

Traceback (most recent call last):
  File "/home/zhq/Desktop/AutoEncoder/main.py", line 13, in <module>
    autoencoder1.train()
  File "/home/zhq/Desktop/AutoEncoder/AutoEncoder.py", line 74, in train
    _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
TypeError: unhashable type: 'numpy.ndarray'

I would like to know what is wrong with my code. Thanks in advance!

ZhQ

The problem is that if you want to feed data into the graph during a session, you need to use a placeholder. For example:

self.X = tf.placeholder(tf.float32, [None, input_dim])

A placeholder is a part of the graph that is filled in during the session through the feed dictionary. In your code, X is a numpy array, so using it both as the graph input and as the feed_dict key is what triggers the "unhashable type: 'numpy.ndarray'" error.
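Following that suggestion, here is a minimal sketch of how the train() method could be rewritten (the rest of the class is assumed unchanged; self.X still holds the numpy training data, and the local names data and X introduced here are my own, not from the original post):

import tensorflow as tf

class AutoEncoder:
    # ... __init__, initialPara, encoder and decoder stay as in the question ...
    def train(self):
        data = self.X                      # numpy training data passed to __init__
        batch_size = self.batch_size
        self.initialPara()
        # Placeholder: the graph input; feed_dict fills it with numpy batches at run time.
        X = tf.placeholder(tf.float32, [None, self.inputfeature])
        encoder_op = self.encoder(X)
        decoder_op = self.decoder(encoder_op)
        y_pred = decoder_op
        y_true = X
        cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
        optimizer = tf.train.RMSPropOptimizer(self.learning_rate).minimize(cost)
        init = tf.global_variables_initializer()
        with tf.Session() as sess:
            sess.run(init)
            total_batch = int(data.shape[0] / batch_size)
            for epoch in range(self.trainning_epochs):
                for i in range(total_batch):
                    batch_xs = data[i * batch_size:(i + 1) * batch_size]
                    # The dict key is the placeholder tensor, the value is the numpy batch.
                    _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
                if epoch % self.display_step == 0:
                    print("Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(c))
            print("Optimization finished!")
        self.encoderOp = encoder_op
        self.decoderOp = decoder_op

Equivalently, the placeholder can be created once in __init__ (as in the one-line snippet above) and stored on the instance, as long as the numpy data and the graph tensor live in separate attributes. The original error comes from using the numpy array itself as the feed_dict key.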
