TensorFlow: saving the model succeeds, but restoring it fails. What am I doing wrong?



I have been learning TensorFlow recently and I am obviously a newbie, but I have tried many approaches to this problem. I wrote the code below to train my model, and when the model.ckpt file already exists I want to restore it directly instead of training again. After training, my test accuracy is about 90%, but if I restore the model instead, the accuracy drops to about 10%. I think something is wrong with the way I restore the model. I only have two variables, named weights and biases. Here is the main part of my code:

def train(bottleneck_tensor, jpeg_data_tensor):
    image_lists = create_image_lists(TEST_PERCENTAGE, VALIDATION_PERCENTAGE)
    n_classes = len(image_lists.keys())
    # input
    bottleneck_input = tf.placeholder(tf.float32, [None, BOTTLENECK_TENSOR_SIZE],
                                      name='BottleneckInputPlaceholder')
    ground_truth_input = tf.placeholder(tf.float32, [None, n_classes], name='GroundTruthInput')
    # this is the new_layer code
    # with tf.name_scope('final_training_ops'):
    #     weights = tf.Variable(tf.truncated_normal([BOTTLENECK_TENSOR_SIZE, n_classes], stddev=0.001))
    #     biases = tf.Variable(tf.zeros([n_classes]))
    #     logits = tf.matmul(bottleneck_input, weights) + biases
    logits = transfer_new_layer.new_layer(bottleneck_input, n_classes)
    final_tensor = tf.nn.softmax(logits)
    # losses
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=ground_truth_input)
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    train_step = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(cross_entropy_mean)
    # calculate the accuracy
    with tf.name_scope('evaluation'):
        correct_prediction = tf.equal(tf.argmax(final_tensor, 1), tf.argmax(ground_truth_input, 1))
        evaluation_step = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    image_order_step = tf.arg_max(final_tensor, 1)
    saver = tf.train.Saver(tf.global_variables(), write_version=tf.train.SaverDef.V1)
    with tf.Session() as sess:
        init = tf.global_variables_initializer()
        sess.run(init)
        if os.path.exists('F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt'):
            saver.restore(sess, "F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt")
            reader = tf.train.NewCheckpointReader('F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt')
            all_variables = reader.get_variable_to_shape_map()
            for each in all_variables:
                print(each, all_variables[each])
                print(reader.get_tensor(each))
        else:
            print("retrain model")
            for i in range(STEPS):
                train_bottlenecks, train_ground_truth = get_random_cached_bottlenecks(
                    sess, n_classes, image_lists, BATCH, 'training', jpeg_data_tensor, bottleneck_tensor)
                sess.run(train_step,
                         feed_dict={bottleneck_input: train_bottlenecks, ground_truth_input: train_ground_truth})
                # evaluate accuracy on the validation data
                if i % 100 == 0 or i + 1 == STEPS:
                    validation_bottlenecks, validation_ground_truth = get_random_cached_bottlenecks(
                        sess, n_classes, image_lists, BATCH, 'validation', jpeg_data_tensor, bottleneck_tensor)
                    validation_accuracy = sess.run(evaluation_step, feed_dict={
                        bottleneck_input: validation_bottlenecks, ground_truth_input: validation_ground_truth})
                    print('Step %d: Validation accuracy on random sampled %d examples = %.1f%%' % (
                        i, BATCH, validation_accuracy * 100))
            saver.save(sess, 'F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt')
            print(tf.get_session_tensor("final_training_ops/Variable", dtype=float))
            print(tf.get_session_tensor("final_training_ops/Variable_1", dtype=float))
        print('Beginning Test')
        # test
        test_bottlenecks, test_ground_truth = get_tst_bottlenecks(sess, image_lists, n_classes,
                                                                  jpeg_data_tensor,
                                                                  bottleneck_tensor)
        # saver.restore(sess, 'F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt')
        test_accuracy = sess.run(evaluation_step, feed_dict={
            bottleneck_input: test_bottlenecks, ground_truth_input: test_ground_truth})
        print('Final test accuracy = %.1f%%' % (test_accuracy * 100))
        label_name_list = list(image_lists.keys())
        for label_index, label_name in enumerate(label_name_list):
            category = 'testing'
            for index, unused_base_name in enumerate(image_lists[label_name][category]):
                bottlenecks = []
                ground_truths = []
                print("real label %s:" % label_name)
                # print(unused_base_name)
                bottleneck = get_or_create_bottleneck(sess, image_lists, label_name, index, category,
                                                      jpeg_data_tensor, bottleneck_tensor)
                # saver.restore(sess, 'F:/_pythonWS/imageClassifier/ckpt/imagesClassFilter.ckpt')
                ground_truth = np.zeros(n_classes, dtype=np.float32)
                ground_truth[label_index] = 1.0
                bottlenecks.append(bottleneck)
                ground_truths.append(ground_truth)
                image_kind = sess.run(image_order_step, feed_dict={
                    bottleneck_input: bottlenecks, ground_truth_input: ground_truths})
                image_kind_order = int(image_kind[0])
                print("predicted label %s:" % label_name_list[image_kind_order])

Try this method to save and restore:

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(initVar)
    # restore saved model
    new_saver = tf.train.import_meta_graph('my-model.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('./'))

    # save model weights, after training process
    saver.save(sess, 'my-model')

Define a tf.train.Saver outside the session. Once training is finished, save the weights with saver.save(sess, 'my-model'), and restore them as shown above.
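
In other words, the save side and the restore side can live in two separate runs. Here is a minimal sketch of that split, using placeholder variable shapes rather than the real graph:

import tensorflow as tf

# run 1: build the graph, train, then save everything under the prefix 'my-model'
weights = tf.Variable(tf.truncated_normal([2048, 5], stddev=0.001), name='weights')
biases = tf.Variable(tf.zeros([5]), name='biases')
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training loop goes here ...
    saver.save(sess, 'my-model')   # also writes my-model.meta and a 'checkpoint' file

# run 2: recover the graph from the .meta file and load the trained weights;
# do not run the initializer again afterwards, or the restored values get overwritten
tf.reset_default_graph()
new_saver = tf.train.import_meta_graph('my-model.meta')
with tf.Session() as sess:
    new_saver.restore(sess, tf.train.latest_checkpoint('./'))
    # the variables now hold the values from training and can be evaluated directly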

Latest update

I finally know where I went wrong. The model is in fact restored successfully; the problem is that I build the label list in a random order on every run. When I use image_order_step = tf.arg_max(final_tensor, 1) to work out which class a test image belongs to, the index it returns is only meaningful relative to the label order used during training, and that order changes the next time I run the code while the restored weights and biases stay the same. For example, the first time the label list is [a1, a2, a3, a4, a5, a6]; image_order_step = tf.arg_max(final_tensor, 1) returns 3, so the prediction is a4. The next time the label list comes out as [a5, a3, a1, a6, a2, a4]; the result is still 3, so the prediction becomes a6. That is why the accuracy changed on every run, essentially at random. The lesson for me: be careful with every detail, otherwise one small mistake can leave you confused for a very long time. Done!
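
For anyone hitting the same thing: one way to make the index-to-label mapping stable is to fix the label order once and persist it next to the checkpoint, then reuse that list when decoding the tf.arg_max result. A sketch under that assumption (the file name and helper are made up; image_lists is the dict returned by create_image_lists):

import json
import os

LABEL_FILE = 'F:/_pythonWS/imageClassifier/ckpt/label_order.json'  # made-up path

def get_label_list(image_lists):
    # reuse the order that was in effect when the checkpoint was written,
    # otherwise create it once in a deterministic (sorted) order
    if os.path.exists(LABEL_FILE):
        with open(LABEL_FILE) as f:
            return json.load(f)
    label_list = sorted(image_lists.keys())
    with open(LABEL_FILE, 'w') as f:
        json.dump(label_list, f)
    return label_list

# usage: build the ground-truth vectors and decode predictions with the same list
# label_name_list = get_label_list(image_lists)
# image_kind = sess.run(image_order_step, feed_dict={...})
# print('predicted label: %s' % label_name_list[int(image_kind[0])])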