Memory leak when evaluating a CNN model for text classification



I have been adapting the CNN for text classification from this blog post: http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/

Everything works fine! However, when I try to predict new instances with the trained model, it consumes all available memory. As far as I can tell, no memory is freed as the model is evaluated and loaded over and over again.

This is the part of the code I am using:

with graph.as_default():
    session_conf = tf.ConfigProto(
        allow_soft_placement=FLAGS.allow_soft_placement,
        log_device_placement=FLAGS.log_device_placement)
    sess = tf.Session(config=session_conf)
    with sess.as_default():
        # Load the saved meta graph and restore variables
        saver = tf.train.import_meta_graph("{}.meta".format(checkpoint_file))
        saver.restore(sess, checkpoint_file)
        # Get the placeholders from the graph by name
        input_x = graph.get_operation_by_name("input_x").outputs[0]
        # input_y = graph.get_operation_by_name("input_y").outputs[0]
        dropout_keep_prob = graph.get_operation_by_name("dropout_keep_prob").outputs[0]
        # Tensors we want to evaluate
        predictions = graph.get_operation_by_name("output/predictions").outputs[0]
        # Add a vector for probas
        probas = graph.get_operation_by_name("output/scores").outputs[0]
        # Generate batches for one epoch
        print("\nGenerating batches...\n")
        gc.collect()
        #mem0 = proc.get_memory_info().rss
        batches = data_helpers.batch_iter(list(x_test), FLAGS.batch_size, 1, shuffle=False)
        #mem1 = proc.get_memory_info().rss
        print("\nBatches done...\n")
        #pd = lambda x2, x1: 100.0 * (x2 - x1) / mem0
        #print "Allocation: %0.2f%%" % pd(mem1, mem0)
        # Collect the predictions here
        all_predictions = []
        all_probas = []
        for x_test_batch in batches:
            # Calculate the probability of the prediction being good
            gc.collect()
            batch_probas = sess.run(tf.reduce_max(tf.nn.softmax(probas), 1), {input_x: x_test_batch, dropout_keep_prob: 1.0})
            batch_predictions = sess.run(predictions, {input_x: x_test_batch, dropout_keep_prob: 1.0})
            all_predictions = np.concatenate([all_predictions, batch_predictions])
            all_probas = np.concatenate([all_probas, batch_probas])
            # Add summary ops to collect data
            with tf.name_scope("eval") as scope:
                p_h = tf.histogram_summary("eval/probas", batch_probas)
                summary = sess.run(p_h)
                eval_summary_writer.add_summary(summary)

Any help would be greatly appreciated.

Cheers

Your evaluation loop creates new TensorFlow operations (tf.reduce_max(), tf.nn.softmax(), and tf.histogram_summary()) in every iteration, which causes more memory to be consumed over time. TensorFlow is most efficient when you run the same graph many times, because it can amortize the cost of optimizing the graph over the multiple executions. Therefore, to get the best performance, you should modify your program so that these operations are created once, before the for x_test_batch in batches: loop, and then the same operations are reused in each iteration.
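As a minimal sketch of that change, keeping the question's variable names (max_probas is a new, purely illustrative name), the loop body could be reduced to a single sess.run() call per batch:

        # Build the extra ops ONCE, outside the batch loop
        max_probas = tf.reduce_max(tf.nn.softmax(probas), 1)
        with tf.name_scope("eval"):
            p_h = tf.histogram_summary("eval/probas", max_probas)

        all_predictions = []
        all_probas = []
        for x_test_batch in batches:
            feed_dict = {input_x: x_test_batch, dropout_keep_prob: 1.0}
            # One sess.run() per batch; no new graph nodes are created here
            batch_predictions, batch_probas, summary = sess.run(
                [predictions, max_probas, p_h], feed_dict)
            all_predictions = np.concatenate([all_predictions, batch_predictions])
            all_probas = np.concatenate([all_probas, batch_probas])
            eval_summary_writer.add_summary(summary)

Because the histogram summary now observes the max_probas tensor directly, it can be built once as well. As an additional check, you can call graph.finalize() after building these ops; TensorFlow will then raise an error if anything still tries to add nodes to the graph inside the loop.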

