How to make predictions when training with TensorFlow nce_loss



The docs at https://www.tensorflow.org/api_docs/python/tf/nn/nce_loss say to "calculate the full sigmoid loss for evaluation or inference". Can someone explain in detail how to predict labels at inference time?

As far as I understand, the model's last layer produces an output of shape (batch, num_class). During training that output goes straight into the NCE loss, which treats the task as a set of binary classification problems. At inference time, do I simply apply a sigmoid to the last-layer output and read entry i as the probability of class i? Or can I just take the largest entry as the class label, the way I would with a softmax?
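For reference, the pattern on the linked page seems to boil down to something like the sketch below (adapted from the docs' train/eval example; all shapes and ids here are made up for illustration):

import tensorflow as tf

# Toy shapes for illustration only.
batch, dim, n_classes, n_sampled = 4, 16, 1000, 8

inputs = tf.random.uniform([batch, dim])        # last hidden layer activations
weights = tf.random.uniform([n_classes, dim])   # the weights passed to nce_loss
biases = tf.zeros([n_classes])                  # the biases passed to nce_loss
labels = tf.constant([[3], [14], [159], [265]], dtype=tf.int64)

# Training: sampled binary loss over the true class plus a few negatives.
train_loss = tf.nn.nce_loss(weights=weights, biases=biases, labels=labels,
                            inputs=inputs, num_sampled=n_sampled,
                            num_classes=n_classes)

# Evaluation/inference: full logits over every class, then either the full
# sigmoid loss (what the docs describe) or an argmax for the predicted label.
logits = tf.nn.bias_add(tf.matmul(inputs, weights, transpose_b=True), biases)
labels_one_hot = tf.one_hot(tf.squeeze(labels, axis=1), n_classes)
eval_loss = tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels_one_hot, logits=logits),
    axis=1)
predicted = tf.argmax(logits, axis=1)
print(eval_loss.numpy(), predicted.numpy())

Is that the right reading?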

I don't really understand this part and haven't found any practical examples online. Any help is appreciated. Thanks in advance!

For sequence inputs such as word ids, NCE loss (noise-contrastive estimation) works by sampling: rather than scoring the input against every class, it draws a small sample of negative ("noise") classes at each training step and learns to tell the true label apart from those sampled negatives.
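To make the sampling step concrete, here is a tiny standalone illustration (the ids and sizes are assumptions for the demo, not taken from the references) of the log-uniform candidate sampler that nce_loss uses by default to draw its negative classes:

import tensorflow as tf

# The positive class id for one example (made up).
true_ids = tf.constant([[15]], dtype=tf.int64)

# Draw 16 negative ("noise") class ids out of a 5000-class vocabulary, the way
# nce_loss does by default when no sampled_values are supplied.
sampled_ids, true_expected, sampled_expected = tf.random.log_uniform_candidate_sampler(
    true_classes=true_ids, num_true=1, num_sampled=16, unique=True,
    range_max=5000, seed=10)

print(sampled_ids.numpy())  # the 16 class ids scored as negatives this step

Each training step then evaluates only these few classes instead of all 5000, which is what makes the loss cheap.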

Reference 0: https://www.tensorflow.org/api_docs/python/tf/nn/nce_loss

Reference 1: https://github.com/yl-1993/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/mnist_deep.py

Reference 2: https://www.programcreek.com/python/example/90447/tensorflow.nce_loss

[Sample]:

import math
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
None
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
config = tf.config.experimental.set_memory_growth(physical_devices[0], True)
print(physical_devices)
print(config)   
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Variables
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
learning_rate = 0.001
vocabulary_size = 5000
embedding_size = 16
n_sample = 16      # negative classes sampled per training step
tf.compat.v1.disable_eager_execution()
# Input data: one (input word id, target word id) pair is fed per step.
inputs = tf.compat.v1.placeholder(tf.int32, shape=[1], name='X')
labels = tf.compat.v1.placeholder(tf.int32, shape=[1, 1], name='Y')
# Look up embeddings for inputs.
embeddings = tf.Variable(
    tf.random.uniform([vocabulary_size, embedding_size], -1.0, 1.0))
embed = tf.nn.embedding_lookup(embeddings, inputs)
# Construct the variables for the NCE loss.
nce_weights = tf.Variable(
    tf.random.truncated_normal([vocabulary_size, embedding_size],
                               stddev=1.0 / math.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Compute the average NCE loss for the batch.
# tf.nn.nce_loss automatically draws a new sample of the negative labels each
# time we evaluate the loss.
loss = tf.reduce_mean(
    tf.nn.nce_loss(nce_weights, nce_biases,
                   labels=labels,
                   inputs=embed,
                   num_sampled=n_sample,
                   num_classes=vocabulary_size),
    name='loss')
optimizer = tf.compat.v1.train.ProximalAdagradOptimizer(
    learning_rate,
    initial_accumulator_value=0.1,
    l1_regularization_strength=0.2,
    l2_regularization_strength=0.1,
    use_locking=False,
    name='ProximalAdagrad')
training_op = optimizer.minimize(loss, name='minimize')

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: DataSet / Input
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
X = np.reshape([500], (1))     # a single input word id
Y = np.reshape([15], (1, 1))   # its target word id
history = []
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Training / Optimize
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())

    for i in range(1000):
        train_loss, _ = sess.run([loss, training_op],
                                 feed_dict={inputs: X, labels: Y})
        history.append(train_loss)

        print('steps: ' + str(i))

print(history)
plt.plot(history)
plt.show()
plt.close()
input('...')
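To tie the sample back to the original question: at inference time you score the input against the full vocabulary with the same nce_weights and nce_biases, then either read a sigmoid per class or take an argmax for the predicted label. A minimal sketch (not part of the original sample), meant to be defined next to the model above and evaluated inside the same session once training has finished:

# Full-vocabulary scores for the fed input, using the trained NCE parameters.
all_logits = tf.matmul(embed, nce_weights, transpose_b=True) + nce_biases  # (1, vocabulary_size)
all_probs = tf.sigmoid(all_logits)           # entry i ~ probability of class i (binary view)
predicted = tf.argmax(all_logits, axis=1)    # predicted label; same ranking as all_probs

# Inside the session above, after the training loop:
#     print(sess.run(predicted, feed_dict={inputs: X}))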
