Sampling from a tensor that depends on a random variable in TensorFlow



Is it possible to get samples from a tensor that depends on a random variable in TensorFlow? I need an approximate sampling distribution to use in a loss function that will be optimized. Specifically, in the example below, I want to be able to take samples of Y_output, so that I can compute the mean and variance of the output distribution and use those parameters in the loss function.

import tensorflow as tf

def sample_weight(mean, phi, seed=1):
    # Reparameterized sample: w = mean + eps * softplus(phi), eps ~ N(0, 1)
    # (the seed argument is accepted but unused in this snippet)
    P_epsilon = tf.distributions.Normal(loc=0., scale=1.0)
    epsilon_s = P_epsilon.sample([1])
    s = tf.multiply(epsilon_s, tf.log(1.0 + tf.exp(phi)))
    weight_sample = mean + s
    return weight_sample

X = tf.placeholder(tf.float32, shape=[None, 1], name="X")
Y_labels = tf.placeholder(tf.float32, shape=[None, 1], name="Y_labels")

# Trainable means and pre-softplus scales of the two weight distributions
u0, p0 = tf.Variable(0.0), tf.Variable(-3.0)
u1, p1 = tf.Variable(0.0), tf.Variable(-3.0)

sw0 = sample_weight(u0, p0)
sw1 = sample_weight(u1, p1)
Y_output = sw0 + tf.multiply(sw1, X)
loss = tf.losses.mean_squared_error(labels=Y_labels, predictions=Y_output)
train_op = tf.train.AdamOptimizer(0.5e-1).minimize(loss)
init_op = tf.global_variables_initializer()

losses = []
predictions = []
Fx = lambda x: 0.5*x + 5.0
xrnge = 50
xs, ys = build_toy_data(funcx=Fx, stdev=2.0, num=xrnge)  # helper that generates noisy data around Fx

with tf.Session() as sess:
    sess.run(init_op)
    iterations = 1000
    for i in range(iterations):
        _, stat = sess.run([train_op, loss], feed_dict={X: xs, Y_labels: ys})
        losses.append(stat)

Not sure whether this answers your question, but: when you have a Tensor downstream of a sampling Op (e.g., the Op created by the call to P_epsilon.sample([1])), the sampling Op will re-run and produce a fresh random value every time you call sess.run on a downstream Tensor. Example:

import tensorflow as tf
from tensorflow_probability import distributions as tfd
n = tfd.Normal(0., 1.)
s = n.sample()
y = s**2
sess = tf.Session()  # Don't actually do this -- use context manager
print(sess.run(y))
# ==> 0.13539088
print(sess.run(y))
# ==> 0.15465781
print(sess.run(y))
# ==> 4.7929106

If you want a whole batch of samples of y, you can do:

import tensorflow as tf
from tensorflow_probability import distributions as tfd
n = tfd.Normal(0., 1.)
s = n.sample(100)
y = s**2
sess = tf.Session()  # Don't actually do this -- use context manager
print(sess.run(y))
# ==> vector of 100 squared random normal values
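Applied to your setup, the same trick lets you draw many weight samples in one shot and estimate the mean and variance of Y_output with tf.nn.moments. A minimal sketch only, not your exact code: n_samples, the variable initializations, and the tf.nn.softplus parameterization of the scale are my assumptions.

import tensorflow as tf
from tensorflow_probability import distributions as tfd

n_samples = 100
X = tf.placeholder(tf.float32, shape=[None, 1], name="X")
u0, p0 = tf.Variable(0.0), tf.Variable(-3.0)
u1, p1 = tf.Variable(0.0), tf.Variable(-3.0)

eps = tfd.Normal(0., 1.)
# Draw n_samples reparameterized weights at once: shape [n_samples, 1]
sw0 = u0 + eps.sample([n_samples, 1]) * tf.nn.softplus(p0)
sw1 = u1 + eps.sample([n_samples, 1]) * tf.nn.softplus(p1)

# Broadcast [n_samples, 1] against [1, batch] -> [n_samples, batch]
Y_samples = sw0 + sw1 * tf.transpose(X)
# Per-input mean and variance across the n_samples draws,
# usable directly in a loss
Y_mean, Y_var = tf.nn.moments(Y_samples, axes=[0])

Because the weights are written in reparameterized form (mean + eps * scale), gradients of a loss built from Y_mean and Y_var flow back to u0, p0, u1, p1.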

We also have some cool tools in tensorflow_probability for the kind of thing you're driving at here, namely the Bijector API and the simpler trainable_distributions API.
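For instance, the Softplus bijector gives you the same unconstrained-to-positive mapping you are building by hand with tf.log(1.0 + tf.exp(phi)). A small illustration (a sketch only; the variable names and initial values are mine):

import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors

phi = tf.Variable(-3.0)               # unconstrained parameter
scale = tfb.Softplus().forward(phi)   # strictly positive: softplus(phi)
q_w = tfd.Normal(loc=tf.Variable(0.0), scale=scale)  # trainable weight distribution
w_sample = q_w.sample()               # reparameterized; gradients reach loc and phi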

(One other small nitpick: I'd suggest using tf.nn.softplus, or at the very least tf.log1p(tf.exp(x)), instead of tf.log(1.0 + tf.exp(x)). The latter has poor numerical properties due to floating-point imprecision, which the former two are optimized to avoid.)
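To see why, compare the two at a large input: exp(100) overflows float32, so the naive form returns inf, while tf.nn.softplus computes the same quantity stably. A quick check:

import tensorflow as tf

x = tf.constant(100.0)
naive = tf.log(1.0 + tf.exp(x))   # exp(100) overflows float32 -> inf
stable = tf.nn.softplus(x)        # computed stably -> ~100.0

with tf.Session() as sess:
    print(sess.run([naive, stable]))
    # ==> [inf, 100.0]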

Hope this is of some help!
