Applying a function to a 3D tensor while ignoring zero rows and padding

I am currently trying to improve the runtime of the most expensive operation in my TensorFlow pipeline.

Here is what I am trying to accomplish: I am given a 3D tensor containing multiple samples of patient data. For example, the data might look like this:

n_hidden = 3  #number of elements per 1D tensor
batch_size = 3 #number of patients
n_mc_samples = 2 #number of samples per patient
rnn_grid_times = [2,3,1] #number of non zero 1D tensors per patient
all_outputs = tf.constant([[[0.15, 0.874, 0.2], [0.1, 0.00878, 0.58], [0.0, 0.0, 0.0]],        #beginning of patient 1
                           [[0.456, 0.454, 0.003], [0.4564, 0.4984, 0.21], [0.0, 0.0, 0.0]],
                           [[0.121, 0.22, 0.45], [0.15, 0.488, 0.222], [0.11, 0.849, 0.45]],   #beginning of patient 2
                           [[0.15, 0.5646, 0.15], [0.45, 0.48949, 0.56465], [0.4489, 0.456, 0.9]],
                           [[0.121, 0.22, 0.01], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]],            #beginning of patient 3
                           [[0.15, 0.89, 0.42], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]])

This data corresponds to 3 patients, each sampled twice. As you can see, the data for patients 1 and 3 is padded so that it has the same size as patient 2's data.

My goal is to feed each nonzero 1D tensor into a single-output neural network with one hidden layer, and then insert extra padding at the positions of the zero tensors so that the dimensions stay uniform across patients. So a valid result here might be:

[[-0.11379365, -0.11188659,  0.        ],
[-0.11379365, -0.11379365,  0.        ],
[-0.1135166 , -0.11379365, -0.11379365],
[-0.11379365, -0.11359671, -0.11270589],
[-0.11379365,  0.        ,  0.        ],
[-0.11379365,  0.        ,  0.        ]]

Just to reiterate, since I realize this is somewhat convoluted: the output corresponding to [0.15, 0.874, 0.2] in the first code block is the -0.11379365 in the second block.

Here is the code in isolation, using the toy data shown above. If you have a working TensorFlow environment, it should run without issue:

import tensorflow as tf

RANDOM_SEED = 42
tf.set_random_seed(RANDOM_SEED)

def code():
    n_hidden = 3
    batch_size = 3
    n_mc_samples = 2
    num_rnn_grid_times = tf.constant([2, 3, 1])
    all_outputs = tf.constant([[[0.15, 0.874, 0.2], [0.1, 0.00878, 0.58], [0.0, 0.0, 0.0]],        #beginning of patient 1
                               [[0.456, 0.454, 0.003], [0.4564, 0.4984, 0.21], [0.0, 0.0, 0.0]],
                               [[0.121, 0.22, 0.45], [0.15, 0.488, 0.222], [0.11, 0.849, 0.45]],   #beginning of patient 2
                               [[0.15, 0.5646, 0.15], [0.45, 0.48949, 0.56465], [0.4489, 0.456, 0.9]],
                               [[0.121, 0.22, 0.01], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]],            #beginning of patient 3
                               [[0.15, 0.89, 0.42], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]])

    n_extra_hidden_nodes = 2
    extra_hidden_weights = tf.Variable(tf.random_normal([n_hidden, n_extra_hidden_nodes], stddev=0.1), name="HiddenSoftmax/W")
    extra_hidden_biases = tf.Variable(tf.random_normal([n_extra_hidden_nodes], stddev=0.1), name="HiddenSoftmax/b")

    out_weights = tf.Variable(tf.random_normal([n_extra_hidden_nodes, 1], stddev=0.1), name="Softmax/W")
    out_biases = tf.Variable(tf.random_normal([1], stddev=0.1), name="Softmax/b")

    nneth_array_total = tf.Variable([])
    n = tf.constant(0)
    inner_cond = lambda i, nneth_array, n: tf.less(i, num_rnn_grid_times[tf.floordiv(n, n_mc_samples)])
    cond = lambda n, nneth_array_total: tf.less(n, batch_size * n_mc_samples)

    def inner_body(i, nneth_array, n):
        hidden = tf.nn.relu(tf.matmul(tf.expand_dims(all_outputs[n][i], 0), extra_hidden_weights) + extra_hidden_biases)
        nneth = tf.matmul(hidden, out_weights) + out_biases
        nneth = tf.reshape(nneth, [1])  #single output for the neural net
        nneth_array = tf.concat([nneth_array, nneth], 0)
        return i + 1, nneth_array, n

    def body(n, nneth_array_total):
        nneth_array = tf.Variable([])
        i = tf.constant(0)  #iterator over 1D tensors
        i, nneth_array, n = tf.while_loop(inner_cond, inner_body,
                                          loop_vars=[i, nneth_array, n],
                                          shape_invariants=[i.get_shape(), tf.TensorShape([None]), n.get_shape()])
        padding = tf.zeros([tf.reduce_max(num_rnn_grid_times) - num_rnn_grid_times[tf.floordiv(n, n_mc_samples)]], dtype=tf.float32)
        nneth_array = tf.concat([nneth_array, padding], 0)  #add extra zeros so that all nneth_arrays have the same shape
        nneth_array_total = tf.concat([nneth_array_total, nneth_array], 0)
        return n + 1, nneth_array_total

    n, nneth_array_total = tf.while_loop(cond, body, loop_vars=[n, nneth_array_total],
                                         shape_invariants=[n.get_shape(), tf.TensorShape([None])])
    nneth_array_total = tf.reshape(nneth_array_total, [batch_size * n_mc_samples, tf.reduce_max(num_rnn_grid_times)])
    preds = nneth_array_total
    return preds

if __name__ == '__main__':
    pred = code()
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    print(sess.run([pred]))

The code works, but it is very slow. It is part of a pipeline in which processing a single patient takes about 1.25 seconds, and a large share of that runtime appears to come from the code above. This means one epoch over my dataset takes about 12 hours, which is far too long compared to similar approaches.

I have googled around and found ways of applying a function to a multidimensional tensor, but none that also account for the padding. Any insights?

Providing the entire input, zero vectors included, will give the fastest processing time. But as you said, because of the biases in the network this will return nonzero outputs for the zero rows. Since you want the output to be zero wherever the input vector is zero, a simple trick is to apply a mask that zeroes out the predictions for zero input vectors.

A mask that is 1 where the input vector is nonzero and 0 otherwise can be obtained with simple logic:

a_sum = tf.reduce_sum(all_outputs, 2)
mask = tf.cast(tf.greater(a_sum, tf.zeros_like(a_sum)), tf.float32)

Then multiply the predictions with the mask.
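Putting it together, here is a minimal sketch of that approach (TF 1.x, reusing all_outputs, n_hidden, and the weight/bias variables from the question's code(); the flattening into a single pair of matmuls is my own choice, not something from the original pipeline):

a_sum = tf.reduce_sum(all_outputs, 2)
mask = tf.cast(tf.greater(a_sum, tf.zeros_like(a_sum)), tf.float32)  # [6, 3], 1.0 for real rows

# collapse the sample and time dimensions so every 1D tensor becomes a row,
# then run all of them through the network in one pass
flat = tf.reshape(all_outputs, [-1, n_hidden])                       # [18, 3]
hidden = tf.nn.relu(tf.matmul(flat, extra_hidden_weights) + extra_hidden_biases)
nneth = tf.matmul(hidden, out_weights) + out_biases                  # [18, 1]

# restore the [batch_size * n_mc_samples, max_grid_times] layout and
# zero out the positions that correspond to padded input rows
preds = tf.reshape(nneth, tf.shape(a_sum)) * mask

This replaces both tf.while_loops with two matrix multiplications over all rows at once, which is where the speedup should come from: the padded rows still pass through the network, but their outputs are zeroed by the mask. Note that the sum-based mask relies on the inputs being nonnegative, as in the toy data; with signed inputs a row could sum to zero without being all zeros.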
