PyTorch equivalent of index_add_ that takes the maximum (index_max)



In PyTorch, a tensor's index_add_ method sums values into the positions given by a provided index tensor:

import torch

idx = torch.LongTensor([0,0,0,0,1,1])
child = torch.FloatTensor([1, 3, 5, 10, 8, 1])
parent = torch.FloatTensor([0, 0])
parent.index_add_(0, idx, child)

The first four child values are summed into parent[0] and the next two into parent[1], so the result is tensor([19., 9.]).
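
For reference, the same accumulation can also be written with scatter_add_, which expects the index tensor to have the same shape as the source (a small sketch, not part of the original question):

parent2 = torch.zeros(2)
parent2.scatter_add_(0, idx, child)  # tensor([19., 9.])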

However, what I need is an index_max_, which does not exist in the API. Is there an efficient way to do this (without a loop and without allocating more memory)? A (bad) loop solution would be:

for i in range(max(idx) + 1):
    parent[i] = torch.max(child[idx == i])

This produces the expected result tensor([10., 8.]), but it is very slow.

A solution using indexing:

def index_max(child, idx, num_partitions):
    # Build a num_partitions x num_samples matrix `idx_tiled`:
    partition_idx = torch.arange(num_partitions, dtype=torch.long)
    partition_idx = partition_idx.view(-1, 1).expand(num_partitions, idx.shape[0])
    idx_tiled = idx.view(1, -1).repeat(num_partitions, 1)
    idx_tiled = (idx_tiled == partition_idx).float()
    # i.e. idx_tiled[i, j] == 1 if idx[j] == i, else 0
    # Non-matching entries become 0, so this assumes the child values are non-negative.
    parent = idx_tiled * child
    parent, _ = torch.max(parent, dim=1)
    return parent
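
A quick check with the tensors from above returns the expected maxima:

index_max(child, idx, num_partitions=2)  # tensor([10., 8.])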

Benchmark:

import timeit
setup = '''
import torch
def index_max_v0(child, idx, num_partitions):
    parent = torch.zeros(num_partitions)
    for i in range(max(idx) + 1):
        parent[i] = torch.max(child[idx == i])
    return parent
def index_max(child, idx, num_partitions):
    # Building a num_partitions x num_samples matrix `idx_tiled`
    # containing, for each row i, a 1 at column j if idx[j] == i, else 0
    partition_idx = torch.arange(num_partitions, dtype=torch.long)
    partition_idx = partition_idx.view(-1, 1).expand(num_partitions, idx.shape[0])
    idx_tiled = idx.view(1, -1).repeat(num_partitions, 1)
    idx_tiled = (idx_tiled == partition_idx).float()
    parent = idx_tiled * child
    parent, _ = torch.max(parent, dim=1)
    return parent
idx = torch.LongTensor([0,0,0,0,1,1])
child = torch.FloatTensor([1, 3, 5, 10, 8, 1])
num_partitions = torch.unique(idx).shape[0]
'''
print(min(timeit.Timer('index_max_v0(child, idx, num_partitions)', setup=setup).repeat(5, 1000)))
# > 0.05308796599274501
print(min(timeit.Timer('index_max(child, idx, num_partitions)', setup=setup).repeat(5, 1000)))
# > 0.024736385996220633
