Differentiable convolution between two 1d signals in pytorch



I need to implement the convolution between a signal and a window in pytorch, and I want it to be differentiable. Since I couldn't find an existing function for tensors (I could only find ones with learnable parameters), I wrote one myself, but I can't make it work without breaking the computational graph. How can I do that? The function I made is:

def Convolve(a, b):
    conv = torch.zeros(a.shape[0], a.shape[1], requires_grad=True).clone()
    l = b.shape[0]
    r = int((l - 1) / 2)
    l2 = a.shape[1]
    for x in range(a.shape[0]):  # for every signal
        for x2 in range(a.shape[1]):  # for every time instant
            for x4 in range(l):  # compute the convolution (for every window value)
                if (x2 - r + x4 < 0 or x2 - r + x4 >= l2):  # if the index is out of bounds the result is 0 (to avoid zero padding)
                    conv[x][x2] += 0
                else:
                    conv[x][x2 - r + x4] += a[x][x2 - r + x4] * b[x4]  # otherwise it is window*signal
    return conv

where "a" is a 2d tensor (signal index, time) and "b" is a Hann window. The window length is odd.

Luckily, this can be achieved with pytorch primitives. You are probably looking for the functional `conv1d`. Below is how it works. I was not sure whether you wanted the derivative with respect to the input or the weights, so you have both; just keep the `requires_grad` that fits your needs:

import torch.nn.functional as F
# batch of 3 signals, each of length 11 with 1 channel
signal = torch.randn(3, 1, 11, requires_grad=True) 
# convolution kernel of size 3, expecting 1 input channel and 1 output channel
kernel = torch.randn(1, 1, 3, requires_grad=True)
# convoluting signal with kernel and applying padding
output = F.conv1d(signal, kernel, stride=1, padding=1, bias=None)
# output.shape is (3, 1, 11)
# backpropagating some kind of loss through the computational graph
output.sum().backward() 
print(kernel.grad)
>>> torch.tensor([[[...]]])
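To map this onto your exact setup, a minimal sketch (assuming, as in your question, a 2d signal tensor indexed as (signal index, time) and an odd-length Hann window) could look like this. Note that `F.conv1d` actually computes cross-correlation, so the kernel is flipped to obtain a true convolution, and `padding=(l-1)//2` keeps the output the same length as the input:

```python
import torch
import torch.nn.functional as F

signals = torch.randn(3, 11, requires_grad=True)  # (signal index, time)
window = torch.hann_window(5, periodic=False)     # odd-length Hann window

out = F.conv1d(
    signals.unsqueeze(1),                # add a channel dim -> (3, 1, 11)
    window.flip(0).view(1, 1, -1),       # flip: conv1d is cross-correlation
    padding=(window.shape[0] - 1) // 2,  # "same" output length
).squeeze(1)                             # back to (3, 11)

out.sum().backward()   # gradients flow back to the signals
print(signals.grad.shape)
```

Since the window here is a fixed Hann window rather than a learnable parameter, only `signals` carries `requires_grad`; set it on the window tensor instead (or as well) if you need that derivative.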
