I'm trying to feed all the features except the first one through some layers (nn.Linear + nn.LeakyReLU), take the output, reassemble the original data structure, and feed that into the final layers. But training hangs forever and I never get any output.
To be clear, the code works fine without this step; I'm trying to improve the results by preprocessing some of the features (leaving the first feature untouched) before passing everything to the last layers.
Any help would be greatly appreciated.
Here is my code:
```
def forward(self, x):
    # save the residual for the skip connection
    res = x[:, :, 0:self.skip]
    xSignal = np.zeros((len(x), len(x[0]), 1))
    xParams = np.zeros((len(x), len(x[0]), len(x[0][0]) - 1))
    # separate data
    for b in range(len(x)):
        for c in range(len(x[b])):
            for d in range(len(x[b][c])):
                if d == 0:
                    xSignal[b][c][d] = x[b][c][d]
                else:
                    xParams[b][c][d-1] = x[b][c][d]
    # pass parameters through first network
    xParams = torch.from_numpy(xParams).cuda().float()
    xParams = self.paramsLinear(xParams)
    xParams = self.paramsLeakyRelu(xParams)
    # make new array with output and the signal
    xConcat = np.zeros((len(x), len(x[0]), len(x[0][0])))
    for b in range(len(x)):
        for c in range(len(x[b])):
            for d in range(len(x[b][c])):
                if d == 0:
                    xConcat[b][c][d] = xSignal[b][c][d]
                else:
                    xConcat[b][c][d] = xParams[b][c][d-1]
    # convert to tensor
    xConcat = torch.from_numpy(xConcat).cuda().float()
    # pass it through the recurrent part
    xConcat, self.hidden = self.rec(xConcat, self.hidden)
    # then the linear part and return
    return self.lin(xConcat) + res
```
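For context, the `forward` above assumes the module defines `self.paramsLinear`, `self.paramsLeakyRelu`, `self.rec`, `self.lin`, `self.skip`, and `self.hidden` in `__init__`. A minimal sketch of what that might look like; the layer sizes, the GRU choice, and the constructor arguments are assumptions for illustration, only the attribute names come from the `forward` shown above:

```
import torch
import torch.nn as nn

class Model(nn.Module):
    # hypothetical sketch: n_params, hidden_size and the GRU are assumed,
    # not taken from the original post
    def __init__(self, n_params=8, hidden_size=32, skip=1):
        super().__init__()
        self.skip = skip                                   # channels kept for the residual
        self.paramsLinear = nn.Linear(n_params, n_params)  # preprocesses features 1..N
        self.paramsLeakyRelu = nn.LeakyReLU()
        self.rec = nn.GRU(n_params + 1, hidden_size, batch_first=True)
        self.lin = nn.Linear(hidden_size, 1)
        self.hidden = None                                 # recurrent state carried across calls
```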
It turns out that slicing is faster and easier than iterating. I also used the torch.cat function to put everything back into a single tensor.
```
def forward(self, x):
    # save the residual for the skip connection
    res = x[:, :, 0:self.skip]
    # split features
    xSignal = x[:, :, 0:1]
    xParams = x[:, :, 1:]
    # pass only some features through the first layers
    xParams = self.paramsLinear(xParams)
    xParams = self.paramsLeakyRelu(xParams)
    # put everything back together
    x = torch.cat((xSignal, xParams), 2)
    # pass it through the last layers
    x, self.hidden = self.rec(x, self.hidden)
    # then the linear part and return
    return self.lin(x) + res
```
It is now training as expected :)
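If you want to convince yourself that the slice-and-cat approach preserves the feature order, you can skip the linear layers and check that splitting and re-concatenating reproduces the original tensor. The shapes below are just an illustrative example, not the ones from my model:

```
import torch

# illustrative shapes: batch of 4 sequences, 16 time steps, 1 signal + 8 parameter features
x = torch.randn(4, 16, 9)

xSignal = x[:, :, 0:1]   # first feature only, shape (4, 16, 1)
xParams = x[:, :, 1:]    # remaining features, shape (4, 16, 8)

# with no processing in between, concatenating along dim 2 restores the original tensor
assert torch.equal(torch.cat((xSignal, xParams), 2), x)
```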