backward() error in PyTorch LSTMCell: "Trying to backward through the graph a second time"



I get the error below. Can you tell me what the problem is?

PyTorch is the latest version. I tried wrapping the inputs and the initial hidden state in Variable(), but it didn't help.

Traceback (most recent call last):
  File "learnPytorch_lstm_short.py", line 81, in <module>
    main()
  File "learnPytorch_lstm_short.py", line 77, in main
    neural_network()
  File "learnPytorch_lstm_short.py", line 62, in neural_network
    loss.backward()
  File "/Users/xxx/opt/anaconda3/envs/torch_learn/lib/python3.8/site-packages/torch/_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/Users/xxx/opt/anaconda3/envs/torch_learn/lib/python3.8/site-packages/torch/autograd/__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

The code file is as follows:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.rnn = nn.LSTMCell(input_size=10, hidden_size=10)

    def forward(self, x, hx, cx):
        output = []
        for i in range(x.shape[0]):  # step through the sequence one time step at a time
            hx, cx = self.rnn(x[i], (hx, cx))
            output.append(hx)
        output = torch.stack(output, dim=0)
        return output, hx, cx

def neural_network():
    net = Net()
    net = net.float()
    net.zero_grad()
    criterion = nn.MSELoss()  # nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

    hx = Variable(torch.randn(3, 10))
    cx = Variable(torch.randn(3, 10))

    for epoch in range(1):
        running_loss = 0
        for i in range(20):
            inputs = Variable(torch.randn(10, 3, 10))  # time step, batch, hidden size
            labels = Variable(torch.randn(10, 3, 10))
            # print("input shape ", inputs.shape)
            optimizer.zero_grad()

            outputs, hx, cx = net(inputs, hx, cx)
            loss = criterion(outputs, labels)

            loss.backward()
            optimizer.step()
            running_loss += loss.item()

            if i % 2 == 1:
                print('[%d, %5d] loss: %.3f' %
                      (epoch + 1, i + 1, running_loss / 2000))
                running_loss = 0.0

    print('Finished training ')

def main():
    neural_network()

if __name__ == "__main__":
    main()

Detach the hidden/cell states before passing them in as inputs again:

outputs, hx, cx  = net(inputs, hx.detach(), cx.detach())

Also, Variable is deprecated; in your case, just remove the Variable wrapper, e.g. hx = torch.randn(3, 10). If you do need gradients, simply add requires_grad=True, e.g. hx = torch.randn(3, 10, requires_grad=True).
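
Putting both suggestions together, here is a minimal sketch of the corrected training loop (it reuses the Net class from the question unchanged; the function name neural_network_fixed is just for illustration):

import torch
import torch.nn as nn
import torch.optim as optim

def neural_network_fixed():
    net = Net()  # the Net class defined in the question
    criterion = nn.MSELoss()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

    # Plain tensors instead of Variable; the states themselves need no gradients.
    hx = torch.randn(3, 10)
    cx = torch.randn(3, 10)

    for i in range(20):
        inputs = torch.randn(10, 3, 10)  # time step, batch, hidden size
        labels = torch.randn(10, 3, 10)
        optimizer.zero_grad()

        # detach() cuts hx/cx loose from the previous iteration's graph,
        # so each backward() only walks the graph built in this iteration.
        outputs, hx, cx = net(inputs, hx.detach(), cx.detach())
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

Without detach(), the second iteration's loss still depends (through hx and cx) on the first iteration's graph, whose saved tensors were freed by the first backward(); detaching breaks that dependency instead of keeping the whole history alive with retain_graph=True.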
