PyTorch autograd -- grad can be implicitly created only for scalar outputs



I am using the autograd tool in PyTorch and have found myself in a situation where I need to access the values in a 1D tensor by integer indexing, like this:

    import torch
    from torch.autograd import Variable

    def basic_fun(x_cloned):
        res = []
        for i in range(len(x)):  # note: indexes via the global x
            res.append(x_cloned[i] * x_cloned[i])
        print(res)
        return Variable(torch.FloatTensor(res))

    def get_grad(inp, grad_var):
        A = basic_fun(inp)
        A.backward()
        return grad_var.grad

    x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
    x_cloned = x.clone()
    print(get_grad(x_cloned, x))

I get the following error message:

    [tensor(1., grad_fn=<ThMulBackward>), tensor(4., grad_fn=<ThMulBackward>), tensor(9., grad_fn=<ThMulBackward>), tensor(16., grad_fn=<ThMulBackward>), tensor(25., grad_fn=<ThMulBackward>)]
    Traceback (most recent call last):
      File "/home/mhy/projects/pytorch-optim/predict.py", line 74, in <module>
        print(get_grad(x_cloned, x))
      File "/home/mhy/projects/pytorch-optim/predict.py", line 68, in get_grad
        A.backward()
      File "/home/mhy/.local/lib/python3.5/site-packages/torch/tensor.py", line 93, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 90, in backward
        allow_unreachable=True)  # allow_unreachable flag
    RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
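
The error comes from the return line: torch.FloatTensor(res) builds a brand-new tensor from plain numbers, so the result has no grad_fn, and backward() finds nothing that requires grad. A minimal illustration of this, assuming a recent PyTorch:

    import torch

    x = torch.ones(3, requires_grad=True)
    r = x[0] * x[0]
    print(r.grad_fn)                   # a Mul backward node: r is in the graph
    t = torch.FloatTensor([float(r)])  # rebuilt from a plain Python float
    print(t.requires_grad, t.grad_fn)  # False None: the history is gone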

More generally, I am skeptical about how using a cloned version of a variable is supposed to keep that variable in the gradient computation. The variable itself is not actually used in the computation of A, so when you call A.backward(), it should not be part of that operation.

I'd appreciate help with this approach, or with a better way to avoid losing the gradient history while still indexing into a 1D tensor with requires_grad=True!
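
For what it's worth, clone() itself is recorded by autograd, so gradients computed through a clone do propagate back to the original tensor. A minimal check, assuming a recent PyTorch where plain tensors carry autograd state:

    import torch

    x = torch.tensor([1., 2., 3.], requires_grad=True)
    y = x.clone()              # clone() has its own grad_fn
    (y * y).sum().backward()   # differentiate sum(y_i^2) through the clone
    print(x.grad)              # tensor([2., 4., 6.]), i.e. 2*x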

**Edit (September 15):**

res is a list of zero-dimensional tensors containing the squares of 1 through 5. To concatenate them into a single tensor containing [1.0, 4.0, ..., 25.0], I changed return Variable(torch.FloatTensor(res)) to return torch.stack(res, dim=0), which produces tensor([ 1., 4., 9., 16., 25.], grad_fn=<StackBackward>).
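
Concretely, the stack-based version looks like this (a sketch; it iterates over the argument rather than the global x):

    def basic_fun(x_cloned):
        res = []
        for i in range(len(x_cloned)):
            res.append(x_cloned[i] * x_cloned[i])
        return torch.stack(res, dim=0)  # keeps grad_fn, unlike torch.FloatTensor(res)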

However, I now get this new error, raised by the A.backward() line:

    Traceback (most recent call last):
      File "<project_path>/playground.py", line 22, in <module>
        print(get_grad(x_cloned, x))
      File "<project_path>/playground.py", line 16, in get_grad
        A.backward()
      File "/home/mhy/.local/lib/python3.5/site-packages/torch/tensor.py", line 93, in backward
        torch.autograd.backward(self, gradient, retain_graph, create_graph)
      File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 84, in backward
        grad_tensors = _make_grads(tensors, grad_tensors)
      File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 28, in _make_grads
        raise RuntimeError("grad can be implicitly created only for scalar outputs")
    RuntimeError: grad can be implicitly created only for scalar outputs
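
This error is expected: backward() can only infer the output gradient for scalar (single-element) outputs. To keep the vector-valued basic_fun, you can pass the gradient explicitly instead; a minimal sketch:

    import torch

    x = torch.tensor([1., 2., 3., 4., 5.], requires_grad=True)
    A = torch.stack([x[i] * x[i] for i in range(len(x))], dim=0)  # shape (5,)
    A.backward(torch.ones_like(A))  # same as A.sum().backward()
    print(x.grad)                   # tensor([ 2.,  4.,  6.,  8., 10.])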

I changed basic_fun to the following, which solved my problem:

    def basic_fun(x_cloned):
        res = torch.FloatTensor([0])
        for i in range(len(x_cloned)):  # iterate over the argument, not the global x
            res += x_cloned[i] * x_cloned[i]
        return res

This version returns a scalar value.
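
As a sanity check, using basic_fun and get_grad as defined above: the function computes the sum of the squared elements, so the gradient should be 2*x:

    x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
    x_cloned = x.clone()
    print(get_grad(x_cloned, x))  # expected: tensor([ 2.,  4.,  6.,  8., 10.])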

> In the basic_fun function, the res variable is already a torch autograd Variable, so you don't need to convert it again. IMHO

    def basic_fun(x_cloned):
        res = []
        for i in range(len(x)):
            res.append(x_cloned[i] * x_cloned[i])
        print(res)
        # return Variable(torch.FloatTensor(res))
        return res[0]  # a single element, so implicit backward() works

    def get_grad(inp, grad_var):
        A = basic_fun(inp)
        A.backward()
        return grad_var.grad

    x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
    x_cloned = x.clone()
    print(get_grad(x_cloned, x))
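
As a closing note: on PyTorch 0.4 and later, Variable is deprecated and merged into Tensor, so the whole exercise can be written without it and without the indexing loop. A minimal modern sketch:

    import torch

    def basic_fun(x_cloned):
        return (x_cloned * x_cloned).sum()  # elementwise square, reduced to a scalar

    x = torch.tensor([1., 2., 3., 4., 5.], requires_grad=True)
    basic_fun(x.clone()).backward()         # scalar output: no gradient argument needed
    print(x.grad)                           # tensor([ 2.,  4.,  6.,  8., 10.])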
