PyTorch: CrossEntropyLoss multi-target error



So I'm training a convolutional neural network. Here are the basic details:

  • original label dim = torch.Size([64, 1])
  • output from the net dim = torch.Size([64, 2])
  • loss type = nn.CrossEntropyLoss()
  • error = RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15
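The same error shows up with nothing but random tensors of these shapes (a stripped-down sketch, not the actual data pipeline; the message is from the PyTorch version in the traceback below):

import torch
import torch.nn as nn

output = torch.randn(64, 2)           # same shape as the net output: [64, 2]
y = torch.randint(0, 2, (64, 1))      # same shape as the labels: [64, 1]

loss_func = nn.CrossEntropyLoss()
loss = loss_func(output, y)           # RuntimeError: multi-target not supported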

Where am I going wrong?

Training:

EPOCHS        = 5
LEARNING_RATE = 0.0001
BATCH_SIZE    = 64
net = Net().to(device)
optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE)
loss_log = []
loss_log = train(net, trainSet, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE)

Train function:

def train(net, train_set, loss_log=[], EPOCHS=5, LEARNING_RATE=0.001, BATCH_SIZE=32):
    print('Initiating Training..')
    loss_func = nn.CrossEntropyLoss()
    # Iteration Begins
    for epoch in tqdm(range(EPOCHS)):
        # Iterate over every sample in the batch
        for data in tqdm(trainSet, desc=f'Iteration > {epoch+1}/{EPOCHS} : ', leave=False):
            x, y = data
            net.zero_grad()
            # Compute the output
            output, sm = net(x)
            # Compute Train Loss
            loss = loss_func(output, y.to(device))
            # Backpropagate
            loss.backward()
            # Update Parameters
            optimizer.step()
            # LEARNING_RATE -= LEARNING_RATE*0.0005
            loss_log.append(loss)
            lr_log.append(LEARNING_RATE)
    return loss_log, lr_log

Full error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-20-8deb9a27d3b4> in <module>()
13 
14 total_epochs += EPOCHS
---> 15 loss_log = train(net, trainSet, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE)
16 
17 plt.plot(loss_log)
4 frames
<ipython-input-9-59e1d2cf0c84> in train(net, train_set, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE)
21         # Compute Train Loss
22         # print(output, y.to(device))
---> 23         loss = loss_func(output, y.to(device))
24 
25         # Backpropagate
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530             result = self._slow_forward(*input, **kwargs)
531         else:
--> 532             result = self.forward(*input, **kwargs)
533         for hook in self._forward_hooks.values():
534             hook_result = hook(self, input, result)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
914     def forward(self, input, target):
915         return F.cross_entropy(input, target, weight=self.weight,
--> 916                                ignore_index=self.ignore_index, reduction=self.reduction)
917 
918 
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2019     if size_average is not None or reduce is not None:
2020         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2021     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2022 
2023 
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
1836                          .format(input.size(0), target.size(0)))
1837     if dim == 2:
-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
1839     elif dim == 4:
1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15

The problem is that your target tensor is 2-dimensional ([64, 1] instead of [64]), which makes PyTorch think you have more than one ground-truth label per data point. This is easily fixed via loss_func(output, y.flatten().to(device)). Hope this helps!
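A minimal sketch of the shape difference, with random tensors standing in for a real batch:

import torch
import torch.nn as nn

loss_func = nn.CrossEntropyLoss()
output = torch.randn(64, 2)               # logits: [batch, num_classes]
y = torch.randint(0, 2, (64, 1))          # labels stored as [64, 1]

print(y.shape, y.flatten().shape)         # torch.Size([64, 1]) torch.Size([64])
loss = loss_func(output, y.flatten())     # works: target is a [64] tensor of class indices
# y.squeeze(1) or y.view(-1) would do the same thing here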

You've written out the problem yourself:

original label dim = torch.Size([64, 1]) <-- [0] or [1]
output from the net dim = torch.Size([64, 2]) <-- [0,1] or [1,0]

You need to change your target so it matches what the loss expects: nn.CrossEntropyLoss takes class indices of shape [64], not a [64, 1] (or one-hot) tensor. Alternatively, if you are doing binary classification, I would suggest changing the model to return a single output unit and using binary_cross_entropy as the loss function.
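A minimal sketch of that alternative, assuming a hypothetical model whose last layer emits a single logit per sample (BCEWithLogitsLoss is the numerically stable way to apply binary cross-entropy to raw logits; binary_cross_entropy itself expects probabilities after a sigmoid):

import torch
import torch.nn as nn

# Hypothetical single-output head for binary classification
model = nn.Sequential(nn.Linear(128, 1))     # last layer returns one logit per sample

x = torch.randn(64, 128)                     # dummy features
y = torch.randint(0, 2, (64, 1)).float()     # targets as floats, shape [64, 1]

logits = model(x)                            # [64, 1]
loss = nn.BCEWithLogitsLoss()(logits, y)     # binary cross-entropy on raw logits
# Equivalent with the functional form mentioned above:
# loss = torch.nn.functional.binary_cross_entropy(torch.sigmoid(logits), y)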
