PyTorch error - device: Expected all tensors to be on the same device, but found at least two devices



I have been following this link and this link, but the error persists.

I am trying out a simple feedforward NN in PyTorch (from a tutorial).

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transformers
# hyperparameters
input_size = 784
hidden_size = 100
num_classes = 10
num_epochs = 2
learning_rate = 0.001
batch_size = 100
# activating device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Loading data
train_dataset = torchvision.datasets.MNIST('.', train=True, transform=transformers.ToTensor(), download=True)
test_dataset = torchvision.datasets.MNIST('.', train=False, transform=transformers.ToTensor(), download=False)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=False)

class FeedforwardNN(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(FeedforwardNN, self).__init__()
        self.l1 = nn.Linear(input_size, hidden_size)
        self.r1 = nn.ReLU()
        self.l2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.l1(x)
        out = self.r1(out)
        out = self.l2(out)
        return out

model = FeedforwardNN(input_size, hidden_size, num_classes)
model.to(device)  # Added to solve the mentioned error, but to no avail.
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Training Loop
n_total_steps = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        images = images.reshape(-1, 28*28).to(device)
        y_pred = model(images)
        loss = criterion(y_pred, labels)
        loss.backward()
        optimizer.zero_grad()

I get the following error:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument target in method wrapper_nll_loss_forward)

I tried adding .to(device) to the criterion and to the optimizer, but that did not fix the error either.
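(That attempt presumably looked like the line below; this is a reconstruction, not the poster's exact code. Note that nn.CrossEntropyLoss holds no parameters or buffers, so moving it is a no-op.)

criterion = nn.CrossEntropyLoss().to(device)  # presumed attempt; a no-op, since the loss module has nothing to move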

The tutorial that uses the same code seems to run fine.

At the time of writing, CUDA is inactive on my machine. That should not matter, though, since the if condition in the device line will fall back to cpu:

>>> torch.cuda.is_available()
False

So where could the error be?

You forgot to move labels to the device inside the training loop:
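The traceback even names the culprit: argument target of nll_loss is the labels tensor handed to CrossEntropyLoss, and it is still on the CPU where the DataLoader produced it, while y_pred lives on the device. A minimal sketch of the corrected inner loop (only the labels line is new, the rest is unchanged from the question):

for i, (images, labels) in enumerate(train_loader):
    images = images.reshape(-1, 28*28).to(device)
    labels = labels.to(device)  # move the targets to the same device as the model and inputs
    y_pred = model(images)
    loss = criterion(y_pred, labels)

On a CPU-only machine the original code happens to run, since every tensor already lives on cpu; the cuda:0 in the traceback shows the error came from a run where CUDA was in fact available.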
