PyTorch network with custom layers works fine on CPU, but gives cudaErrorIllegalAddress when moved to GPU



I am trying to implement my own version of a Graph Attention Network (GAT).

The custom GAT layer is as follows:

import numpy as np
import torch
import torch.nn as nn
from torch.nn import functional


class GATLayer(nn.Module):
    def __init__(self, input_dim: int, output_dim: int, adj: torch.Tensor):
        super().__init__()
        self.W = nn.Parameter(torch.zeros(size=(output_dim, input_dim)))
        self.a = nn.Parameter(torch.zeros(size=(2 * output_dim,)))
        self.adj = adj
        self.n_points = adj.shape[0]
        # print(f"input dim: {input_dim}")

    def forward(self, h: torch.Tensor):
        B, T, N, F = h.size()
        hh = functional.linear(h, self.W)           # project features: (B, T, N, output_dim)
        output = torch.zeros_like(hh)
        for i in range(self.n_points):
            # print(i)
            hhj = hh[:, :, self.adj[i], :]          # features of node i's neighbours
            hhi = torch.cat([hh[:, :, i:i + 1, :]] * hhj.size(2), 2)  # node i repeated per neighbour
            hhij = torch.cat([hhi, hhj], 3)         # concatenated [h_i, h_j] pairs
            e = torch.mm(hhij.reshape(B * T * hhj.size(2), -1),
                         self.a.reshape(self.a.size(0), 1)).reshape(B, T, -1)  # attention logits
            alpha = functional.softmax(e, dim=2)    # normalise over the neighbour axis
            output[:, :, i, :] = torch.sum(hhj * torch.cat([torch.unsqueeze(alpha, 3)] * hhj.size(3), 3), dim=2)
        return output

The whole network is defined as:

class AQIP(nn.Module):
    def __init__(self, adj: torch.Tensor, seq_len: int, with_aqi: bool = True):
        super().__init__()
        self.hid_size = 128
        self.seq_len = seq_len
        self.gat_layers = [
            GATLayer(input_dim=16 + int(with_aqi), output_dim=128, adj=adj),
            GATLayer(input_dim=128, output_dim=128, adj=adj),
        ]
        self.rnns = [
            nn.LSTM(input_size=128, hidden_size=128, num_layers=4, bias=True, batch_first=True),
        ]
        self.linear = nn.Linear(in_features=128 * 4, out_features=1, bias=True)

    def forward(self, x: torch.Tensor, site_idx: int):
        h = torch.zeros(size=(4, x.size(0), 128))
        c = torch.zeros(size=(4, x.size(0), 128))
        for gat in self.gat_layers:
            x = gat(x)
        for rnn in self.rnns:
            x[:, :, site_idx, :], (h, c) = rnn(x[:, :, site_idx, :], (h, c))
        h = h.permute(1, 0, 2)
        h = h.reshape(h.size(0), -1)
        return self.linear(h).squeeze()

When I tested the custom GAT layer on its own with the code below, I found that it works fine even on the GPU:

model = GATLayer(3, 1024, torch.tensor(np.array([[1, 0, 1], [0, 0, 1], [1, 0, 1]], dtype='bool')))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
print(model(torch.randn(5, 5, 3, 3).to(device)).shape)

which outputs torch.Size([5, 5, 3, 1024]).

When I tested the whole network on the CPU with the code below, it also worked fine:

#device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
adj = torch.tensor(np.array([[1, 0, 0], [0, 1, 1], [1, 1, 1]], dtype="bool"))
exp = torch.randn(3, 8, 3, 17)
gpus = [0]
model = AQIP(adj, seq_len=8)
#model = model.to(device, non_blocking=True)
print(model(exp, 1))

which outputs tensor([-0.0320, -0.0320, -0.0320], grad_fn=&lt;SqueezeBackward0&gt;).

However, as soon as I try to move the model to the GPU by uncommenting the device and .to(device) lines above, I get the following error, with a traceback that ends in a formatter class that has nothing to do with my code:

RuntimeError: copy_if failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered

When running the code with CUDA_LAUNCH_BLOCKING=1, I get:

RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

which did not help me locate the error at all.
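(For completeness: the flag only has an effect if it is set before the first CUDA call, either on the command line as CUDA_LAUNCH_BLOCKING=1 python your_script.py or at the very top of the script. A minimal sketch of the in-script variant:)

import os

# Must be set before CUDA is initialised; otherwise kernel launches stay
# asynchronous and the error keeps surfacing at an unrelated later call.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # imported only after the flag is set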

I also ran the official examples on the same machine and tried moving them to the GPU, and they all ran fine, so I don't think this is an incompatibility between CUDA, cuDNN, or the GPU driver versions. But I also can't find the problem in my code. Please help! I would really appreciate it if you could get me out of this.

UPDATE: After a lot of effort, I finally found the problem. It turns out that if you put the layers into a plain Python list like this:

self.gat_layers = [
    GATLayer(input_dim=16 + int(with_aqi), output_dim=128, adj=adj).cuda(),
    GATLayer(input_dim=128, output_dim=128, adj=adj).cuda(),
]

then PyTorch does not register them as submodules, so their parameters are not transferred to the GPU when .to(device) is called on the model. One workaround is therefore to declare each layer as a separate attribute.
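A quick way to see this (a minimal sketch; ToyNet is just an illustrative toy module, not part of my model):

import torch
import torch.nn as nn

class ToyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # layers hidden inside a plain Python list: NOT registered as submodules
        self.hidden_in_list = [nn.Linear(4, 4), nn.Linear(4, 4)]
        # a directly assigned module IS registered
        self.visible = nn.Linear(4, 4)

net = ToyNet()
# Only the parameters of `visible` show up; the two layers in the list are
# invisible, so .to(device) / .cuda() on the model never moves them.
print(sum(p.numel() for p in net.parameters()))  # 20, not 60
print(list(net.state_dict().keys()))             # only 'visible.weight', 'visible.bias'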

A better solution, though, is to use nn.ModuleList to hold all the layers, so the code can be changed to:

self.gat_layers = nn.ModuleList([
    GATLayer(input_dim=16 + int(with_aqi), output_dim=128, adj=adj).cuda(),
    GATLayer(input_dim=128, output_dim=128, adj=adj).cuda(),
])
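Once they are inside an nn.ModuleList, the two GAT layers show up in model.parameters() and state_dict(), so a single model.to(device) moves them together with the rest of the network (which also makes the explicit .cuda() calls above redundant). A quick sanity check, assuming a CUDA device and the same adj as in the snippets above:

device = torch.device("cuda:0")
model = AQIP(adj, seq_len=8).to(device)
# Both GAT layers' parameters (W and a each) are now registered on the parent module...
print(len(list(model.gat_layers.parameters())))               # 4
# ...and a single .to(device) has moved them to the GPU
print(all(p.is_cuda for p in model.gat_layers.parameters()))  # True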
