An efficient way to get the weights of a PyTorch NN model as a tensor



I have a simple neural network:

import torch
import torch.nn as nn
import torch.optim as optim

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.fc1 = nn.Linear(1, 5)
        self.fc2 = nn.Linear(5, 10)
        self.fc3 = nn.Linear(10, 1)

    def forward(self, x):
        x = self.fc1(x)
        x = torch.relu(x)
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Model()

Is there a more efficient way to get this network's weights (while keeping the gradients) than iterating over every single one like this:

for w_arr in net.parameters():

or calling

list(net.parameters())

since the latter does not maintain the gradients (it converts them into a list)?

You can use the torch.nn.utils.parameters_to_vector utility function:

>>> net(torch.rand(1, 1, requires_grad=True)).mean().backward()
>>> from torch.nn.utils import parameters_to_vector
>>> parameters_to_vector(net.parameters())
tensor([-0.8196, -0.7785, -0.2459,  0.4670, -0.9747,  0.1994,  0.7510, -0.6452,
         0.4948,  0.3376,  0.2641, -0.0707,  0.1282, -0.2944,  0.1337,  0.0461,
        -0.1491,  0.2985,  0.3031,  0.3566,  0.0058,  0.0157, -0.0712,  0.3874,
         0.2870, -0.3829,  0.1178, -0.3901, -0.0425, -0.1603,  0.0408,  0.3513,
         0.0289, -0.3374, -0.1820,  0.3684, -0.3069,  0.0312, -0.4205,  0.1456,
         0.2833,  0.0589, -0.2229, -0.1753, -0.1829,  0.1529,  0.1097,  0.0067,
        -0.2694, -0.2176,  0.2292,  0.0529, -0.2617,  0.0736,  0.1617,  0.0438,
         0.2387,  0.3278, -0.0536, -0.2875, -0.0869,  0.0770, -0.0774, -0.1909,
         0.2803, -0.3237, -0.3851, -0.2241,  0.2838,  0.2202,  0.3057,  0.0128,
        -0.2650,  0.1660, -0.2961, -0.0123, -0.2106, -0.1021,  0.1135, -0.1051,
         0.1735], grad_fn=<CatBackward>)

It converts the generator of parameters into a single flat tensor while retaining the gradients; the result is the concatenation of all the flattened parameter tensors.
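As a minimal sketch reusing the net defined above: because the flat vector carries a grad_fn, any scalar computed from it backpropagates into the individual parameters, and torch.nn.utils.vector_to_parameters performs the inverse mapping (copying a flat vector back into the model's parameters).

import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

net.zero_grad()

# Flatten every parameter into one 1-D tensor; it has a grad_fn,
# so it is still attached to the computation graph.
flat = parameters_to_vector(net.parameters())

# A scalar computed from the flat vector (here an L2 penalty on all
# parameters at once) backpropagates into the original parameters.
penalty = flat.pow(2).sum()
penalty.backward()
print(net.fc1.weight.grad.shape)  # torch.Size([5, 1])

# The inverse utility copies a flat vector back into the model's parameters.
vector_to_parameters(torch.zeros_like(flat), net.parameters())

The round trip (flatten, operate on the flat view, write back) is convenient whenever you need a single flat view of all the weights.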
