Understanding PyTorch weights and bias in a linear layer



Below is code that combines the weights and biases of two linear layers into a single layer. I can't understand the line below: why do we have to multiply the bias by the transposed weight matrix? I thought the bias should be applied on its own, without the weights, since we already multiply by the weights to get the final output `outputs3`.

combined_layer.bias.data = layer1.bias @ layer2.weight.t() + layer2.bias

# Create a single layer to replace the two linear layers
combined_layer = nn.Linear(input_size, output_size)
combined_layer.weight.data = layer2.weight @ layer1.weight
combined_layer.bias.data = layer1.bias @ layer2.weight.t() + layer2.bias  # This should be just the bias?
outputs3 = inputs @ combined_layer.weight.t() + combined_layer.bias
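For context, here is a minimal self-contained version of the snippet above (layer sizes are assumed for illustration); it checks that the combined layer reproduces the output of applying the two layers in sequence:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
input_size, hidden_size, output_size = 4, 5, 3  # assumed sizes

layer1 = nn.Linear(input_size, hidden_size)
layer2 = nn.Linear(hidden_size, output_size)
inputs = torch.randn(2, input_size)

# Create a single layer to replace the two linear layers
combined_layer = nn.Linear(input_size, output_size)
combined_layer.weight.data = layer2.weight @ layer1.weight
combined_layer.bias.data = layer1.bias @ layer2.weight.t() + layer2.bias

outputs_two = layer2(layer1(inputs))
outputs3 = inputs @ combined_layer.weight.t() + combined_layer.bias

print(torch.allclose(outputs_two, outputs3, atol=1e-5))  # True
```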

Can someone help me understand this?

You just need to expand the original equation of the two Linear layers, i.e.

# out = layer2(layer1(x))
# given (x @ A + B) @ C + D
out = (x @ layer1.weight.t() + layer1.bias) @ layer2.weight.t() + layer2.bias

Expanding gives (x @ A + B) @ C + D = (x @ A @ C) + B @ C + D

out = x @ layer1.weight.t() @ layer2.weight.t() + layer1.bias @ layer2.weight.t() + layer2.bias
out = x @ (layer1.weight.t() @ layer2.weight.t()) + (layer1.bias @ layer2.weight.t() + layer2.bias)
# the above equation is x @ (A @ C) + B @ C + D
# now you can assume
combined_layer.weight = layer2.weight @ layer1.weight
combined_layer.bias = layer1.bias @ layer2.weight.t() + layer2.bias 
# final output
out = x @ combined_layer.weight.t() + combined_layer.bias
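One way to sanity-check the bias term specifically is to feed a zero input: the weight contribution `x @ A @ C` then vanishes, and what remains is exactly `B @ C + D`. A sketch (layer sizes assumed):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer1 = nn.Linear(4, 5)  # assumed sizes
layer2 = nn.Linear(5, 3)

# With x = 0, layer2(layer1(x)) reduces to B @ C + D,
# i.e. layer1.bias @ layer2.weight.t() + layer2.bias
zero_input = torch.zeros(1, 4)
out_zero = layer2(layer1(zero_input))
combined_bias = layer1.bias @ layer2.weight.t() + layer2.bias

print(torch.allclose(out_zero, combined_bias, atol=1e-5))  # True
```

This is why the combined bias cannot be just `layer2.bias`: the first layer's bias also passes through the second layer's weights.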

Also note that the transpose rule for matrix multiplication is used here, i.e.

transpose(A @ B) = transpose(B) @ transpose(A)

This is why x is multiplied by combined_layer.weight.t(): no transpose was taken when forming layer2.weight @ layer1.weight, so the transpose is applied afterwards.
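The transpose rule can be checked numerically, and it also confirms that transposing the combined weight recovers the `layer1.weight.t() @ layer2.weight.t()` factor from the derivation (a sketch with assumed shapes):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
A = torch.randn(3, 4)
B = torch.randn(4, 5)

# transpose(A @ B) == transpose(B) @ transpose(A)
print(torch.allclose((A @ B).t(), B.t() @ A.t()))  # True

# Applied to the combined layer's weight:
layer1 = nn.Linear(4, 5)
layer2 = nn.Linear(5, 3)
W = layer2.weight @ layer1.weight
print(torch.allclose(W.t(), layer1.weight.t() @ layer2.weight.t()))  # True
```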
