PyTorch CNN: incompatible tensor shapes



Here is my PyTorch CNN. The input tensor is torch.Size([4, 1, 128, 128]), i.e. a batch of 4 single-channel 128×128 images:

import torch
import torch.nn as nn
import torch.nn.functional as F

class My_Net(nn.Module):
    def __init__(self, image_length):
        self.image_length = image_length
        # Creating the layers here (convolutional, pooling, and linear layers)
        super(My_Net, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=(5, 5), padding='same')
        self.pool1 = nn.MaxPool2d(kernel_size=(10, 10))
        self.conv2 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(5, 5), padding='same')
        self.pool2 = nn.MaxPool2d(kernel_size=(8, 8))
        self.lin1 = nn.Linear(128, 50)
        self.lin2 = nn.Linear(50, 9)

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))  # first convolution and pooling step with relu activation
        x = self.pool2(F.relu(self.conv2(x)))  # second convolution and pooling step with relu activation
        print(x.size())  # added this to see the tensor dimensions before passing into the view and linear layers
        x = x.view((128 * 1 * 1, 4))  # second reshape
        x = F.relu(self.lin1(x))  # relu activation function on the first linear layer
        x = F.relu(self.lin2(x))  # we want only positive values so relu works best here
        return x

I get an error in the forward pass that I am having a hard time fixing. I think it comes from a lack of understanding of how the dimensions change through the layers. The error is as follows:

line 51, in forward
x = F.relu(self.lin1(x)) 
line 1102, in _call_impl
return forward_call(*input, **kwargs)
line 103, in forward
return F.linear(input, self.weight, self.bias)
line 1848, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x4 and 128x50)

My main goal here is to send images through 2 convolution-and-pooling layers, then 2 linear layers, finally reducing to a 9-neuron linear layer that will be the output and compared via MSE against the 9 growth conditions for the given image. The output of the x.size() call in forward is torch.Size([4, 128, 1, 1]).

PyTorch linear layers handle single instances and batched data equally easily. If you are passing batched data, make sure it has shape (batch_size, *), where * is the per-sample feature dimension. So before passing the data to the first linear layer, it should have shape (4, 128), not (128, 4). You can get this by reshaping with x = x.view((4, 128)), or more generally x = x.view(x.size(0), -1), which keeps the batch dimension first and flattens the rest.
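As a minimal sketch of just the reshape fix, using the pooled output shape reported in the question:

```python
import torch

# Simulated output of the second pooling layer: (batch, channels, H, W)
x = torch.randn(4, 128, 1, 1)

# Keep the batch dimension first and flatten the rest;
# equivalent to x.view((4, 128)) for this shape
x = x.view(x.size(0), -1)
print(x.shape)  # torch.Size([4, 128])
```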

Additionally, the My_Net class is missing a call to super().__init__() in its __init__ method. You should add this call so that the parent nn.Module class's __init__ is also invoked. This properly initializes nn.Module and allows your network to work correctly. Your __init__ method should look like this:
def __init__(self, image_length):
    super(My_Net, self).__init__()  # Add this line

    # The rest of your net
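Putting both fixes together, here is a sketch of the corrected network. The layer definitions are taken unchanged from the question (note that padding='same' in nn.Conv2d requires PyTorch 1.9+); only the view call in forward changes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class My_Net(nn.Module):
    def __init__(self, image_length):
        super(My_Net, self).__init__()
        self.image_length = image_length
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=(5, 5), padding='same')
        self.pool1 = nn.MaxPool2d(kernel_size=(10, 10))
        self.conv2 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(5, 5), padding='same')
        self.pool2 = nn.MaxPool2d(kernel_size=(8, 8))
        self.lin1 = nn.Linear(128, 50)
        self.lin2 = nn.Linear(50, 9)

    def forward(self, x):
        x = self.pool1(F.relu(self.conv1(x)))  # (4, 64, 12, 12): 128 // 10 = 12
        x = self.pool2(F.relu(self.conv2(x)))  # (4, 128, 1, 1): 12 // 8 = 1
        x = x.view(x.size(0), -1)              # (4, 128): batch dimension stays first
        x = F.relu(self.lin1(x))               # (4, 50)
        x = F.relu(self.lin2(x))               # (4, 9)
        return x

net = My_Net(128)
out = net(torch.randn(4, 1, 128, 128))
print(out.shape)  # torch.Size([4, 9])
```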
