Error when trying to train FasterRCNN with a custom backbone on GRAYSCALE images



I followed the https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html#putting-everything-together tutorial in order to create an object detector for 1 class on GRAYSCALE images.

Here is my code (note that I use a DenseNet as the BACKBONE, a model I pretrained on my own dataset):

import os

import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

num_classes = 2  # 1 class + background

# load the DenseNet pretrained on my own dataset and use its feature extractor as the backbone
model = torch.load(os.path.join(patch_classifier_model_dir, "densenet121.pt"))
backbone = model.features
backbone.out_channels = 1024

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=[0],
                                                output_size=7,
                                                sampling_ratio=2)

# put the pieces together inside a FasterRCNN model
model = FasterRCNN(backbone,
                   num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)

# move model to the right device
model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005,
                            momentum=0.9, weight_decay=0.0005)

# and a learning rate scheduler which decreases the learning rate by
# 10x every 3 epochs
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                               step_size=3,
                                               gamma=0.1)

Here is the error I'm getting:

RuntimeError: Given groups=1, weight of size [64, 1, 7, 7], expected input[2, 3, 1344, 800] to have 1 channels, but got 3 channels instead

Based on the FasterRCNN architecture, I assume the problem is in the transform component, since it tries to normalize images that are originally grayscale, not RGB:

FasterRCNN(
  (transform): GeneralizedRCNNTransform(
      Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
      Resize(min_size=(800,), max_size=1333, mode='bilinear')
  )
  (backbone): Sequential(
    (conv0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu0): ReLU(inplace=True)
    (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (denseblock1): _DenseBlock(
      (denselayer1): _DenseLayer(
        (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )

    ...............

    (norm5): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (rpn): RegionProposalNetwork(
    (anchor_generator): AnchorGenerator()
    (head): RPNHead(
      (conv): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (cls_logits): Conv2d(1024, 15, kernel_size=(1, 1), stride=(1, 1))
      (bbox_pred): Conv2d(1024, 60, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (roi_heads): RoIHeads(
    (box_roi_pool): MultiScaleRoIAlign()
    (box_head): TwoMLPHead(
      (fc6): Linear(in_features=50176, out_features=1024, bias=True)
      (fc7): Linear(in_features=1024, out_features=1024, bias=True)
    )
    (box_predictor): FastRCNNPredictor(
      (cls_score): Linear(in_features=1024, out_features=2, bias=True)
      (bbox_pred): Linear(in_features=1024, out_features=8, bias=True)
    )
  )
)

Am I right about this? If so, how can I fix it? Is there a standard practice for dealing with grayscale images and FasterRCNN?

Thanks in advance! Really appreciate it!

Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) means the normalization is applied to all 3 channels of the input image: 0.485 is applied to the R channel, 0.456 to the G channel, and 0.406 to the B channel. The same goes for the standard deviation values.

The first Conv layer of your backbone expects a 1-channel input, which is why you get this error.
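You can see the mismatch directly by comparing the per-channel statistics stored on the model's transform with the number of input channels of the backbone's first convolution. A minimal check, assuming `model` is the FasterRCNN instance built in your code above:

# 3-channel ImageNet stats set by the default GeneralizedRCNNTransform
print(model.transform.image_mean)        # [0.485, 0.456, 0.406]
print(model.transform.image_std)         # [0.229, 0.224, 0.225]

# the custom DenseNet backbone starts with a 1-channel convolution
print(model.backbone.conv0.in_channels)  # 1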

You can do the following to fix this.

Redefine the GeneralizedRCNNTransform and attach it to your model. You can do it like this:

# put the pieces together inside a FasterRCNN model
model = FasterRCNN(backbone,
                   num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)

# Changes: replace the default transform with one that uses single-channel statistics
grcnn = torchvision.models.detection.transform.GeneralizedRCNNTransform(
    min_size=800, max_size=1333, image_mean=[0.485], image_std=[0.229])
model.transform = grcnn

model.to(device)
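Alternatively, the FasterRCNN constructor itself accepts image_mean and image_std keyword arguments, so you can build the model with single-channel statistics in one step. A sketch under the same assumptions as the code above (the mean/std values here are just the first ImageNet entries; ideally compute them from your own grayscale training set):

model = FasterRCNN(backbone,
                   num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler,
                   min_size=800,
                   max_size=1333,
                   image_mean=[0.485],   # single-channel mean (placeholder value)
                   image_std=[0.229])    # single-channel std (placeholder value)
model.to(device)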
