Yolo v7 does not detect objects on the image



I am trying out Yolo v7. It appears to run fine, but the resulting image has no object detections drawn on it, although it should.

I followed the GitHub instructions for setting up Yolo v7 with Docker. Here is the complete list of commands, so you should be able to reproduce my problem.

git clone https://github.com/WongKinYiu/yolov7
cd yolov7
nvidia-docker run --name yolov7 -it --rm -v "$CWD":/yolov7 --shm-size=64g nvcr.io/nvidia/pytorch:21.08-py3
# on the container
cd /yolov7
python -m pip install virtualenv
python -m virtualenv venv3
. venv3/bin/activate
pip install -r requirements.txt
apt update
apt install -y zip htop screen libgl1-mesa-glx
pip install seaborn thop
python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg

This is the console output of the last command:

# python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg 
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', no_trace=False, nosave=False, project='runs/detect', save_conf=False, save_txt=False, source='inference/images/horses.jpg', update=False, view_img=False, weights=['yolov7.pt'])
YOLOR 🚀 v0.1-115-g072f76c torch 1.13.0+cu117 CUDA:0 (NVIDIA GeForce GTX 1650, 3903.875MB)
Fusing layers... 
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
Model Summary: 306 layers, 36905341 parameters, 6652669 gradients
Convert model to Traced-model... 
traced_script_module saved! 
model is traced! 
/yolov7/venv3/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3190.)
return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Done. (150.9ms) Inference, (0.3ms) NMS
The image with the result is saved in: runs/detect/exp6/horses.jpg
Done. (0.616s)

Now I should be able to see the detections by comparing the original image inference/images/horses.jpg with the generated image runs/detect/exp6/horses.jpg, right? But the two images look exactly the same, with no difference at all. Is there something wrong with this setup?
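In case it helps, a quick way to double-check that nothing was drawn is a pixel-level comparison such as the following sketch (the paths are the ones from the run above; the threshold of 30 is an arbitrary choice, meant only to ignore JPEG re-encoding noise):

import cv2
import numpy as np

# Rough check: if no boxes were drawn, almost no pixel should differ strongly
# between the input image and the saved result (small differences can come
# from JPEG re-encoding alone).
src = cv2.imread("inference/images/horses.jpg")
out = cv2.imread("runs/detect/exp6/horses.jpg")
if src is not None and out is not None and src.shape == out.shape:
    diff = np.abs(src.astype(int) - out.astype(int)).max(axis=2)
    changed = (diff > 30).mean()  # fraction of clearly changed pixels
    print(f"clearly changed pixels: {changed:.4%}")  # ~0% means nothing was drawn
else:
    print("could not load or compare the images")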

Nvidia driver:

$ nvidia-smi 
Tue Dec  6 09:47:03 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| 45%   27C    P8    N/A /  75W |     13MiB /  4096MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
         
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1152      G   /usr/lib/xorg/Xorg                  9MiB |
|    0   N/A  N/A      1256      G   /usr/bin/gnome-shell                2MiB |
+-----------------------------------------------------------------------------+

Ubuntu version:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.4 LTS
Release:        20.04
Codename:       focal

Change the variable half in the detect.py file to False.

Line 31: half = False

When running on a GPU, the script switches the model to half precision; that is what line 31 of detect.py does (# half precision only supported on CUDA).

Changing it to False, i.e. line 31: half = False, did the trick for me.
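For clarity, this is roughly what that spot in detect.py looks like after the change (the exact line number may drift between commits; the commented-out line is the original assignment):

# detect.py, around line 31
device = select_device(opt.device)
# half = device.type != 'cpu'  # original: half precision only supported on CUDA
half = False  # force full fp32 inference so detections are not lost to fp16 issues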

I ran into the same problem. It is basically what the others have already mentioned.

The cause is line 31: half = device.type != 'cpu'  # half precision only supported on CUDA.

Not every GPU, or even every Nvidia GPU with CUDA support, can handle half-precision (16-bit) floats, especially if the GPU is a bit older. In my case I was using an AMD 5700 XT (via ROCm), and that GPU has no fp16 support either!
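As a rough sanity check (my own sketch, not part of detect.py), you can run a tiny half-precision operation on the GPU and see whether the result stays finite; NaN or Inf here is a strong hint that fp16 inference is the culprit:

import torch

# Run a small fp16 matmul on the GPU and verify the result is finite.
# This will not catch every model-level overflow, but it is a quick first test.
if torch.cuda.is_available():
    x = torch.randn(64, 64, device="cuda", dtype=torch.float16)
    y = x @ x
    print("fp16 matmul finite:", torch.isfinite(y).all().item())
else:
    print("no CUDA device available")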

To make this configurable, I added a command-line argument that lets the user override the variable half mentioned in the other answers:

# After line 31~, check whether a command-line override is present.
# device = select_device(opt.device)
half = opt.fp16 and device.type != 'cpu'  # half precision only supported on CUDA

# After the line (~169) with `parser = argparse.ArgumentParser()`
# (a store_true flag avoids argparse's type=bool pitfall, where any non-empty string is truthy)
parser.add_argument("--fp16", action="store_true", help="Use float16 (Some GPUs only)")
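With that change, half precision stays off unless it is explicitly requested, so detection works on GPUs without proper fp16 support; on a GPU that does handle fp16, it can be re-enabled like this:

python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg --fp16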
