Why is my OpenCV not responding or taking any action?



I am doing YOLO object detection with OpenCV in Python, but for some reason it is not responding. If I simply open the camera without the object detection code, it works fine. However, as soon as I add the object detection code, it does not respond at all. Here is my code:

import cv2
import numpy as np
net = cv2.dnn.readNet('yolov3.cfg', 'yolov3.weights')
classes = []
cap = cv2.VideoCapture(0)
with open('coco.names', 'r') as f:
    classes = f.read().splitlines()
while True:
    _, img = cap.read()
    height, width, _ = img.shape
    blob = cv2.dnn.blobFromImage(img, 1/255, (416, 416), (0, 0, 0), swapRB=True, crop=False)
    net.setInput(blob)
    output_layers_names = net.getUnconnectedOutLayersNames()
    layerOutputs = net.forward(output_layers_names)
    boxes = []
    confidence = []
    class_ids = []
    for output in layerOutputs:
        for detection in output:
            scores = detection[5:]
            class_id = np.argmax(scores)
            if confidence > [0.5]:
                center_x = int(detection[0]*width)
                center_y = int(detection[1]*height)
                w = int(detection[2]*width)
                h = int(detection[3]*height)
                x = int(center_x - w/2)
                y = int(center_y - h/2)
                boxes.append([x, y, w, h])
                confidence.append((float(confidence)))
                class_ids.append(class_id)
    indexes = cv2.dnn.NMSBoxes(boxes, confidence, 0.5, 0.4)
    font = cv2.FONT_HERSHEY_PLAIN
    colors = np.random.uniform(0, 255, size=(len(boxes), 3))
    if len(indexes)>0:
        for i in indexes.flatten():
            x,y,w,h = boxes[i]
            label = str(classes[class_ids[i]])
            confidence = str(round(confidence[i], 2))
            color = colors[i]
            cv2.rectangle(img, (x, y), (x+w, y+h), color, 2)
            cv2.putText(img, label + '' + confidence, (x, y+20), font, 2, (255, 255, 255), 2)
    cv2.imshow("Video", img)
    if cv2.waitKey(0) & 0xFF == ord('q'):
        break

If I run this code, the camera opens, but when I move my hand the displayed frame does not update to follow it. Is this a problem with my code or with my computer? Please help me.

By the way, the code's indentation got messed up when I uploaded it to Stack Overflow; I apologize for that.

Try giving the waitKey() function a finite delay:

cv2.waitKey(10) & 0xFF == ord('q')
            ^^

For reference, read the documentation of waitKey() here:

The function waitKey waits for a key event infinitely (when delay ≤ 0) or for delay milliseconds, when it is positive. Since the OS has a minimum time between switching threads, the function will not wait exactly delay ms; it will wait at least delay ms, depending on what else is running on your computer at that time. It returns the code of the pressed key or -1 if no key was pressed before the specified time had elapsed.
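As a minimal sketch of that behaviour (separate from your code, just to illustrate the return value): with a positive delay, waitKey() returns -1 when nothing was pressed and the key code otherwise, so the loop keeps running instead of blocking:

import cv2
import numpy as np

# a blank test image, just so imshow has something to display
frame = np.zeros((240, 320, 3), dtype=np.uint8)

while True:
    cv2.imshow("waitKey demo", frame)
    key = cv2.waitKey(10)          # waits at most ~10 ms, then returns
    if key == -1:
        continue                   # no key was pressed, loop again
    print('key code:', key)
    if key & 0xFF == ord('q'):     # 'q' exits
        break

cv2.destroyAllWindows()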

Also, check the indentation of your code. Try running this smaller version, and if it works, add the object detection back step by step to track down any other problems:

import cv2 as cv2
import numpy as np
cap = cv2.VideoCapture(0)
while True:
    _, img = cap.read()
    cv2.imshow("Video", img)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

Or, more readably:

keypressed = cv2.waitKey(10)
if keypressed == ord('q'):
    break

Try this to test the detector

Try this loop: if the detector is working, you should see a list of detections printed in the terminal. If you put yourself in front of the camera, you should see a high score at index 0, which corresponds to the first class in coco.names ('person'):

while True:
    _, img = cap.read()
    img = cv2.resize(img, None, fx=0.5, fy=0.5)  # just for more room on my screen
    height, width, _ = img.shape
    blob = cv2.dnn.blobFromImage(img, 1/255, (416, 416), [0, 0, 0], swapRB=True, crop=False)
    net.setInput(blob)
    output_layers_names = net.getUnconnectedOutLayersNames()
    print(output_layers_names)  # <-- prints the names of the output layers
    layerOutputs = net.forward('yolo_82')  # <-- try one layer at a time
    for out in layerOutputs:
        if not np.all(out[5:] == 0): print(out[5:])
    print('-'*50)
    cv2.imshow("Video", img)
    keypressed = cv2.waitKey(10)
    if keypressed == ord('q'):
        break

Once that works, add the rest of the code back step by step, checking the results as you go.
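When you get to that point, the part that usually needs the most care is the confidence handling: the value you filter on has to be read from scores for the current detection, and the list you pass to NMSBoxes has to stay a list (in your code the name confidence is used for both). A rough sketch of where you are heading, assuming net, cap and classes are set up as in your question and the standard YOLOv3 output layout (center_x, center_y, w, h, objectness, then the 80 class scores); not a drop-in fix, just the direction:

while True:
    _, img = cap.read()
    height, width, _ = img.shape
    blob = cv2.dnn.blobFromImage(img, 1/255, (416, 416), (0, 0, 0), swapRB=True, crop=False)
    net.setInput(blob)
    layerOutputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confidences, class_ids = [], [], []
    for output in layerOutputs:
        for detection in output:
            scores = detection[5:]            # per-class scores of this detection
            class_id = np.argmax(scores)
            confidence = scores[class_id]     # score of the best class
            if confidence > 0.5:
                w = int(detection[2] * width)
                h = int(detection[3] * height)
                x = int(detection[0] * width - w / 2)
                y = int(detection[1] * height - h / 2)
                boxes.append([x, y, w, h])
                confidences.append(float(confidence))  # list stays separate from the scalar
                class_ids.append(class_id)

    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    font = cv2.FONT_HERSHEY_PLAIN
    if len(indexes) > 0:
        for i in np.array(indexes).flatten():
            x, y, w, h = boxes[i]
            label = str(classes[class_ids[i]])
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(img, label + ' ' + str(round(confidences[i], 2)),
                        (x, y + 20), font, 2, (255, 255, 255), 2)

    cv2.imshow("Video", img)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break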
