I want to include an existing OpenCV application in a GUI created with Qt. I found some similar questions on Stack Overflow:
Qt: How to embed an application into a Qt widget
Run another executable in my Qt application
The problem is that I don't want to simply launch the OpenCV application the way QProcess does. The OpenCV application has a mouse listener, so if I click in the window, it should still call the OpenCV application's functions. In addition, I want to display the detected coordinates in a label in the Qt GUI. So there has to be some kind of interaction between the two.
I have read about the createWindowContainer function (http://blog.qt.io/blog/2013/02/19/introducing-qwidgetcreatewindowcontainer/), but since I'm not very familiar with Qt, I'm not sure whether this is the right choice or how to use it.
I'm using Linux Mint 17.2, OpenCV 3.1.0, and Qt 4.8.6.
Thanks for your input.
I didn't really solve the problem at first, but it is working now. If anyone runs into the same problem, maybe my solution can provide some ideas. If you want to display video in Qt, or if you have problems using the OpenCV library, maybe I can help.
Here are some code snippets. They aren't commented much, but I hope the concept is clear:
First, I have a main window with a label that I promoted to the type of my custom label. The custom label is my container: it displays the video and reacts to mouse input.
CustomLabel::CustomLabel(QWidget* parent) : QLabel(parent), currentImage(NULL),
    tickrate_ms(33), vid_fps(0), video_width(0), video_height(0), myTimer(NULL), cap(NULL)
{
    // init variables
    showPoints = true;
    calculatedCenter = cv::Point(0,0);
    oldCenter = cv::Point(0,0);
    currentState = STATE_NO_STREAM;
    NOF_corners = 30; // default init value
    termcrit = cv::TermCriteria(cv::TermCriteria::COUNT | cv::TermCriteria::EPS, 30, 0.01);
    // enable mouse tracking
    this->setMouseTracking(true);
    // connect signals with slots
    QObject::connect(getMainWindow(), SIGNAL(sendFileOpen()), this, SLOT(onOpenClick()));
    QObject::connect(getMainWindow(), SIGNAL(sendWebcamOpen()), this, SLOT(onWebcamBtnOpen()));
    QObject::connect(getMainWindow(), SIGNAL(closeVideoStreamSignal()), this, SLOT(onCloseVideoStream()));
}
You have to override the paintEvent method:
void CustomLabel::paintEvent(QPaintEvent *e){
    QPainter painter(this);
    // When no image is loaded, paint the window black
    if (!currentImage){
        painter.fillRect(QRectF(QPoint(0, 0), QSize(width(), height())), Qt::black);
        QWidget::paintEvent(e);
        return;
    }
    // Draw a frame from the video
    drawVideoFrame(painter);
    QWidget::paintEvent(e);
}
The method called within paintEvent:
void CustomLabel::drawVideoFrame(QPainter &painter){
    painter.drawImage(QRectF(QPoint(0, 0), QSize(width(), height())), *currentImage,
                      QRectF(QPoint(0, 0), currentImage->size()));
}
On every tick of a timer, I call onTick():
void CustomLabel::onTick() {
    /* This method is called every couple of milliseconds.
     * It reads from OpenCV's capture interface and saves a frame as a QImage.
     * The state machine is implemented here; every tick is handled.
     */
    if(cap->isOpened()){
        switch(currentState) {
        case STATE_IDLE:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_IDLE";
            }
            break;
        case STATE_DRAWING:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_DRAWING";
            }
            currentFrame.copyTo(currentCopy);
            cv::circle(currentCopy, cv::Point(focusPt.x*xScale, focusPt.y*yScale),
                       sqrt((focusPt.x - currentMousePos.x())*(focusPt.x - currentMousePos.x())*xScale*xScale
                            + (focusPt.y - currentMousePos.y())*(focusPt.y - currentMousePos.y())*yScale*yScale),
                       cv::Scalar(0, 0, 255), 2, 8, 0);
            //qDebug() << "focus pt x " << focusPt.x << "y " << focusPt.y;
            break;
        case STATE_TRACKING:
            if (!cap->read(currentFrame)){
                qDebug() << "cvWindow::_tick !!! Failed to read frame from the capture interface in STATE_TRACKING";
            }
            cv::cvtColor(currentFrame, currentFrame, CV_BGR2GRAY, 0);
            if(initGrayFrame){
                currentGrayFrame.copyTo(previousGrayFrame);
                initGrayFrame = false;
                return;
            }
            cv::calcOpticalFlowPyrLK(previousGrayFrame, currentFrame, previousPts, currentPts, featuresFound, err,
                                     cv::Size(21, 21), 3, termcrit, 0, 1e-4);
            AcquireNewPoints();
            currentCopy = CalculateCenter(currentFrame, currentPts);
            if(showPoints){
                DrawPoints(currentCopy, currentPts);
            }
            break;
        case STATE_LOST_POLE:
            currentState = STATE_IDLE;
            initGrayFrame = true;
            cv::cvtColor(currentFrame, currentFrame, CV_GRAY2BGR);
            break;
        default:
            break;
        }
        // if not tracking, draw currentFrame
        // OpenCV uses BGR order, convert it to RGB
        if(currentState == STATE_IDLE) {
            cv::cvtColor(currentFrame, currentFrame, CV_BGR2RGB);
            memcpy(currentImage->scanLine(0), (unsigned char*)currentFrame.data,
                   currentImage->width() * currentImage->height() * currentFrame.channels());
        } else {
            cv::cvtColor(currentCopy, currentCopy, CV_BGR2RGB);
            memcpy(currentImage->scanLine(0), (unsigned char*)currentCopy.data,
                   currentImage->width() * currentImage->height() * currentCopy.channels());
            previousGrayFrame = currentFrame;
            previousPts = currentPts;
        }
    }
    // Trigger paint event to redraw the window
    update();
}
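As the comment in onTick says, OpenCV stores pixels in BGR order while QImage expects RGB, which is why cvtColor is called before the memcpy. Conceptually, on a raw interleaved 3-channel buffer the conversion amounts to the following sketch (plain C++ with no OpenCV; bgr_to_rgb is an illustrative name, not a library function):

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Swap the B and R channels of an interleaved 3-channel pixel buffer in
// place. This is what cv::cvtColor(frame, frame, CV_BGR2RGB) does,
// minus the optimized implementation.
void bgr_to_rgb(std::vector<std::uint8_t>& pixels)
{
    for (std::size_t i = 0; i + 2 < pixels.size(); i += 3) {
        std::swap(pixels[i], pixels[i + 2]); // B <-> R, G stays put
    }
}
```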
Don't mind the yScale and xScale factors; they are only needed for the OpenCV drawing functions, because the CustomLabel size differs from the video resolution.
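Those factors map a point from label (widget) coordinates, where Qt delivers mouse events, into video-frame coordinates, where OpenCV draws. A minimal sketch of that mapping, assuming the label simply stretches the frame to fit (labelToVideo is a hypothetical helper, not part of the code above):

```cpp
#include <utility>

// Map a mouse position in label coordinates to video-frame coordinates.
// xScale and yScale correspond to video_width / label_width and
// video_height / label_height respectively.
std::pair<double, double> labelToVideo(int mouseX, int mouseY,
                                       int labelW, int labelH,
                                       int videoW, int videoH)
{
    const double xScale = static_cast<double>(videoW) / labelW;
    const double yScale = static_cast<double>(videoH) / labelH;
    return { mouseX * xScale, mouseY * yScale };
}
```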
OpenCV is only used for the image processing. If you know how to convert a cv::Mat into whatever format you need, you can integrate OpenCV with any GUI toolkit. For Qt, you can convert the cv::Mat to a QImage and then use it anywhere in the Qt SDK.
This example shows an OpenCV and Qt integration, including threading and webcam access: https://github.com/nickdademo/qt-opencv-multithreaded The webcam is accessed using OpenCV, and the received cv::Mat is converted to a QImage and rendered onto a QLabel. The code contains a MatToQImage() function which shows the conversion from cv::Mat to QImage. The integration is very simple, since everything is in C++.
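One caveat worth knowing about the single memcpy in onTick above: QImage pads every scanline to a 4-byte boundary, so when width * 3 is not a multiple of 4, a tightly packed cv::Mat has to be copied row by row instead. A sketch of the stride calculation and per-row copy (plain C++; qimage_stride and copy_rows are illustrative names, and the source is assumed tightly packed, i.e. the Mat's step equals width * 3):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Bytes per scanline for a 24-bit RGB image, rounded up to a 4-byte
// boundary -- the layout QImage uses for Format_RGB888 internally.
std::size_t qimage_stride(int width)
{
    return (static_cast<std::size_t>(width) * 3 + 3) & ~std::size_t{3};
}

// Copy a tightly packed RGB frame into a destination buffer whose rows
// are padded to the QImage stride, one row at a time.
void copy_rows(const std::uint8_t* src, std::uint8_t* dst,
               int width, int height)
{
    const std::size_t srcStride = static_cast<std::size_t>(width) * 3;
    const std::size_t dstStride = qimage_stride(width);
    for (int y = 0; y < height; ++y) {
        std::memcpy(dst + y * dstStride, src + y * srcStride, srcStride);
    }
}
```

When width * 3 already is a multiple of 4 (e.g. 640-pixel-wide frames), the padded and packed layouts coincide and the single memcpy in onTick happens to be safe.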