I have tried hard, but I cannot get single-point-of-interest detection working with the SURF algorithm in Emgu CV. I wrote the code for SURF below. The problem is that sometimes execution enters the if statement near my marked section "1", and sometimes it does not, depending on the image. Why does this happen? When it does enter, the homography is computed and is not null, and I can then draw a circle or line. But that has a problem too: the circle or rectangle is drawn at point (0,0) of the image. Please help me, I would be grateful.
public Image<Bgr, Byte> Draw(Image<Gray, byte> conditionalImage, Image<Gray, byte> observedImage, out long matchTime)
{
    //observedImage = observedImage.Resize(, INTER.CV_INTER_LINEAR);
    Stopwatch watch;
    HomographyMatrix homography = null;
    SURFDetector surfCPU = new SURFDetector(500, false);
    VectorOfKeyPoint modelKeyPoints;
    VectorOfKeyPoint observedKeyPoints;
    Matrix<int> indices;
    Matrix<byte> mask;
    int k = 2;
    double uniquenessThreshold = 0.8;

    //extract features from the object image
    modelKeyPoints = surfCPU.DetectKeyPointsRaw(conditionalImage, null);
    Matrix<float> modelDescriptors = surfCPU.ComputeDescriptorsRaw(conditionalImage, null, modelKeyPoints);

    watch = Stopwatch.StartNew();

    // extract features from the observed image
    observedKeyPoints = surfCPU.DetectKeyPointsRaw(observedImage, null);
    Matrix<float> observedDescriptors = surfCPU.ComputeDescriptorsRaw(observedImage, null, observedKeyPoints);

    BruteForceMatcher<float> matcher = new BruteForceMatcher<float>(DistanceType.L2);
    matcher.Add(modelDescriptors);

    indices = new Matrix<int>(observedDescriptors.Rows, k);
    using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
    {
        matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
        mask = new Matrix<byte>(dist.Rows, 1);
        mask.SetValue(255);
        Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
    }

    int nonZeroCount = CvInvoke.cvCountNonZero(mask);
    //My Section number = 1
    if (nonZeroCount >= 4)
    {
        nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
        if (nonZeroCount >= 4)
            homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
    }
    watch.Stop();

    //Draw the matched keypoints
    Image<Bgr, Byte> result = Features2DToolbox.DrawMatches(conditionalImage, modelKeyPoints, observedImage, observedKeyPoints,
        indices, new Bgr(Color.Blue), new Bgr(Color.Red), mask, Features2DToolbox.KeypointDrawType.DEFAULT);

    #region draw the projected region on the image
    if (homography != null)
    {
        //draw a rectangle along the projected model
        Rectangle rect = conditionalImage.ROI;
        PointF[] pts = new PointF[] {
            new PointF(rect.Left, rect.Bottom),
            new PointF(rect.Right, rect.Bottom),
            new PointF(rect.Right, rect.Top),
            new PointF(rect.Left, rect.Top)};
        homography.ProjectPoints(pts);

        PointF _circleCenter = new PointF();
        _circleCenter.X = pts[3].X + ((pts[2].X - pts[3].X) / 2);
        _circleCenter.Y = pts[3].Y + ((pts[0].Y - pts[3].Y) / 2);

        result.Draw(new CircleF(_circleCenter, 15), new Bgr(Color.Red), 10);
        result.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Bgr(Color.Cyan), 5);
    }
    #endregion

    matchTime = watch.ElapsedMilliseconds;
    return result;
}
modelKeyPoints = surfCPU.DetectKeyPointsRaw(conditionalImage, null);
After this line, modelKeyPoints holds all the interest points of the model image; the same goes for the observed image. Once you have the keypoints of both images, you need to establish correspondences between points in the observed image and points in the model image. To do that, you use a well-known algorithm:
using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
{
    matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
    mask = new Matrix<byte>(dist.Rows, 1);
    mask.SetValue(255);
    Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
}
Basically, this computes, for each point in the observed image, the 2 (k) nearest points in the model image. If the ratio between the closest and the second-closest distance is greater than 0.8 (uniquenessThreshold), the match is considered ambiguous and discarded. The mask is used both as input and as output: as input it marks the points that should be matched, and as output it marks the points that were matched correctly.
The number of non-zero values in the mask is then the number of matched points. That also explains why the if in your section "1" is only sometimes entered: estimating a homography requires at least 4 correspondences, so with images that produce fewer than 4 reliable matches the block is skipped.
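The uniqueness vote is essentially Lowe's ratio test. A minimal sketch of the idea in plain C# (no Emgu types; the method name and the 2-column distance array are illustrative, not the actual Emgu API):

    // Lowe's ratio test: keep a match only when the best distance is clearly
    // smaller than the second-best one, i.e. the ratio is below the threshold.
    static bool[] RatioTest(float[,] dist, double uniquenessThreshold)
    {
        int rows = dist.GetLength(0);
        bool[] mask = new bool[rows];
        for (int i = 0; i < rows; i++)
        {
            // dist[i, 0] = distance to the closest model descriptor,
            // dist[i, 1] = distance to the second-closest one.
            mask[i] = dist[i, 0] < uniquenessThreshold * dist[i, 1];
        }
        return mask;
    }

Counting the true entries of this mask corresponds to your cvCountNonZero(mask) call: it is the number of observed keypoints that matched a model keypoint unambiguously.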