Implementing the SURF Algorithm with C# + EmguCV


The examples on EmguCV's official site include an implementation of the SURF algorithm, but it uses GPU acceleration and is fairly complex to read. The official example also has no user interface, which makes it awkward to look at and inconvenient for loading images. So, as a learning exercise, I modified the official example: I removed the GPU-accelerated part and added a display interface, which makes it much friendlier to use.

I built this in VS2012 with the 2.9 Alpha version of EmguCV. The display interface consists of two forms; the first one is shown below:

The form has two PictureBox controls: one displays the source image to be matched, the other displays the target image.

Correspondingly there are three Button controls: the first opens the source image, the second opens the target image, and the third runs the match. Clicking the third button performs the matching and displays the matched image in a new form. That new form is very simple: just a bare form onto which we draw the image in its Paint event. The second form looks like this:

button1 opens the source image; its code is as follows:

    private void buttonSrc_Click(object sender, EventArgs e)
    {
        // Create the open-file dialog
        OpenFileDialog opnDlg = new OpenFileDialog();
        opnDlg.Filter = "Image files|*.bmp;*.jpg;*.jpeg;*.png|All files|*.*";
        // Set the title of the dialog
        opnDlg.Title = "Open Source Image";
        opnDlg.ShowHelp = true;
        if (opnDlg.ShowDialog() == DialogResult.OK)
        {
            curFileNameSrc = opnDlg.FileName;
            try
            {
                curBitmapSrc = new Bitmap(curFileNameSrc);
                pictureBoxSrc.Image = curBitmapSrc;
            }
            catch
            {
                MessageBox.Show("Failed to open the image file.");
            }
        }
    }

Button2 opens the target image; its code is as follows:

    private void buttonDst_Click(object sender, EventArgs e)
    {
        // Create the open-file dialog
        OpenFileDialog opnDlg = new OpenFileDialog();
        opnDlg.Filter = "Image files|*.bmp;*.jpg;*.jpeg;*.png|All files|*.*";
        // Set the title of the dialog
        opnDlg.Title = "Open Target Image";
        opnDlg.ShowHelp = true;
        if (opnDlg.ShowDialog() == DialogResult.OK)
        {
            curFileNameDst = opnDlg.FileName;
            try
            {
                curBitmapDst = new Bitmap(curFileNameDst);
                pictureBoxDst.Image = curBitmapDst;
            }
            catch
            {
                MessageBox.Show("Failed to open the image file.");
            }
        }
    }

Button3 performs the matching; its code is as follows:

    private void buttonMatch_Click(object sender, EventArgs e)
    {
        if (curBitmapDst != null && curBitmapSrc != null)
        {
            long matchTime;
            Image<Bgr, Byte> srcImg = new Image<Bgr, Byte>(curBitmapSrc);
            Image<Gray, Byte> srcImg1 = srcImg.Convert<Gray, Byte>();
            Image<Bgr, Byte> dstImg = new Image<Bgr, Byte>(curBitmapDst);
            Image<Gray, Byte> dstImg1 = dstImg.Convert<Gray, Byte>();
            Matching Match = new Matching();
            Image<Bgr, Byte> result = Match.Draw(srcImg1, dstImg1, out matchTime);
            Bitmap Img = result.ToBitmap();
            Form2 Form = new Form2(Img);
            Form.ShowDialog();
        }
        else
        {
            MessageBox.Show("Please load both the source and target images first.");
        }
    }

For the matching itself I created a class named Matching that implements the actual matching process; the button handler above only instantiates this class and calls its Draw method with the source and target images. The code of the new Matching class is as follows:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using System.Diagnostics;
    using System.Drawing;
    using Emgu.CV;
    using Emgu.Util;
    using Emgu.CV.Structure;
    using Emgu.CV.Util;
    using Emgu.CV.Features2D;

    namespace test10
    {
        class Matching
        {
            public Matching() { }

            public void FindMatch(Image<Gray, Byte> modelImage, Image<Gray, Byte> observedImage,
                out long matchTime, out VectorOfKeyPoint modelKeyPoints, out VectorOfKeyPoint observedKeyPoints,
                out Matrix<int> indices, out Matrix<byte> mask, out HomographyMatrix homography)
            {
                int k = 2;
                double uniquenessThreshold = 0.8;
                SURFDetector surfCPU = new SURFDetector(500, false);
                Stopwatch watch;
                homography = null;

                watch = Stopwatch.StartNew();

                // Extract features from the model (source) image
                modelKeyPoints = new VectorOfKeyPoint();
                Matrix<float> modelDescriptors = surfCPU.DetectAndCompute(modelImage, null, modelKeyPoints);

                // Extract features from the observed (target) image
                observedKeyPoints = new VectorOfKeyPoint();
                Matrix<float> observedDescriptors = surfCPU.DetectAndCompute(observedImage, null, observedKeyPoints);

                BruteForceMatcher<float> matcher = new BruteForceMatcher<float>(DistanceType.L2);
                matcher.Add(modelDescriptors);

                indices = new Matrix<int>(observedDescriptors.Rows, k);
                using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
                {
                    matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
                    mask = new Matrix<byte>(dist.Rows, 1);
                    mask.SetValue(255);
                    Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
                }

                int nonZeroCount = CvInvoke.cvCountNonZero(mask);
                if (nonZeroCount >= 4)
                {
                    nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
                    if (nonZeroCount >= 4)
                        homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
                }

                watch.Stop();
                matchTime = watch.ElapsedMilliseconds;
            }

            public Image<Bgr, Byte> Draw(Image<Gray, Byte> modelImage, Image<Gray, Byte> observedImage, out long matchTime)
            {
                HomographyMatrix homography;
                VectorOfKeyPoint modelKeyPoints;
                VectorOfKeyPoint observedKeyPoints;
                Matrix<int> indices;
                Matrix<byte> mask;

                FindMatch(modelImage, observedImage, out matchTime, out modelKeyPoints, out observedKeyPoints,
                    out indices, out mask, out homography);

                // Draw the matched keypoints
                Image<Bgr, Byte> result = Features2DToolbox.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
                    indices, new Bgr(255, 255, 255), new Bgr(255, 255, 255), mask, Features2DToolbox.KeypointDrawType.DEFAULT);

                #region draw the projected region on the image
                if (homography != null)
                {
                    // Draw a rectangle along the projected model
                    Rectangle rect = modelImage.ROI;
                    PointF[] pts = new PointF[] {
                        new PointF(rect.Left, rect.Bottom),
                        new PointF(rect.Right, rect.Bottom),
                        new PointF(rect.Right, rect.Top),
                        new PointF(rect.Left, rect.Top)};
                    homography.ProjectPoints(pts);

                    result.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Bgr(Color.Red), 5);
                }
                #endregion

                return result;
            }
        }
    }
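The core of FindMatch is a brute-force k-nearest-neighbour search under L2 distance followed by the uniqueness (ratio) vote. To make that logic concrete outside of EmguCV, here is a minimal, library-free sketch in Python with NumPy; the descriptor arrays are random stand-ins for real SURF descriptors, and the function names are my own, not EmguCV's:

```python
import numpy as np

def knn_match(model_desc, observed_desc, k=2):
    """For each observed descriptor, return the indices and L2 distances
    of its k nearest model descriptors (brute force, like BruteForceMatcher)."""
    # Pairwise squared L2 distances, shape (n_observed, n_model)
    d2 = ((observed_desc[:, None, :] - model_desc[None, :, :]) ** 2).sum(axis=2)
    order = np.argsort(d2, axis=1)[:, :k]               # k best model indices per row
    dist = np.sqrt(np.take_along_axis(d2, order, axis=1))
    return order, dist

def vote_for_uniqueness(dist, uniqueness_threshold=0.8):
    """Keep a match only if the best distance is clearly smaller than the
    second best, mirroring Features2DToolbox.VoteForUniqueness."""
    return dist[:, 0] < uniqueness_threshold * dist[:, 1]

rng = np.random.default_rng(0)
model = rng.random((50, 64), dtype=np.float32)          # SURF descriptors are 64-D
# First 10 observed descriptors are near-copies of model[0..9]; the rest are noise
observed = np.vstack([model[:10] + 0.01,
                      rng.random((20, 64), dtype=np.float32)])

indices, dist = knn_match(model, observed)
mask = vote_for_uniqueness(dist)
print("unique matches among the 10 planted copies:", int(mask[:10].sum()))
```

The planted near-copies survive the ratio test because their best distance is tiny while the second-best is that of an unrelated random descriptor; most of the pure-noise rows are rejected, which is exactly what the mask achieves before the homography vote.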

The code in Form2 (the second form) is as follows:

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using System.Windows.Forms;

    namespace test10
    {
        public partial class Form2 : Form
        {
            private Bitmap Img1;

            public Form2(Bitmap Img)
            {
                InitializeComponent();
                Img1 = Img;
            }

            private void Form2_Paint(object sender, PaintEventArgs e)
            {
                Graphics img = e.Graphics;
                if (Img1 != null)
                {
                    img.DrawImage(Img1, 0, 0, Img1.Width, Img1.Height);
                    //img.DrawImage(Img1, new Rectangle(this.AutoScrollPosition.X,
                    //    this.AutoScrollPosition.Y, (int)(Img1.Width), (int)(Img1.Height)));
                }
            }
        }
    }
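As a side note on what homography.ProjectPoints does in the Draw method above: it maps each corner of the model image through the 3×3 homography in homogeneous coordinates and divides by w. A small NumPy sketch (my own illustrative function, not EmguCV's API) makes the arithmetic explicit:

```python
import numpy as np

def project_points(H, pts):
    """Apply a 3x3 homography H to an array of (x, y) points:
    lift to homogeneous coordinates, multiply, divide by w."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((len(pts), 1))
    homo = np.hstack([pts, ones]) @ H.T     # (n, 3) homogeneous result
    return homo[:, :2] / homo[:, 2:3]       # perspective divide by w

# A pure-translation homography moves every corner by (tx, ty) = (5, -3)
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
corners = [(0, 0), (100, 0), (100, 50), (0, 50)]   # model image rectangle
projected = project_points(H, corners)
print(projected)   # each corner shifted by (+5, -3)
```

With a real estimated homography the w row is generally not (0, 0, 1), and the perspective divide is what turns the model rectangle into the skewed quadrilateral drawn on the result image.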

Good, that completes the example; let's run the program and see. Here the source and target images have been loaded.

Below is the matching result:

This is the first example I have posted, so please forgive any rough edges.

If you would like the program for this example, you can download it here:

http://yunpan.cn/QaHyB8y8CGCIM (extraction code: e350)