Aerial Target Detection in Infrared Image Sequences Based on SURF Feature Matching and Kalman Filtering (with MATLAB code)


    Preface

    My undergraduate thesis work has finally wrapped up, so I have some time to share part of it here. What follows is my work on target detection in image sequences (video) using the SURF algorithm and Kalman filtering. I will not go into the theory of SURF or the Kalman filter itself; there are plenty of clear explanations of both available online.

    Experimental Images

    Since only animated GIFs under 5 MB can be uploaded and the video is too long, two screenshots are shown here instead. The infrared camera is stationary, and the aircraft target moves against a relatively clear sky background; during the sequence it releases decoy flares that occlude the target.

    Algorithm Flow

    The main steps of the algorithm are as follows: read a frame of the image sequence, extract SURF feature points, and match them against the target template. When the number of matched points is at least 3, the target is considered successfully detected, and its current position and velocity information is fed to the Kalman filter for an online update. When the number of matched points is less than 3, the target is considered undetected or occluded; in this case the Kalman filter predicts the target position from its previous motion state, and that prediction is taken as the detection result for the frame. Finally, the detection result is marked on the frame, which allows the algorithm to overcome target occlusion in the image sequence. The matching-point threshold should be chosen according to how many SURF feature points can be extracted from your own template image.
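    As a rough illustration of this per-frame decision, here is a minimal MATLAB sketch. The names kalmanFilter, boxPairs and matchedScenePoints mirror those used in the full code below; updateTrack itself is a hypothetical helper (it would live in its own updateTrack.m file) and is not part of the original script.

    function trackedLocation = updateTrack(kalmanFilter, boxPairs, matchedScenePoints)
    % Hypothetical helper: one frame of the detect-or-predict decision.
    if size(boxPairs, 1) >= 3
        % Enough SURF matches: treat one matched scene point as the measurement
        % and use it to correct the Kalman filter state.
        detectedLocation = matchedScenePoints.Location(2, :);
        predict(kalmanFilter);
        trackedLocation = correct(kalmanFilter, detectedLocation);
    else
        % Too few matches (target lost or occluded): fall back to the prediction.
        trackedLocation = predict(kalmanFilter);
    end
    end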

    Algorithm Results

    Two frames are captured from the results, showing that the proposed algorithm correctly locates the target both when it is not occluded by the decoys and when it is. The third figure plots the target trajectory obtained by accumulating the detected positions over the whole image sequence (video); the horizontal and vertical axes give the target position in the image (in pixels). For the same image sequence, about 16 seconds long, running in MATLAB R2016b on a Windows 10 machine with 12 GB of RAM and an Intel® Core™ i5-6300HQ CPU at 2.30 GHz, the proposed algorithm produces the detection results in about 44.358 s, considerably faster than the 71.330 s needed by a combined algorithm of adaptive background subtraction based on a Gaussian mixture model and Kalman filtering.
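    The timing code is not shown in the listing below; one simple way to obtain such a runtime figure in MATLAB is to wrap the main processing loop with tic/toc, along these lines (a sketch, not part of the original script):

    tic;                          % start the timer before the main loop
    % ... run the per-frame detection loop over all frames here ...
    elapsedSeconds = toc;         % elapsed wall-clock time in seconds
    fprintf('Detection took %.3f s\n', elapsedSeconds);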

    Code

    clc; clear all;
    frame = []; detectedLocation = []; trackedLocation = []; label = '';

    % Set the Kalman filter parameters below; they need to be tuned for your own data.
    % Motion model: 'ConstantVelocity' for constant-velocity motion,
    % 'ConstantAcceleration' for constant-acceleration motion (adds an acceleration state).
    param.motionModel = 'ConstantVelocity';
    % Initial location: 'Same as first detection' means the filter is
    % initialised at the first detected target position.
    param.initialLocation = 'Same as first detection';
    % Initial estimate error: e.g. 1E5 * ones(1, 2) for constant velocity,
    % 1E5 * ones(1, 3) for constant acceleration.
    param.initialEstimateError = 1E5 * ones(1, 2);
    % Motion noise: e.g. [25, 10] for constant velocity, [25, 10, 1] for constant acceleration.
    param.motionNoise = [25, 10];
    % Measurement noise: e.g. 12500 for constant velocity, 50 for constant acceleration.
    param.measurementNoise = 12500;
    % Segmentation threshold: e.g. 0.05 for constant velocity, 0.1 for constant acceleration.
    param.segmentationThreshold = 0.05;

    % Path of the image sequence (video).
    video = VideoReader('D:\Python\work-exercises\GraduationDesign\test.mp4');
    numFrames = video.NumberOfFrames;
    videoPlayer = vision.VideoPlayer('Position', [100, 100, 500, 400]);

    % Path of the SURF target template image.
    utilities.boxImage = rgb2gray(imread('D:\Python\work-exercises\GraduationDesign\airplane_std.jpg'));
    utilities.boxPoints = detectSURFFeatures(utilities.boxImage);
    [boxFeatures, ~] = extractFeatures(utilities.boxImage, utilities.boxPoints);
    utilities.boxFeatures = boxFeatures;

    accumulatedImage = 0;
    accumulatedDetections = [];
    accumulatedPredections = [];
    accumulatedTrackings = [];
    isTrackInitialized = false;

    % 510 is the number of frames in this video (numFrames holds the same value).
    for k = 1:510
        frame = read(video, k);
        grayImage = rgb2gray(frame);
        % Only search the upper 45% of the frame (sky region) for SURF points.
        scenePoints = detectSURFFeatures(grayImage, 'ROI', [1 1 size(frame, 2) 0.45 * size(frame, 1)]);
        [sceneFeatures, scenePoints] = extractFeatures(grayImage, scenePoints);
        % SURF matching threshold is set here.
        boxPairs = matchFeatures(utilities.boxFeatures, sceneFeatures, 'MatchThreshold', 10, 'Unique', true);
        matchedBoxPoints = utilities.boxPoints(boxPairs(:, 1), :);
        matchedScenePoints = scenePoints(boxPairs(:, 2), :);

        if size(boxPairs, 1) < 3
            detectedLocation = [];
            isObjectDetected = false;
        else
            % Take the location of one matched scene point as the target position.
            detectedLocation = matchedScenePoints.Location(2, :);
            isObjectDetected = true;
        end

        if ~isTrackInitialized
            if isObjectDetected
                initialLocation = detectedLocation;
                kalmanFilter = configureKalmanFilter(param.motionModel, initialLocation, ...
                    param.initialEstimateError, param.motionNoise, param.measurementNoise);
                isTrackInitialized = true;
                trackedLocation = correct(kalmanFilter, detectedLocation);
                label = 'Initial';
            else
                trackedLocation = [];
                label = '';
            end
        else
            if isObjectDetected
                % SURF matching found the target: correct the filter with the measurement.
                predict(kalmanFilter);
                trackedLocation = correct(kalmanFilter, detectedLocation);
                label = 'Matched';
            else
                % SURF matching failed: predict the target position with the Kalman filter.
                trackedLocation = predict(kalmanFilter);
                accumulatedPredections = [accumulatedPredections; trackedLocation];
                label = 'Predicted';
            end
        end

        accumulatedImage = max(accumulatedImage, frame);
        accumulatedDetections = [accumulatedDetections; detectedLocation];
        accumulatedTrackings = [accumulatedTrackings; trackedLocation];

        combinedImage = frame;
        if ~isempty(trackedLocation)
            shape = 'rectangle';
            region = [trackedLocation(1) - 48, trackedLocation(2) - 20, 67, 41];
            combinedImage = insertObjectAnnotation(frame, shape, region, {label}, 'Color', 'yellow');
        end
        step(videoPlayer, combinedImage);

        % Save the detection process as an animated GIF.
        [I_CI, map] = rgb2ind(combinedImage, 256);
        if k == 1
            imwrite(I_CI, map, 'test1.gif', 'gif', 'Loopcount', inf, 'DelayTime', 0.077);
        else
            imwrite(I_CI, map, 'test1.gif', 'gif', 'WriteMode', 'append', 'DelayTime', 0.077);
        end

        % Plot the detection results as coordinate charts (also saved as a GIF).
        figure(2); set(gcf, 'color', 'w', 'position', [50, 30, 1280, 720]);
        subplot(3, 1, 1); title('Target trajectory detected by SURF feature matching'); set(gca, 'YDir', 'reverse'); axis([350, 800, 220, 300]); hold on;
        subplot(3, 1, 2); title('Target trajectory predicted by the Kalman filter'); set(gca, 'YDir', 'reverse'); axis([350, 800, 220, 300]); hold on;
        subplot(3, 1, 3); title('Target trajectory from all detection results'); set(gca, 'YDir', 'reverse'); axis([350, 800, 220, 300]); hold on;
        % Titles and axes are set once above; only the new point is added here.
        if isObjectDetected
            subplot(3, 1, 1); plot(trackedLocation(1), trackedLocation(2), 'r+');
            subplot(3, 1, 3); plot(trackedLocation(1), trackedLocation(2), 'r+');
        elseif ~isempty(trackedLocation)   % guard against frames before the track is initialised
            subplot(3, 1, 2); plot(trackedLocation(1), trackedLocation(2), 'b+');
            subplot(3, 1, 3); plot(trackedLocation(1), trackedLocation(2), 'b+');
        end
        drawnow;
        F = getframe(gcf);
        I_F = frame2im(F);
        [I_M, I_map] = rgb2ind(I_F, 256);
        if k == 1
            imwrite(I_M, I_map, 'data1.gif', 'gif', 'Loopcount', inf, 'DelayTime', 0.077);
        else
            imwrite(I_M, I_map, 'data1.gif', 'gif', 'WriteMode', 'append', 'DelayTime', 0.077);
        end
    end
    uiscopes.close('All');

    % Overlay all detection results on one image and display it.
    figure;
    imshow(accumulatedImage);
    hold on;
    plot(accumulatedDetections(:, 1), accumulatedDetections(:, 2), 'k+');
    if ~isempty(accumulatedTrackings)
        plot(accumulatedTrackings(:, 1), accumulatedTrackings(:, 2), 'r-o');
        legend('Detection', 'Tracking');
    end
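    A small usage note: read(video, k) and the NumberOfFrames property work in MATLAB R2016b, but newer releases generally recommend the streaming interface instead. If you run the script on a recent version, the frame loop could be written along these lines (a sketch only, not part of the original code):

    video = VideoReader('D:\Python\work-exercises\GraduationDesign\test.mp4');
    k = 0;
    while hasFrame(video)          % stream frames instead of indexing by frame number
        frame = readFrame(video);
        k = k + 1;
        % ... per-frame SURF matching and Kalman filtering as in the script above ...
    end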