Abstract
To address the problems of few matched points, low repeatability, and limited accuracy when traditional and deep learning methods are applied to oblique image matching, this paper proposes a deep learning-based image matching method for oblique aerial photogrammetry. First, the overlapping areas of the oblique images are computed from Position and Orientation System (POS) information, and the geometric deformation caused by large viewpoint changes and varying scene depth is compensated by a perspective transformation, so that the rectified overlap images retain only small scale and rotation differences. Second, a multi-scale feature detection network, trained in two stages, infers Gaussian heat maps on the rectified images; robust sub-pixel feature points are detected as extrema in the scale space of the heat maps, which suppresses the influence of image scale changes, and the scale and principal orientation of each feature point are obtained from a pre-trained self-supervised orientation network. In the description stage, a scale- and rotation-invariant GeoDesc base descriptor is built from the learned feature locations and orientations, and the descriptor is further enhanced with geometric and visual context information, which helps describe oblique images with large viewpoint changes and poor texture. Finally, initial matches are obtained by a bidirectional ratio purification method, which keeps gross errors in the initial set to a minimum; remaining mismatches are removed by fundamental-matrix-based Random Sample Consensus (RANSAC) and a geometry-based graph constraint, so that the accuracy of the final matches is reliable for bundle block adjustment. Two typical rural and urban oblique image sets from the ISPRS oblique photogrammetry datasets are used to analyze the matching results qualitatively and quantitatively. The experiments show that the proposed method obtains many uniformly distributed matches on oblique images with large viewpoint changes and poor texture, and, compared with SIFT, Affine-SIFT (ASIFT), SuperPoint, GeoDesc, and ContextDesc, it extracts more robust feature points in the scale space of the Gaussian heat maps, which improves both the repeatability and the accuracy of matching.
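The final purification stage named in the abstract (a bidirectional ratio test followed by fundamental-matrix RANSAC) is standard enough to sketch. The snippet below is an illustrative Python/OpenCV sketch under assumptions, not the authors' implementation: the keypoint coordinates and descriptors are assumed to come from the paper's detection and description networks, the `ratio` and `ransac_thresh` parameters are placeholders, and the geometry-based graph-constraint filtering is omitted.

```python
# Illustrative sketch (not the authors' code): bidirectional ratio test to form
# initial matches, then fundamental-matrix RANSAC to reject remaining outliers.
# kpts_* are (N, 2) arrays of (x, y); desc_* are (N, D) float32 descriptor arrays
# assumed to come from the detection/description networks described in the paper.
import cv2
import numpy as np

def ratio_test(desc_a, desc_b, ratio=0.8):
    """One-directional Lowe ratio test; returns {queryIdx: trainIdx}."""
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc_a, desc_b, k=2)
    good = {}
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good[pair[0].queryIdx] = pair[0].trainIdx
    return good

def purify_matches(kpts_a, desc_a, kpts_b, desc_b, ratio=0.8, ransac_thresh=1.0):
    # Bidirectional (cross-checked) ratio test: keep only mutual matches,
    # which keeps gross errors in the initial set to a minimum.
    fwd = ratio_test(desc_a, desc_b, ratio)
    bwd = ratio_test(desc_b, desc_a, ratio)
    pairs = [(i, j) for i, j in fwd.items() if bwd.get(j) == i]
    if len(pairs) < 8:
        return []

    pts_a = np.float32([kpts_a[i] for i, _ in pairs])
    pts_b = np.float32([kpts_b[j] for _, j in pairs])

    # RANSAC on the fundamental matrix removes matches that violate epipolar geometry.
    _, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, ransac_thresh, 0.999)
    if mask is None:
        return pairs
    return [p for p, keep in zip(pairs, mask.ravel()) if keep]
```

The mutual-consistency requirement is what makes the initial match set clean enough for RANSAC to estimate a reliable fundamental matrix; the graph constraint described in the paper would be applied afterwards as an additional geometric filter.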
Authors
YANG Jiabin; FAN Dazhao; YANG Xingbin; JI Song; LEI Rong (Institute of Geospatial Information, Information Engineering University, Zhengzhou 450001, China; SenseTime Research, Beijing 100190, China)
Source
Journal of Geo-information Science (《地球信息科学学报》), indexed in CSCD and the Peking University Core Journals list
2021, No. 10, pp. 1823-1837 (15 pages)
Funding
National Major Project on High-Resolution Earth Observation System (42-Y30B04-9001-19/21)
National Natural Science Foundation of China (41971427)
Keywords
oblique photogrammetry
deep learning
perspective transformation
Gaussian heat map
feature description