{"title":"RLSAK: A recursive least square approximation with k-means for transformation model estimation in image registration techniques","authors":"Sistu Ganesh, Nivedita Tripathi, Gineesh Sukumaran","doi":"10.1109/ICHCI-IEEE.2013.6887775","DOIUrl":null,"url":null,"abstract":"This paper aims to present a new transformation model estimation frame work for feature based image registration techniques. In general Feature based image registration techniques involves Feature detection, matching, transformation model estimation, image resampling and transformation. Very little work has been done in the area of transformation model estimation compared to wide range of techniques available in Feature Detection and matching. While RANSAC (Random Sample Consensus) and LMS (Least Median of Squares) are the most commonly used methods for robust global transformation estimation in affine and perspective transformations, research is going on for the methods that would do well in the presence of a very high number of outlier data and overcome the disadvantages of these state of art techniques. This motivated us to develop a new algorithm which not only uses the spatial relations between the feature points but also make use of the image intensity profiles for robust model estimation even in presence of outliers. In current approach, first the Euclidean distances created by the matched feature points is clustered and matching error for each cluster is computed using intensity information. The feature point pairs of the cluster having minimum error are retained. Now by applying mean filtering followed by recursive least square approximation, the transformation model is estimated. The efficiency of the proposed methodology is demonstrated on different datasets under different transformations & for different areas of application. The method has shown significant improvements in accuracy compared to other existing techniques even in the presence of large number of outliers.","PeriodicalId":419263,"journal":{"name":"2013 International Conference on Human Computer Interactions (ICHCI)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 International Conference on Human Computer Interactions (ICHCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICHCI-IEEE.2013.6887775","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
This paper presents a new transformation model estimation framework for feature-based image registration techniques. In general, feature-based image registration involves feature detection, feature matching, transformation model estimation, and image resampling and transformation. Compared with the wide range of techniques available for feature detection and matching, relatively little work has addressed transformation model estimation. While RANSAC (Random Sample Consensus) and LMS (Least Median of Squares) are the most commonly used methods for robust global estimation of affine and perspective transformations, research continues on methods that perform well in the presence of a very high proportion of outlier data and overcome the disadvantages of these state-of-the-art techniques. This motivated us to develop a new algorithm that uses not only the spatial relations between the feature points but also the image intensity profiles, allowing robust model estimation even in the presence of outliers. In the proposed approach, the Euclidean distances defined by the matched feature points are first clustered, and the matching error of each cluster is computed using intensity information. The feature point pairs belonging to the cluster with the minimum error are retained. The transformation model is then estimated by applying mean filtering followed by recursive least square approximation. The efficiency of the proposed methodology is demonstrated on different datasets, under different transformations, and for different areas of application. The method shows significant improvements in accuracy compared with other existing techniques, even in the presence of a large number of outliers.
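A minimal sketch of the pipeline the abstract describes, under assumptions the abstract does not specify: k-means is used to cluster the displacement (Euclidean-distance) vectors of the matched points, a sum-of-absolute-differences over intensity patches stands in for the intensity-based matching error, and an ordinary batch least-squares affine fit stands in for the recursive least square update. The function names (e.g. estimate_affine_rlsak, patch_error) and the choice of k are illustrative, not the authors' implementation.

```python
# Illustrative sketch of the RLSAK pipeline described in the abstract.
# Assumptions (not from the paper): k-means on match displacement vectors,
# SAD of intensity patches as the matching error, and a batch least-squares
# affine fit in place of the recursive update.
import numpy as np
from sklearn.cluster import KMeans

def patch_error(img_a, img_b, pa, pb, half=4):
    """Matching error for one correspondence: mean absolute intensity difference of patches."""
    ya, xa = int(pa[1]), int(pa[0])
    yb, xb = int(pb[1]), int(pb[0])
    A = img_a[ya - half:ya + half + 1, xa - half:xa + half + 1].astype(float)
    B = img_b[yb - half:yb + half + 1, xb - half:xb + half + 1].astype(float)
    if A.shape != B.shape or A.size == 0:   # patch fell outside the image
        return np.inf
    return np.abs(A - B).mean()

def estimate_affine_rlsak(pts_src, pts_dst, img_src, img_dst, k=3):
    """Cluster correspondences, keep the lowest-error cluster, fit an affine model."""
    # 1. Cluster the displacement vectors created by the matched feature points.
    disp = pts_dst - pts_src
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(disp)

    # 2. Intensity-based matching error per cluster; retain the best cluster.
    errs = []
    for c in range(k):
        idx = labels == c
        if not idx.any():
            errs.append(np.inf)
            continue
        errs.append(np.mean([patch_error(img_src, img_dst, p, q)
                             for p, q in zip(pts_src[idx], pts_dst[idx])]))
    keep = labels == int(np.argmin(errs))
    src, dst = pts_src[keep], pts_dst[keep]

    # 3. Mean filtering of the retained correspondences (trim residual outliers).
    res = (dst - src) - (dst - src).mean(axis=0)
    d = np.linalg.norm(res, axis=1)
    inliers = d <= d.mean() + d.std()
    src, dst = src[inliers], dst[inliers]

    # 4. Least-squares affine fit: [x', y'] = [x, y, 1] @ M.
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T  # 2x3 affine transformation matrix
```

In this sketch, pts_src and pts_dst are N x 2 arrays of matched keypoint coordinates and img_src, img_dst are grayscale images; swapping the batch lstsq for a per-point recursive least square update would follow the paper's formulation more closely.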