{"title":"Image reconstruction using the reconfiguration technique","authors":"Etienne Aubin Mbe Mbock","doi":"10.1109/AIPR.2015.7444543","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444543","url":null,"abstract":"Image reconstruction has been under study for decades and plays an important role in science, technology and security. In this paper, we propose an identification method for coded images. This method is different from conventional object identification frameworks, not in the sense of the extracted geometrical features but in the sense of the reconfiguration concept used. The goal of reconfiguration in this study is to reach a point that will make object identification possible. There have been research projects previously conducted on this topic and most of them are discretization-based. Our method, based on the reconfiguration concept, is a technology-based method that allows object identification. Our experimentation shows that the object can be identified within 8 iterations of the algorithm, beyond which no additional accuracy is achieved. This identification method, based on reconfiguration, extends the existing ones and enhances state-of-the-art object identification methods.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127120158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bandelet transformation based image registration","authors":"Adam Lutz, K. Grace, Neal Messer, Soundararajan Ezekiel, Erik Blasch, M. Alford, A. Bubalo, Maria Scalzo-Cornacchia","doi":"10.1109/AIPR.2015.7444530","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444530","url":null,"abstract":"Perfect image registration is an unsolved challenge that has been attempted in a multitude of different ways. This paper presents an approach for single-modal, multi-view registration of aerial imagery data that uses bandelets in the preprocessing phase to extract key geometric features and limit the amount of details in the image that must be considered during the feature matching process. Applying the bandelet decomposition on both the reference and target images before feature extraction will limit the control point selection process to only those points with the most relevant geometric data. The approach uses a multi-scale approach to estimate a transformation that converges to an optimal solution as well as reduce the computation time for real-time image registration. The bandelet basis also provides for a more effective feature (e.g. corner) detection and extraction method because it determines the geometric flow and allows for shifted patches in the orthogonal direction of the geometric flow. Theoretically the bandelet results in less false positives and better detection rates than existing methods. The Bandelet-based Image Registration (BIR) method has applications in image fusion, change detection, object recognition, autonomous navigation, and target tracking.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127945947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised approach for object matching using Speeded Up Robust Features","authors":"A. Vardhan, N. Verma, R. K. Sevakula, A. Salour","doi":"10.1109/AIPR.2015.7444541","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444541","url":null,"abstract":"Autonomous object counting system is of great use in retail stores, industries and also in research processes. In this paper, a Speeded Up Robust Feature (SURF) based robust algorithm for identifying, counting and locating all instances of a defined object in any image, has been proposed. The defined object is referred to as prototype and the image in which one wishes to count the prototype is referred to as scene image. The algorithm starts by detecting the interest points for SURF in both, prototype and scene images. The SURF points on prototype are first clustered using density based clustering; then SURF points in each cluster are matched with those in scene image. The SURF points in scene image that have been matched w.r.t. a single cluster, are clustered using the same clustering algorithm. Each cluster formed in scene image represents an instance of prototype object in the image. Homography transforms are further used to give exact location and span of each prototype object in the scene image. Once the span of each prototype is defined, SURF points within this span are matched with the prototype image and then Homography transform is once again applied while considering the newly matched SURF points; thus eliminating noisy detection/s of prototype. While the same process is repeated with each cluster, a novel centroid based algorithm for merging repeated detections of same prototype instance is used. Carrying the benefits of SURF and Homography transforms, the algorithm is capable of detecting all prototype instances present in scene image, irrespective of their scale and orientation. The complete algorithm has also been integrated into a desktop application, which uses camera feed to report the real time count of the prototype in the scene image.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128140747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Online signature verification using the entropy function","authors":"M. Hanmandlu, Farrukh Sayeed, S. Vasikarla","doi":"10.1109/AIPR.2015.7444522","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444522","url":null,"abstract":"This paper proposes a new online signature verification system. We have developed features based on Hanman-Anirban entropy function. We have used the Inner Product Classifier (IPC) for the verification of the signatures. The performance of signature verification has been found to be promising.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"249 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132200896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Suspicious Face Detection based on Eye and other facial features movement monitoring","authors":"Chandan Tiwari, M. Hanmandlu, S. Vasikarla","doi":"10.1109/AIPR.2015.7444523","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444523","url":null,"abstract":"Visual surveillance and security applications were never more important than now more so due to the overwhelming ever-growing threat of terrorism. Till date the large scale video surveillance systems mostly work as a passive system in which the videos are simply stored without being monitored. Such system will be useful for post event investigation. In order to make a system that is capable of real-time monitoring, we need to develop algorithms which can analyze and understand the scene that is being monitored. Generally, humans express their intention explicitly through facial expressions, speech, eye movement, and hand gesture. According to cognitive visiomotor theory, the human eye movements are rich source of information about the human intention and behavior. If we monitor the eye movement of a person, we will be able to describe him as an abnormal suspicious person or a normal person. We track his/her Eyes and based upon the eye movement in successive frames of the input videos using the Non-linear Entropy of eyes. Results of our experiments show that Non-linear Entropy of Eyes of an abnormal person is much higher than the eye's entropy of any normal person.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"755 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133846560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Runway assessment via remote sensing","authors":"Lalitha Dabbiru, Pan Wei, A. Harsh, Julie White, J. Ball, J. Aanstoos, P. Donohoe, J. Doyle, Sam Jackson, J. Newman","doi":"10.1109/AIPR.2015.7444545","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444545","url":null,"abstract":"Airport pavements are constructed to provide adequate support for the loads and traffic volume imposed by aircrafts. One aspect of pavement evaluation is the pavement condition which is determined by the types and extent of distresses. These include cracking, rutting, weathering, and others that may affect pavement surface roughness and the potential for FOD (Foreign Object Debris). Pavement evaluations are necessary to assess the ability to safely operate aircraft on an airfield. The purpose of this study is to explore the potential use of microwave remote sensing to assess the pavement surface roughness. Radar backscatter responds to surface roughness as well as dielectric constant. The resulting changes in backscatter can convey information about the degree of cracking and surface roughness of the runway. In this study, we develop a relation between the Terrain Ruggedness Index (TRI) of the runway and radar backscatter magnitudes. Radar data from the TerraSAR-X satellite is used, along with airborne LiDAR data (30 cm spacing). Modest linear correlation was found between the vertical co-polarization channel of the radar data and TRI values computed in 5 by 5 pixel windows from the LiDAR elevation data. Over four different test areas on the runway, the coefficients of determination ranged from 0.12 to 0.46.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124119936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Applying attributes to improve human activity recognition","authors":"D. Tahmoush, Claire Bonial","doi":"10.1109/AIPR.2015.7444553","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444553","url":null,"abstract":"Activity and event recognition from video has utilized low-level features over higher-level text-based class attributes and ontologies because they traditionally have been more effective on small datasets. However, by including human knowledge-driven associations between actions and attributes while recognizing the lower-level attributes with their temporal relationships, we can learn a much greater set of activities as well as improve low-level feature-based algorithms by incorporating an expert knowledge ontology. In an event ontology, events can be broken down into actions, and these can be decomposed further into attributes. For example, throwing events can include throwing of stones or baseballs with the object being relocated from a hand through the air to a location of interest. The throwing can be broken down into the many physical attributes that can be used to describe the motion like BodyPartsUsed = Hands, BodyPartArticulation-Arm = OneArmRaisedOverHead, and many others. Building general attributes from video and merging them into an ontology for recognition allows significant reuse for the development of activity and event classifiers. Each activity or event classifier is composed of interacting attributes the same way sentences are composed of interacting letters to create a complete language.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123163277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine vision algorithms for robust animal species identification","authors":"C. Cohen, D. Haanpaa, James P. Zott","doi":"10.1109/AIPR.2015.7444526","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444526","url":null,"abstract":"Numerous military bases have a requirement, based on the Sikes Act, to maintain the base's natural environment while still meeting military mission objectives. One method used to accomplish this is by working towards the goal of achieving habitat and species sustainability. One difficulty is that there is currently no baseline of the ecosystem. Specifically, a critical need is the detection and identification of animals on Federal and State endangered lists. For instance, the U.S. Fish and Wildlife Service lists 130 animals as either endangered or threatened, including the desert tortoise, the Mohave ground squirrel, various species of fox, jaguar, mountain beaver, and wolf. In order to even begin to form an appropriate natural environmental baseline, the location and movements of these animals must be acquired, recorded, and made available for review. To this end, in this presentation we detail technology and machine vision algorithms that can be used to: 1.) Recognize animals that are on the endangered or threatened lists, 2.) Identification of animals without the need to track them in sequential image frames, 3.) Provide continual animal census surveillance for weeks at a time in operational environments, and 4.) Record video and still-image data along with annotations for later analysis. Specifically, present an extendable architecture for species identification and identification software truthing/training, and populate this architecture with three recognition modules: a Haar Cascade classifier, a Local Binary Pattern cascade classifier, and a neural network. We also detail the results of our work, current challenges, and future approaches we are taking with our research.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133251971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D camera identification for enabling robotic manipulation","authors":"G. Beach, C. Cohen, D. Haanpaa, Steven C. Rowe, Pritpaul Mahal","doi":"10.1109/AIPR.2015.7444549","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444549","url":null,"abstract":"Many autonomous robotic systems have a need to sense the world around them in order to safely interact with and manipulate objects while avoiding unintentional collisions with other entities. For systems that require fairly precise localization of objects over long ranges, the data needed to perform such actions has historically been collected through expensive laser based devices such as LADARS. For less precise applications, researchers have utilized standard 2D cameras (with and without tags), 3D stereoscopic camera systems, RADAR systems (such as those used in many automotive driver assist systems), and ultrasonic sensors. Recently, a new type of actively illuminated 3D camera has become available that can provide high resolution and relatively large ranges (although not as long as LADARs) but at a price that is more comparable with the less expensive, low resolution sensors. While this hardware provides valuable data about the world, it requires new techniques for processing the data to enable intelligent interpretation by the robotic systems.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124808212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid sensing face detection and recognition","authors":"Mingyuan Zhou, Haiting Lin, Jingyi Yu, S. Young","doi":"10.1109/AIPR.2015.7444542","DOIUrl":"https://doi.org/10.1109/AIPR.2015.7444542","url":null,"abstract":"The capability to track, detect, and identify human targets in highly cluttered scenes under extreme conditions, such as in complete darkness or in battlefield, has been one of the primary tactical advantages in military operations. In this paper, we propose a new collaborative, multi-spectrum sensing solution to achieve face detection and registration under low lighting conditions. We construct a novel type of hybrid sensors by combining a pair of near infrared (NIR) cameras and a thermal camera (a long wave infrared LWIR camera). We strategically surround each NIR sensor with a ring of LED IR flashes in order to capture the “red-eye”, or more precisely, the “bright-eye” effect of the target. The bright-eyes are used to localize the 3D position of eyes and face. The recovered 3D information can be further used to warp the thermal face imagery to frontal-parallel pose so that additional tasks such as face recognition can be reliably conducted, especially with the assistance of accurate eye locations.","PeriodicalId":440673,"journal":{"name":"2015 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115563787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}