{"title":"Real-time adaptive pixel replacement","authors":"M. Pusateri, J. Scott, Muhammad Umar Mushtaq","doi":"10.1109/AIPR.2010.5759720","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759720","url":null,"abstract":"Scintillation noise artifacts are a part of intensified imagery for both analog and digital sensors. The high-intensity flashes are similar to classic “salt” noise although they are often multiple pixels in extent; they can prove very distracting when utilizing intensified imagery under stressful conditions. In a stereo intensified vision system, the fact that artifacts occur at different locations in the left and right sensors increases their ability to distract. Digital intensified sensors are not immune from this problem; however, digital image processing gives us an opportunity to mitigate it. A 3×3 median filter is the classic suggested solution to “salt” noise. However, the multiple-pixel extent of scintillation noise requires the median neighborhood to be increased to 5×5 for effective suppression. Unfortunately, the median also introduces a low-pass effect that smooths the imagery to an unacceptable degree. To overcome this loss of image clarity, we have developed and implemented an adaptive algorithm designed to identify scintillation noise. Scintillated pixels are replaced using the 5×5 median while unaffected pixels are left unchanged. The algorithm was tested on a Xilinx XC6SLX150-3 and is capable of operating at a pixel clock of over 220 MHz. With a pixel clock of 140 MHz and a 60 Hz frame rate, the module latency is under 26 µs. 
We discuss the identification of scintillated pixels and compare frames from the raw video, the 5×5 median video, and the adaptively replaced 5×5 median.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133022454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
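The replacement rule described in the abstract above can be sketched in a few lines of Python (a hypothetical software re-implementation of the idea, not the paper's FPGA pipeline; the 5×5 window matches the abstract, while the brightness threshold is an assumed parameter):

```python
import numpy as np

def adaptive_scintillation_filter(img, window=5, threshold=60):
    """Replace only suspected scintillation pixels with the local median.

    A pixel is flagged as scintillation when it exceeds its 5x5
    neighborhood median by `threshold`; all other pixels pass through
    untouched, preserving image clarity.
    """
    r = window // 2
    padded = np.pad(img.astype(np.int32), r, mode="edge")
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            med = np.median(padded[y:y + window, x:x + window])
            # "salt"-like flashes are bright outliers relative to the median
            if img[y, x] - med > threshold:
                out[y, x] = med
    return out
```

A hardware version would compute the running 5×5 median with sorting networks rather than this per-pixel loop.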
{"title":"Comparing one-class and two-class SVM classifiers for normal mammogram detection","authors":"M. Elshinawy, Abdel-Hameed A. Badawy, Wael W. Abdelmageed, M. Chouikha","doi":"10.1109/AIPR.2010.5759708","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759708","url":null,"abstract":"X-ray mammograms are one of the most common techniques used by radiologists for breast cancer detection and diagnosis. Early detection is important, which has motivated the development of Computer-Aided Detection and Diagnosis (CAD) systems. Although most CAD systems were designed to help radiologists in their diagnosis by providing useful insight, the accuracy of CAD systems remains below the level that would improve radiologists' overall performance. Unlike other CAD systems, which aim to detect abnormal mammograms, we are designing a pre-CAD system that aims to detect normal mammograms instead of abnormal ones. The pre-CAD system works as a \"first look\" and screens out normal mammograms, leaving the radiologists and other conventional CAD systems to focus on the suspicious cases. Support Vector Machine classifiers are used to detect normal mammograms. We compare the effect of using 1-class and 2-class SVMs when normal mammograms, instead of abnormal ones, are detected. Results showed that the 1-class SVM almost always outperformed the 2-class SVM in our pre-CAD system. 
Using our set of features, the 1-class SVM achieved a specificity of 99.2%, while the 2-class SVM achieved 86.71%.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117276513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
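The 1-class vs. 2-class setup above can be illustrated with scikit-learn on synthetic data (the paper's mammogram features are not public, so the 2-D Gaussian clusters, the RBF kernel, and the `nu` value here are all assumptions, chosen only to show the mechanics of the comparison):

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(0)
# Hypothetical stand-ins for "normal" and "abnormal" feature vectors.
normal = rng.normal(0.0, 1.0, (200, 2))
abnormal = rng.normal(4.0, 1.0, (40, 2))

# One-class SVM: trained on normal cases only, as in a pre-CAD screen.
oc = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal)

# Two-class SVM: needs labeled examples of both classes.
X = np.vstack([normal, abnormal])
y = np.r_[np.ones(len(normal)), -np.ones(len(abnormal))]
tc = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Specificity here = fraction of held-out normals kept as normal (+1).
test_normal = rng.normal(0.0, 1.0, (100, 2))
spec_oc = np.mean(oc.predict(test_normal) == 1)
spec_tc = np.mean(tc.predict(test_normal) == 1)
print(f"one-class specificity: {spec_oc:.2f}, two-class: {spec_tc:.2f}")
```

On real mammogram features the relative ranking is the paper's empirical result; this sketch only shows how the two classifiers are trained and scored.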
{"title":"Automated cross-sensor registration, orthorectification and geopositioning using LIDAR digital elevation models","authors":"M. D. Pritt, Michael Gribbons, Kevin J. LaTourette","doi":"10.1109/AIPR.2010.5759694","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759694","url":null,"abstract":"Cross-sensor image registration, orthorectification, and geopositioning of imagery are well-known problems whose solutions are difficult, if not impossible, to automate. Registration of radar to optical imagery typically requires a manual solution, as does the registration of imagery over rugged terrain or urban areas, where foreshortening and layover present formidable obstacles to successful automation. We have developed an automated solution that is based on the registration of imagery to high-precision digital elevation models (DEMs) derived from Lidar data. The key idea is the generation of a simulated image using Lidar data, the image camera model and the illumination conditions. The simulated image is then registered to the actual image with normalized cross-correlation methods. The result is an effective and completely automated technique for registering imagery to DEMs. It has been shown to work with BuckEye Lidar, ALIRT Lidar, commercial satellite imagery and commercial synthetic aperture radar imagery over diverse terrain types, including mountains, cities, and forests. It provides an automated solution to many difficult geospatial problems, including cross-sensor registration of radar and optical imagery, image registration over rugged terrain, geopositioning of imagery and orthorectification. Its use of Lidar enables it to handle three-dimensional features that are foreshortened or laid over in different directions. Its use of simulated imagery enables it to bypass the problem of disparate features in cross-sensor registration. 
Statistical analyses of the registration accuracy are presented along with results on commercial satellite imagery and Lidar data over Iraq, Afghanistan, Haiti and the U.S.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114989322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
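The core matching step above, registering a simulated image to the actual image by normalized cross-correlation, can be sketched as an exhaustive offset search (a minimal illustration on synthetic data; a production system would use FFT-based correlation and the actual Lidar-derived simulated image):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def register(image, template):
    """Find the offset of `template` inside `image` maximizing NCC."""
    h, w = template.shape
    H, W = image.shape
    best, best_off = -2.0, (0, 0)
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            s = ncc(image[dy:dy + h, dx:dx + w], template)
            if s > best:
                best, best_off = s, (dy, dx)
    return best_off, best
```

NCC's invariance to gain and offset is what lets a simulated image match a real one despite radiometric differences.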
{"title":"Supervised semantic classification for nuclear proliferation monitoring","authors":"Ranga Raju Vatsavai, A. Cheriyadat, S. Gleason","doi":"10.1109/AIPR.2010.5759712","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759712","url":null,"abstract":"Existing feature extraction and classification approaches are not suitable for monitoring proliferation activity using high-resolution multi-temporal remote sensing imagery. In this paper we present a supervised semantic labeling framework based on the Latent Dirichlet Allocation method. This framework is used to analyze over 120 images collected under different spatial and temporal settings over the globe representing three major semantic categories: airports, nuclear, and coal power plants. Initial experimental results show a reasonable discrimination of these three categories even though coal and nuclear images share highly common and overlapping objects. This research also identified several research challenges associated with nuclear proliferation monitoring using high resolution remote sensing images.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130125316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of ISR technologies for counter insurgency warfare","authors":"S. Israel","doi":"10.1109/AIPR.2010.5759681","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759681","url":null,"abstract":"This paper compares and contrasts the strategies used to test technologies for Cold War era warfare with the test strategies employed for counter-insurgency warfare. The most important difference is the change from technologies that describe what and where to technologies that additionally describe activities. It will be shown that Cold War test strategies form the lowest (component) level of testing for counter-insurgency focused technologies. However, to characterize activities, additional systematic testing is performed. Finally, an effectiveness example is provided to illustrate these concepts.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"46 3-4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120894240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Standoff video analysis for the detection of security anomalies in vehicles","authors":"S. Srivastava, E. Delp","doi":"10.1109/AIPR.2010.5759685","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759685","url":null,"abstract":"Video surveillance systems are commonly used by security personnel to monitor and record activity in buildings, public gatherings, busy roads, and parking lots. These systems allow many cameras to be observed by a small number of trained human operators, but they suffer from operator fatigue and lapses of attention: the large amount of information provided by the cameras can distract the operator from important events. In this paper, we propose the design of an autonomous video surveillance system, operating at standoff range, that analyzes approaching vehicles in order to detect security anomalies. Such anomalies, based on dynamic analysis of the vehicle tracks, include unexpected slowing/stopping or sudden acceleration, particularly near checkpoints or critical structures (e.g., government buildings). A human supervisor can be alerted whenever a significant event is detected and can then determine if the vehicle should be further inspected. Besides dynamic analysis, the system also estimates physical information about the vehicles such as make, body type and tire size. 
We describe low-complexity techniques to obtain the above information from two cameras.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121116827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
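The dynamic-analysis test above, flagging unexpected slowing/stopping or sudden acceleration from a vehicle track, might look like this in outline (the acceleration threshold and per-frame sampling are assumptions, not the paper's values):

```python
import numpy as np

def flag_anomalies(track, dt=1.0, accel_limit=3.0):
    """Flag frames where a tracked vehicle's acceleration magnitude
    exceeds a limit.

    `track` is an (N, 2) array of image- or ground-plane positions
    sampled every `dt` seconds; `accel_limit` is a hypothetical
    threshold that would be tuned per installation.
    """
    v = np.diff(track, axis=0) / dt            # per-frame velocity
    a = np.diff(v, axis=0) / dt                # per-frame acceleration
    mag = np.linalg.norm(a, axis=1)
    return np.flatnonzero(mag > accel_limit) + 1   # frame index of the event
```

A supervisor alert would fire on any returned index, leaving the final judgment to the human operator.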
{"title":"Histo-pathological image analysis using OS-FCM and level sets","authors":"M. Babu, V. Madasu, M. Hanmandlu, S. Vasikarla","doi":"10.1109/AIPR.2010.5759688","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759688","url":null,"abstract":"Malignant melanomas are the most serious form of skin cancer, accounting for the majority of skin cancer related deaths. Histo-pathological images of skin tissues are analyzed for detecting various types of melanomas. The automatic analysis of these images can greatly facilitate the diagnosis task for dermato-pathologists. The first and foremost step in automatic histo-pathological image analysis is to accurately segment the images into dermal and epidermal layers, along with segmenting other tissue structures such as nests and melanocytic cells, which indicate the presence of cancer. In this paper, we present a novel technique for segmenting the dermal-epidermal junction based on color features, which are initially clustered using the Orientation Sensitive Fuzzy C-means algorithm (OS-FCM) and later refined with level set based algorithms. A few novel parameters which define the architecture of the dermis are then extracted. Experimental results on a small database of skin tissue images show the efficacy of the proposed methodology in differentiating between melanomas and naevi.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127486438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
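The clustering stage of the pipeline above can be illustrated with plain fuzzy C-means (the orientation-sensitive weighting that distinguishes OS-FCM, and the subsequent level-set refinement, are specific to the paper and omitted from this sketch):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on feature vectors X of shape (n, d).

    Returns the cluster centers and the fuzzy membership matrix U
    (n, c), each row summing to 1. `m` is the fuzzifier.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                  # random fuzzy init
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                 # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

In the paper's setting X would hold per-pixel color features, and the soft memberships would seed the level-set refinement of the dermal-epidermal junction.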
{"title":"Prediction of satellite images using fuzzy rule based Gaussian regression","authors":"N. Verma, N. Pal","doi":"10.1109/AIPR.2010.5759679","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759679","url":null,"abstract":"We present a novel approach for the prediction of satellite image frames using a fuzzy rule based framework. The input-output membership functions for the premise and consequent parts of the rules are derived using a Gaussian Mixture Model (GMM). The weights of the fuzzy rules are represented as the prior probabilities of the respective Gaussian components. For obtaining the predictive fuzzy model, the GMM parameters are estimated via the EM algorithm using a spatiotemporal representation of the image sequence or video clips. The Minimum Description Length (MDL) criterion is used to obtain a suitable predictive fuzzy model. The resulting model is successfully applied to a sequence of satellite images of tropical cyclone Nargis, which made landfall in Myanmar on May 2, 2008. The quality of the predicted images is assessed using two criteria. The proposed approach is found to predict image frames successfully.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132376389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
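The model-selection idea above, EM-fitted Gaussian mixtures scored by a description-length-style criterion, can be sketched with scikit-learn, using BIC as a stand-in for the paper's MDL criterion (the data here are synthetic; the real features come from the spatiotemporal image representation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for the spatiotemporal feature vectors: two
# well-separated 3-D Gaussian clusters of 150 points each.
X = np.vstack([rng.normal(0, 1, (150, 3)), rng.normal(5, 1, (150, 3))])

# EM-fitted mixtures of increasing size; BIC penalizes model complexity
# much like the MDL criterion used in the paper.
scores = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
          for k in (1, 2, 3, 4)}
best_k = min(scores, key=scores.get)
print("components chosen:", best_k)
```

Each selected Gaussian component would then become one fuzzy rule, with its mixing weight serving as the rule weight.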
{"title":"An adaptive parameterization method for SIFT based video stabilization","authors":"V. Santhaseelan, V. Asari","doi":"10.1109/AIPR.2010.5759711","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759711","url":null,"abstract":"Video stabilization is used to eliminate unwanted shakiness in video caused by movement of the camera. This can be achieved by estimating the motion of the camera, filtering out the high-frequency components in the motion path, and warping the video frames to compensate for the motion. In this paper, an adaptive parameterization technique is proposed to define the characteristics of the filter used to eliminate high-frequency components in the motion path. The Scale Invariant Feature Transform (SIFT) is used to extract features from each video frame. A string of transformation matrices is used to represent the motion of the camera. For any frame that has to be stabilized, only a few frames in the local neighborhood are considered to calculate the required amount of motion compensation. The high-frequency components in the camera motion are eliminated using a zero-mean Gaussian filter. The variance of the Gaussian filter, which defines the amount of smoothing, is computed automatically from the camera motion path. This is based on the observation that the variation in the individual components of the transformation matrices correlates with the amount of instability in the video. 
The proposed approach has been found to be effective irrespective of the presence of moving objects in the video.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134131005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
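The adaptive smoothing step can be sketched as follows (the rule for choosing the Gaussian variance from the motion path's own variability is a guess at the paper's parameterization; the scaling constant is hypothetical):

```python
import numpy as np

def smooth_path(path, sigma=None):
    """Gaussian smoothing of a 1-D camera-motion parameter.

    If `sigma` is None it is chosen from the path's frame-to-frame
    jitter: more instability -> heavier smoothing. The 2x scaling is
    an assumed constant, not the paper's value.
    """
    path = np.asarray(path, dtype=float)
    if sigma is None:
        jitter = np.diff(path)
        sigma = max(1.0, 2.0 * jitter.std())
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t * t / (2 * sigma * sigma))
    kernel /= kernel.sum()                       # normalize to unit gain
    padded = np.pad(path, radius, mode="edge")   # avoid boundary shrinkage
    return np.convolve(padded, kernel, mode="valid")
```

In the full system each component of the transformation-matrix string would be smoothed this way, and the difference between the raw and smoothed paths gives the per-frame warp that stabilizes the video.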
{"title":"Algorithms with attitude","authors":"A. Schaum","doi":"10.1109/AIPR.2010.5759683","DOIUrl":"https://doi.org/10.1109/AIPR.2010.5759683","url":null,"abstract":"A new methodology has been developed for creating detection algorithms for the class of composite hypothesis testing problems. Rather than estimating unknown parameter values for each variate test value, as in the generalized likelihood ratio test, continuum fusion methods integrate an infinite number of optimal detectors, one for each parameter value. The final form of the algorithm depends on the type of threshold constraint enforced during the fusing procedure, and this choice defines an attitude in a design process. The attitude can be tailored to suppress outliers not well described by statistical models and yet common to realistic remote sensing problems.","PeriodicalId":128378,"journal":{"name":"2010 IEEE 39th Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134187525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}