{"title":"Establishing low dimensional manifolds for 3D object pose estimation","authors":"R. Kouskouridas, A. Gasteratos","doi":"10.1109/IST.2012.6295483","DOIUrl":"https://doi.org/10.1109/IST.2012.6295483","url":null,"abstract":"We propose a novel solution to the problem of 3D object pose estimation problem that is based on an efficient representation and feature extraction technique. We build a part-based architecture that takes into account both appearance-based characteristics of targets along with their geometrical attributes. This bunch-based structure encompasses an image feature extraction procedure accompanied by a clustering scheme over the abstracted key-points. In a follow-up step, these clusters are considered to establish representative manifolds capable of distinguishing similar poses of different objects into the corresponding classes. We form low dimensional manifolds by incorporating sophisticated operations over the members (clusters) of the extracted part-based architecture. An accurate estimation of the pose of a target is provided by a neural network-based solution that entails a novel input-output space targeting method. The performance of our method is comparatively studied against other related works that provide solution to the 3D object pose estimation and that are based on a) manifold modeling, b) object part-based representation and c) conventional dimensionality reduction frameworks. 
Experimental results justify our theoretical claims and provide evidence of low generalization error when estimating the 3D pose of objects, with the best performance achieved when employing the Radial Basis Functions kernel.","PeriodicalId":213330,"journal":{"name":"2012 IEEE International Conference on Imaging Systems and Techniques Proceedings","volume":"412 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122860727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Super-resolution imaging method based on random radiation radar array","authors":"Yuanyue Guo, Dongjin Wang, Xuezhi He, Bo Liu","doi":"10.1109/IST.2012.6403797","DOIUrl":"https://doi.org/10.1109/IST.2012.6403797","url":null,"abstract":"For the purpose of obtaining high-resolution staring image of ground targets by high-altitude imaging platforms, this paper proposes a super-resolution imaging approach based on random radiation radar array by combining random signal radiation and correlated imaging. Firstly, this super-resolution imaging principles are analyzed, and the definition of temporal-spatial stochastic characteristics of the radiating electromagnetic field on the target space are presented, and on this basis, the super-resolution imaging potentiality of random radiation radar array is demonstrated theoretically beyond the limitation of antenna aperture. Then the imaging model of stochastic radiation radar array is founded. The influences of random signals and the spatial configuration of the radar array (number of array elements and array aperture) on stochastic characteristics of the radiation field are discussed in detail. 
Finally, imaging simulations are performed and the results show that a random radiation radar array can improve resolution by more than 20 times compared with a real aperture radar array of the same size.","PeriodicalId":213330,"journal":{"name":"2012 IEEE International Conference on Imaging Systems and Techniques Proceedings","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129456398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A GPGPU accelerated compressed sensing with tight wavelet frame transform technique for MR imaging reconstruction","authors":"B. Hu, X. Ma, M. Joyce, P. Glover, B. Naleem","doi":"10.1109/IST.2012.6295566","DOIUrl":"https://doi.org/10.1109/IST.2012.6295566","url":null,"abstract":"High resolution Magnetic Resonance Imaging (MRI) requires long acquisition time to obtain the fully sampled k-space data for image reconstruction. Compressed Sensing (CS) theory has recently been utilized to accelerate the image reconstruction speed by sparsely sampling the k-space. In this work, the CS framework was combined with the Tight Wavelet Frame (TWF) transform to further enhance edges/boundaries of MR images and reduce their noise levels. Because the TWF coefficients at finer scale correspond to important image boundary features, the proposed algorithm is able to effectively enhance the signal to noise ratio of MR images without blurring their edges or create artifacts. The resulting constrained minimization problem is then solved iteratively and requires extensive computational resources. To accelerate the reconstruction for real-time medical image processing purpose, the algorithm is implemented on the General Purpose Graphic Processing Units (GPGPU). The effects of various factors, including the register counts and block size, on the GPU occupancy have been investigated to tune the hardware for the optimum performance. The proposed algorithm demonstrates great potential to accelerate the MR imaging acquisition by 8-fold without noticeable artifacts. Comparisons with other two l1 minimization methods with traditional wavelet transforms further confirm the competitiveness of the proposed algorithm. 
Moreover, a speedup of 45 times was achieved by the GPGPU implementation compared with the CPU version, making this algorithm suitable for applications in a clinical MRI setting.","PeriodicalId":213330,"journal":{"name":"2012 IEEE International Conference on Imaging Systems and Techniques Proceedings","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127581332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An automated ladar polarimetric system for remote characterization of space materials","authors":"G. Giakos, R. Picard, W. Inbody, P. Dao, P. Crabtree, P. Mcnicholl, J. Petermann, S. Shrestha, C. Narayan, S. Marotta","doi":"10.1109/IST.2012.6295598","DOIUrl":"https://doi.org/10.1109/IST.2012.6295598","url":null,"abstract":"The calibration, testing, and operational principles of an efficient multifunctional monostatic polarimetric ladar are introduced and related to the system performance metrics. The depolarization, diattenuation, and retardance of the materials were estimated using Mueller matrix (MM) decomposition for different aspect angles. The outcome of this study indicates that polarimetric principles may enhance the capabilities of the ladar to provide adequate characterization and discrimination of unresolved space objects.","PeriodicalId":213330,"journal":{"name":"2012 IEEE International Conference on Imaging Systems and Techniques Proceedings","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127680206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual communication using LED panel and video camera for mobile object","authors":"H. Ukida, M. Miwa, Y. Tanimoto, T. Sano, Hideki Yamamoto","doi":"10.1109/IST.2012.6295517","DOIUrl":"https://doi.org/10.1109/IST.2012.6295517","url":null,"abstract":"This study proposes an information transmission device constructed by a LED panel and a video camera. The LED panel displays various patterns of AR markers and micro QR codes. Then, from images taken by the video camera, some information and 3D position and pose of the camera from the LED panel are extracted. We are planning to apply this system for a communication between a radio-controlled helicopter and a base station. In this paper, we propose a method to distinguish AR markers and micro QR codes automatically, and show results of discriminated rates in the experiment.","PeriodicalId":213330,"journal":{"name":"2012 IEEE International Conference on Imaging Systems and Techniques Proceedings","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116992488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal shutter speed sequences for real-time HDR video","authors":"B. Guthier, S. Kopf, W. Effelsberg","doi":"10.1109/IST.2012.6295505","DOIUrl":"https://doi.org/10.1109/IST.2012.6295505","url":null,"abstract":"A technique to create High Dynamic Range (HDR) video frames is to capture Low Dynamic Range (LDR) images at varying shutter speeds. They are then merged into a single image covering the entire brightness range of the scene. While shutter speeds are often chosen to vary by a constant factor, we propose an adaptive approach. The scene's histogram together with functions judging the contribution of an LDR exposure to the HDR result are used to compute a sequence of shutter speeds. This sequence allows for the estimation of the scene's radiance map with a high degree of accuracy. We show that, in comparison to the traditional approach, our algorithm achieves a higher quality of the HDR image for the same number of captured LDR exposures. Our algorithm is suited for creating HDR videos of scenes with varying brightness conditions in real-time, which applications like video surveillance benefit from.","PeriodicalId":213330,"journal":{"name":"2012 IEEE International Conference on Imaging Systems and Techniques Proceedings","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126662196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OCR-based neural network for ANPR","authors":"X. Zhai, F. Bensaali, R. Sotudeh","doi":"10.1109/IST.2012.6295581","DOIUrl":"https://doi.org/10.1109/IST.2012.6295581","url":null,"abstract":"Optical Character Recognition (OCR) is the last stage in an Automatic Number Plate Recognition System (ANPRs). In this stage the number plate characters on the number plate image are converted into encoded texts. In this paper, an Artificial Neural Network (ANN) based OCR algorithm for ANPR application is presented. A database of 3700 UK binary character images have been used for testing the performance of the proposed algorithm. Results achieved have shown that the proposed algorithm can meet the real-time requirement of an ANPR system and can averagely process a character image in 8.4ms with 97.3% successful recognition rate.","PeriodicalId":213330,"journal":{"name":"2012 IEEE International Conference on Imaging Systems and Techniques Proceedings","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126462276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"In-situ particle measurement with blurred image processing using telecentric lenses","authors":"Xiaozhen Chen, Wu Zhou, Hailong Liu, Xiaoshu Cai, M. Su, L. Cheng","doi":"10.1109/IST.2012.6295547","DOIUrl":"https://doi.org/10.1109/IST.2012.6295547","url":null,"abstract":"A system based on trajectory image processing is developed to achieve in situ measurement of particle size, concentration and velocity in dilute gas-liquid two-phase flow, which is hard to be measured by laser particle analyzer. As the single-frame single-exposure image method, velocities of droplets can be easily calculated from the certain exposure time and the moving distance in a trajectory image, avoiding cross-correlation processing. A telecentric lens is used as an important component of the measurement system which can help to get concentration information. This in situ method avoids inversed processes in measurement methods such as light fluctuation method. Particles with defocused blur are also dealt with in the later part of the paper for further investigation of particle size and concentration measurement.","PeriodicalId":213330,"journal":{"name":"2012 IEEE International Conference on Imaging Systems and Techniques Proceedings","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126000134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D effect of electrical capacitance and resistance tomography sensors","authors":"Jiangtao Sun, Wuqiang Yang","doi":"10.1109/IST.2012.6295557","DOIUrl":"https://doi.org/10.1109/IST.2012.6295557","url":null,"abstract":"In electrical resistance tomography (ERT) sensors, small pin electrodes are commonly used. It is well known that the pin electrodes cause severe field distortions or 3D effect, and the sensing region of such an ERT sensor is not constrained to the pin electrode plane, but spreads to a large volume. This is also true for electrical capacitance tomography (ECT) sensors, even though it is less severe because of larger size electrodes in ECT sensors. However, when the length of electrodes in an ECT sensor is reduced to almost the same dimension as those in ERT sensor, the 3D effect may be equally obvious. To investigate the 3D effect of ERT and ECT sensors, simulation was carried out with different lengths of electrodes and the results are compared with the corresponding 2D simulation. It is concluded that sensors with longer electrodes usually have less 3D effect without guard electrodes. To investigate the effectiveness of grounded guards, 3D simulation was carried out for ECT sensors with grounded guards and the results are compared with the previous results in term of the 3D effect. 
It is found that the 3D effects of ECT and ERT sensors without guards are similar, and that grounded guards can reduce the 3D effect of ECT sensors to a great extent.","PeriodicalId":213330,"journal":{"name":"2012 IEEE International Conference on Imaging Systems and Techniques Proceedings","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128072991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancement of color retinal images in poor imaging conditions","authors":"G. Russell, N. Mcloughlin, Vincent Nourrit, J. Oakley","doi":"10.1109/IST.2012.6295584","DOIUrl":"https://doi.org/10.1109/IST.2012.6295584","url":null,"abstract":"Aberrations are one of the two main optical phenomena degrading the retinal image's quality. As light propagates to and from the retina, it will be scattered by small inhomogeneities within the ocular media. The amount of scattering in a young non pathological eye is low but increases with age and some pathologies (e.g. cataract). This increased scattering results in a degradation of the retinal image in terms of reduced brightness, poorer contrast, colour degradation and possibly lower spatial resolution. This phenomenon is similar to the degradation of images caused by atmospheric conditions. Although a large body of work exists on the enhancement of images recorded in poor visibility conditions, a paucity of papers have focused on reducing the degradation of retinal images by intraocular scattering. This work's aim was to adapt and assess the potential of an automatic defogging system to retinal imaging.","PeriodicalId":213330,"journal":{"name":"2012 IEEE International Conference on Imaging Systems and Techniques Proceedings","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133978182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}