Cost optimization of a sky surveillance visual sensor network
Naeem Ahmad, Khursheed Khursheed, Muhammad Imran, N. Lawal, M. O’nils. Real-Time Image and Video Processing, May 2012. DOI: 10.1117/12.924344

Abstract: A Visual Sensor Network (VSN) is a network of spatially distributed cameras. The primary difference between a VSN and other types of sensor networks is the nature and volume of the information. A VSN generally consists of cameras, communication, storage, and a central computer, where image data from multiple cameras is processed and fused. In this paper, we use optimization techniques to reduce the cost, as derived from a model of a VSN, of tracking large birds, such as the Golden Eagle, in the sky. The core idea is to divide a given monitoring range of altitudes into a number of sub-ranges. Each sub-range is monitored by an individual VSN: VSN1 monitors the lowest sub-range, VSN2 the next higher, and so on, such that a given area is monitored at minimum cost. The VSNs may use similar or different types of cameras but different optical components, thus forming a heterogeneous network. We have calculated the cost required to cover a given area both by treating the altitude range as a single element and by dividing it into sub-ranges. Covering the given area and altitude range with a single VSN requires 694 camera nodes, whereas dividing the range into sub-ranges requires only 88 nodes, an 87% reduction in cost.
{"title":"Block matching noise reduction method for photographic images applied in Bayer RAW domain and optimized for real-time implementation","authors":"I. Romanenko, E. Edirisinghe, Daniel Larkin","doi":"10.1117/12.922791","DOIUrl":"https://doi.org/10.1117/12.922791","url":null,"abstract":"Image de-noising has been a well studied problem in the field of digital image processing. However there are a number \u0000of problems, preventing state-of-the-art algorithms finding their way to practical implementations. In our research we \u0000have solved these issues with an implementation of a practical de-noising algorithm. In order of importance: firstly we \u0000have designed a robust algorithm, tackling different kinds of nose in a very wide range of signal to noise ratios, secondly \u0000in our algorithm we tried to achieve natural looking processed images and to avoid unnatural looking artifacts, thirdly we \u0000have designed the algorithm to be suitable for implementation in commercial grade FPGA's capable of processing full \u0000HD (1920×1080) video data in real time (60 frame per second). \u0000The main challenge for the use of noise reduction algorithms in photo and video applications is the compromise \u0000between the efficiency of the algorithm (amount of PSNR improvement), loss of details, appearance of artifacts and the \u0000complexity of the algorithm (and consequentially the cost of integration). In photo and video applications it is very \u0000important that the residual noise and artifacts produced by the noise reduction algorithm should look natural and do not \u0000distract aesthetically. Our proposed algorithm does not produce artificially looking defects found in existing state-of-theart \u0000algorithms. \u0000In our research, we propose a robust and fast non-local de-noising algorithm. The algorithm is based on a Laplacian \u0000pyramid. The advantage of this approach is the ability to build noise reduction algorithms with a very large effective \u0000kernel. In our experiments effective kernel sizes as big as 127×127 pixels were used in some cases, which only required \u00004 scales. This size of a kernel was required to perform noise reduction for the images taken with a DSLR camera. \u0000Taking into account the achievable improvement in PSNR (on the level of the best known noise reduction \u0000techniques) and low algorithmic complexity, enabling its practical use in commercial photo, video applications, the \u0000results of our research can be very valuable.","PeriodicalId":369288,"journal":{"name":"Real-Time Image and Video Processing","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127550247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

GPU acceleration towards real-time image reconstruction in 3D tomographic diffractive microscopy
J. Bailleul, B. Simon, M. Debailleul, Hui Liu, O. Haeberlé. Real-Time Image and Video Processing, May 2012. DOI: 10.1117/12.922147

Abstract: Phase microscopy techniques have regained interest because they allow the observation of unprepared specimens with excellent temporal resolution. Tomographic diffractive microscopy is an extension of holographic microscopy that permits 3D observations with a finer resolution than incoherent-light microscopes. Specimens are imaged through a series of 2D holograms whose accumulation progressively fills the specimen's frequency support in Fourier space; a 3D inverse FFT then provides a spatial image of the specimen. Acquisition followed by reconstruction is therefore mandatory before any image can be produced, which so far precludes real-time monitoring of the observed specimen. The MIPS Laboratory has built a tomographic diffractive microscope with an unsurpassed 130 nm resolution but a low imaging speed of no less than one minute per acquisition, after which a high-end PC reconstructs the 3D image in 20 seconds. We now aim for an interactive system providing preview images during acquisition for monitoring purposes. We first present a prototype implementing this solution on the CPU: acquisition and reconstruction are tied together in a producer-consumer scheme sharing common data in CPU memory. We then present a prototype dispatching some reconstruction tasks to the GPU, taking advantage of SIMD parallelization for the FFT and of higher bandwidth for the filtering operations. The CPU scheme takes 6 seconds per 3D image update, while the GPU scheme goes down to 1-2 seconds depending on the GPU class. This opens opportunities for 4D imaging of living organisms or of crystallization processes. We also consider the relevance of the GPU for 3D image interaction under our specific conditions.
{"title":"Real-time lossy compression of hyperspectral images using iterative error analysis on graphics processing units","authors":"S. Sánchez, A. Plaza","doi":"10.1117/12.923834","DOIUrl":"https://doi.org/10.1117/12.923834","url":null,"abstract":"Hyperspectral image compression is an important task in remotely sensed Earth Observation as the dimensionality \u0000of this kind of image data is ever increasing. This requires on-board compression in order to optimize the \u0000donwlink connection when sending the data to Earth. A successful algorithm to perform lossy compression of \u0000remotely sensed hyperspectral data is the iterative error analysis (IEA) algorithm, which applies an iterative \u0000process which allows controlling the amount of information loss and compression ratio depending on the number \u0000of iterations. This algorithm, which is based on spectral unmixing concepts, can be computationally expensive \u0000for hyperspectral images with high dimensionality. In this paper, we develop a new parallel implementation of \u0000the IEA algorithm for hyperspectral image compression on graphics processing units (GPUs). The proposed \u0000implementation is tested on several different GPUs from NVidia, and is shown to exhibit real-time performance \u0000in the analysis of an Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) data sets collected over different \u0000locations. The proposed algorithm and its parallel GPU implementation represent a significant advance towards \u0000real-time onboard (lossy) compression of hyperspectral data where the quality of the compression can be also \u0000adjusted in real-time.","PeriodicalId":369288,"journal":{"name":"Real-Time Image and Video Processing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131933549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A flexible software architecture for scalable real-time image and video processing applications","authors":"R. Usamentiaga, J. Molleda, D. García, F. Bulnes","doi":"10.1117/12.921397","DOIUrl":"https://doi.org/10.1117/12.921397","url":null,"abstract":"Real-time image and video processing applications require skilled architects, and recent trends in the hardware \u0000platform make the design and implementation of these applications increasingly complex. Many frameworks and \u0000libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing \u0000applications. However, they tend to lack flexibility because they are normally oriented towards particular types \u0000of applications, or they impose specific data processing models such as the pipeline. Other issues include large \u0000memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a \u0000novel software architecture for real-time image and video processing applications which addresses these issues. \u0000The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the \u0000application layer. The platform abstraction layer provides a high level application programming interface for \u0000the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic \u0000publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route \u0000the messages from the publishers to the subscribers interested in a particular type of messages. The application \u0000layer provides a repository for reusable application modules designed for real-time image and video processing \u0000applications. These modules, which include acquisition, visualization, communication, user interface and data \u0000processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, \u0000or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed \u0000architecture.","PeriodicalId":369288,"journal":{"name":"Real-Time Image and Video Processing","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128326723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

MTF measurements on real time for performance analysis of electro-optical systems
J. Stuchi, Elisa Signoreto Barbarini, F. P. Vieira, Daniel dos Santos, M. A. Stefani, F. M. M. Yasuoka, J. C. C. Neto, E. L. L. Rodrigues. Real-Time Image and Video Processing, May 2012. DOI: 10.1117/12.915632

Abstract: The need for methods and tools that assist in determining the performance of optical systems is currently increasing. One of the most widely used methods for analyzing optical systems is measuring the Modulation Transfer Function (MTF), which provides a direct, quantitative verification of image quality. This paper presents software implemented to calculate the MTF of electro-optical systems. The software was used to calculate the MTF of a digital fundus camera, a thermal imager, and an ophthalmologic surgery microscope. The MTF information aids the analysis of alignment, provides a measurement of optical quality, and defines the limiting resolution of an optical system. The results obtained with the fundus camera and the thermal imager were compared with theoretical values; for the microscope, the results were compared with the measured MTF of a Zeiss microscope, which is the quality standard for ophthalmological microscopes.
{"title":"Image segmentation in wavelet transform space implemented on DSP","authors":"V. Ponomaryov, H. Castillejos, R. Peralta-Fabi","doi":"10.1117/12.921878","DOIUrl":"https://doi.org/10.1117/12.921878","url":null,"abstract":"A novel approach in the segmentation for the images of different nature employing the feature extraction in WT space \u0000before the segmentation process is presented. The designed frameworks (W-FCM, W-CPSFCM and WK-Means) \u0000according to AUC analysis have demonstrated better performance novel frameworks against other algorithms existing in \u0000literature during numerous simulation experiments with synthetic and dermoscopic images. The novel W-CPSFCM \u0000algorithm estimates a number of clusters in automatic mode without the intervention of a specialist. The implementation \u0000of the proposed segmentation algorithms on the Texas Instruments DSP TMS320DM642 demonstrates possible real time \u0000processing mode for images of different nature.","PeriodicalId":369288,"journal":{"name":"Real-Time Image and Video Processing","volume":"48 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120861660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

GePaRDT: a framework for massively parallel processing of dataflow graphs
Alexander Schöch, Carlo Bach, Andreas Ettemeyer, Sabine Linz-Dittrich. Real-Time Image and Video Processing, April 2012. DOI: 10.1117/12.921677

Abstract: The trend towards computers with multiple processing units continues with no end in sight: modern consumer computers come with 2-6 processing units, and programming methods have been unable to keep up with this fast development. In this paper we present a framework that uses a dataflow model for parallel processing: the Generic Parallel Rapid Development Toolkit (GePaRDT). Its intuitive programming model eases the concurrent use of many processing units without specialized knowledge of parallel programming methods and their pitfalls.