{"title":"Covariance based Fish Tracking in Real-life Underwater Environment","authors":"C. Spampinato, S. Palazzo, D. Giordano, I. Kavasidis, Fang-Pang Lin, Yun-Te Lin","doi":"10.5220/0003866604090414","DOIUrl":"https://doi.org/10.5220/0003866604090414","url":null,"abstract":"In this paper we present a covariance based tracking algorithm for intelligent video analysis to assist marine biologists in understanding the complex marine ecosystem in the Ken-Ding sub-tropical coral reef in Taiwan by processing underwater real-time videos recorded in open ocean. One of the most important aspects of marine biology research is the investigation of fish trajectories to identify events of interest such as fish preying, mating, schooling, etc. This task, of course, requires a reliable tracking algorithm able to deal with 1) the difficulties of following fish that have multiple degrees of freedom and 2) the possible varying conditions of the underwater environment. To accommodate these needs, we have developed a tracking algorithm that exploits covariance representation to describe the object’s appearance and statistical information and also to join different types of features such as location, color intensities, derivatives, etc. The accuracy of the algorithm was evaluated by using hand-labeled ground truth data on 30000 frames belonging to ten different videos, achieving an average performance of about 94%, estimated using multiple ratios that provide indication on how good is a tracking algorithm both globally (e.g. counting objects in a fixed range of time) and locally (e.g. in distinguish occlusions among objects).","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124876375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-camera Topology Recovery using Lines","authors":"Sang Ly, C. Demonceaux, P. Vasseur","doi":"10.5220/0002843302450250","DOIUrl":"https://doi.org/10.5220/0002843302450250","url":null,"abstract":"We present a topology estimation approach for a system of single view point (SVP) cameras using lines. Images captured by SVP cameras such as perspective, central catadioptric or fisheye cameras are mapped to spherical images using the unified projection model. We recover the topology of a multiple central camera setup by rotation and translation decoupling. The camera rotations are first recovered from vanishing points of parallel lines. Next, the translations are estimated from known rotations and line projections in spherical images. The proposed algorithm has been validated on simulated data and real images from perspective and fisheye cameras. This vision-based approach can be used to initialize an extrinsic calibration of a hybrid camera network.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130657245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast adaptable skin colour detection in RGB space","authors":"M. Tosàs, Steven Mills","doi":"10.5220/0002055800030010","DOIUrl":"https://doi.org/10.5220/0002055800030010","url":null,"abstract":"Partially etherified melamine-formaldehyde resins having a melamine:formaldehyde molar ratio of 1:(1.25 to 1) can be prepared by a condensation reaction under neutral or weakly alkaline conditions if the condensation reaction is carried out in the presence of a glycol monoether of the general formula I R(OCH2 CH2)nOH (I) wherein R denotes alkyl having 1 to 4 C atoms and n denotes a number from 1 to 5. The partially etherified melamine-formaldehyde resins are suitable for the production of compression moulding compositions having a high flow and minimal processing shrinkage and after-shrinkage.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122326124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Signal Activity Estimation with Built-in Noise Management in Raw Digital Images","authors":"A. Bosco, Davide Giacalone, A. Bruna, S. Battiato, Rosetta Rizzo","doi":"10.5220/0004280301180121","DOIUrl":"https://doi.org/10.5220/0004280301180121","url":null,"abstract":"Discriminating smooth image regions from areas in which significant signal activity occurs is a widely studied subject and is important in low level image processing as well as computer vision applications. In this paper we present a novel method for estimating signal activity in an image directly in the CFA (Color Filter Array) Bayer raw domain. The solution is robust against noise in that it utilizes low level noise characterization of the image sensor to automatically compensate for high noise levels that contaminate the image signal.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133829559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Faster Method Aiming Iris Extraction","authors":"J. H. Alves, G. Giraldi, L. A. P. Neves","doi":"10.5220/0004303700900093","DOIUrl":"https://doi.org/10.5220/0004303700900093","url":null,"abstract":"In this paper, we present a technique for iris segmentation. The method finds the pupil in the first step. Next, it segments the iris using the pupil location. The proposed approach is based on the mathematical morphology operators of opening and closing, as well as histogram expansion and thresholding. The CASIA Iris Database from the Institute of Automation of the Chinese Academy of Sciences has been used for the tests. Several tests were performed with 200 different images, showing the efficiency of the proposed","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128209615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anisotropic Median Filtering for Stereo Disparity Map Refinement","authors":"Nils Einecke, J. Eggert","doi":"10.5220/0004200401890198","DOIUrl":"https://doi.org/10.5220/0004200401890198","url":null,"abstract":"In this paper we present a novel method for refining stereo disparity maps that is inspired by both simple median filtering and edge-preserving anisotropic filtering. We argue that a combination of these two techniques is particularly effective for reducing the fattening effect that typically occurs for block-matching stereo algorithms. Experiments show that the newly proposed post-refinement can propel simple patch-based algorithms to much higher ranks in the Middlebury stereo benchmark. Furthermore, a comparison to state-of-the-art methods for disparity refinement shows a similar accuracy improvement but at only a fraction of the computational effort. Hence, this approach can be used in systems with restricted computational power.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121053471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prior Knowledge About Camera Motion for Outlier Removal in Feature Matching","authors":"Elisavet (Ellie) Konstantina Stathopoulou, R. Hänsch, O. Hellwich","doi":"10.5220/0005456406030610","DOIUrl":"https://doi.org/10.5220/0005456406030610","url":null,"abstract":"The search of corresponding points in between images of the same scene is a well known problem in many computer vision applications. In particular most structure from motion techniques depend heavily on the correct estimation of corresponding image points. Most commonly used approaches make neither assumptions about the 3D scene nor about the relative positions of the cameras and model both as completely unknown. This general model results in a brute force comparison of all keypoints in one image to all points in all other images. In reality this model is often far too general because coarse prior knowledge about the cameras is often available. For example, several imaging systems are equipped with positioning devices which deliver pose information of the camera. Such information can be used to constrain the subsequent point matching not only to reduce the computational load, but also to increase the accuracy of path estimation and 3D reconstruction. This study presents Guided Matching as a new matching algorithm towards this direction. The proposed algorithm outperforms brute force matching in speed as well as number and accuracy of correspondences, given well estimated priors.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116153759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Evaluation Methodology for Stereo Correspondence Algorithms","authors":"I. Cabezas, M. Trujillo, M. Florian","doi":"10.5220/0003850801540163","DOIUrl":"https://doi.org/10.5220/0003850801540163","url":null,"abstract":"A comparison of stereo correspondence algorithms can be conducted by a quantitative evaluation of disparity maps. Among the existing evaluation methodologies, the Middlebury’s methodology is commonly used. However, the Middlebury’s methodology has shortcomings in the evaluation model and the error measure. These shortcomings may bias the evaluation results, and make a fair judgment about algorithms accuracy difficult. An alternative, the A∗ methodology is based on a multiobjective optimisation model that only provides a subset of algorithms with comparable accuracy. In this paper, a quantitative evaluation of disparity maps is proposed. It performs an exhaustive assessment of the entire set of algorithms. As innovative aspect, evaluation results are shown and analysed as disjoint groups of stereo correspondence algorithms with comparable accuracy. This innovation is obtained by a partitioning and grouping algorithm. On the other hand, the used error measure offers advantages over the error measure used in the Middlebury’s methodology. The experimental validation is based on the Middlebury’s test-bed and algorithms repository. The obtained results show seven groups with different accuracies. Moreover, the topranked stereo correspondence algorithms by the Middlebury’s methodology are not necessarily the most accurate in the proposed methodology.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123503979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vehicle Speed Estimation from Two Images for LIDAR Second Assessment","authors":"C. Beumier","doi":"10.5220/0003855403810386","DOIUrl":"https://doi.org/10.5220/0003855403810386","url":null,"abstract":"Vehicle speed control has been traditionally carried out by RADAR and more recently by LIDAR systems. We present a solution that derives the speed from two images acquired by a static camera and one real dimension from the vehicle. It was designed to serve the purpose of second assessment in case of legal dispute about a LIDAR speed measure. The approach follows a stereo paradigm, considering the equivalent problem of a stationary vehicle captured by a moving camera. 3D coordinates of vehicle points are obtained as the intersection of 3D lines emanating from corresponding points in both images, using the camera pinhole model. The displacement, approximated by a translation, is derived from the best match of reconstructed 3D points, minimising the residual error of 3D line intersection and the deviation with the known dimensions of the licence plate. A graphical interface lets the user select and refine vehicle points, starting with the 4 corners of the licence plate. The plate dimension is selected from a list or typed in. More than 100 speed estimation results confirmed hypothesis about the translation approximation and showed a maximal deviation with LIDAR speed of less than +/10 % as required by the application.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114206059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Driving Warning System Based on Visual Perception of Road Signs","authors":"J. P. Carrasco, A. D. L. Escalera, J. M. Armingol","doi":"10.5220/0001076800540060","DOIUrl":"https://doi.org/10.5220/0001076800540060","url":null,"abstract":"Advanced Driver Assistance Systems are used to increase the security of vehicles. Computer Vision is one of the main technologies used for this aim. Lane marks recognition, pedestrian detection, driver drowsiness or road sign detection and recognition are examples of these systems. The last one is the goal of this paper. A system that can detect and recognize road signs based on color and shape features is presented in this article. It will be focused on detection, especially the color space used, investigating on the case of road signs under shadows. The system, also tracks the road sign once it has been detected. It warns the driver in case of anomalous speed for the recognized road sign using the information from a GPS.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121225488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}