{"title":"Texture based image indexing and retrieval","authors":"N. G. Rao, Dr. V Vijaya Kumar, V. V. Krishna","doi":"10.5220/0002065801770181","DOIUrl":"https://doi.org/10.5220/0002065801770181","url":null,"abstract":"11-aza-10-deoxo-10-dihydroerythromycin A and derivatives thereof, and process for preparation thereof.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125049112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application to Quantify Fetal Lung Branching on Rat Explants","authors":"P. Rodrigues, S. Granja, António H. J. Moreira, N. Rodrigues, J. Vilaça","doi":"10.5220/0004220900670070","DOIUrl":"https://doi.org/10.5220/0004220900670070","url":null,"abstract":"Recently, regulating mechanisms of branching morphogenesis of fetal lung rat explants have been an essential tool for molecular research. The development of accurate and reliable segmentation techniques may be essential to improve research outcomes. This work presents an image processing method to measure the perimeter and area of lung branches on fetal rat explants. The algorithm starts by reducing the noise corrupting the image with a pre-processing stage. The outcome is input to a watershed operation that automatically segments the image into primitive regions. Then, an image pixel is selected within the lung explant epithelial, allowing a region growing between neighbouring watershed regions. This growing process is controlled by a statistical distribution of each region. When compared with manual segmentation, the results show the same tendency for lung development. High similarities were harder to obtain in the last two days of culture, due to the increased number of peripheral airway buds and complexity of lung architecture. However, using semiautomatic measurements, the standard deviation was lower and the results between independent researchers were more coherent.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129214993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stereo vision for obstacle detection: A region-based approach","authors":"P. Foggia, A. Limongiello, M. Vento","doi":"10.5220/0002067900360045","DOIUrl":"https://doi.org/10.5220/0002067900360045","url":null,"abstract":"Esters of arylacetic acids, more particularly lower alcohol esters of arylacetic acids, including those substituted on the methylene group, are prepared by rearrangement of the corresponding alpha-halo-alkylarylketones with Ag compounds in lower alcohols and in an acid medium. From the alkyl esters so prepared, their respective free acids can be obtained, if desired, by various means such as hydrolysis or the shift with mineral acids of the alkaline salts prepared by reaction with alkali, etc.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132687430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Normalized Parametric Domain for the Analysis of the Left Ventricular Function","authors":"J. Garcia-Barnes, D. Gil, S. Pujadas, F. Carreras, M. Ballester","doi":"10.5220/0001074002670274","DOIUrl":"https://doi.org/10.5220/0001074002670274","url":null,"abstract":"Impairment of left ventricular (LV) contractility due to cardiovascular diseases is reflected in LV motion patterns. The mechanics of any muscle strongly depends on the spatial orientation of its muscular fibers since the motion that the muscle undergoes mainly takes place along the fiber. The helical ventricular myocardial band (HVMB) concept describes the myocardial muscle as a unique muscular band that twists in space in a non homogeneous fashion. The 3D anisotropy of the ventricular band fibers suggests a regional analysis of the heart motion. Computation of normality models of such motion can help in the detection and localization of any cardiac disorder. In this paper we introduce, for the first time, a normalized parametric domain that allows comparison of the left ventricle motion across patients. We address, both, extraction of the LV motion from Tagged Magnetic Resonance images, as well as, defining a mapping of the LV to a common normalized domain. Extraction of normality motion patterns from 17 healthy volunteers shows the clinical potential of our LV parametrization.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133641522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"REFA3D: Robust Spatio-temporal Analysis of Video Sequences","authors":"M. Grand-Brochier, C. Tilmant, M. Dhome","doi":"10.5220/0003857203520357","DOIUrl":"https://doi.org/10.5220/0003857203520357","url":null,"abstract":"This article proposes a generalization of our approach REFA (Grand-brochier et al., 2011) to spatio-temporal domain. Our new method REFA3D, is based mainly on hes-STIP detector and E-HOG3D. SIFT3D and HOG/HOF are the two must used methods for space-time analysis and give good results. So their studies allow us to understand their construction and to extract some components to improve our approach. The mask of analysis used by REFA is modified and therefore relies on the use of ellipsoids. The validation tests are based on video clips from synthetic transformations as well as real sequences from a simulator or an onboard camera. Our system (detection, description and matching) must be as invariant as possible for the image transformation (rotations, scales, time-scaling). We also study the performance obtained for registration of subsequence, a process often used for the location, for example. All the parameters (analysis shape, thresholds) and changes to the space-time generalization will be detailed in this article.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121253171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Advanced Player Activity Recognition by Integrating Body Posture and Motion Information","authors":"Marco Leo, T. D’orazio, P. Spagnolo, P. Mazzeo","doi":"10.5220/0001754002610266","DOIUrl":"https://doi.org/10.5220/0001754002610266","url":null,"abstract":"Human action recognition is an important research area in the field of computer vision having a great number of real-world applications. This paper presents a multi-view action recognition framework able to extract human silhouette clues from different synchronized static cameras and then to validate them introducing advanced reasonings about scene dynamics. Two different algorithmic procedures have been introduced: the first one performs, in each acquired image, the neural recognition of the human body configuration by using a novel mathematic tool named Contourlet transform. The second procedure performs, instead, 3D ball and player motion analysis. The outcomes of both procedures are then properly merged to accomplish the final player activity recognition task. Experimental results were carried out on several image sequences acquired during some matches of the Italian Serie A soccer championship.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133350286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stereo Vision Head Vergence using GPU Cepstral Filtering","authors":"Luís Almeida, P. Menezes, J. Dias","doi":"10.5220/0003319406650670","DOIUrl":"https://doi.org/10.5220/0003319406650670","url":null,"abstract":"Vergence ability is an important visual behavior observed on living creatures when they use vision to interact with the environment. The notion of active observer is equally useful for robotic vision systems on tasks like object tracking, fixation and 3D environment structure recovery. Humanoid robotics are a potential playground for such behaviors. This paper describes the implementation of a real time binocular vergence behavior using cepstral filtering to estimate stereo disparities. By implementing the cepstral filter on a graphics processing unit (GPU) using Compute Unified Device Architecture (CUDA) we demonstrate that robust parallel algorithms that used to require dedicated hardware are now available on common computers. The cepstral filtering algorithm speed up is more than sixteen times than on a current CPU. The overall system is implemented in the binocular vision system IMPEP (IMPEP Integrated Multimodal Perception Experimental Platform) to illustrate the system performance experimentally.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122776922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time Gender Recognition for Uncontrolled Environment of Real-life Images","authors":"Duan-Yu Chen, Kuan-Yi Lin","doi":"10.5220/0002823203570362","DOIUrl":"https://doi.org/10.5220/0002823203570362","url":null,"abstract":"Gender recognition is a challenging task in real life images and surveillance videos due to their relatively low-resolution, under uncontrolled environment and variant viewing angles of human subject. Therefore, in this paper, a system of real-time gender recognition for real life images is proposed. The contribution of this work is fourfold. A skin-color filter is first developed to filter out non-face noises. In order to make the system robust, a mechanism of decision making based on the combination of surrounding face detection, context-regions enhancement and confidence-based weighting assignment is designed. Experimental results obtained by using extensive dataset show that our system is effective and efficient in recognizing genders for uncontrolled environment of real life images.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123285017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Face Alignment Using Convolutional Neural Networks","authors":"S. Duffner, Christophe Garcia","doi":"10.5220/0001073200300037","DOIUrl":"https://doi.org/10.5220/0001073200300037","url":null,"abstract":"Face recognition in real-world images mostly relies on three successive steps: face detection, alignment and identification. The second step of face alignment is crucial as the bounding boxes produced by robust face detection algorithms are still too imprecise for most face recognition techniques, i.e. they show slight variations in position, orientation and scale. We present a novel technique based on a specific neural architecture which, without localizing any facial feature points, precisely aligns face images extracted from bounding boxes coming from a face detector. The neural network processes face images cropped using misaligned bounding boxes and is trained to simultaneously produce several geometric parameters characterizing the global misalignment. After having been trained, the neural network is able to robustly and precisely correct translations of up to ±13% of the bounding box width, in-plane rotations of up to ±30◦ and variations in scale from 90% to 110%. Experimental results show that 94% of the face images of the BioID database and 80% of the images of a complex test set extracted from the internet are aligned with an error of less than 10% of the face bounding","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123916501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lightweight Computer Vision Methods for Traffic Flow Monitoring on Low Power Embedded Sensors","authors":"M. Magrini, D. Moroni, G. Pieri, O. Salvetti","doi":"10.5220/0005361006630670","DOIUrl":"https://doi.org/10.5220/0005361006630670","url":null,"abstract":"Nowadays pervasive monitoring of traffic flows in urban environment is a topic of great relevance, since the information it is possible to gather may be exploited for a more efficient and sustainable mobility. In this paper, we address the use of smart cameras for assessing the level of service of roads and early detect possible congestion. In particular, we devise a lightweight method that is suitable for use on low power and low cost sensors, resulting in a scalable and sustainable approach to flow monitoring over large areas. We also present the current prototype of an ad hoc device we designed and report experimental results obtained during a field test.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"39 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113956987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}