{"title":"A 4D Virtual/Augmented Reality Viewer Exploiting Unstructured Web-based Image Data","authors":"A. Doulamis, N. Doulamis, Konstantinos Makantasis, Michael Klein","doi":"10.5220/0005456806310639","DOIUrl":"https://doi.org/10.5220/0005456806310639","url":null,"abstract":"Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). Thus, 4D modelling (3D plus the time) is ideally required for preservation and assessment of outdoor large scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different time. However, it is difficult to implement temporal 3D modelling for many time instances using conventional capturing tools since we need high financial effort and computational complexity in acquiring a set of the most suitable image data. One way to address this, is to exploit the huge amount of images distributing over visual hosting repositories, such as flickr and picasa. These visual data, nevertheless, are loosely structured and thus no appropriate for 3D modelling. For this reason, a new content-based filtering mechanism should be implemented so as to rank (filter) images according to their contribution to the 3D reconstruction process and discards image outliers that can either confuse or delay the 3D reconstruction process. Then, we proceed to the implementation of a virtual/augmented reality which allows the cultural heritage actors to temporally assess cultural objects of interest and assists conservators to check how restoration methods affect an object or how materials decay through time. The proposed system has been developed and evaluated using real-life data and outdoor sites.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129495467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Nonlinear primary cortical image representation for JPEG 2000 - applying natural image statistics and visual perception to image compression","authors":"R. Valerio, R. Navarro","doi":"10.5220/0001377205190522","DOIUrl":"https://doi.org/10.5220/0001377205190522","url":null,"abstract":"In this paper, we present a nonlinear image representation scheme based on a statistically-derived divisive normalization model of the information processing in the visual cortex. The input image is first decomposed into a set of subbands at multiple scales and orientations using the Daubechies (9, 7) floating point filter bank. This is followed by a nonlinear “divisive normalization” stage, in which each linear coefficient is squared and then divided by a value computed from a small set of neighboring coefficients in space, orientation and scale. This neighborhood is chosen to allow this nonlinear operation to be efficiently inverted. The parameters of the normalization operation are optimized in order to maximize the statistical independence of the normalized responses for natural images. Divisive normalization not only can be used to describe the nonlinear response properties of neurons in visual cortex, but also yields image descriptors more independent and relevant from a perceptual point of view. The resulting multiscale nonlinear image representation permits an efficient coding of natural images and can be easily implemented in a lossy JPEG 2000 codec. In fact, the nonlinear image representation implements in an automatic way a more general version of the point-wise extended masking approach proposed as an extension for visual optimisation in JPEG 2000 Part 2. Compression results show that the nonlinear image representation yields a better ratedistortion performance than the wavelet transform alone.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124145566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accurate Detection and Visualization of 3D Shape Deformation by using Multiple Projectors","authors":"Masayasu Yoshigi, Fumihiko Sakaue, J. Sato","doi":"10.5220/0005455405770582","DOIUrl":"https://doi.org/10.5220/0005455405770582","url":null,"abstract":"In this paper, we propose a method for detecting the deformation of object shape by using multiple projectors. In this method, a set of specially coded patterns are projected onto a target object from multiple projectors. Then, if the target object is not deformed, the object is illuminated by plain white color, and if the object is deformed, it is illuminated by radical colors. Thus, we can visualize and detect the deformation of object just by projecting lights from multiple projectors. The proposed method uses the disparities of multiple projectors, and thus, we do not any complicated method for detecting object shape deformation. In addition, we utilize image super-resolution technique for object deformation visualization, so that we can visualize extremely small deformation easily.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133482714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Surface Reconstruction for Generating Digital Models of Prosthesis","authors":"L. C. Aquino, D. Q. Leite, G. Giraldi, Jaime S. Cardoso, P. Rodrigues, L. A. P. Neves","doi":"10.5220/0003356601370142","DOIUrl":"https://doi.org/10.5220/0003356601370142","url":null,"abstract":"The restoration and recovery of a defective skull can be performed through operative techniques to implant a customized prosthesis. Recently, image processing and surface reconstruction methods have been used for digital prosthesis design. In this paper we present a framework for prosthesis modeling. Firstly, we take the computed tomography (CT) of the skull and perform bone segmentation by thresholding. The obtained binary volume is processed by morphological operators, frame-by-frame, to get the inner and outer boundaries of the bone. These curves are used to initialize a 2D deformable model that generates the prosthesis boundary in each CT frame. In this way, we can fill the prosthesis volume which is the input for a marching cubes technique that computes the digital model of the target geometry. In the experimental results we demonstrate the potential of our technique and compare it with a related one.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123786032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A passive 3D scanner - acquiring high-quality textured 3D-models using a consumer digital-camera","authors":"M. Elter, Andreas Ernst, Christian Küblbeck","doi":"10.5220/0002039503110316","DOIUrl":"https://doi.org/10.5220/0002039503110316","url":null,"abstract":"We present a low-cost, passive 3d scanning system using an off-the-shelf consumer digital camera for image acquisition. We have developed a state of the art structure from motion algorithm for camera pose estimation and a fast shape from stereo approach for shape reconstruction. We use a volumetric approach to fuse partial shape reconstructions and a texture mapping technique for appearance recovery. We extend the state of the art by applying modifications of standard computer vision techniques to images of very high resolution to generate high quality textured 3d models. Our reconstruction results are robust and visually convincing.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115718333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Case-based indoor navigation","authors":"A. Micarelli, G. Sansonetti","doi":"10.5220/0002069300970106","DOIUrl":"https://doi.org/10.5220/0002069300970106","url":null,"abstract":"The purpose of this paper is to present a novel approach to the problem of autonomous robot navigation in a partially structured environment. The proposed solution is based on the ability of recognizing digital images that have been artificially obtained by applying a sensor fusion algorithm to ultrasonic sensor readings. Such images are classified in different categories using the well known Case-Based Reasoning (CBR) technique, as defined in the Artificial Intelligence domain. The architecture takes advantage of fuzzy theory for the construction of digital images, and wavelet functions for their analysis.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121868217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatiotemporal context in robot vision: Detection of static objects in the robocup four legged league","authors":"P. Guerrero, Javier Ruiz-del-Solar, Rodrigo Palma Amestoy","doi":"10.5220/0002069901360148","DOIUrl":"https://doi.org/10.5220/0002069901360148","url":null,"abstract":"In a piezoelectric tuning fork of the type which is used as an electro-mechanical filter and wherein a tuning fork is directly supported by a supporting member extended from a terminal plate, the supporting member is surrounded with a vibration isolation or absorbing member which in turn is bonded or otherwise joined to both the supporting member and the terminal plate, whereby the noise output due to the transmission of external vibrations or impacts may be considerably suppressed.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127803972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Endobronchial Tumor Mass Indication in Videobronchoscopy - Block based Analysis","authors":"A. Przelaskowski, R. Jóźwiak, T. Zieliński, M. Duplaga","doi":"10.5220/0002924405360542","DOIUrl":"https://doi.org/10.5220/0002924405360542","url":null,"abstract":"Computer-assisted interpretation of bronchial neoplastic lesion is an innovative but exceptionally challenging task due to highly diversified pathology appearance, video quality limitations and the role of subjective assessment of the endobronchial images. This work is focused on various manifestations of endobronchial tumors in acquired image sequences, bronchoscope navigation, artifacts, lightening and reflections, changing color dominants and unstable focus conditions. Proposed method of neoplasmatic areas indication was based on three steps of video analysis: a) informative frame selection, b) block-based unsupervised determining of enlarged textual activity, c) recognition of potentially tumor tissue, based on feature selection in different domains of transformed image and Support Vector Machine (SVM) classification. Prior to all of these procedures, wavelet-based image processing was applied to extract texture image for further analysis. Proposed method was verified with a reference image dataset containing diversified endobronchial tumor patterns. Obtained results reveal high accuracy for independent classification of individual (single video record) forms of endobronchial tumor patterns. The overall accuracy for whole dataset of 888 test blocks reached 100%. Less complex (approximately two times) procedure including initial blocks of interests selection reached accuracy","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128717669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconstructing ivus images for an accurate tissue classification","authors":"Karla L. Caballero Barajas, J. Barajas, O. Pujol, J. Mauri, P. Radeva","doi":"10.5220/0002061001130119","DOIUrl":"https://doi.org/10.5220/0002061001130119","url":null,"abstract":"Plaque rupture in coronary vessels is one of the principal causes of sudden death in western societies. Reliable diagnostic tools are of great interest for physicians in order to detect and quantify vulnerable plaque in order to develop an effective treatment. To achieve this, a tissue classification must be performed. Intravascular Ultrasound (IVUS) represents a powerful technique to explore the vessel walls and to observe its morphology and histological properties. In this paper, we propose a method to reconstruct IVUS images from the raw Radio Frequency (RF) data coming from the ultrasound catheter. This framework offers a normalization scheme to compare accurately different patient studies. Then, an automatic tissue classification based on the texture analysis of these images and the use of Adapting Boosting (AdaBoost) learning technique combined with Error Correcting Output Codes (ECOC) is presented. In this study, 9 in-vivo cases are reconstructed with 7 different parameter set. This method improves the classification rate based on images, yielding a 91% of well-detected tissue using the best parameter set. It is also reduced the inter-patient variability compared with the analysis of DICOM images, which are obtained from the commercial equipment.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126625161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The descriptive techniques for image analysis and recognition","authors":"I. Gurevich","doi":"10.5220/0002071302230229","DOIUrl":"https://doi.org/10.5220/0002071302230229","url":null,"abstract":"A process is provided for preparing propellant compositions including a film-forming synthetic polymer that are capable of forming foamed structures containing open and/or closed cells, which may optionally contain a material which is deposited in the pores and/or walls of the structure as the structure is formed, which comprises coating the synthetic polymer in particulate form with an inert solid material insoluble in the propellant and in solutions of the synthetic resin the propellant at atmospheric temperature; and then adding the propellant and dissolving the synthetic polymer in the propellant. The process is of particular application for preparing such synthetic polymer-propellant compositions in situ in closed containers capable of withstanding an internal pressure sufficient to keep the propellant in the liquid phase at atmospheric temperature, and when the composition is withdrawn from the container to atmospheric pressure, the propellant volatilizes rapidly and a foamed structure is formed within a few seconds.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122791046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}