{"title":"3D Gesture Recognition by Superquadrics","authors":"Ilya M. Afanasyev, M. Cecco","doi":"10.5220/0004348404290433","DOIUrl":"https://doi.org/10.5220/0004348404290433","url":null,"abstract":"Abstract: This paper presents a 3D gesture recognition and localization method based on processing 3D data of hands in color gloves acquired by a 3D depth sensor such as Microsoft Kinect. The RGB information of every 3D datapoint is used to segment the 3D point cloud into 12 parts (a forearm, a palm and 10 fingers). The object (a hand with fingers) should be known a priori and anthropometrically modeled by SuperQuadrics (SQ) with certain scaling and shape parameters. The gesture (pose) is estimated hierarchically by a RANSAC object search with least-squares fitting of the segments of the 3D point cloud to the corresponding SQ models: first the pose of the hand (forearm & palm), and then the positions of the fingers. The solution is verified by evaluating the matching score, i.e. the number of inliers for which the distances between the SQ surfaces and the 3D datapoints satisfy an assigned distance threshold.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126770236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Geometry Closure for Hemodynamics Simulations","authors":"J. Bruijns, R. Hermans","doi":"10.5220/0001782701530161","DOIUrl":"https://doi.org/10.5220/0001782701530161","url":null,"abstract":"Physicians may treat an aneurysm by injecting coils through a catheter into the aneurysm, or by anchoring a stent as a flow diverter. Since such an intervention is risky, a patient is only treated when the probability of aneurysm rupture is relatively high. Hemodynamic properties of aneurysmal blood flow, extracted by computational fluid dynamics calculations, are hypothesized to be relevant for predicting this rupture. Since hemodynamics simulations require a closed vessel section with defined inflow and outflow points, and since the user can easily overlook small side branches, we have developed an algorithm for fully-automatic geometry closure of an open vessel section. Since X-ray based flow measurements indicate the length needed for a developed flow inside the geometry, we have also developed an algorithm to create a geometry closure around an aneurysm based on a length criterion. After both geometry closure algorithms were tested extensively, the practicability of the hemodynamics workstation is currently being tested.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126243934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reconstructing Archeological Vessels from Fragments using Anchor Points Residing on Shard Fragment Borders","authors":"Zexi Liu, F. Cohen, E. Taslidere","doi":"10.5220/0004231800800083","DOIUrl":"https://doi.org/10.5220/0004231800800083","url":null,"abstract":"This paper presents a method to assist in the tedious process of reconstructing ceramic vessels from excavated fragments. The method models the fragment borders as 3D curves and uses intrinsic differential anchor points on the curves. Corresponding anchors on different fragments are identified using absolute invariants and a longest-string search technique. A rigid transformation is computed from the corresponding anchors, allowing the fragments to be virtually mended. A global constraint induced by the surface of revolution (basis shape) is used to decide how all pairs of mended fragments come together into one globally mended vessel. The accuracy of mending is measured using a distance error map metric. The method was tested on a set of 3D scanned fragments (313 pieces) coming from 19 broken vessels. 80% of the pieces were properly mended, with alignment errors at the scanner resolution level. The method took 59 seconds to mend the pieces plus 60 minutes for 3D scanning, compared to 12 hours for manual stitching.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"9 9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123802277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Color Quantization via Spatial Resolution Reduction","authors":"G. Ramella, G. S. D. Baja","doi":"10.5220/0004272100780083","DOIUrl":"https://doi.org/10.5220/0004272100780083","url":null,"abstract":"A color quantization algorithm is presented, which is based on the reduction of the spatial resolution of the input image. The maximum number of colors nf desired for the output image is used to fix the proper spatial resolution reduction factor. This is used to build a lower resolution version of the input image with size nf. Colors found in the lower resolution image constitute the palette for the output image. The three components of each color of the palette are interpreted as the coordinates of a voxel in the 3D discrete space. The Voronoi Diagram of the set of voxels corresponding to the colors of the palette is computed and is used for color mapping of the input image.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"12 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120889409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Action Recognition by Matching Clustered Trajectories of Motion Vectors","authors":"Michalis Vrigkas, Vasileios Karavasilis, Christophoros Nikou, I. Kakadiaris","doi":"10.5220/0004277901120117","DOIUrl":"https://doi.org/10.5220/0004277901120117","url":null,"abstract":"A framework for action representation and recognition based on the description of an action by time series of optical flow motion features is presented. In the learning step, the motion curves representing each action are clustered using Gaussian mixture modeling (GMM). In the recognition step, the optical flow curves of a probe sequence are also clustered using a GMM and the probe curves are matched to the learned curves using a non-metric similarity function based on the longest common subsequence, which is robust to noise and provides an intuitive notion of similarity between trajectories. Finally, the probe sequence is categorized to the learned action with the maximum similarity using a nearest neighbor classification scheme. Experimental results on common action databases demonstrate the effectiveness of the proposed method.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132432150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High Resolution Point Cloud Generation from Kinect and HD Cameras using Graph Cut","authors":"Suvam Patra, B. Bhowmick, Subhashis Banerjee, P. Kalra","doi":"10.5220/0003863003110316","DOIUrl":"https://doi.org/10.5220/0003863003110316","url":null,"abstract":"This paper describes a methodology for obtaining a high resolution dense point cloud using Kinect (J. Smisek and Pajdla, 2011) and HD cameras. Kinect produces a VGA resolution photograph and a noisy point cloud. But high resolution images of the same scene can easily be obtained using additional HD cameras. We combine the information to generate a high resolution dense point cloud. First, we do a joint calibration of Kinect and the HD cameras using traditional epipolar geometry (R. Hartley, 2004). Then we use the sparse point cloud obtained from Kinect and the high resolution information from the HD cameras to produce a dense point cloud in a registered frame using graph cut optimization. Experimental results show that this approach can significantly enhance the resolution of the Kinect point cloud.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130170306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deformable Image Registration - Improved Fast Free Form Deformation","authors":"B. Papież, T. Zieliński, B. Matuszewski","doi":"10.5220/0002920105300535","DOIUrl":"https://doi.org/10.5220/0002920105300535","url":null,"abstract":"In this paper, we describe a class of deformable registration techniques with application to radiotherapy of prostate cancer. To solve the registration problem, we introduce the Jacobi and successive over-relaxation methods and compare them with the Gauss-Seidel method used in the variational framework previously proposed in the literature. A multi-resolution scheme was used to improve the speed of computation, robustness and the ability to recover larger image deformations. To investigate the properties of these algorithms, they were tested using simulated data with a known displacement field and real CT images. The results show that it is possible to improve currently widely used algorithms by introducing simple modifications to the numerical solving scheme.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129055784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of Reconstruction and Texturing of 3D Urban Terrain by L1 Splines, Conventional Splines and Alpha Shapes","authors":"D. Bulatov, J. Lavery","doi":"10.5220/0001746504030409","DOIUrl":"https://doi.org/10.5220/0001746504030409","url":null,"abstract":"We compare computational results for three procedures for reconstruction and texturing of 3D urban terrain. One procedure is based on recently developed “L1 splines”, another on conventional splines and a third on “α-shapes”. Computational results generated from optical images of a model house and of the Gottesaue Palace in Karlsruhe, Germany are presented. These comparisons indicate that the L1-spline-based procedure produces textured reconstructions that are superior to those produced by the conventional-spline-based procedure and the α-shapes-based procedure.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130935275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Maritime Targets Detection from Ground Cameras Exploiting Semi-supervised Machine Learning","authors":"Eftychios E. Protopapadakis, Konstantinos Makantasis, N. Doulamis","doi":"10.5220/0005456205830594","DOIUrl":"https://doi.org/10.5220/0005456205830594","url":null,"abstract":"This paper presents a vision-based system for maritime surveillance using moving PTZ cameras. The proposed methodology fuses a visual attention method, which exploits low-level image features appropriately selected for the maritime environment, with an appropriate tracker. Such features require no assumptions about environmental or visual conditions. The offline initialization is based on a large-graph semi-supervised technique in order to minimize the user’s effort. The system’s performance was evaluated with videos from cameras placed at Limassol port and the Venetian port of Chania. Results suggest high detection ability, despite dynamically changing visual conditions and different kinds of vessels, all in real time.","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121042103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frame Interpolation with Occlusion Detection using a Time Coherent Segmentation","authors":"Rida Sadek, C. Ballester, Lluís Garrido, E. Meinhardt, V. Caselles","doi":"10.5220/0003830803670372","DOIUrl":"https://doi.org/10.5220/0003830803670372","url":null,"abstract":"In this paper we propose an interpolation method to produce a sequence of plausible intermediate frames between two input images. The main feature of the proposed method is the handling of occlusions using a time coherent video segmentation into spatio-temporal regions. Occlusions and disocclusions are defined as points in a frame where a region ends or starts, respectively. Out of these points, forward and backward motion fields are used to interpolate the intermediate frames. After motion-based interpolation, there may still be some holes which are filled using a hole filling algorithm. We illustrate the proposed method with some","PeriodicalId":411140,"journal":{"name":"International Conference on Computer Vision Theory and Applications","volume":"179 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123107398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}