{"title":"Density-aware Stratified Sampling for Visualizing Large Volume Geo-Spatial Data","authors":"Liming Dong, Bin Feng, Weidong Liu","doi":"10.1145/3406971.3406991","DOIUrl":"https://doi.org/10.1145/3406971.3406991","url":null,"abstract":"Sampling is a popular approach in big data visualization, however, current sampling approaches don't work well when visualization type is scatter plot, and are even worse in supporting keyword search queries. In this paper, we present an approach of density-aware stratified sampling, it first probing the density of record in different areas of the visualization, then taking the density data to guide the stratified sampling. We conducted an extensively user study to show the efficiency and efficacy of our approach, the experiment shows that our approach can provide very close scatter plots of keyword search queries of a 200 million record dataset within 0.2 second, and the construction time is only 1/4 of an alternative method.","PeriodicalId":111905,"journal":{"name":"Proceedings of the 4th International Conference on Graphics and Signal Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130791208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Polyp Classification Based on Deep Neural Network for Colonoscopic Images","authors":"M. Tsai, Wen-Jan Chen, Jen-Yung Lin, Guo-Shiang Lin, Sheng-lei Yan","doi":"10.1145/3406971.3406977","DOIUrl":"https://doi.org/10.1145/3406971.3406977","url":null,"abstract":"In this paper, a colorectal polyp classification method based on deep neural network (DNN) was proposed for BLI (Blue Laser Imaging) images. Since polyps can be considered as objects, an one-stage object detection network, YOLO (You Only Look Once), is selected to develop a computer-aided system to detect and classify polyps. Based on data augmentation and transfer learning, the DNN was modified to classify polyps into two classes: hyperplastic and adenomatous. To evaluate the performance of the proposed method, many colonoscopic images are collected for testing. The precision and recall rates can achieve 99% for 234 cases outside the training set. Experimental results show that the proposed method can not only detect but also classify colorectal Polyps in BLI images.","PeriodicalId":111905,"journal":{"name":"Proceedings of the 4th International Conference on Graphics and Signal Processing","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134317752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dense Block U-net for Dynamic Scene Deblurring","authors":"Yujie Wu, Hong Zhang, Yawei Li, Yinan Mao, Lei He, Zhoufeng Liu","doi":"10.1145/3406971.3406972","DOIUrl":"https://doi.org/10.1145/3406971.3406972","url":null,"abstract":"Motion blur often arises in a single image because of the camera shake, the objects motion and the depth variation. The image deblurring is a challenging task due to its ill-posed nature. To remove these blurriness, the conventional energy optimization based methods always rely on the assumption such that the blur kernel is uniform across the entire image. With the development of the deep neural network, the learning based methods were proposed to tackle with the non-uniform blur cases. In this paper, we propose a U-Net network containing dense blocks for dynamic scene deblurring. By passing the kernel estimation, our model significantly reduces the inference time. The extensive experiments on both synthetic and real blurred images demonstrate that our method outperforms the state-of-the-art blind deblurring algorithms.","PeriodicalId":111905,"journal":{"name":"Proceedings of the 4th International Conference on Graphics and Signal Processing","volume":"406 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122925163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-Time Grasp Detection Using Improved FMM and Cascaded Neural Networks","authors":"L. Weiwei, Wu Peng, Dong Shiwen","doi":"10.1145/3406971.3406988","DOIUrl":"https://doi.org/10.1145/3406971.3406988","url":null,"abstract":"The successful grasping task for the robotic arm requires precise grasping posture. In this paper, we use the cascading depth network to predict the optimal object grasping pose. The model is mainly divided into two steps: i) generating a set of candidates that contain the regions of objects; ii) getting the optimal capture position by detecting the candidate region, and combining the depth image to obtain the three-dimensional coordinates of the capture position for objects. Due to flaws and edge noise in the depth image of Kinect, an improved FMM (Fast Marching Method) algorithm is used to repair the depth image hole, and then the joint bilateral filtering algorithm is employed to recover the edge noise of the depth image. Experimental results in public dataset and real scenes have demonstrated the effectiveness of the proposed method.","PeriodicalId":111905,"journal":{"name":"Proceedings of the 4th International Conference on Graphics and Signal Processing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125140026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance Comparison of Saliency Detection Methods for Food Region Extraction","authors":"Takuya Futagami, N. Hayasaka, T. Onoye","doi":"10.1145/3406971.3406974","DOIUrl":"https://doi.org/10.1145/3406971.3406974","url":null,"abstract":"Several methods for extracting food regions from food images use visual saliency to improve accuracy. The effectiveness of saliency detection methods for food extraction, however, has not been discussed sufficiently. Thus, the effectiveness of well-known saliency detection methods is compared thoroughly for the future development of highly accurate food-extraction methods. Ten saliency detection methods, which consisted of seven handcrafted feature-based approaches and three deep learning-based approaches, were tested by applying them to 240 food images. The results suggest that MSI, which uses only neural networks without the assumption that food regions tend to be found at the center of images, predicted food regions most accurately in terms of areas under a receiver operating characteristic curve (AUC). Additionally, GMR, which assumes that food regions tend not to be found around the four sides of an image, was also effective on the food extraction task. The AUCs of these methods were more than 4% larger than that of a center model that is frequently used as a baseline for saliency detection. Furthermore, this paper supports these results by comparing other methods and determining the properties of food images.","PeriodicalId":111905,"journal":{"name":"Proceedings of the 4th International Conference on Graphics and Signal Processing","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129089017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-Time Cleaning and Refinement of Facial Animation Signals","authors":"Eloïse Berson, Catherine Soladié, Nicolas Stoiber","doi":"10.1145/3406971.3406985","DOIUrl":"https://doi.org/10.1145/3406971.3406985","url":null,"abstract":"With the increasing demand for real-time animated 3D content in the entertainment industry and beyond, performance-based animation has garnered interest among both academic and industrial communities. While recent solutions for motion-capture animation have achieved impressive results, handmade postprocessing is often needed, as the generated animations often contain artifacts. Existing real-time motion capture solutions have opted for standard signal processing methods to strengthen temporal coherence of the resulting animations and remove inaccuracies. While these methods produce smooth results, they inherently filter-out part of the dynamics of facial motion, such as high frequency transient movements. In this work, we propose a real-time animation refining system that preserves -or even restores- the natural dynamics of facial motions. To do so, we leverage an off-the-shelf recurrent neural network architecture that learns proper facial dynamics patterns on clean animation data. We parametrize our system using the temporal derivatives of the signal, enabling our network to process animations at any framerate. Qualitative results show that our system is able to retrieve natural motion signals from noisy or degraded input animation.","PeriodicalId":111905,"journal":{"name":"Proceedings of the 4th International Conference on Graphics and Signal Processing","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125592321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tomographic Reconstruction using Range Restricted GMRES","authors":"N. R. Jaffri, L. Shi, Usama Abrar","doi":"10.1145/3406971.3409042","DOIUrl":"https://doi.org/10.1145/3406971.3409042","url":null,"abstract":"This paper is concerned with the reconstruction of large ill determined and ill-posed problem using the iterative method. The ill-posed problems ascend from the discretization of ill-posed linear systems. The technique used in this work widely applicable to reconstruct the two-dimensional tomographic image. Reconstruction achieved after producing a right-hand side that can be done using any of the famous tomographic experimental arrangements (TDLAS, ECT, ultrasound, etc.). The problem discussed in this paper is with error contaminated right-hand side. The numerical solution of this matrix is fractionally complicated as the matrix is vast and ill-conditioned. Range-restricted GMRES (RRGMRES) used as regularization in this work. The iteration method is implemented using MATLAB to compute the inverse problem.","PeriodicalId":111905,"journal":{"name":"Proceedings of the 4th International Conference on Graphics and Signal Processing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121788424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accuracy Assessment for Geometric Features Extraction from Multirotor UAV's Images","authors":"Mohammed AL-Qadri, Jian Cheng","doi":"10.1145/3406971.3406989","DOIUrl":"https://doi.org/10.1145/3406971.3406989","url":null,"abstract":"The use of the new technology Unmanned Aerial Vehicle (UAVs) is overgrowing across many application fields, including engineering surveys, terrain mapping, or generating 3D models. Accuracy assessment for object extraction from UAV images is critical for a large number of applications and plays an essential role in ensuring its quality. This study aims to assess the accuracy of geometric features that can be acquired with low-cost multirotor UAV. In this study, the UAV used to capture the ground at different altitudes, as the accuracy is influenced by flight height---the data processed by using photogrammetric software, which is based on structure from motion. The accuracy assessment is conducted by comparing the estimated results with those obtained via field measurements. It was found that high altitude gives a better result as compared to low altitude. In conclusion, this study demonstrates the feasibility of using high accuracy UAV's image to provide data for geometric design with reliable accuracy.","PeriodicalId":111905,"journal":{"name":"Proceedings of the 4th International Conference on Graphics and Signal Processing","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127496256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast and Effective Median Filter Algorithms for Salt and Pepper Noise","authors":"Xingyu Gao, Guo Peng, Lihua Ning","doi":"10.1145/3406971.3409043","DOIUrl":"https://doi.org/10.1145/3406971.3409043","url":null,"abstract":"For large data of the image processing in the fields of medication or communication, the salt and pepper noise is one of the key problems to affect the quality of the images. In order to obtain stable, fast, and effective filtering algorithms for salt and pepper noise, this paper proposes four filter algorithms, named fast and effective median filter1 (FEMF1), fast and effective median filter2(FEMF2), fast and effective median filter3(FEMF3), and fast and effective median filter4(FEMF4), respectively. FEMF1 uses a second-order differential method to convert the image to a binary matrix to determine the exact location of the broken pixels. It is stable and fast, however it has a sense of noise. FEMF2 and FEMF3 use different directions to search with changing windows. FEMF4 grows square bar window with unlimited growth and has search functions in all directions. FEMF4 costs the lowest time and presents the high quality performance. The running time of the FEMF2 is only slower than that of the FEMF4. Experiment show that FEMF2, FMEF3 and FEMF4 have stable, fast and effective features for low and high noises. These algorithms can be used in engineering applications.","PeriodicalId":111905,"journal":{"name":"Proceedings of the 4th International Conference on Graphics and Signal Processing","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123432180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Solving Least Square Problem in Tomography","authors":"N. R. Jaffri, L. Shi, Usama Abrar","doi":"10.1145/3406971.3409044","DOIUrl":"https://doi.org/10.1145/3406971.3409044","url":null,"abstract":"The efficacy of the tomographic process depends upon the image reconstruction. Utmost mathematical problems encounter in tomography are systems of large linear equations. Krylov solvers for linear systems have sophisticated and straightforward formulae for the residual norm. Two Krylov solvers CGLS and LSQR are the variations of steep descent. The steep descent is one of the fundamental iterative technique used exclusively for the solution of large sparse square matrices. However, CGLS and LSQR the variations of steep descent also solve least square problems. This work involves the comparison of CGLS and LSQR. CGLS and LSQR are mathematically equivalent, but LSQR is robust and difficult to apply. Large sparse linear least square problem solved by LSQR, that is Krylov space solver in-fact based on Lanczos's bidiagonalization. This work applies said variations (CGLS and LSQR) of the steep descent on a tomographic test problem and compare the two algorithms on the basis of accuracy using MATLAB.","PeriodicalId":111905,"journal":{"name":"Proceedings of the 4th International Conference on Graphics and Signal Processing","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123213716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}