{"title":"Research on Optimization Recognition Method of Digital Image Target Point Based on Machine Vision","authors":"G. Zhao","doi":"10.1145/3506651.3506976","DOIUrl":"https://doi.org/10.1145/3506651.3506976","url":null,"abstract":"In order to enhance the auto-focus detection capability of digital images exposed by a single strong light source, an optimized recognition method for digital image target points based on machine vision tracking learning is proposed. A feature point enhancement detection model for the single-strong-light-source exposed image is established, and feature matching is carried out under information enhancement technology. A three-dimensional reconstruction model and a fuzzy feature detection algorithm for the image are then constructed, and RGB decomposition is performed through fast low-illumination feature point identification and matching, yielding the spatial matching function of the digital image under fast low-illumination feature point recognition. Under the machine vision tracking recognition model, feature point recognition information is fused and, combined with a spatial visual information enhancement method, matched-filter detection of the exposed image is carried out. Finally, through wavelet feature decomposition and information enhancement, target point recognition is optimized and the signal-to-noise ratio of the image is improved. 
The results show that this method can effectively identify the target points of digital images exposed by a single strong light source.","PeriodicalId":280080,"journal":{"name":"2021 4th International Conference on Digital Medicine and Image Processing","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114705206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Image Registration using The Radon Transform: Review-and-Improvement","authors":"F. Hjouj, Mohamed Soufiane Jouini","doi":"10.1145/3506651.3506654","DOIUrl":"https://doi.org/10.1145/3506651.3506654","url":null,"abstract":"In this paper, we review the problem of identifying a linear transformation applied to an image. Three major parts are presented, all involving the use of the Radon transform. First, recovering a sequence of basic transformations on an image, namely reflection, rotation, dilation, and translation. Second, recovering the transformation between a reference image and an inspected image obtained from it by a general linear transformation; in doing so, we review our analysis using the singular value decomposition of the transformation matrix. Third, we present an alternative, efficient method of obtaining the transformation matrix by testing a well-defined class of candidate matrices using only two projections of the inspected image.","PeriodicalId":280080,"journal":{"name":"2021 4th International Conference on Digital Medicine and Image Processing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121507113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of the Relationship of Systemic Lupus Erythematosus with exogenous factors in Peru","authors":"Sario Angel Chamorro Quijano, Mauricio Muñoz Melgarejo, G. Rodríguez, Doris Marlene Muñoz Saenz, Jenny Caroline Muñoz Saenz","doi":"10.1145/3506651.3506663","DOIUrl":"https://doi.org/10.1145/3506651.3506663","url":null,"abstract":"This research analyzes how exogenous factors (EF) influence the complications of Systemic Lupus Erythematosus (SLE) by region in Peru (Costa, Sierra, and Selva). Three studies are presented. The first study determines how EF complicate the clinical manifestations in patients diagnosed with SLE in each region. In the second study, the most common and susceptible diseases related to lupus are compared through medical reports by region. In the third study, clinical histories of patients with SLE from Costa, Sierra, and Selva who presented complications specific to their region are analyzed and compared to determine whether the complications are related. 
The results indicate that the complications are not related across regions, since each region presents its own EF (climate, diet, altitude in m.a.s.l., radiation) that influence the evolution, prognosis, and complications of the disease. It should be noted that each region presented a different complication: in the Sierra, the clinical picture was SLE + serositis; in the Selva, SLE + tuberculosis; and in the Costa, SLE + skin diseases. Mortality was caused by SLE, but the associated complication varied according to the region.","PeriodicalId":280080,"journal":{"name":"2021 4th International Conference on Digital Medicine and Image Processing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114114287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"New Level-Set-Based Shape Recovery Method and its application to sparse-view shape tomography","authors":"Haytham A. Ali, H. Kudo","doi":"10.1145/3506651.3506655","DOIUrl":"https://doi.org/10.1145/3506651.3506655","url":null,"abstract":"The recovery of shapes from a small number of projections is very important in computed tomography. In this paper, we propose a novel scheme based on a collocation set of Gaussian functions to represent any object using a limited number of projections. This approach provides a continuous representation of both the implicit function and its zero level set. We show that an appropriate choice of basis function to represent the parametric level set leads to an optimization problem with a modest number of parameters, which overcomes many difficulties of traditional level-set methods, such as regularization, re-initialization, and the use of a signed distance function. For the purposes of this paper, we use a dictionary of Gaussian functions located at lattice points, which provides flexibility to represent shapes with few terms, to parameterize the level-set function. We propose a convex program to recover the dictionary coefficients; by overcoming the issue of local minima in the cost function, it works stably with only four projections. 
Finally, the performance of the proposed approach on three example inverse problems shows that our method compares favorably to Sparse Shape Composition (SSC), Total Variation, and Dual Problem methods.","PeriodicalId":280080,"journal":{"name":"2021 4th International Conference on Digital Medicine and Image Processing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127951917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Median Filter Helps Lymph Node Segmentation in Deep Learning via PET/CT","authors":"Xuan Zhang, Wentao Liao, Guoping Xu","doi":"10.1145/3506651.3506662","DOIUrl":"https://doi.org/10.1145/3506651.3506662","url":null,"abstract":"Pathological lymph node segmentation plays a vital role in clinical practice, yet it remains a challenging problem owing to low contrast with surrounding structures on images. In this paper, we investigate whether the classical median filter helps lymph node segmentation in deep learning via PET/CT. Specifically, we design a median filter layer and integrate it into two types of deep convolutional neural networks, SegNet and DeepLabv3+, both of which adopt an encoder-decoder structure that is well suited to segmenting objects in a multi-scale way. Meanwhile, we adopt three objective functions, namely cross-entropy loss, generalized Dice loss, and focal loss, to study which is the best choice for pathological lymph node segmentation with the median filter. 
Four-fold cross-validation was performed on 63 volumes containing 214 malignant lymph nodes, and the experiments demonstrate that the median filter helps improve lymph node segmentation performance with cross entropy as the loss function, yielding improvements of 3% in Sensitivity and 2% in Dice Similarity Coefficient (DSC) with SegNet, and 4% and 3% respectively with DeepLabv3+.","PeriodicalId":280080,"journal":{"name":"2021 4th International Conference on Digital Medicine and Image Processing","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128063573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CoFIM: A Computational Framework for Proteomic and Metabolomic Integrated Data Analysis","authors":"A. Zhong, Alice Liu, Amy Wu","doi":"10.1145/3506651.3506658","DOIUrl":"https://doi.org/10.1145/3506651.3506658","url":null,"abstract":"Motivation: Coronavirus disease (COVID-19), an infectious disease caused by the SARS-CoV-2 virus, struck the world in late 2019 and has caused millions of deaths worldwide. Effective and early diagnosis is truly pivotal, and many studies have been initiated toward that end. Existing studies have limitations, such as focusing on only one type of omics data. This study aims to develop a computational model that studies COVID-19 by integrating metabolomic and proteomic data, thereby enabling detection of the disease at an early stage. Methods: The computational framework for integrating multi-omics data (CoFIM) consists of two parts. The first part is a statistical analysis of the datasets: a series of univariate and multivariate analyses was conducted to identify potential biomarkers from a proteomic and metabolomic dataset of serum samples from severe and non-severe COVID-19 patients. The second part is a machine learning model that predicts a patient's disease progression and provides more insightful information for understanding the disease. Results: CoFIM integrates both proteomic and metabolomic data and provides a customizable and scalable framework for analyzing multi-omics data. CoFIM is demonstrated on the COVID-19 dataset, where several new protein biomarkers (IGKV1-12, PCOLCE, PGLYRP2, PCYOX1, LUM, IGHV1-46) were detected. 
We believe CoFIM will be widely used for multi-omics data analysis.","PeriodicalId":280080,"journal":{"name":"2021 4th International Conference on Digital Medicine and Image Processing","volume":"33 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123231380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Image Combination Segmentation Method Based on Clustering Analysis and Edge Detection","authors":"Guocheng Liu","doi":"10.1145/3506651.3506975","DOIUrl":"https://doi.org/10.1145/3506651.3506975","url":null,"abstract":"Considering that it is difficult to completely segment spider mite images on the leaves of field crops from the leaf background, a combined segmentation method integrating the K-means clustering algorithm and the Canny edge detection algorithm is proposed. This method first uses the K-means clustering algorithm to filter out most of the leaf background, then extracts the closed edge contour of the spider mite based on Canny edge detection, and implements binarization segmentation of the spider mite image through algorithms such as seed filling and morphological opening operations. Experiments show that this method achieves complete segmentation of spider mite images on leaves, providing a new technique for spider mite pest analysis and insect counting.","PeriodicalId":280080,"journal":{"name":"2021 4th International Conference on Digital Medicine and Image Processing","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130344412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of Different Models of Voiceprint Recognition used in Automatic Door Lock System (August 2021)","authors":"Jiawei Liu, Chenyang Jin, Jingxi Liang, Luoqi Wang","doi":"10.1145/3506651.3506660","DOIUrl":"https://doi.org/10.1145/3506651.3506660","url":null,"abstract":"For any system, its reliability and construction cost have always been two major determinants of whether it can be used daily. In the field of voiceprint recognition, people are often forced to choose between accuracy and convenience. This paper discusses the performance of two speaker verification models in different environments and whether it is possible to balance cost against results. The Gaussian Mixture Model with universal background model (GMM-UBM) and a deep-learning method are selected to represent two common approaches to speaker verification. Through comparison of the two models, we find that the deep-learning method requires larger training datasets to function, performing worse than the GMM-UBM model when trained on the same dataset containing only a few samples, while both methods reach nearly 100% accuracy when provided a large enough dataset to train the model. 
Meanwhile, despite attempts to yield higher accuracy by tuning the configuration of both models, excellent performance appears only when large amounts of training data are given and little noise is present.","PeriodicalId":280080,"journal":{"name":"2021 4th International Conference on Digital Medicine and Image Processing","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130671366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Study on Diffusion Law of Aerosol Particles in Indoor Human Droplets","authors":"Changping Chen, C. Qian, Zhengyu Luo","doi":"10.1145/3506651.3506665","DOIUrl":"https://doi.org/10.1145/3506651.3506665","url":null,"abstract":"This paper studies the movement characteristics of aerosol particles produced by human coughing, and on this basis compares the influence of different air distribution forms on the distribution and movement of aerosol particles around a simplified indoor human body, so as to determine the best air distribution form. Using the computational fluid dynamics (CFD) method, a 1:1 human body model was established according to actual human body proportions, and a mathematical model of aerosol particle movement was built using the discrete phase model (DPM). The diffusion of aerosol particles exhaled by the human body was simulated under three ventilation modes: mixed ventilation, displacement ventilation, and floor air supply. The results show that the droplet particles move forward and downward after being ejected from the mouth, and their mass concentration is gradually diluted in the air. 
Larger particles eventually deposit, while smaller particles are lifted by the human body's thermal plume and remain \"floating\" in suspension as aerosols. Different airflow organization forms cause significant differences in particle diffusion: displacement ventilation performs best in diluting particle mass concentration, followed by floor air supply, while mixed ventilation performs relatively poorly.","PeriodicalId":280080,"journal":{"name":"2021 4th International Conference on Digital Medicine and Image Processing","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133362802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Registration of the Point Cloud Data Using Parameter Adaptive Super4PCS Algorithm in Medical Image Analysis","authors":"Shun Su, G. Song, Yiwen Zhao","doi":"10.1145/3506651.3506652","DOIUrl":"https://doi.org/10.1145/3506651.3506652","url":null,"abstract":"In this article, we use the parameter-adaptive Super4PCS algorithm to achieve high-precision registration of medical point clouds. First, the corresponding point clouds are generated from the biological data (CT, MRI) to be registered. The characteristics of the point clouds are then analyzed and used to adaptively set the parameters of Super4PCS, after which point cloud registration is performed. We compare the accuracy and robustness of six different algorithms; our method is the best on both measures. At the same time, no parameter input is required, which is very convenient for medical workers. Experiments on medical models demonstrate the efficiency and robustness of our algorithm.","PeriodicalId":280080,"journal":{"name":"2021 4th International Conference on Digital Medicine and Image Processing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132290195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}