{"title":"Real-time detection of navel orange fruits in the natural environment based on deep learning","authors":"Qianli Zhang, Qiusheng Li, Junyong Hu, Xianghui Xie","doi":"10.1145/3503047.3503105","DOIUrl":"https://doi.org/10.1145/3503047.3503105","url":null,"abstract":"Abstract: Deep learning is widely used in intelligent picking, but adverse environmental conditions degrade target detection and recognition, which is crucial to the accurate and efficient work of picking robots. First, the data set needed for the experiment was created manually: 925 navel orange images, including 290 taken backlit on sunny days, 310 in forward light, and 325 on cloudy days, split 8:2 into training and test sets. Then, navel orange detection was studied based on an improved version of the single-stage target detection network PP-YOLO. A ResNet backbone with deformable convolution extracted image features, combined with an FPN (feature pyramid network) for feature fusion to achieve multi-scale detection. The K-means clustering algorithm selected appropriate anchor sizes for the navel orange targets, which reduced training time and the confidence error of the prediction boxes. A pre-trained model was loaded, and performance was compared against the original PP-YOLO, YOLO-v4, YOLO-v3, and Faster R-CNN networks. By analyzing the loss and AP curves of the training logs, detection of navel oranges under backlit sunny, forward-light, and cloudy conditions was realized. Finally, the improved PP-YOLO achieved detection accuracies of 90.81%, 92.46%, and 94.31%, with recognition speeds of 72.3 fps, 73.71 fps, and 74.9 fps, respectively. The model outperforms the other four networks and shows better robustness. 
CCS CONCEPTS • Computing methodologies∼Artificial intelligence∼Computer vision∼Computer vision tasks∼Vision for robotics","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124987147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
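The anchor-selection step in the abstract above — K-means clustering over ground-truth box sizes to pick anchors for the detector — can be sketched as follows. This is a minimal illustration, not the authors' code: the 1 − IoU distance and the deterministic spread initialization are common choices assumed here.

```python
def iou_wh(box, anchor):
    # IoU of two boxes aligned at a common corner, each given as (w, h)
    inter = min(box[0], anchor[0]) * min(box[1], anchor[1])
    union = box[0] * box[1] + anchor[0] * anchor[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100):
    """Cluster (w, h) box sizes, assigning each box to the anchor of highest IoU."""
    # Spread the initial centers across the sorted box sizes (deterministic)
    step = max(1, len(boxes) // k)
    centers = sorted(boxes)[::step][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            best = max(range(k), key=lambda i: iou_wh(b, centers[i]))
            clusters[best].append(b)
        # New center = mean (w, h) of the cluster; keep old center if empty
        new = [(sum(w for w, _ in c) / len(c), sum(h for _, h in c) / len(c))
               if c else centers[i] for i, c in enumerate(clusters)]
        if new == centers:
            break
        centers = new
    return sorted(centers)
```

Anchors matched to the dataset's box-size modes are what shorten training and tighten the prediction boxes, as the abstract describes.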
{"title":"Domain adaptation based on the measure of kernel-product maximum mean discrepancy","authors":"Xuerui Chen, Guohua Peng","doi":"10.1145/3503047.3503108","DOIUrl":"https://doi.org/10.1145/3503047.3503108","url":null,"abstract":"Transfer learning is an important branch of machine learning that focuses on applying what has been learned in an old domain to new problems. Maximum mean discrepancy (MMD) is used in most existing works to measure the difference between two distributions with a single kernel. Recent works exploit linear combinations of multiple kernels and must learn the weight of each kernel. Given the limited expressiveness of a single kernel and the complexity of multiple kernels, we propose a novel domain adaptation approach based on kernel-product maximum mean discrepancy (DA-KPMMD). We choose the product of a linear kernel and a Gaussian kernel as the new kernel. Specifically, we simultaneously reduce differences in the marginal and conditional distributions between the source and target domains by adaptively adjusting the importance of the two distributions. Further, the within-class distance is minimized to differentiate samples of different classes. We conduct cross-domain classification experiments on three image datasets, and the results show the superiority of DA-KPMMD compared with several domain adaptation methods. 
CCS CONCEPTS • Computing methodologies • Machine learning • Machine learning approaches • Kernel methods","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115469477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
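The kernel-product measure described above — the product of a linear and a Gaussian kernel plugged into the empirical MMD — can be sketched like this. A minimal illustration assuming a biased (V-statistic) MMD estimate and a fixed Gaussian bandwidth `sigma`; the paper's adaptive weighting of marginal and conditional distributions is not reproduced.

```python
import numpy as np

def kernel_product(X, Y, sigma=1.0):
    # Elementwise product of a linear kernel and a Gaussian (RBF) kernel;
    # the product of two PSD kernels is PSD by the Schur product theorem.
    lin = X @ Y.T
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return lin * np.exp(-sq / (2 * sigma**2))

def mmd2(Xs, Xt, sigma=1.0):
    # Biased empirical estimate of squared MMD between source and target samples
    return (kernel_product(Xs, Xs, sigma).mean()
            + kernel_product(Xt, Xt, sigma).mean()
            - 2 * kernel_product(Xs, Xt, sigma).mean())
```

Identical samples give an MMD of zero, while a distribution shift between source and target produces a positive value — the quantity a domain-adaptation loss would drive down.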
{"title":"DISCA: Decentralized Infrastructure for Secure Community Attribute certifying","authors":"Yi Su, Baosheng Wang, Qianqian Xing, Xiaofeng Wang","doi":"10.1145/3503047.3503089","DOIUrl":"https://doi.org/10.1145/3503047.3503089","url":null,"abstract":"Inter-domain routing is the cornerstone of the modern Internet, and its security is vital to the reliability and security of basic Internet services. However, BGP, the current standard inter-domain routing protocol, lacked security considerations in its original design and does not authenticate routing messages. Because the BGP Community attribute is widely used, researchers have found a variety of new routing attacks that exploit it. These attacks are covert and flexible, detection mechanisms struggle to discover them, and current trusted verification schemes cannot fully defend against them. To solve these problems, this paper proposes a blockchain-based BGP Community attribute authentication scheme. For the first time, the scheme authenticates the use of BGP Community attributes with blockchain smart contracts. Building on the route origin authentication provided by existing mechanisms, it further introduces the concept of \"the right to know about using\". 
Through the agent authentication mechanism, it can effectively resist a variety of new routing attacks without changing the existing BGP routing protocol.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116242965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Improved GAIL Based on Object Detection, GRU, and Attention","authors":"Qinghe Liu, Yinghong Tian","doi":"10.1145/3503047.3503063","DOIUrl":"https://doi.org/10.1145/3503047.3503063","url":null,"abstract":"Imitation Learning (IL) learns expert behavior without any reinforcement signal and is therefore seen as a potential alternative to Reinforcement Learning (RL) in tasks where reward functions are hard to design. However, most IL-based models do not work well when demonstrations are high-dimensional and the tasks are complex. We set up a realistic UAV racing simulation environment on the AirSim Drone Racing Lab (ADRL) to study these two problems and propose a new model that improves on Generative Adversarial Imitation Learning (GAIL). An object detection network trained on the expert dataset allows the model to use high-dimensional visual inputs while alleviating GAIL's data inefficiency. Benefiting from the recurrent structure and attention mechanism, the model can steer the drone through the gates and complete the race as if it were an expert. Compared to the primitive GAIL structure, our improved structure showed a 70.6% improvement in average successful crossings over 2000 flight training sessions; average missed crossings decreased by 18.8% and average collisions by 14.1%.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"746 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122966011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised Barcode Image Reconstruction Based on Knowledge Distillation","authors":"X. Cui, Ting Sun, Shuixin Deng, Yusen Xie, Lei Deng, Baohua Chen","doi":"10.1145/3503047.3503073","DOIUrl":"https://doi.org/10.1145/3503047.3503073","url":null,"abstract":"Due to the influence of lighting and camera focal length, collected barcode images are degraded by low contrast, blur, and insufficient resolution, which hampers barcode recognition. To solve these problems, this paper proposes an unsupervised low-quality barcode image reconstruction method based on knowledge distillation, combining traditional image processing with deep learning. The method comprises a teacher network and a student network. In the teacher network, a traditional algorithm first enhances the visibility and edge information of the barcode image; transfer learning is then used to train a barcode image super-resolution network for deblurring and super-resolution; finally, a deep image prior removes the remaining noise in the image. To meet the real-time requirements of model deployment, the student network is a lightweight super-resolution network that learns the mapping between the teacher network's low-quality input and high-quality output barcode images. 
Experiments show that the proposed algorithm effectively improves the quality and recognition rate of barcode images while ensuring real-time performance.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114423958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
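The teacher-student idea in the abstract above — a lightweight student fitting the outputs of a heavier teacher pipeline, with no ground-truth labels needed — can be illustrated on a toy regression problem. The `teacher` function and the linear student below are stand-ins for the paper's networks, chosen only to show the distillation training loop.

```python
import numpy as np

def teacher(x):
    # Stand-in for the heavy enhancement + super-resolution teacher pipeline
    return 2.0 * x + 1.0

def distill(xs, lr=0.1, epochs=200):
    # Fit a tiny "student" y = w*x + b to the teacher's outputs by gradient
    # descent on the squared error; no ground-truth labels are involved.
    w, b = 0.0, 0.0
    ys = teacher(xs)
    for _ in range(epochs):
        err = w * xs + b - ys
        w -= lr * 2 * np.mean(err * xs)
        b -= lr * 2 * np.mean(err)
    return w, b
```

The student ends up reproducing the teacher's input-output mapping at a fraction of the cost, which is the property the paper relies on for real-time deployment.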
{"title":"The influence of different preprocessing on radar signal recognition based on time-frequency analysis and deep learning","authors":"Qin Li, Wei Liu, C. Niu, Yanyun Wang, Ou Gao, Yintu Bao, Wei‐qi Zou, Haobo Zhang, Q. Hu, Zhikang Lin, Chaofan Pan","doi":"10.1145/3503047.3503127","DOIUrl":"https://doi.org/10.1145/3503047.3503127","url":null,"abstract":"Aiming at the lack of rigorous comparative analysis of the preprocessing step in radar signal classification and recognition based on time-frequency diagrams and deep learning, this paper uses controlled variables to analyze how signal denoising, different time-frequency analyses, and time-frequency diagram denoising in preprocessing affect radar signal classification and recognition, laying a foundation for further research.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122047270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
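One preprocessing choice compared in the abstract above, time-frequency analysis, can be illustrated with a basic short-time Fourier transform magnitude spectrogram — the kind of time-frequency diagram a classifier would consume. The Hann window and the window/hop sizes are arbitrary assumptions, not the paper's settings.

```python
import numpy as np

def stft_mag(x, win=64, hop=32):
    # Magnitude spectrogram: slide a Hann-windowed frame over the signal
    # and take the real-input FFT of each frame.
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))
```

A pure tone shows up as a bright horizontal line at its frequency bin; different radar waveforms (chirps, frequency hops) trace different patterns, which is what the downstream network learns to classify.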
{"title":"Line-Constrained L∞ One-Center Problem on Uncertain Points","authors":"Quan Nguyen, Jingru Zhang","doi":"10.1145/3503047.3503124","DOIUrl":"https://doi.org/10.1145/3503047.3503124","url":null,"abstract":"Problems on uncertain data have attracted significant attention due to the imprecise nature of measurements. In this paper, we consider the (weighted) L∞ one-center problem on uncertain data with an additional constraint that requires the sought center to lie on a line. Given are a set of n (weighted) uncertain points and a line L. Each uncertain point has m possible locations in the plane, each associated with a probability. The L∞ one-center problem asks for a point q* on L that minimizes the maximum of the expected L∞ distances from the uncertain points to q*. We propose an algorithm that solves this problem in O(mn) time, which is optimal since the input size is O(mn).","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123358325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
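The objective in the abstract above — the maximum over uncertain points of the weighted expected L∞ distance — is convex along the line L (each expected L∞ distance is convex, and max preserves convexity), so a simple numerical sketch can locate the minimizer by ternary search. This is only an illustration of the objective, not the paper's optimal O(mn) algorithm; the line parameterization below is an assumption.

```python
def expected_linf(q, locs, probs):
    # Expected L-infinity distance from point q to one uncertain point,
    # whose possible locations `locs` carry probabilities `probs`
    qx, qy = q
    return sum(p * max(abs(x - qx), abs(y - qy))
               for (x, y), p in zip(locs, probs))

def objective(t, anchor, direction, uncertain_pts):
    # Objective at q = anchor + t * direction on line L:
    # max over uncertain points of weight * expected L-inf distance
    ax, ay = anchor
    dx, dy = direction
    q = (ax + t * dx, ay + t * dy)
    return max(w * expected_linf(q, locs, probs)
               for w, locs, probs in uncertain_pts)

def ternary_min(f, lo, hi, iters=200):
    # Ternary search: valid because the objective is convex in t
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

For two unit-weight certain points at (0, 0) and (4, 0) with L the x-axis, the objective is max(|t|, |t − 4|), minimized at t = 2, which the search recovers.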
{"title":"Unsupervised Face Recognition Algorithm based on Fast Density Clustering Algorithm","authors":"Guodong Jiang, Jingjing Zhang, Jinyin Chen, Haibin Zheng, Zhiqing Chen, Liang Bao","doi":"10.1145/3503047.3503117","DOIUrl":"https://doi.org/10.1145/3503047.3503117","url":null,"abstract":"Most classic face recognition classification algorithms need sufficiently many face images with class-label information as training samples. In most practical applications, however, supervised face recognition methods cannot handle images without any label information. This paper proposes a novel unsupervised face recognition algorithm based on a fast density clustering algorithm, which needs no class-labeled sample images. Even without labelled example images, the designed method achieves a higher recognition rate than the same classifiers trained with labelled samples. The main contributions of this paper are threefold. First, since most current clustering algorithms suffer from low clustering purity, parameter sensitivity, and manual determination of cluster centers, a fast density clustering algorithm (FDCA) with automatic cluster center determination (ACC) is proposed. Second, based on ACC-FDCA, an unsupervised face image recognition algorithm is designed, with SSIM, CW-SSIM, and PSNR adopted to calculate the face image similarity matrix. Finally, an online unsupervised face video recognition platform is developed based on the proposed ACC-FDCA face recognition algorithm; real-life videos are recorded and recognized to verify its high performance. 
We conclude that classifiers trained with image sample labels obtained by FDCA can achieve a higher recognition rate than the same classifiers trained with manually labelled image samples.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132970194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
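The PSNR component of the similarity matrix described in the abstract above can be sketched as follows; SSIM and CW-SSIM are omitted, and the zero diagonal is an arbitrary convention of this sketch.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio between two equally sized images, in dB
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def similarity_matrix(faces):
    # Pairwise PSNR matrix: higher value = more similar pair; this is the
    # kind of input a density clustering step would consume
    n = len(faces)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = psnr(faces[i], faces[j]) if i != j else 0.0
    return S
```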
{"title":"Few-shot Adversarial Audio Driving Talking Face Generation","authors":"Ruyi Chen, Shengwu Xiong","doi":"10.1145/3503047.3503054","DOIUrl":"https://doi.org/10.1145/3503047.3503054","url":null,"abstract":"Talking-face generation is an interesting and challenging problem in computer vision and has become a research focus. This project aims to generate realistic talking-face video sequences, especially with lip synchronization and head motion. To create a personalized talking-face model, existing works require training on large-scale audio-visual datasets. In many practical scenarios, however, the personalized appearance features and audio-video synchronization relationships must be learned from only a few lip-synchronized sequences. In this paper, we treat this as a few-shot image synchronization problem: can a talking face be synthesized from audio given only a few additional lip-synchronized video sequences as the learning task? We apply the Reptile method to train meta adversarial networks; the resulting meta-model can be adapted quickly on just a few reference sequences to learn a personalized reference model. Through meta-learning on the dataset, the model learns its initialization parameters, and with a few adaptation steps on the reference sequences it learns quickly and generates highly realistic images with more facial texture and better lip-sync. 
Experiments on several datasets demonstrate significantly better results from our methods than from state-of-the-art methods in both qualitative and quantitative comparisons.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115629298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
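The Reptile update used in the abstract above — repeatedly adapting to a task with a few gradient steps, then moving the shared initialization toward the adapted weights — can be shown on toy 1-D quadratic tasks. The task functions, step counts, and learning rates here are illustrative assumptions, not the paper's training setup.

```python
def sgd_adapt(theta, grad_fn, steps=5, lr=0.05):
    # Inner loop: a few plain gradient steps on one task
    for _ in range(steps):
        g = grad_fn(theta)
        theta = [t - lr * gi for t, gi in zip(theta, g)]
    return theta

def reptile(theta, tasks, outer_lr=0.5, rounds=50):
    # Reptile outer loop: nudge the initialization toward each task's
    # adapted weights, theta <- theta + eps * (adapted - theta)
    for _ in range(rounds):
        for grad_fn in tasks:
            adapted = sgd_adapt(theta, grad_fn)
            theta = [t + outer_lr * (a - t) for t, a in zip(theta, adapted)]
    return theta
```

With two quadratic tasks whose minima sit at 0 and 2, the learned initialization settles between them, so a few adaptation steps reach either task quickly — the same property the paper exploits for fast per-identity personalization.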
{"title":"Research on quantitative evaluation technology of equipment battlefield environment adaptability","authors":"Hao Chen, Jiajia Wang, Yurong Liu, Jiangtao Xu, Chao Fan","doi":"10.1145/3503047.3503134","DOIUrl":"https://doi.org/10.1145/3503047.3503134","url":null,"abstract":"For performance evaluation, environmental adaptability evaluation, system contribution rate evaluation, and other evaluation activities in the equipment test process, mathematical methods such as uncertainty quantification and Bayesian networks are used to quickly build an evaluation index system and quantitative evaluation models for the evaluation task, design an evaluation prototype, and support various evaluation activities in complex scenarios. Quantifying the impact of uncertain factors as much as possible can speed up evaluation activities and improve the credibility of evaluation results.","PeriodicalId":190604,"journal":{"name":"Proceedings of the 3rd International Conference on Advanced Information Science and System","volume":"176 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114752914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}