{"title":"Human Activity Recognition Based on Transformer via Smart-phone Sensors","authors":"Y. Liang, Kaile Feng, Zizhuo Ren","doi":"10.1109/CCAI57533.2023.10201297","DOIUrl":"https://doi.org/10.1109/CCAI57533.2023.10201297","url":null,"abstract":"Capturing the spatial and temporal relationships of time-series signals is a significant obstacle for human activity recognition based on wearable devices. Traditional artificial intelligence algorithms cannot handle it well, with convolution-based models focusing on local feature extraction and recurrent networks lacking consideration of the spatial domain. This paper offers a deep learning architecture based on transformer to address the aforementioned issue with data collected from smart-phones embedded with three-axis accelerometers. The transformer model, as a deep learning network mainly applied to natural language processing (NLP), is good at processing time-series information, where the self-attention mechanism captures the dependencies of perceptual signals in the temporal and spatial domains, improving the overall comprehensibility. We implement convolutional neural networks (CNN) and long and short-term memory networks (LSTM) for evaluation while our proposed model achieves an average classification accuracy of 94.84%, which is an improvement compared to the traditional model.","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115266196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compact Microstrip-Fed Antenna with Simple Layout for 5G N79 Band Applications","authors":"Honglei Sun","doi":"10.1109/CCAI57533.2023.10201292","DOIUrl":"https://doi.org/10.1109/CCAI57533.2023.10201292","url":null,"abstract":"A novel microstrip-fed patch antenna for the fifth generation (5G) mobile communication N79 band is proposed and verified. The proposed antenna is printed on one front and back surface of the printed circuit board (PCB) for simple layout. The measured relative bandwidth reaches to 12.8% (4.4-5 GHz) and the entire size of the antenna is $30 times 26$ mm$^{2}$ only. The reflection sensitivities with different interfering objects are also compared. The radiation pattern is nearly omnidirectional and having a power handing capability of 43.5 dBm, which is suitable for 5G mobile base station applications.","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124879163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CCAI 2023 Cover Page","authors":"","doi":"10.1109/ccai57533.2023.10201277","DOIUrl":"https://doi.org/10.1109/ccai57533.2023.10201277","url":null,"abstract":"","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114887000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Key Technologies and Development Directions of Brain-Computer Interface Technology for Manned Space Mission","authors":"Zhenyu Qiu, Huanxi Zhao, Shijin Wang, Yue Wang","doi":"10.1109/CCAI57533.2023.10201247","DOIUrl":"https://doi.org/10.1109/CCAI57533.2023.10201247","url":null,"abstract":"The application of brain-computer interface (BCI) technology in manned space mission can improve the safety of astronauts and the reliability of space operation. The BCI technology for manned space mission is introduced firstly, and then the application of BCI technology in manned space mission such as the control of astronauts' living environment, the rapid response of astronauts' emergency system, the flexible control of space manipulator, the remote operation and emergency maintenance of space robots is given. Then key technologies of BCI in manned space engineering such as Non-invasive EEG signal acquisition and recognition of astronauts, efficient transmission of brain-computer signals in space mission, high- precision control of BCI, Rapid feedback of BCI in spacecraft engineering is discussed. And the Impact of space environment on BCI hardware and software for space engineering is analyzed. At last, future development directions of BCI technology for manned space Mission is proposed.","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129834752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-Language Binary-Source Code Matching Based on Rust and Intermediate Representation","authors":"Jiacheng Mao, Zukai Tang, Wenbi Rao","doi":"10.1109/CCAI57533.2023.10201266","DOIUrl":"https://doi.org/10.1109/CCAI57533.2023.10201266","url":null,"abstract":"Binary-source code matching is a crucial task in computer security and software engineering, enabling reverse engineering by matching binary code with its corresponding source code and aiding vulnerability detection by searching for binary code given its source code. However, cross-language binary-source code matching remains largely unexplored, with existing research mostly focusing on C/C++ and Java due to a lack of suitable datasets. This paper attempts to address this gap by introducing a new language, Rust, and conducting experiments on it for the task. Rust offers several advantages, such as being commonly compiled to LLVM-IR and binary files, and its compiler performs various transformations during compilation to ensure memory safety, thread safety, and null safety of programs, resulting in an increased prevalence of function clones. Matching binary-source code across Rust and C/C++ poses greater challenges and research opportunities. Moreover, in this paper, we also re-trained the OSCAR model on the CodeNet dataset and evaluated its performance, which we call XLOSCAR. We designed a series of experiments to analyze cross-language binary-source code matching between Rust and C/C++, and compared XLOSCAR with the general OSCAR model.","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114819155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Poultry Disease Identification Based on Light Weight Deep Neural Networks","authors":"Xiaodan Liu, Yinghua Zhou, Yuxiang Liu","doi":"10.1109/CCAI57533.2023.10201323","DOIUrl":"https://doi.org/10.1109/CCAI57533.2023.10201323","url":null,"abstract":"Poultry farmers are often plagued by poultry diseases during production and face the risk of large-scale spread of poultry epidemic diseases. Accurate and efficient identification of poultry diseases is a necessary prerequisite for timely symptomatic treatment and economic loss avoidance. In this paper, a poultry disease identification model based on a light weight deep neural network is established and named PoultryNet, which adopts MobileNetV3 as the backbone. A feature fusion structure is designed to enhance the feature extraction ability of the model, and a SA module is used to add channel attention and spatial attention. The experiment result shows that the classification accuracy of the proposed PoultryNet for poultry feces images is 97.77%, which is higher than that of MobileNetV3, ShuffleNetV2, EfficientNet, and GoogleNet models by 1.12%, 1.67%, 1.27%, and 3.97%, respectively. Compared with the base model, the amount of parameters of PoultryNet was reduced by 0.33 M. The effectiveness of PoultryNet, as a poultry disease identification model, is therefore proved.","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128162361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Expressway Guidance Information Selecting Method Based on Stochastic User Equilibrium","authors":"Xiaofeng Zhu, Zijie Liu, Xiaoming Xu, Qing Chen","doi":"10.1109/CCAI57533.2023.10201273","DOIUrl":"https://doi.org/10.1109/CCAI57533.2023.10201273","url":null,"abstract":"Expressway guidance signs provide clear direction information for traveling vehicles to the city or important places. Reasonable and efficient guidance sign information is beneficial for improving vehicle traffic efficiency, alleviating road network congestion, and promoting energy conservation and emission reduction. Currently, the selection of expressway guidance sign information lacks good theoretical support in practice. Therefore, this paper proposes a guidance sign information selection method based on the theory of stochastic user equilibrium traffic assignment. Under the condition of known traffic demand, obtain the traffic flow assumed by all links in the case of road network equilibrium by stochastic user equilibrium traffic assignment, and clarify the traffic distribution of each OD demand in each link. Then, 2-3 guidance destination information is selected by sorting the volume of traffic flow. Finally, a case study is conducted based on real data of the Chitou hub in Guangxi expressways, to show the proposed guidance sign selecting method, and compare the newly generated and the existing selected guidance sign information.","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115778826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Floor Water Detection Technology Based on Improved Resnet","authors":"Ruijie Hao, Siyi Xia, Youbin Fang, Taiyu Yan","doi":"10.1109/CCAI57533.2023.10201324","DOIUrl":"https://doi.org/10.1109/CCAI57533.2023.10201324","url":null,"abstract":"With the rapid development of artificial intelligence and the arrival of an aging population, the need for intelligent service robots is increasing. Older adults and the blind are susceptible to slipping due to weak vision, especially when they cannot see water on indoor floors. We use computer vision technology to solve this problem. However, it is challenging to detect floor water using existing target detection algorithms due to uncertainty about its shape and size. This paper proposes a floor water detection technology based on improved Resnet, which can be deployed on the intelligent service robot to remind the elderly and the blind to be careful When the service robot detects water in the ground. Our proposed topics and methods can significantly reduce the probability of the elderly and blind people slipping. The method proposed in this paper is 3.6% higher than the original Resnet18 and 8.1% higher than Mobilenetv2; the number of parameters in our method is only 8.5 percent of VGG16_bn and yet achieves similar performance to VGG16_bn. This paper suggests a new trajectory for intelligent service robots by detecting water on the floor, and it has demonstrated promising results in accuracy and speed. It is hoped that this paper will arouse more scholars’ interest in the detection technology of floor water.","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"21 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132663218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Examining Manual and Automatic MT Evaluation: A Grey Relational Analysis for Chinese-Portuguese Translation Quality","authors":"Yuqi Sun, Lap-Man Hoi, S. Im","doi":"10.1109/CCAI57533.2023.10201322","DOIUrl":"https://doi.org/10.1109/CCAI57533.2023.10201322","url":null,"abstract":"This study investigates the relationship between manual and automatic machine translation evaluation methodologies by employing Grey Correlation Analysis (GRA) to assess the correlation between Chinese-Portuguese machine translation outputs’ BLEU scores and human evaluation scores based on a proposed evaluation index system. The research aims to provide insights into the factors impacting machine translation quality and the most relevant linguistic dimensions in translation evaluation. The findings reveal that “usability” and “adequacy” exhibit the highest correlation with BLEU scores, while “Semantics” ranks highest among the ten manual evaluation indicators, when correlated with the aggregated human evaluation results, followed by “Correction” (correct information) and “Omission and/or Addition”. The findings contribute to the field of machine translation evaluation by illuminating the complex relationship between manual and automatic evaluation techniques and guiding future improvements in machine translation systems and evaluation methodologies.","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114565454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on the Teaching Reform of Electronic Technology Experiments Based on Multisim14","authors":"Zhengdong Li, Yan He, Xuanyan Wu, Zheng Hong, Xiaojuan Hou, Xiuling Li","doi":"10.1109/CCAI57533.2023.10201275","DOIUrl":"https://doi.org/10.1109/CCAI57533.2023.10201275","url":null,"abstract":"In the teaching of electronic technology courses in higher vocational education, it is undoubtedly very important to cultivate students’ practical abilities. Aiming at the problems of students’ weak circuit analysis ability, insufficient theoretical integration with practice, and low experimental efficiency in practice, this paper proposes an auxiliary teaching method of applying Multisim14 software to circuit simulation design and analysis before entering the laboratory, followed by laboratory practical verification. Taking the design of a DC regulated power supply as an example, the experimental design and simulation analysis were completed using Multisim14. The practice shows that the simulation results are consistent with the theoretical analysis results, which proves that the simulation software is feasible for teaching electronic technology design.","PeriodicalId":285760,"journal":{"name":"2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114916131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}