{"title":"Design of a control and monitoring system for pollutants in a handcrafted footwear factory","authors":"L. N. Mantari-Ramos, Alem Huayta-Uribe, Helder Alexis Mayta-Leon, Hitan Orlando Cordova-Sanchez, Deyby Huamanchahua","doi":"10.1109/ARACE56528.2022.00015","DOIUrl":"https://doi.org/10.1109/ARACE56528.2022.00015","url":null,"abstract":"Autonomous systems offer a new approach to environmental quality control in the workplace, especially in jobs that expose employees to concentrations of pollutants which, under constant exposure, can damage their health and well-being. This work therefore presents an autonomous system for the control and monitoring of pollutants in a handmade footwear factory. The design follows the VDI 2206 methodology, covering the technological information, control design, and system integration. The system collects the main environmental parameters and displays them on an HMI screen in real time; a PLC controller activates the air-conditioning equipment according to the readings in order to keep the parameters within their maximum permissible limits: a temperature between 30 °C and 35 °C, a relative humidity between 30% and 70%, and a VOC exposure between 0.50 ppm and 0.70 ppm. In this way, the system prevents diseases caused by unintentional exposure to pollutants.","PeriodicalId":437892,"journal":{"name":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129108338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognition and classification system for trinitario cocoa fruits according to their ripening stage based on the Yolo v5 algorithm","authors":"Ruth A. Bastidas-Alva, Jose A. Paitan Cardenas, Kris S. Bazan Espinoza, Vrigel K. Povez Nuñez, Maychol E. Quincho Rivera, Jaime Huaytalla","doi":"10.1109/ARACE56528.2022.00032","DOIUrl":"https://doi.org/10.1109/ARACE56528.2022.00032","url":null,"abstract":"The objective of this research is the recognition and classification of the ripening stage of Trinitario cocoa using the YOLO v5 computer-vision technique, executed in the Google Colab and MiniConda environments. The methodology comprises preprocessing, processing, and post-processing: in the first, data acquisition, annotation, and augmentation are performed; in the second, the neural-network architecture and the execution code are specified; finally, the model accuracy is determined and inferences are made through real-time image and video tests. The database contains 1286 training images collected in fields of the VRAEM, which were augmented using the novel Mosaic-12 method, an extension of the 4-image mosaic model. The accuracy of the model trained with the augmented database is 60.2%, versus 56% for the model trained without augmentation, confirming the technical value of the proposed method and achieving real-time recognition and classification of Trinitario cocoa according to its ripening stage.","PeriodicalId":437892,"journal":{"name":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","volume":"477 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132307502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UAV path planning based on the improved PPO algorithm","authors":"Chenyang Qi, Chengfu Wu, Lei Lei, Xiaolu Li, Peiyan Cong","doi":"10.1109/ARACE56528.2022.00040","DOIUrl":"https://doi.org/10.1109/ARACE56528.2022.00040","url":null,"abstract":"In this paper, we consider the problem of unmanned aerial vehicle (UAV) path planning. Traditional path-planning algorithms suffer from low efficiency and poor adaptability, so this paper uses reinforcement learning to perform the planning. In the classic proximal policy optimization (PPO) algorithm, samples with large rewards in the experience replay buffer can seriously affect training; this degrades the agent’s exploration performance and leads to poor convergence in some path-planning tasks. To solve these problems, this paper proposes a frequency-decomposition PPO algorithm (FD-PPO) and designs a heuristic reward function for the UAV path-planning problem. The FD-PPO algorithm decomposes rewards into multi-dimensional frequency rewards and then calculates the frequency return to efficiently guide the UAV through the path-planning task. Simulation results show that the proposed FD-PPO algorithm adapts to complex environments and exhibits outstanding stability under continuous state and action spaces. At the same time, FD-PPO outperforms the PPO algorithm in path planning.","PeriodicalId":437892,"journal":{"name":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131552590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DGGCNN: An Improved Generative Grasping Convolutional Neural Networks","authors":"Zhenyu Zhang, Junqi Luo, Jiyuan Liu, Mingyou Chen, Shanjun Zhang, Liucun Zhu","doi":"10.1109/ARACE56528.2022.00019","DOIUrl":"https://doi.org/10.1109/ARACE56528.2022.00019","url":null,"abstract":"Traditional robot grasping detection methods suffer from unstable grasping accuracy and slow training convergence. In this paper, a depth generative grasping convolutional neural network (DGGCNN) is proposed. A modified convolutional neural network architecture is designed to output the grasp quality, angle, and width of the target, and a novel loss function is defined to further optimize the training quality of the network. The Cornell dataset is then used to train the network. Simulation results show that the proposed method achieves a superior grasping success rate compared with the original generative grasping convolutional neural network (GGCNN).","PeriodicalId":437892,"journal":{"name":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124746130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot assisted unilateral biportal endoscopic lumbar interbody fusion for lumbar spondylolisthesis: A case report","authors":"Huanying Yang, Weiguo Chen, Heng Zhao, Wanqian Zhang, Xiangyu You, Chao Zhang, Gang Zheng, Tingrui Sang, Xiangfu Wang","doi":"10.1109/ARACE56528.2022.00037","DOIUrl":"https://doi.org/10.1109/ARACE56528.2022.00037","url":null,"abstract":"Objective: This paper reports a case of lumbar spondylolisthesis treated with unilateral biportal endoscopic lumbar interbody fusion (ULIF) surgery assisted by an orthopedic robot. The clinical symptoms, surgical approach, and advantages of robotic surgery are reported in conjunction with the related literature. Method: One patient with lumbar spondylolisthesis underwent robot-assisted ULIF surgery after completing the relevant examinations. The pain visual analogue scale (VAS) and Oswestry disability index (ODI) were recorded before and 3 days after surgery. The accuracy of pedicle screw placement was evaluated according to the Gertzbein-Robbins criteria. Result: The surgery went well. Compared with preoperative values, the postoperative VAS score and ODI index were significantly improved. Three days after the operation, X-ray and MRI showed that the cage and internal fixation were accurately positioned. The Gertzbein-Robbins grade was Category A. Conclusion: With its unique advantages of high precision and minimal invasiveness, robot-assisted ULIF surgery provides a minimally invasive surgical approach for patients with lumbar spondylolisthesis.","PeriodicalId":437892,"journal":{"name":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122742611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robotic grasping target detection based on domain randomization","authors":"Jiyuan Liu, Junqi Luo, Zhenyu Zhang, Daopeng Liu, Shanjun Zhang, Liucun Zhu","doi":"10.1109/ARACE56528.2022.00038","DOIUrl":"https://doi.org/10.1109/ARACE56528.2022.00038","url":null,"abstract":"In recent years, deep learning has achieved great success in robotic vision-based grasping, largely owing to its adaptive learning capability and large-scale training samples. However, hand-crafted datasets face a trade-off between time cost and quality. In this paper, a robot grasping target-detection algorithm based on synthetic data is proposed. Training samples are generated quickly and accurately by a domain randomization technique: each RGB image of the domain-randomized dataset contains a complex background and randomly rotated detection targets, while scene illumination and target occlusion are randomized to improve the generalization of the model; the dataset is then fed into YOLOv3 for training. The YCB dataset is used for the training and testing samples, and the experiments compare the detection performance of networks trained on the YCB dataset and on its synthetic counterpart. The results show that the domain-randomized dataset matches the YCB dataset in recognition accuracy, while its mAP is improved by 10% compared to the YCB dataset, further indicating that a synthetic dataset constructed by domain randomization can effectively improve network learning and the recognition of targets in complex scenes.","PeriodicalId":437892,"journal":{"name":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124564051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Survey of Gait Recognition with Deep Learning for Mass Surveillance","authors":"Wang Xijuan, Fakhrul Hazman Bin Yusoff, M. Yusoff","doi":"10.1109/ARACE56528.2022.00039","DOIUrl":"https://doi.org/10.1109/ARACE56528.2022.00039","url":null,"abstract":"Gait recognition is a biometric recognition technology that supports long-distance, multi-target recognition, resists partial occlusion, and does not require active user cooperation; it is therefore better suited than other technologies to individual identification in mass video-surveillance systems. Gait recognition based on deep learning has become the mainstream technology in this field because of its strong self-learning and model-prediction abilities. However, research focusing on actual scenes and application requirements, such as multi-target, real-time, and robust recognition, is still lacking. Therefore, this paper analyzes the basic tasks of deep gait-recognition methods and delimits their scope of application. It then surveys large-space deep gait recognition from three aspects: image preprocessing, gait-feature extraction with deep learning, and classification and evaluation. In particular, the study investigates and analyzes, for the first time, the gait input templates often used in mass surveillance, deep-learning autoencoders, and performance-evaluation indexes. Finally, the unresolved issues in deep gait recognition are summarized, and suggestions and directions for future research are presented.","PeriodicalId":437892,"journal":{"name":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130150644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on Submarket Effects of Real Estate Valuation Based on Bayesian Probability Model. A Comparison Between Cities","authors":"Xinjing Qin, Ping Zhang, Xinyang Zhang, Bin Cheng, Xianglin Bao","doi":"10.1109/ARACE56528.2022.00035","DOIUrl":"https://doi.org/10.1109/ARACE56528.2022.00035","url":null,"abstract":"Submarket effects are essential for real estate valuation since they can increase both the prediction accuracy of housing prices and the interpretability of the machine learning model. In this paper, a Bayesian probability model that divides the housing market by housing location is proposed to forecast house prices and discover the key factors behind them. A comparison of the key factors influencing the real estate markets of Hangzhou and Chengdu is provided. The experimental results show that the key influencing factors in corresponding functional areas of different cities are similar, which sheds light on creating a unified model for real estate valuation.","PeriodicalId":437892,"journal":{"name":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128659415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural Networks Using Multiplicative Features Based on Second-Order Statistics for Acoustic and Speech Applications","authors":"A. Kobayashi","doi":"10.1109/ARACE56528.2022.00029","DOIUrl":"https://doi.org/10.1109/ARACE56528.2022.00029","url":null,"abstract":"This paper investigates multiplicative interactions, such as auto-correlations, between features in neural networks. In pattern recognition, including spoken language processing, non-linear relationships among features, e.g., high-order local auto-correlations and the multiplicative features seen in sigma-pi cells, have long been explored. These features are specifically designed to capture correlations in spectro-temporal regions to gain robustness for classification. However, features based on multiplicative interactions, or elementary second-order statistics such as auto-correlations, have not been well explored in speech processing, so the performance gains they offer for classification problems remain an open question. We therefore investigate multiplicative interactions extracted from spectro-temporal regions through neural networks, conducting experiments on three kinds of classification tasks, i.e., acoustic event/scene classification and speech recognition, with a simple multiplicative module that produces the interactions between features. Our proposed neural networks with multiplicative blocks achieved promising improvements in all tasks: the experimental results show that the proposed method improved accuracy by 0.45% in acoustic event classification and by 2.15% in acoustic scene classification, and reduced the phone error rate (PER) by 6.5% in phoneme recognition.","PeriodicalId":437892,"journal":{"name":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130840364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An optimized Hebbian Learning Rule for Spiking Neural Networks on the Classification Problems with Informative Data Features","authors":"Tingyu Chen, Xin Hu, Yiren Zhou, Zhuo Zou, Longfei Liang, Wen-Chi Yang","doi":"10.1109/arace56528.2022.00012","DOIUrl":"https://doi.org/10.1109/arace56528.2022.00012","url":null,"abstract":"We propose a new Hebbian learning rule that Neglects Historical data and only Compares Voltages (referred to as NHCV in this paper). Unlike traditional Hebbian learning rules, which rely on comparing spike timings, NHCV adjusts the weight of a synapse based on the voltage of the neuron as soon as it fires. NHCV is computationally efficient and has advantages in processing informative features. Compared to traditional STDP learning rules, it accelerates the training process (an improvement of 0.5 to 2 seconds per sample) and achieves better accuracy on the Wine dataset (5.7% absolute improvement) and the Diabetes dataset (12% absolute improvement). We also show that the amount of information in a dataset's features considerably affects the performance of SNNs.","PeriodicalId":437892,"journal":{"name":"2022 Asia Conference on Advanced Robotics, Automation, and Control Engineering (ARACE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114716258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}