{"title":"Implementation of the Racing Game with the Virtual Reality and Cable Suspended Parallel Robot (CSPR)","authors":"Chu Nhat Minh Quan, T. Tho, Nguyen Truong Thinh","doi":"10.23919/ICCAS55662.2022.10003804","DOIUrl":"https://doi.org/10.23919/ICCAS55662.2022.10003804","url":null,"abstract":"The paper aims to describe the implementation of racing game named “Arena of Speed” based a Virtual Reality (VR) utilizing a Cable Suspended Parallel Robot (CSPR), a VR headset, and a hand controller. The game “Arena of Speed” is utilized as a special kind of graphical user interface, displaying a computer-generated immersive which can be accessed utilizing hand controller. CSPR can freely manipulate objects in a large workspace with up to 6 degrees of freedom using 8 cables, which activates the proprioceptive sensation of and provides the user with a realistic and exhilarating racing experience. The method of synchronizing the movement of the virtual car with the CSPR is proposed, which includes analyzing and sending data between devices as well as calculating and operating the robot based on game data. The biggest hurdle in replicating the racing experience in real time is that the entire system must work well together and meet the specified deadline. The results of the experiments indicate that the system can provide players playing “Arena of Speed” with the realistic sensation of racing on a racetrack; however, the sys-tem occasionally misses its deadline with an acceptably low chance. Besides the entertainment purpose, it may also be ideal for individuals who want to experience driving or racing but do not have the opportunity or condition to do so in real life.","PeriodicalId":129856,"journal":{"name":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134240243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Walk Error Compensation of ToF LiDAR using Zero Crossing Discriminator with Auto Gain Control Amplifier","authors":"Young-Hwan. Choi, Tae-Yong Kuc","doi":"10.23919/ICCAS55662.2022.10003943","DOIUrl":"https://doi.org/10.23919/ICCAS55662.2022.10003943","url":null,"abstract":"This paper proposes a walk error compensation method based on AGC (Auto Gain Control) amplifier and ZCD (Zero Crossing Discriminator). A walk error, often referred to as a time walk, is one of the errors that occurs in the distance measurement results of ToF LiDAR. Consistent thresholds cannot be used because the amplitude of the received signal varies depending on the distance from the target object or the optical characteristics such as the light reflectance of the target surface and the angle of incidence. This amplitude change creates an error at the time when the signal is recognized as an active signal. In this paper, to compensate for walk error, peak timing was detected by measuring peak amplitude of the input signal, calculating a suitable gain, and differentiating a variably amplified signal. The results were confirmed through simulation and actual experiment. In the range of APD output signal peak amplitude 1 uA to 40 uA, the standard deviation of walk error was reduced to about 4.2% compared to conventional leading edge discriminator methods, and the standard deviation was reduced to about 65% compared to ZCD without AGC amplifier.","PeriodicalId":129856,"journal":{"name":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133961688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Improvement of 3D DR/INS/GNSS Integrated System using Inequality Constrained EKF","authors":"Hoang Viet Do, Y. Kwon, H. Kim, J. Song","doi":"10.23919/ICCAS55662.2022.10003721","DOIUrl":"https://doi.org/10.23919/ICCAS55662.2022.10003721","url":null,"abstract":"It is well known that INS/GNSS integrated system is either unavailable or unreliable in high-rise buildings environment. This study proposes a novel framework to fuse Odometer, INS, and GNSS to provide robust pose estimation for the mentioned challenge. Motivated by the disadvantage of the recent development of 3DDR/GNSS, we relax its assumption where velocities and accelerometer biases are estimated without sensor pre-calibration. In particular, the traditional INS/GNSS and DR/GNSS are augmented into a single system without conflict to perform EKF. Moreover, inequalities-constrained EKF is derived based on the characteristic of the presented system to increase the robustness. This constraint exploits an empirical observable where the position estimation of the odometer is considered more accurate than INS since it only requires one-step integration. The proposed approach is validated through an author-designed Unreal Engine challenging map with the AirSim plugin of an autonomous ground vehicle. The results show a significant accuracy improvement in which the position and velocity error have been reduced respectively 68% and 39% on average over a 0.81km driving.","PeriodicalId":129856,"journal":{"name":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134349713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Mobile Robot Framework in Industrial Disaster for Human Rescue","authors":"Jun-Ho Baek, Junhyeon Choi, Sangmin Kim, HyunJeong Park, Tae-Yong Kuc","doi":"10.23919/ICCAS55662.2022.10003936","DOIUrl":"https://doi.org/10.23919/ICCAS55662.2022.10003936","url":null,"abstract":"In this paper, we develop a rescue robot framework for first-step rescue work that can help rescue teams. We model a disaster environment in detail and updated the layered map and divided the topography as per the reachability of the robot using memory-efficient 3D Octomap. And we construct an autonomous driving algorithm for efficient and fast calculation by projecting it into a 2D map. Also, unlike previous studies that detect a moving person with an onboard sensor, we propose a system that detects a person’s posture by converting the 2D keypoint coordinates extracted through pose estimation using a depth camera into 3D. Keypoints converted to 3D distinguish a person’s posture and show the current state with a 3D map. And a person’s pose is discriminated by using the coordinates and the posture is sent to the team. Therefore, the whole process was experimented, and we confirmed that we can provide the first stage of an efficient human structure framework.","PeriodicalId":129856,"journal":{"name":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","volume":"205 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131818058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Assessment and Working time on Nut Size Distinction Skill of Marine Engineers","authors":"Kikuchi Kazumitsu","doi":"10.23919/ICCAS55662.2022.10003770","DOIUrl":"https://doi.org/10.23919/ICCAS55662.2022.10003770","url":null,"abstract":"A rule of thumb for marine engineers is that if a marine engineer has much experience in maintaining machines, the marine engineer can distinguish the size of the tool (spanner or wrench) required for removing machine nuts or bolts by mere observation [1]. This study investigates whether this rule of thumb is related to self-assessment and working time for the distinction of nuts. The participants in this experiment were marine engineering students without practical experience. The experiment included visual and hand conditions. The experiment place was not only a ship engine room but a training room where noise, high temperature, and the smell of fuel oil in the marine engine plant were not experienced by the students. The results showed that the number of the nuts accurately distinguished was unaffected by working hours and self-assessment of the participants who had a boarding history of 12 months on the training ship. Safe operation on marine engine plants must rely on the skills of marine engineers, such as a nut size distinction skill.","PeriodicalId":129856,"journal":{"name":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","volume":"18 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130753988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of Model Predictive Control to Polishing Robot for Pushing Operation","authors":"Nobuaki Endo, T. Yoshimi, Koichiro Hayashi, H. Murakami","doi":"10.23919/ICCAS55662.2022.10003683","DOIUrl":"https://doi.org/10.23919/ICCAS55662.2022.10003683","url":null,"abstract":"Much of the polishing work is done manually by skilled workers. It is not easy to teach robots to perform the detailed work of theirs and to conFigure and operate an appropriate control system to achieve this, and automation of this process has been delayed. Polishing is performed by pressing a rotating tool against the workpiece to be machined. To achieve this motion, PID control is used in the controllers of many robots. However, to determine the appropriate control gain, it is necessary to repeatedly adjust the control gain according to the processing target and processing conditions. The purpose of this research is to introduce Model Predictive Control (MPC) as a new control system for polishing robots. MPC is a control that predicts control output using a model of the control target. Therefore, we considered the target force value could be achieved without changing the MPC parameters when the force condition, a machining condition, is changed. In this paper, control block diagrams were created in MATLAB Simulink to apply MPC. The block diagram was then mounted on the actual machine to check whether it could be pressed with appropriate force, and the differences from PID were evaluated.","PeriodicalId":129856,"journal":{"name":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132873467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simultaneous Use of Autonomy Guidance Haptic Feedback and Obstacle Avoiding Force Feedback for Mobile Robot Teleoperation","authors":"Kwang-Hyun Lee, Harsimran Singh, T. Hulin, J. Ryu","doi":"10.23919/ICCAS55662.2022.10003886","DOIUrl":"https://doi.org/10.23919/ICCAS55662.2022.10003886","url":null,"abstract":"In teleoperation, force feedback is used not only for feedback on interactions with the environment such as contact but also as a method of providing a virtual guide to support the operator perform tasks efficiently. In particular, in shared teleoperation, which is a method of assisting the operator using autonomy to improve task efficiency and reduce fatigue, force feedback is used as a key means of haptically providing a guide of autonomy to the operator to achieve efficient collaboration. However, it is difficult to use these force guides concurrently with force feedback on interactions. This is because not only the interaction force and the virtual guidance force offset each other when those forces have different directions, but also the interaction force and the virtual guidance force cannot be distinguished, making it difficult for the operator to be aware of the situation. In this paper, we propose a method to solve this problem by assigning different force magnitudes through the different stiffness for each method and for the simultaneous use of both methods. The proposed method is verified through mobile robot teleoperation experiments on the simulation environment, and the experiment result shows that the proposed method performed better than when only one type of force feedback is used.","PeriodicalId":129856,"journal":{"name":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115336271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Vision Transformer with Multi-Task Training","authors":"Woojin Ahn, G. Yang, H. Choi, M. Lim, Tae-Koo Kang","doi":"10.23919/ICCAS55662.2022.10003833","DOIUrl":"https://doi.org/10.23919/ICCAS55662.2022.10003833","url":null,"abstract":"Self-supervised learning methods have shown excellent performance in improving the performance of existing networks by learning visual representations from large amounts of unlabeled data. In this paper, we propose a end-to-end multi-task self-supervision method for vision transformer. The network is given two task: inpainting, position prediction. Given a masked image, the network predicts the missing pixel information and also predicts the position of the given puzzle patches. Through classification experiment, we demonstrate that the proposed method improves performance of the network compared to the direct supervised learning method.","PeriodicalId":129856,"journal":{"name":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114498752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards On-device Deep Neural Network Inference and Model Update for Real-time Gesture Classification","authors":"Mustapha Deji Dere, Jo Ji-Hun, Boreom Lee","doi":"10.23919/ICCAS55662.2022.10003782","DOIUrl":"https://doi.org/10.23919/ICCAS55662.2022.10003782","url":null,"abstract":"Deep learning resurgence ushered in the application of pattern recognition algorithms in high-impact research fields with impressive accuracy. In addition, deep neural networks (DNN) have recently been used to classify gestures for rehabilitation device control utilizing raw electromyography data. However, the computational resources required by a convolution neural network (CNN) are a constraint that often limits deployment to embedded devices for real-time inference. An optimized edge adaptive convolutional neural network using a short-time Fourier transform (STFT) spectrogram input was proposed in this study. The model’s classification accuracy was evaluated offline and on-device for inter-subject accuracy. Furthermore, an adaptive weight update approach was implemented to improve inference model accuracy due to degradation. The proposed model and optimization technique achieved offline accuracy of 92.19 % and 94.29 % for the raw and STFT input, respectively. However, the on-device accuracy for raw and STFT input to the model was 82.26 % and 85.19 %, respectively. On the other hand, the adaptive model update increased the respective accuracy by an average of 7% on-device. Finally, our study demonstrates the deployment of DNN on-device for real-time gesture classification inference.","PeriodicalId":129856,"journal":{"name":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123550981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmented Reality Display of Robot with Graphs of Property Response Using Its USD Model","authors":"Kazuki Tsukamoto, M. Koga","doi":"10.23919/ICCAS55662.2022.10003675","DOIUrl":"https://doi.org/10.23919/ICCAS55662.2022.10003675","url":null,"abstract":"This study proposes a method that can easily grasp the relationship between the actual machine and the graphs. In recent years, there has been a lot of research on augmented reality displays. The fields of research range from education to welfare. In the development of control systems, when evaluating the performance of a system by simulation or experiment, the results are often checked as graphs. Since the graphs are checked on a PC using CAD or other means, it is difficult to know which part of the actual machine each graph corresponds to. Therefore, we developed a tool that displays graphs in augmented reality around the actual machine through a camera on a mobile terminal. To display graphs in augmented reality, it is important to obtain the coordinates of the actual machine and display them in a location associated with the device. Therefore, a USD model with the same shape and size as the actual machine is used. This is achieved by displaying the USD model in augmented reality so that it is superimposed on the actual machine. The accuracy of the tool was also examined and its usefulness was evaluated.","PeriodicalId":129856,"journal":{"name":"2022 22nd International Conference on Control, Automation and Systems (ICCAS)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121906014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}