2022 22nd International Conference on Control, Automation and Systems (ICCAS) — Latest Publications

Development of Self-Training Algorithm for Predicting Mango Maturity
2022 22nd International Conference on Control, Automation and Systems (ICCAS) Pub Date : 2022-11-27 DOI: 10.23919/ICCAS55662.2022.10003751
Nguyen Minh Trieu, Nguyen Truong Thinh
Abstract: The quality and maturity of mangoes are inhomogeneous even when the fruit is harvested from the same tree at the same time, yet maturity strongly affects storage and transport time, so determining it is important. This study determines mango maturity from internal and external features (length, width, defect, weight, density, and color) using a hybrid model based on a multilayer feed-forward neural network (FFNN). The mango is first segmented by color-space analysis, and image-processing algorithms are then applied. After the architecture is fixed, the FFNN is trained on a dataset in which each sample has 14 features, and a self-training algorithm is applied to further increase its accuracy. The proposed system achieves a mean-square error of 0.259 in maturity prediction, as shown in the results and experiments section.
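The self-training idea the abstract describes — fit a regressor on labeled samples, pseudo-label the unlabeled pool, and retrain on the union — can be sketched as below. This is a minimal illustration, not the authors' model: the one-hidden-layer network, synthetic 14-feature vectors, and training settings are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hidden):
    # one hidden tanh layer, scalar maturity output
    return {"W1": rng.normal(0, 0.5, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.5, (n_hidden, 1)), "b2": np.zeros(1)}

def forward(net, X):
    h = np.tanh(X @ net["W1"] + net["b1"])
    return h, (h @ net["W2"] + net["b2"]).ravel()

def train(net, X, y, lr=0.05, epochs=500):
    # plain gradient descent on the mean-square error
    for _ in range(epochs):
        h, pred = forward(net, X)
        err = pred - y
        net["W2"] -= lr * h.T @ err[:, None] / len(X)
        net["b2"] -= lr * err.mean()
        dh = err[:, None] * net["W2"].T * (1 - h**2)
        net["W1"] -= lr * X.T @ dh / len(X)
        net["b1"] -= lr * dh.mean(axis=0)
    return net

# toy stand-ins for the 14-feature mango vectors and a maturity score
X_lab = rng.normal(size=(80, 14))
y_lab = np.tanh(X_lab @ rng.normal(size=14) * 0.3)
X_unlab = rng.normal(size=(200, 14))

net = train(init_net(14, 16), X_lab, y_lab)

# self-training round: pseudo-label the unlabeled fruit, retrain on the union
_, pseudo = forward(net, X_unlab)
net = train(net, np.vstack([X_lab, X_unlab]),
            np.concatenate([y_lab, pseudo]), epochs=200)

_, pred = forward(net, X_lab)
mse = float(np.mean((pred - y_lab) ** 2))
print(round(mse, 4))
```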
Citations: 0
Combination of Deep Learner Network and Transformer for 3D Human Pose Estimation
2022 22nd International Conference on Control, Automation and Systems (ICCAS) Pub Date : 2022-11-27 DOI: 10.23919/ICCAS55662.2022.10003954
T. Tran, Xuan-Thuy Vo, Duy-Linh Nguyen, K. Jo
Abstract: Deep neural networks (DNNs) now achieve top performance not only for human pose estimation but also for other machine-vision tasks (e.g., semantic segmentation, object detection, image classification), while the Transformer performs well at extracting temporal information in video tasks. Combining a deep learner with a Transformer therefore performs better than either component alone, especially for 3D human pose estimation. The 2D keypoints are fed into both the deep-learner layers and the Transformer, an addition operation combines the extracted information, and a fully connected layer then generates the 3D human pose with improved precision. The research also examines the relationship between the deep learner and the Transformer. Compared to the baseline DNNs, the suggested architecture achieves lower average error under Protocol 1 and Protocol 2 on the Human3.6M dataset, a popular benchmark for 3D human pose estimation.
Citations: 1
Image classification using DETR based object-level feature
2022 22nd International Conference on Control, Automation and Systems (ICCAS) Pub Date : 2022-11-27 DOI: 10.23919/ICCAS55662.2022.10003912
Chung-Gi Ban, Dayoung Park, Youngbae Hwang
Abstract: The object in an image is the main information used to represent it for classification. When the background is complex or the object is small, classical invariant features such as the Scale-Invariant Feature Transform (SIFT) or Speeded-Up Robust Features (SURF) are ill-suited to object-level representation: SIFT cannot distinguish whether a feature carries relevant object information, so the representation may consist of background or uninformative features. We instead use the Detection Transformer (DETR), a state-of-the-art object detector, to represent object-level information. Visualizing the attention maps of the Transformer decoder shows that each output vector effectively indicates an object region. Bag of Visual Words (BoVW) is applied to represent the N output vectors of DETR as the feature of a query image. On scene-level and object-level datasets, we compare our method with SIFT-based BoVW on an image classification task and show that the proposed method performs better on the object-level dataset.
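The BoVW step the abstract applies to DETR's output vectors can be sketched as follows: cluster all output vectors into a visual-word codebook, then histogram each image's N vectors over it. Random vectors stand in for the decoder outputs here, and the codebook size and dimensions are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    # plain Lloyd's algorithm to build the visual-word codebook
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        assign = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            pts = X[assign == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bovw_histogram(vectors, centers):
    # map each of the image's N output vectors to its nearest visual word
    words = np.linalg.norm(vectors[:, None] - centers[None], axis=2).argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()  # L1-normalized image descriptor

# stand-ins: 50 images, each with N=100 decoder output vectors of dim 32
images = [rng.normal(size=(100, 32)) for _ in range(50)]
codebook = kmeans(np.vstack(images), k=16)
features = np.stack([bovw_histogram(im, codebook) for im in images])
print(features.shape)  # one 16-bin histogram per image
```

These fixed-length histograms can then feed any off-the-shelf classifier, which is what makes BoVW a convenient bridge between a detector's variable set of object vectors and an image-level label.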
Citations: 0
Adapting Masking Network for Bloom Identification Number Recognition to Different Domains
2022 22nd International Conference on Control, Automation and Systems (ICCAS) Pub Date : 2022-11-27 DOI: 10.23919/ICCAS55662.2022.10003768
Wonseok Jeong, Hyeyeon Choi, Bum Jun Kim, Hyeonah Jang, Dong Gu Lee, Donggeon Lee, Sang Woo Kim
Abstract: Many smart factories now use automatic systems to improve manufacturing efficiency; one such system is product-identification-number recognition. This study handles the Bloom Identification Number (BIN), which is common in the steel industry. We adopt deep learning for BIN recognition because it outperforms conventional algorithms on many computer-vision tasks. Applying a trained model to another factory is a significant issue: data from different factories may look alike to a human, but differences in background, lighting, and camera position can confuse the trained model, so new label annotations are normally required to retrain it — and labeling remains a burden every time a trained model is transferred. This paper introduces a BIN-recognition method that requires no labeling of new data during training, eliminating the time spent labeling newly collected data when applying the network to other factories.
Citations: 1
Group Estimation for Social Robot Navigation in Crowded Environments
2022 22nd International Conference on Control, Automation and Systems (ICCAS) Pub Date : 2022-11-27 DOI: 10.23919/ICCAS55662.2022.10003761
Mincheul Kim, Youngsun Kwon, Sung-eui Yoon
Abstract: Socially acceptable navigation in a crowded environment is a challenging robotics problem because human intent is diverse and unknown. Previous studies treated social navigation in dense crowds as multi-robot collision avoidance, but following socially compliant trajectories is intractable that way, since human-robot interaction differs from the multi-robot collision-avoidance problem. This work exploits a human-behavior model and focuses on social group actions such as walking together: humans recognize other groups and avoid them during navigation while maintaining social distance. Based on this observation, the paper proposes a social-robot navigation method built on group-space estimation of crowds within a deep reinforcement learning framework. The method estimates the social groups in a crowd from behavioral similarities in sensory information, and the framework learns a socially compliant, effective navigation policy through the proposed human-group-aware reward. An experiment in a crowd simulation demonstrates that the approach generates human-friendly trajectories with improved navigation performance.
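One simple way to realize grouping by behavioral similarity, as a stand-in for the paper's estimator: merge two pedestrians into a group when their positions and velocities are both close. The thresholds, the union-find merging rule, and the toy scene are illustrative assumptions, not the authors' method.

```python
import numpy as np

def estimate_groups(pos, vel, d_max=1.5, v_max=0.5):
    """Greedy grouping: pedestrians i and j join a group when they are
    near each other and moving similarly (thresholds are illustrative)."""
    n = len(pos)
    parent = list(range(n))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if (np.linalg.norm(pos[i] - pos[j]) < d_max and
                    np.linalg.norm(vel[i] - vel[j]) < v_max):
                parent[find(i)] = find(j)

    roots = [find(i) for i in range(n)]
    labels = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [labels[r] for r in roots]

# two people walking together, one loner heading the other way
pos = np.array([[0.0, 0.0], [0.8, 0.1], [5.0, 5.0]])
vel = np.array([[1.0, 0.0], [1.0, 0.1], [-1.0, 0.0]])
print(estimate_groups(pos, vel))  # → [0, 0, 1]
```

A group label like this lets a planner treat each group's convex hull (plus a social margin) as a single obstacle rather than avoiding members individually.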
Citations: 0
Continuous Control of Autonomous Vehicles using Plan-assisted Deep Reinforcement Learning
2022 22nd International Conference on Control, Automation and Systems (ICCAS) Pub Date : 2022-11-27 DOI: 10.23919/ICCAS55662.2022.10003698
Tanay Dwivedi, Tobias Betz, Florian Sauerbeck, P. Manivannan, M. Lienkamp
Abstract: End-to-end deep reinforcement learning (DRL) is emerging as a promising paradigm for autonomous driving. Although DRL provides an elegant framework for accomplishing final goals without extensive manual engineering, capturing plans and behavior with deep neural networks remains an unsolved issue, so end-to-end architectures are currently limited to simple driving scenarios and often perform sub-optimally in rare, unique conditions. We propose a novel plan-assisted DRL framework that, alongside the typical state space, leverages a "trajectory space" to learn optimal control. The trajectory space, generated by an external planner, intrinsically captures the agent's high-level plans, while world models capture the dynamics of the environment for learning behavior in latent space. An actor-critic network trained in imagination uses these latent features to predict the policy and state-value function. Based primarily on DreamerV2 and Racing Dreamer, the proposed model is first trained in a simulator and then tested on the F1TENTH race car. We evaluate best lap times against parameter-tuned and learning-based controllers on unseen race tracks and demonstrate that the model generalizes to complex scenarios where other approaches perform sub-optimally. Furthermore, we show the model's enhanced stability as a trajectory tracker and establish the improvement in interpretability achieved by the proposed framework.
Citations: 3
Generation Method of Autonomous Evasive Maneuver Strategy in Air Combat
2022 22nd International Conference on Control, Automation and Systems (ICCAS) Pub Date : 2022-11-27 DOI: 10.23919/ICCAS55662.2022.10003888
Lin Li, Zhen Yang, Zhixiao Sun, Guang Zhan, Haiyin Piao, Deyun Zhou
Abstract: Air combat is usually a continuous process involving multiple rounds of missile attacks; an unmanned combat aerial vehicle (UCAV) should consider the impact of a maneuver on the entire air-combat mission rather than focusing solely on safety when evading incoming air-to-air missiles. This paper proposes a method for generating an autonomous evasive-maneuver strategy for UCAV air combat under miss-distance and energy-dissipation conditions. First, the three-dimensional UCAV-missile pursuit model and the UCAV's state space, action space, and reward function for autonomous evasion are established. The model is solved with the Dueling DDQN (Double Deep Q-Network) algorithm, which integrates the Double DQN and Dueling DQN network models. Simulation experiments and analysis show that the method is significantly effective for autonomous evasive maneuvering under the miss-distance and energy-dissipation tactical requirements, and that the obtained strategy reflects both tactical requirements while ensuring the safety of the UCAV itself.
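The two mechanisms the Dueling DDQN name combines can be sketched on toy arrays — the dueling head's value/advantage decomposition and the Double DQN target, with the online network selecting the next action and the target network evaluating it. This shows only those generic update pieces, not the paper's flight-dynamics model or reward; all shapes below are illustrative.

```python
import numpy as np

def dueling_q(value, adv):
    # dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)
    return value[:, None] + adv - adv.mean(axis=1, keepdims=True)

def double_dqn_target(r, done, q_online_next, q_target_next, gamma=0.99):
    # Double DQN: the online net picks the action, the target net evaluates it
    a_star = q_online_next.argmax(axis=1)
    q_eval = q_target_next[np.arange(len(r)), a_star]
    return r + gamma * (1.0 - done) * q_eval

rng = np.random.default_rng(2)
batch, n_actions = 4, 3
V = rng.normal(size=batch)                 # state values
A = rng.normal(size=(batch, n_actions))    # action advantages
Q = dueling_q(V, A)

r = np.ones(batch)                         # toy rewards
done = np.array([0.0, 0.0, 0.0, 1.0])      # last transition is terminal
target = double_dqn_target(r, done, Q, Q)  # same net twice, for illustration
print(Q.shape, target.shape)
```

The decomposition forces the per-state advantage rows to be zero-mean, so `Q.mean(axis=1)` recovers `V`, which is what stabilizes value estimation when many evasive actions have similar worth.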
Citations: 1
Development of Painting Robot for Cage-shape Iron Pillar Supporting Train Track: Prototype Design and Basic Motion Verification
2022 22nd International Conference on Control, Automation and Systems (ICCAS) Pub Date : 2022-11-27 DOI: 10.23919/ICCAS55662.2022.10003938
Seonghee Jeong, Katsushi Ogawa, T. Jozen
Abstract: This paper proposes a prototype painting robot that can paint steel surfaces while climbing up and down inside a cage-type iron pillar supporting a train track. Target specifications are set from the requirements on the robot, and the robot is designed so that each specification can be met. It consists of a climbing device that moves up and down inside the pillar, a painting arm that paints the steel surface, and a paint-supply device that feeds paint to a brush. The climbing device has a leg-shaped structure and climbs by alternately gripping the steel with its upper and lower legs. The painting arm is a 6-DOF serial-link mechanism that can pose the brush appropriately on the target steel surface, and the paint-supply device is equipped with a uniaxial-screw mohno pump so that an appropriate amount of high-viscosity paint can be supplied to the brush. Through simple experiments, it was confirmed that each device realizes its basic functions according to the design specifications.
Citations: 0
Lidar SLAM Comparison in a Featureless Tunnel Environment
2022 22nd International Conference on Control, Automation and Systems (ICCAS) Pub Date : 2022-11-27 DOI: 10.23919/ICCAS55662.2022.10003820
Iulian Filip, Juhyun Pyo, Meungsuk Lee, Hangil Joe
Abstract: Simultaneous Localization and Mapping (SLAM) is a vital part of the decision-making process of autonomous robot platforms. Many lidar-based SLAM methods have been proposed for indoor and urban environments, but few studies address featureless tunnel environments. This paper considers recent lidar SLAM frameworks and tests their performance in a tunnel. The dataset was collected by a four-wheeled ground vehicle equipped with a lidar sensor for mapping and feature detection and an IMU sensor for odometry tracking. Seven lidar SLAM algorithms are tested: in the corridor environment, LIO-SAM and SC-LIO-SAM show the lowest trajectory and point-cloud error, respectively, while in the tunnel environment, LIO-SAM and FAST-LIO2 display the best trajectory accuracy with and without artificial landmarks, respectively. These results can contribute to the development of a SLAM framework for featureless tunnel environments.
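Trajectory-error comparisons like the one in this paper are commonly reported as absolute trajectory error (ATE); a minimal sketch of that metric, assuming the estimated and ground-truth trajectories are already time-synchronized and expressed in the same frame (the full metric additionally aligns them with a rigid-body fit):

```python
import numpy as np

def ate_rmse(est, gt):
    """RMSE over per-pose position differences between an estimated
    trajectory and ground truth (both N x 3, already synchronized)."""
    diff = est - gt
    return float(np.sqrt((np.linalg.norm(diff, axis=1) ** 2).mean()))

# toy straight-line ground truth and a noisy estimate of it
t = np.linspace(0, 10, 101)
gt = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
est = gt + np.random.default_rng(3).normal(0, 0.05, gt.shape)
print(round(ate_rmse(est, gt), 3))
```

In a featureless tunnel, drift accumulates along the tunnel axis, so a single RMSE number is usually read alongside the per-axis error to see where a framework actually fails.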
Citations: 0
Performance Comparison of Human Activity Recognition for Unmanned Retails
2022 22nd International Conference on Control, Automation and Systems (ICCAS) Pub Date : 2022-11-27 DOI: 10.23919/ICCAS55662.2022.10003872
Sheilla Wesonga, Nusrat Jahan Tahira, Jangsik Park
Abstract: The broad use of technology in almost all aspects of life has increased research supporting its advancement; one such topic is Human Activity Recognition (HAR), with diverse applications including video surveillance, healthcare, and education. This paper presents a HAR study that employs the Kinect RGB and depth sensor camera to recognize seven different human activities (7 classes). Joint angles extracted from the Kinect depth sensor, each with three axes (X, Y, Z) for the 8 limbs employed in the experiment, serve as the feature vectors. Three state-of-the-art recurrent neural network models (GRU, LSTM, Bi-LSTM) are trained and tested to classify the activities. The comparison shows that LSTM achieves the highest classification accuracy at 96%, and, using the confusion matrix as the performance metric for all models, classification is reported per activity.
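The per-activity evaluation the abstract mentions — a confusion matrix over the 7 classes, with accuracy read off its diagonal — can be sketched as below; the toy label arrays are illustrative, not the paper's data.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=7):
    # rows: true activity, columns: predicted activity
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_accuracy(cm):
    # diagonal over row sums gives the per-activity recognition rate
    return cm.diagonal() / cm.sum(axis=1).clip(min=1)

# toy labels for 10 clips over the 7 activity classes
y_true = np.array([0, 0, 1, 1, 2, 3, 4, 5, 6, 6])
y_pred = np.array([0, 1, 1, 1, 2, 3, 4, 5, 6, 0])
cm = confusion_matrix(y_true, y_pred)
acc = cm.diagonal().sum() / cm.sum()
print(cm.shape, round(float(acc), 2))  # → (7, 7) 0.8
```

Reading the off-diagonal cells shows which activity pairs the recurrent models confuse, which is the information a single accuracy figure hides.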
Citations: 0