2020 20th International Conference on Control, Automation and Systems (ICCAS): Latest Publications

Training Deep Neural Networks with Synthetic Data for Off-Road Vehicle Detection
2020 20th International Conference on Control, Automation and Systems (ICCAS) Pub Date: 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268430
Eunchong Kim, Kanghyun Park, Hunmin Yang, Se-Yoon Oh
Abstract: In tandem with the growth of deep learning, vehicle detection using convolutional neural networks has become mainstream in autonomous driving and ADAS. Taking advantage of this, many real-image datasets have been produced despite the painstaking work of data collection and ground-truth annotation. As an alternative, virtually generated images have been introduced. These make data collection and annotation much easier, but raise a different kind of problem known as the 'domain gap'. For instance, in off-road vehicle detection, producing an off-road image dataset is difficult not only when collecting real images, but also when synthesizing images that sidestep the domain gap. In this paper, focusing on off-road army tank detection, we introduce a synthetic image generator that applies domain randomization to off-road scene context. We train a deep learning model on the synthetic dataset using low-level features from a feature extractor pre-trained on a real common-object dataset. With the proposed method, we improve model accuracy to 0.86 AP@0.5IOU, outperforming the naïve domain randomization approach.
Pages: 427-431
Citations: 1
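The domain-randomization idea in this abstract, randomly varying scene parameters for each rendered training image, can be sketched as below. Every parameter name and range here is an illustrative assumption, not the authors' actual generator.

```python
import random

def sample_scene_params(rng=random.Random(0)):
    """Sample one randomized off-road scene configuration.

    A minimal domain-randomization sketch; all names and ranges
    are assumptions, not the paper's generator.
    """
    return {
        "sun_elevation_deg": rng.uniform(10, 80),
        "sun_azimuth_deg": rng.uniform(0, 360),
        "camera_distance_m": rng.uniform(20, 200),
        "camera_height_m": rng.uniform(1.0, 15.0),
        "terrain_texture": rng.choice(["dirt", "grass", "rock", "sand"]),
        "target_yaw_deg": rng.uniform(0, 360),
        "distractor_count": rng.randint(0, 10),
    }

# One parameter set per rendered training image:
params = [sample_scene_params() for _ in range(3)]
```

Each sampled dictionary would drive one render of the simulated scene, so the detector never sees the same lighting, viewpoint, or background twice.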
Research on jamming strategy of surface-type infrared decoy against by infrared-guided simulation
2020 20th International Conference on Control, Automation and Systems (ICCAS) Pub Date: 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268227
W. Sun, M. Yao
Abstract: When using surface-type infrared decoys, a reasonable and effective jamming strategy is the key to successfully jamming an infrared-guided missile. To solve this problem, a jamming strategy for surface-type infrared decoys against infrared-guided missiles is obtained through theoretical analysis and simulation. This paper introduces a simulation model in which the attack process is divided into pre-lock and post-lock stages. Using the hit rate to evaluate success, the optimal jamming strategy for the two stages is obtained, including the optimal decoy release time, the release interval, and the evasive maneuver the carrier aircraft should take.
Pages: 845-849
Citations: 0
Intelligent task robot system based on process recipe extraction from product 3D modeling file
2020 20th International Conference on Control, Automation and Systems (ICCAS) Pub Date: 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268427
Hyonyoung Han, Heechul Bae, Hyunchul Kang, Jiyon Son, H. Kim
Abstract: This study introduces an intelligent task robot system based on process-recipe extraction from standard 3D model files. In small-quantity batch production and mixed-flow manufacturing, a great deal of time is spent on process planning and device control, such as path planning for a robot system. If these processes could be automated, mixed-flow production of various products would run efficiently. This paper proposes a product registration subsystem built around an automated process-recipe extraction module, together with a visual-servoing-based intelligent assembly task robot subsystem. The recipe module extracts the list of parts and each part's size and position from a standard 3D model file (STEP) and analyzes the structure of the product across its parts. The extracted product data are stored in the recipe knowledge base in a recipe format, along with a plan-view image of each part. The robot system consists of a real-time part recognition module, a part scheduling module, and a motion planner module. The part recognition module identifies parts by matching real-time RGB images against the plan-view images in the knowledge base. The part scheduling module plans the part sequence for the task using a decision-tree method. The motion planner module controls the assembly task robot according to the process recipe, depending on the task type. Performance of the system was tested with five types of sample products.
Pages: 856-859
Citations: 1
Deep Reinforcement Learning-based ROS-Controlled RC Car for Autonomous Path Exploration in the Unknown Environment
2020 20th International Conference on Control, Automation and Systems (ICCAS) Pub Date: 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268370
Sabir Hossain, Oualid Doukhi, Yeon-ho Jo, D. Lee
Abstract: Deep reinforcement learning has become the front runner for solving problems in robot navigation and obstacle avoidance. This paper presents a LiDAR-equipped RC car trained in the GAZEBO environment using deep reinforcement learning. Reshaped LiDAR data serve as the input to the training network's neural architecture, and we present a unique way to convert the LiDAR data into a 2D grid map for that input. We also present test results from the trained network in different GAZEBO environments, along with the development of the embedded RC car's hardware and software systems. The hardware comprises a Jetson AGX Xavier, a Teensyduino, and a Hokuyo LiDAR; the software comprises ROS and Arduino C. Finally, we present real-world test results using the model generated from the training simulation.
Pages: 1231-1236
Citations: 6
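The LiDAR-to-2D-grid-map conversion described in this abstract can be sketched roughly as follows. The grid size and maximum range are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def scan_to_grid(ranges, angles, grid_size=64, max_range=10.0):
    """Project a planar LiDAR scan into a square 2D occupancy grid.

    A simplified sketch of a LiDAR-to-grid-map conversion; resolution
    and max range are assumptions, not the paper's parameters.
    """
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    # Robot sits at the grid centre; one cell = (2*max_range/grid_size) m.
    scale = grid_size / (2.0 * max_range)
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    valid = (ranges > 0) & (ranges < max_range)
    cols = ((xs[valid] + max_range) * scale).astype(int)
    rows = ((ys[valid] + max_range) * scale).astype(int)
    inside = (cols >= 0) & (cols < grid_size) & (rows >= 0) & (rows < grid_size)
    grid[rows[inside], cols[inside]] = 1  # mark beam endpoints as occupied
    return grid
```

The resulting fixed-size grid is a convenient network input because it is invariant to the scanner's beam count and angular ordering.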
UAV Engine Control Monitoring System based on CAN Network
2020 20th International Conference on Control, Automation and Systems (ICCAS) Pub Date: 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268244
Hyun Lee
Abstract: This paper proposes a UAV (Unmanned Aerial Vehicle) engine control monitoring system using dynamic ID assignment and a scheduling method for CAN network sensors, which collect the temperature, pressure, vibration, and fuel level of the UAV engine over the network. The aim is an effective monitoring method for a UAV engine control system implemented on a CAN (Controller Area Network). Because the system requires many kinds of information, numerous sensor nodes are distributed across several locations. The dynamic ID assignment mechanism of the CAN protocol ensures effective utilization of network bandwidth, with all nodes sending data to the bus according to the priority of their node identifiers.
Pages: 820-823
Citations: 1
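The priority rule the abstract relies on is standard CAN behaviour: arbitration is bitwise, and the numerically lowest identifier (most leading dominant bits) wins the bus, so higher-priority sensors are given lower IDs. A toy illustration of that rule (not the paper's scheduler):

```python
def arbitrate(pending_ids):
    """Return the CAN identifier that wins bus arbitration.

    In CAN, the lowest numeric identifier wins, so assigning
    lower IDs to critical sensors gives them bus priority.
    This is a toy sketch, not the paper's scheduling method.
    """
    return min(pending_ids)

# Example IDs are hypothetical: a temperature frame (0x100) beats
# a fuel-level frame (0x300) when both nodes transmit at once.
winner = arbitrate([0x300, 0x100, 0x200])
```

Dynamic ID assignment, as the abstract describes it, would then amount to remapping sensors onto this priority ordering at runtime.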
Clusters in multi-leader directed consensus networks
2020 20th International Conference on Control, Automation and Systems (ICCAS) Pub Date: 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268254
Jeong-Min Ma, Hyung-Gohn Lee, H. Ahn, K. Moore
Abstract: In a directed graph, a leader is a node with no incoming edges. If a directed consensus network has multiple leaders, the system will not reach consensus. Instead, the nodes organize into clusters: groups of nodes that converge to the same value. These clusters do not depend on initial conditions or edge weights. In this paper we study clusters in multi-leader directed consensus networks and present an algorithm that classifies all clusters in the graph.
Pages: 379-384
Citations: 1
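One plausible first step toward such a classification (a sketch under assumptions, not the paper's actual algorithm) is to group nodes by the set of leaders that can reach them: nodes influenced by exactly one leader inherit that leader's value, while nodes reached by several leaders form separate groups.

```python
from collections import defaultdict, deque

def leader_reach_clusters(adj):
    """Group nodes by the set of leaders that can reach them.

    `adj[u]` lists out-neighbours of u (edges point from influencer
    to influenced); leaders are nodes with no incoming edges.
    A hypothetical sketch, not the paper's exact classification.
    """
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    has_in = {v for vs in adj.values() for v in vs}
    leaders = nodes - has_in
    reached_by = defaultdict(frozenset)
    for leader in leaders:
        seen, queue = {leader}, deque([leader])
        while queue:  # BFS over everything this leader influences
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        for v in seen:
            reached_by[v] |= {leader}
    clusters = defaultdict(set)
    for v in nodes:
        clusters[frozenset(reached_by[v])].add(v)
    return dict(clusters)
```

For example, with leaders 1 and 2, edges 1→3, 2→4, and both 1→5 and 2→5, node 5 lands in its own group because both leaders influence it.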
Mobile service robot multi-floor navigation using visual detection and recognition of elevator features (ICCAS 2020)
2020 20th International Conference on Control, Automation and Systems (ICCAS) Pub Date: 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268202
Eun-ho Kim, Sanghyeon Bae, T. Kuc
Abstract: Multi-floor navigation is a challenging issue for indoor mobile service robots, especially when moving between floors and entering and leaving an elevator. In this paper we propose a method for detecting and recognizing elevator features and for navigating the robot into and out of the elevator, together with a deep-learning-based image recognition system that identifies the current floor from the elevator display; using it, the robot determines whether the target floor has been reached. Our approach is two-fold. The first method extracts elevator button coordinates with traditional feature extraction: adaptive thresholding, blob detection, and template matching. The second uses DL-based recognition, performed by YOLO9000, on the elevator's floor-count display panel. Our analysis shows that the feature extractor outperforms the DL-based recognition system even under tricky conditions such as light reflection and motion blur, and proves to be the more robust system for detection and recognition.
Pages: 982-985
Citations: 6
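The template-matching step in the classical pipeline above can be sketched in miniature: slide a button template over the image and keep the position with the smallest sum-of-squared-differences. This is a bare-bones stand-in (real use would rely on a library routine such as OpenCV's matchTemplate), not the paper's implementation.

```python
import numpy as np

def match_template(image, template):
    """Locate `template` in `image` by sum-of-squared-differences.

    A minimal sketch of template matching for button localization;
    not the paper's implementation.
    """
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r + th, c:c + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

In the full pipeline this would run on the adaptively thresholded image, with blob detection pruning the candidate regions first.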
Image Registration Method from LDCT Image Using FFD Algorithm
2020 20th International Conference on Control, Automation and Systems (ICCAS) Pub Date: 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268267
Chika Tanaka, Tohru Kamiya, T. Aoki
Abstract: In recent years, the number of lung cancer deaths has been increasing. In Japan, CT (Computed Tomography) equipment is used for visual screening, but reviewing the huge number of CT images is a burden on doctors. To address this, CAD (Computer-Aided Diagnosis) systems have been introduced in medical practice. In CT screening, LDCT (Low-Dose Computed Tomography) is desirable given radiation exposure, but the image-quality degradation caused by the lower dose is a further problem, so a CAD system that enables accurate diagnosis even at low doses is needed. In this paper, we therefore propose a registration method for generating temporal subtraction images that can be applied to low-quality chest LDCT images. Our approach has two major components. First, global matching based on the center of gravity is performed on the preprocessed images, and a region of interest (ROI) is set. Second, local matching by free-form deformation (FFD) based on B-splines is performed on the ROI as the final registration. Applying the proposed method to LDCT images of 6 cases, we reduce calculation time by 57.29%, the half-value width by 26.1%, and the sum of the histogram of the temporal subtraction images by 29.6% compared with the conventional method.
Pages: 411-414
Citations: 0
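The core of B-spline FFD is that the displacement at any point is a blend of the four nearest control-point offsets (per axis), weighted by the cubic B-spline basis functions. A one-dimensional sketch, with an illustrative control-point layout rather than the paper's grid:

```python
import numpy as np

def bspline_basis(u):
    """Cubic B-spline blending weights for local coordinate u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ])

def ffd_displacement_1d(x, control_disp, spacing):
    """Displacement at position x from a 1D grid of control-point offsets.

    A 1D sketch of B-spline FFD; the 2D/3D case applies the same
    weights along each axis. Control-point layout is an assumption.
    """
    i = int(np.floor(x / spacing))  # index of the spanning cell
    u = x / spacing - i             # local coordinate within the cell
    w = bspline_basis(u)
    # Blends control points i-1 .. i+2; assumes a padded control grid.
    return sum(w[k] * control_disp[i - 1 + k] for k in range(4))
```

Because the four weights always sum to one, a uniform control-point offset translates the image rigidly, while varying offsets produce the smooth local warps used for the final ROI registration.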
Body Trajectory Generation Using Quadratic Programming in Bipedal Robots
2020 20th International Conference on Control, Automation and Systems (ICCAS) Pub Date: 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268204
Min. InJoon, Yoo. DongHa, Ahn. MinSung, Han. Jeakweon
Abstract: The preview control walking method commonly used in bipedal walking requires jerk and ZMP errors as cost functions to generate the body trajectory. Since the two inputs are dependent, the optimization that forms the body trajectory weighs them simultaneously with weight factors, and the resulting trajectory often shows rapid velocity changes depending on those factors, which ultimately demands a torque actuator to execute. To overcome this, we apply a method used on quadrupeds to a bipedal robot. Because it only aims to minimize the acceleration of the body trajectory, the body needs no rapid speed changes, and the method eliminates the computation time preview control spends on the preview horizon. However, a quadruped's walking method assumes a support polygon that is relatively large compared with a bipedal robot's, so stability can deteriorate; we therefore enforce ZMP constraints within the relatively small support polygon of a bipedal robot. In this paper we propose a body trajectory generation method that guarantees real-time stability while minimizing acceleration.
Pages: 251-257
Citations: 1
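The acceleration-minimization objective can be illustrated in a stripped-down form: discretize the trajectory, penalize squared second differences, and fix the endpoints. This omits the paper's ZMP inequality constraints entirely (those would need a real QP solver such as OSQP or quadprog), so it is a sketch of the cost term only.

```python
import numpy as np

def min_accel_trajectory(x0, xT, n):
    """Discretised trajectory minimizing summed squared acceleration.

    Endpoint equality constraints only; the paper's ZMP inequality
    constraints are omitted, so this is a sketch of the cost, not
    the proposed method.
    """
    m = n - 1  # free variables x[1..n-1]; x[0]=x0, x[n]=xT are fixed
    A = np.zeros((n - 1, m))
    b = np.zeros(n - 1)
    for i in range(1, n):  # finite-difference acceleration at sample i
        row = i - 1
        for j, coef in ((i - 1, 1.0), (i, -2.0), (i + 1, 1.0)):
            if j == 0:
                b[row] -= coef * x0   # fold fixed start into the RHS
            elif j == n:
                b[row] -= coef * xT   # fold fixed end into the RHS
            else:
                A[row, j - 1] += coef
    interior, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate(([x0], interior, [xT]))
```

With only endpoint constraints the minimizer is a constant-velocity ramp, which matches the abstract's point: minimizing acceleration alone yields no rapid speed changes.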
Verification method to improve the efficiency of traffic survey
2020 20th International Conference on Control, Automation and Systems (ICCAS) Pub Date: 2020-10-13 DOI: 10.23919/ICCAS50221.2020.9268311
Mi-Seon Kang, Pyong-Kun Kim, Kil-Taek Lim
Abstract: A road traffic volume survey determines the number and types of vehicles passing a specific point over a certain period of time. Traditionally, a person watches camera footage and counts and classifies vehicles with the naked eye, which costs considerable manpower and money. Automated algorithms have recently been widely attempted, but their accuracy falls short of the manual method. To address these problems, we propose a method to automate road traffic volume surveys and a new method to verify the results. The proposed method extracts vehicle counts and types from video using deep learning, analyzes the results, and automatically presents the user with candidates that have a high probability of error, so that highly reliable traffic survey data can be generated efficiently. The performance of the proposed method was tested on a dataset collected by an actual road traffic survey company. Experiments showed that vehicle classification and routes can be verified simply and quickly using the proposed method, which not only reduces the survey effort and cost but also increases reliability through more accurate results.
Pages: 339-343
Citations: 1
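The verification idea, routing probable errors to a human checker, can be sketched as a confidence split. The threshold and record format below are assumptions for illustration, not the paper's criteria.

```python
def flag_for_review(detections, min_conf=0.8):
    """Split automated counts into trusted results and review candidates.

    A minimal sketch of the verification step: detections below a
    confidence threshold go to a human checker. The threshold and
    record fields are assumptions, not the paper's.
    """
    trusted = [d for d in detections if d["conf"] >= min_conf]
    review = [d for d in detections if d["conf"] < min_conf]
    return trusted, review

# Hypothetical detector output: one confident truck, one uncertain bus.
dets = [{"type": "truck", "conf": 0.95}, {"type": "bus", "conf": 0.55}]
trusted, review = flag_for_review(dets)
```

Only the `review` list needs human attention, which is where the claimed reduction in survey effort comes from.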