2019 7th International Conference on Robot Intelligence Technology and Applications (RiTA): Latest Articles

Combining Local and Global Descriptors Through Rotation Invariant Texture Analysis for Ulos Classification
T. Panggabean, A. Barus
DOI: 10.1109/RITAPP.2019.8932823
Abstract: Image augmentation for data multiplication can, to some degree, hurt classification tasks, particularly for objects whose texture patterns have a specific direction (anisotropic textures). Since Ulos data are mostly anisotropic textures, convolutional neural networks (CNNs) fail to discriminate between arbitrarily rotated images, because CNNs are not rotation invariant. To handle both anisotropic and isotropic (directionless) textures, feature extraction with discrete techniques is needed. Extracting features with the discrete wavelet transform (DWT) changes the wavelet energy features significantly for directionally specific patterns, while isotropic patterns are unaffected. To address this, the Radon transform is first employed to obtain the principal direction of anisotropic textures; the output of the wavelet transform is only globally rotation invariant. In this work, we propose a new approach that obtains a robust feature set by combining local and global rotation-invariant features, taken from the outputs of LBP-ROR and the wavelet transform. Our results outperform previous research.
Citations: 1
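The local descriptor named above builds on local binary patterns made rotation invariant. As an illustrative sketch (not the authors' exact LBP-ROR implementation, whose details are not given here), the standard trick is to map each 8-bit LBP code to the minimum over its circular bit rotations, so that rotating the texture patch leaves the code unchanged:

```python
import numpy as np

def lbp_ri_code(code, bits=8):
    """Map an LBP code to its rotation-invariant form: the minimum
    value over all circular bit rotations of the bit pattern."""
    best = code
    for _ in range(bits - 1):
        code = ((code >> 1) | ((code & 1) << (bits - 1))) & ((1 << bits) - 1)
        best = min(best, code)
    return best

def lbp_image(gray):
    """Compute 8-neighbour rotation-invariant LBP codes for the
    interior pixels of a 2-D grayscale array."""
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    # neighbour offsets in circular order around the centre pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros((h - 2, w - 2), dtype=int)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for b, (di, dj) in enumerate(offs):
                if g[i + di, j + dj] >= g[i, j]:
                    code |= 1 << b
            out[i - 1, j - 1] = lbp_ri_code(code)
    return out
```

A histogram of these codes over the image would then serve as the local feature vector to concatenate with the wavelet-energy features.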
Design of a Robot Controller for Peloton Formation Using Fuzzy Logic
R. Bedruz, A. Bandala, R. R. Vicerra, Ronnie S. Concepcion, E. Dadios
DOI: 10.1109/RITAPP.2019.8932858
Abstract: This paper presents a controller for optimizing a flocking and formation algorithm adapted from the flocking behavior of cycling teams, or pelotons. A fuzzy logic controller is developed for each robotic agent so that the agents can perform a peloton formation. Simulation results show that the fuzzy logic controller is slightly better than the mathematical models at maintaining a compact, optimal peloton formation, resulting in a more efficient and robust swarm system.
Citations: 8
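The paper does not reproduce its rule base here, but the shape of such a controller can be sketched. Below is a minimal single-input Mamdani-style example, with hypothetical triangular membership functions over the gap error to the rider ahead and centroid defuzzification into a speed adjustment:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def peloton_speed_adjust(gap_error):
    """Illustrative fuzzy rule base (not the authors' actual rules):
      gap too small -> slow down (-1)
      gap good      -> hold      ( 0)
      gap too large -> speed up  (+1)
    Rules are aggregated by centroid (weighted-average) defuzzification."""
    mu_close = tri(gap_error, -2.0, -1.0, 0.0)
    mu_good  = tri(gap_error, -1.0,  0.0, 1.0)
    mu_far   = tri(gap_error,  0.0,  1.0, 2.0)
    num = -1.0 * mu_close + 0.0 * mu_good + 1.0 * mu_far
    den = mu_close + mu_good + mu_far
    return num / den if den > 0 else 0.0
```

Each agent would run such a controller per axis; the smooth blending between rules is what lets the swarm hold a tight formation without the oscillation a hard threshold controller would produce.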
BALD-VAE: Generative Active Learning based on the Uncertainties of Both Labeled and Unlabeled Data
Sun-Kyung Lee, Jong-Hwan Kim
DOI: 10.1109/RITAPP.2019.8932813
Abstract: Deep learning has shown outstanding performance on real-world problems, but acquiring enough labeled data to train a model remains an open issue, since manually labeling data is time-consuming and costly. Active learning is one approach to this problem; among its many variants, pool-based and generative methods are widely studied. In uncertainty pool-based methods, a small labeled set and a large unlabeled set are given: a model is trained on the labeled set, observes the unlabeled set, and ranks the unlabeled data by uncertainty to select the most uncertain samples. In generative methods, a generative model produces informative samples. Previous studies of uncertainty pool-based active learning, however, did not consider the uncertainty of the labeled data. We therefore propose Bayesian active learning by disagreement with a variational autoencoder (BALD-VAE), which considers the uncertainty of the labeled data when generating informative samples. While broadly following uncertainty pool-based active learning with BALD, the proposed algorithm also uses the idea of generative active learning to generate informative data with a VAE; the generated data then complement the highly uncertain labeled data. To demonstrate its effectiveness, the proposed method is tested on the MNIST and CIFAR-10 data sets and shown to outperform previous algorithms.
Citations: 2
Orientation Correction for Hector SLAM at Starting Stage
Weichen Wei, B. Shirinzadeh, Shunmugasundar Esakkiappan, M. Ghafarian, A. Al-Jodah
DOI: 10.1109/RITAPP.2019.8932722
Abstract: Hector simultaneous localisation and mapping (SLAM) is a popular mapping approach that requires only a Light Detection and Ranging (LiDAR) sensor and uses previous scan results to estimate the current system state. However, Hector SLAM suffers from serious drift in the starting stage. This does not affect the mapping itself but significantly interferes with future pose estimation of the robot: because each pose is estimated from the previous one, the initial drift is carried forward and results in a random rotation and translation of the map frame relative to ground-truth frames. This research uses a reference frame to locate the robot and correct its orientation and position during the starting period of Hector SLAM, using Point-to-Line Iterative Closest Point (PL-ICP). By comparing the trajectory from the reference frame with the trajectory generated by Hector SLAM, the translation and rotation caused by the initial jitter can be estimated. The map and current poses of the Hector node are then rotated and translated accordingly to re-align the mapping frame with the ground-truth frame.
Citations: 4
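The correction step above amounts to estimating the rigid transform (rotation R, translation t) between two matched 2-D trajectories. As a simplified sketch, assuming pose correspondences are already known (the paper obtains them via PL-ICP), the least-squares solution is the 2-D Kabsch/Procrustes alignment:

```python
import numpy as np

def align_2d(slam_xy, ref_xy):
    """Least-squares rigid transform (R, t) mapping SLAM trajectory
    points onto matched reference-frame points, via the 2-D
    Kabsch/Procrustes solution.  Inputs: (N, 2) arrays of poses
    sampled at the same times."""
    P = np.asarray(slam_xy, float)
    Q = np.asarray(ref_xy, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Applying the recovered (R, t) to the map and to the current Hector poses re-aligns the mapping frame with the ground-truth frame, which is exactly the re-alignment the abstract describes.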
Millimeter-Wave Radar and RGB-D Camera Sensor Fusion for Real-Time People Detection and Tracking
Natnael S. Zewge, Youngmin Kim, Jintae Kim, Jong-Hwan Kim
DOI: 10.1109/RITAPP.2019.8932892
Abstract: One key aspect of modern robotics research is developing agents' perceptual capabilities: robots need to understand their surroundings in order to reason about a given situation, and chief among the areas of perception is the detection and tracking of people. In this work we employ a millimeter-wave radar and imaging sensor fusion approach to pedestrian detection and tracking. We perform experiments in a variety of settings (single and multi-target, varying illumination, varying distances and fields of view, dense and light clutter, and through-the-wall tracking). Our results show that our fusion and tracking architecture is far superior to camera-only systems in terms of accuracy and added functionality. Our implementation mitigates the effects of occlusions (including wooden walls), blurry images, obscured lenses, and field-of-view limitations.
Citations: 8
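The abstract does not spell out the fusion rule, but the robustness it describes (leaning on radar when the camera is blurred or occluded, and vice versa) is the hallmark of variance-weighted fusion. A deliberately minimal, hypothetical sketch for a single scalar state such as target range:

```python
def fuse_measurements(z_radar, var_radar, z_cam, var_cam):
    """Inverse-variance weighted fusion of two noisy scalar
    measurements (e.g. radar range vs. camera depth).  When one
    sensor degrades (blur, occlusion, wall), its variance grows and
    the fused estimate automatically leans on the other sensor.
    This is an illustrative rule, not the paper's actual tracker."""
    w_r = 1.0 / var_radar
    w_c = 1.0 / var_cam
    z = (w_r * z_radar + w_c * z_cam) / (w_r + w_c)
    var = 1.0 / (w_r + w_c)
    return z, var
```

Note the fused variance is always below either input variance, which is why fusion helps even when both sensors are healthy.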
Object Detection for Similar Appearance Objects Based on Entropy
Minjeong Ju, Sangkeun Moon, C. Yoo
DOI: 10.1109/RITAPP.2019.8932791
Abstract: To detect objects with similar appearance more accurately, we propose an object detection algorithm with an entropy loss. Applying the entropy loss makes the detector's class predictions for detected bounding boxes more robust, with high score probabilities, and also decreases the confidence loss; detection performance on similar objects is therefore improved. We reconstructed a dataset from two previous datasets to evaluate our method, ran experiments, and obtained a high performance gain. In addition, we analyzed the score distribution of detected objects and the other loss terms to observe the effects of applying the entropy loss.
Citations: 2
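The exact loss formulation is not given in the abstract, but an entropy term that sharpens per-box class distributions is typically the mean Shannon entropy of the softmax scores, added to the detection loss. A hedged numpy sketch of that term alone:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - np.max(logits, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy_loss(logits, eps=1e-12):
    """Mean entropy of the predicted class distributions of detected
    boxes.  Minimising this term (alongside the usual detection
    losses) pushes per-box class scores toward confident, low-entropy
    predictions; an assumed form, not the paper's verbatim loss."""
    p = softmax(np.asarray(logits, dtype=float))
    return float(np.mean(-np.sum(p * np.log(p + eps), axis=-1)))
```

For boxes covering similar-looking classes, the gradient of this term penalizes the flat "could be either" distribution that standard confidence losses tolerate.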
Optimal Path Planning of Automated Guided Vehicle using Dijkstra Algorithm under Dynamic Conditions
Sungkwan Kim, Hojun Jin, Minah Seo, Dongsoo Har
DOI: 10.1109/RITAPP.2019.8932804
Abstract: This paper presents optimal path planning that minimizes the energy consumption and operation time of an automated guided vehicle (AGV) under dynamic operating conditions, on a graph containing random slopes and distances. Conveying loads to desired destinations by AGV is common practice in logistics centers. Loads of varying mass are transferred to destinations in a graph composed of vertices and edges, where the slopes and distances used to compute edge weights are randomly assigned between vertex pairs. A tractive-force model of the AGV is developed and applied to the path-planning method. The mass change that occurs when the AGV deposits a load at a vertex is used in the energy-consumption calculation, and the edge weights are determined from the AGV's mass variation and road conditions. Dijkstra's algorithm is then applied to the weighted graph to obtain an optimal path for the AGV. The proposed approach demonstrates minimized energy consumption and improved operation time along the optimal path.
Citations: 23
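The pipeline above (slope- and mass-dependent edge weights, then a shortest-path search) can be sketched directly. The tractive-force constants below are illustrative placeholders, not values from the paper:

```python
import heapq
import math

def edge_energy(distance, slope_deg, mass, crr=0.015, g=9.81):
    """Illustrative tractive-energy weight for one edge: rolling
    resistance plus the gravity component along the slope, clamped at
    zero so downhill edges cost nothing rather than going negative
    (Dijkstra requires non-negative weights)."""
    theta = math.radians(slope_deg)
    force = mass * g * (crr * math.cos(theta) + math.sin(theta))
    return max(force, 0.0) * distance

def dijkstra(graph, start, goal):
    """Standard Dijkstra over a dict graph {u: [(v, weight), ...]};
    returns (cost, path), or (inf, []) if goal is unreachable."""
    pq = [(0.0, start, [start])]
    done = set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == goal:
            return cost, path
        if u in done:
            continue
        done.add(u)
        for v, w in graph.get(u, []):
            if v not in done:
                heapq.heappush(pq, (cost + w, v, path + [v]))
    return math.inf, []
```

In the paper's setting the graph is rebuilt with `edge_energy` weights after each delivery, since dropping a load changes the AGV's mass and hence every remaining edge cost.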
Conversion of Body Temperature from Skin Temperature using Neural Network for Smart Band
Young-Tae Kwak, Jiwon Yang, Yongduck You
DOI: 10.1109/RITAPP.2019.8932736
Abstract: As smart bands gradually develop, their functions are diversifying. Among these functions, body temperature measurement should be accurate, but in practice it is not. This paper therefore proposes a method of converting the skin temperature measured by an infrared sensor into body temperature using a neural network. The proposed method can be applied to hardware modules in smart bands and should increase medical confidence by providing more accurate body temperature readings.
Citations: 2
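The regression the paper trains can be sketched in miniature. The example below fits a single-neuron model on synthetic, made-up skin/body pairs by gradient descent; the real network is larger and trained on measured wristband data we do not have, so treat this purely as the shape of the approach:

```python
import numpy as np

def train_temp_model(skin_t, body_t, lr=0.01, epochs=5000):
    """Fit body ~= w * skin + b by gradient descent on mean-squared
    error.  A one-neuron stand-in for the paper's neural network."""
    x = np.asarray(skin_t, dtype=float)
    y = np.asarray(body_t, dtype=float)
    # normalise the input so the gradient steps are well-conditioned
    mu, sd = x.mean(), x.std()
    xn = (x - mu) / sd
    w = b = 0.0
    for _ in range(epochs):
        err = w * xn + b - y
        w -= lr * 2.0 * np.mean(err * xn)
        b -= lr * 2.0 * np.mean(err)
    return lambda t: w * ((t - mu) / sd) + b
```

On a smart band the learned weights would be frozen and evaluated in the firmware; only the forward pass (one multiply-add per layer here) needs to run on-device.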
Development of A Virtual Environment to Realize Human-Machine Interaction of Forklift Operation
Jouh Yeong Chew, Ken Okayama, T. Okuma, M. Kawamoto, H. Onda, Norihiko Kato
DOI: 10.1109/RITAPP.2019.8932837
Abstract: This study presents an experimental concept for developing realistic Human-Machine Interaction (HMI) in a Virtual Environment (VE), together with a novel methodology for evaluating such a system. The evaluation is motivated by the need to transfer models and knowledge from the VE to the Real Environment (RE), which requires the VE to trigger user behavior similar to that in the RE. This paper applies the concept to evaluating interactions during forklift operation in the VE. First, a Virtual Reality (VR) forklift simulator is developed using motion capture and 3D reconstruction methods to mimic the HMI of real forklift operation. Then, the Dynamic Time Warping (DTW) algorithm is used for temporal evaluation of operation behaviors in the VE and RE; the DTW results (distance and correlation) serve as objective measures of the VE's fidelity during forklift operation on the simulator. The results suggest that the proposed simulator triggers operation behavior that is similar (highly correlated) to real forklift operation. The contributions of this paper are (a) a novel VR forklift simulator that realizes the interactions of a real forklift in the VE, and (b) the proposed objective measures for temporal evaluation of VE fidelity.
Citations: 3
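The DTW distance used as the fidelity measure compares two operation signals that may be locally stretched or compressed in time. The classic dynamic program is short enough to show in full (the correlation measure the authors also report is omitted):

```python
import math

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two scalar sequences via
    the classic O(len(a) * len(b)) dynamic program with absolute-
    difference local cost."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Because DTW tolerates timing differences, a VE run that performs the same maneuvers as an RE run, just slightly slower, still scores a small distance, which is exactly the property wanted when comparing operator behavior across environments.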
Gesture Recognition and Effective Interaction Based Dining Table Cleaning Robot
J. Moh, T. Kijima, Bin Zhang, Hun-ok Lim
DOI: 10.1109/RITAPP.2019.8932802
Abstract: We present a framework for a dining-table cleaning robot that detects the cleaning target and performs the corresponding cleaning task from a given instruction, without needing prior information about the target. A cleaning robot should detect objects efficiently; to enable detection without prior information, the background subtraction method is employed, based on 3D point-cloud data from an RGB-D camera. In addition to object detection, a cleaning robot should adapt its movement to the user's instructions. We therefore propose an interaction system in which the user gives instructions to the robot by gesture: a pointing gesture specifies the cleaning target, and when the information needed for the task is insufficient, the robot asks the user for more. If multiple objects are detected, the robot ranks them by their distance from the pointed coordinate, and the user can re-designate the cleaning target with preregistered gesture commands. Once the robot has collected enough information, it executes the cleaning task specified by the user.
Citations: 2
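The ranking step described above is simple enough to sketch directly. Assuming each detected object is reduced to a labeled 2-D centroid on the table plane (an assumption; the paper works from 3-D point clusters), the nearest object to the pointed coordinate becomes the default target:

```python
import math

def rank_targets(objects, pointed_xy):
    """Order detected objects by Euclidean distance from the table
    coordinate indicated by the pointing gesture.  objects is a list
    of (label, (x, y)) centroids; the first element of the result is
    the default cleaning target, and subsequent gesture commands can
    step through the rest of the ranking."""
    def dist(obj):
        x, y = obj[1]
        return math.hypot(x - pointed_xy[0], y - pointed_xy[1])
    return sorted(objects, key=dist)
```

Keeping the full ranking, rather than only the nearest object, is what makes the re-designation gestures cheap: the robot just advances to the next entry instead of re-running detection.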