2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) — Latest Publications

Towards autonomous tracking and landing on moving target
2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) Pub Date : 2016-06-06 DOI: 10.1109/RCAR.2016.7784101
Lingyun Xu, Haibo Luo
{"title":"Towards autonomous tracking and landing on moving target","authors":"Lingyun Xu, Haibo Luo","doi":"10.1109/RCAR.2016.7784101","DOIUrl":"https://doi.org/10.1109/RCAR.2016.7784101","url":null,"abstract":"The battery capacity of Unmanned Aerial Vehicle (UAV) is the main limitation, but with the rapid growth of UAV deployment in both military and civilian application, there is an urgent need to development the reliable and automated landing procedure. This paper is aim to propose a basic framework for autonomous landing on moving target for the Vertical Take-off and Landing (VTOL) UAVs. The VTOL vehicle is assumed to equip with a Global Navigation Satellite System (GNSS) system and a stereo vision system which could generate the point cloud within 20 meters. We applied a particle filter based Visual Servo in the UAV vision system to detect and track the moving the target at real time. We also combine the inertial measurement unit (IMU) data with the stereo vision based visual odometry to make the relative accurate pose estimation. The relative position, orientation and velocity to the landing area on the moving carrier is obtained by a modified optical flow method. The control method used in this framework combined tracking and approaching base on the range distance. We has applied our proposed framework on both simulation and landing task on a moving vehicle, and the result shows the efficiency and extended ability of our framework.","PeriodicalId":402174,"journal":{"name":"2016 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116101085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
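The abstract above describes a particle-filter-based visual servo for detecting and tracking the moving target. As a rough, self-contained illustration of that class of tracker (not the authors' implementation; the constant-velocity model, noise levels, and resampling rule are assumptions), a minimal 2D image-plane particle filter might look like this:

```python
import numpy as np

def particle_filter_track(measurements, n_particles=500,
                          process_std=2.0, meas_std=5.0, seed=0):
    """Track a 2D target with a constant-velocity particle filter.

    measurements : sequence of (x, y) pixel detections, with None where the
                   detector missed the target in that frame.
    Returns a list of estimated (x, y) positions, one per frame.
    """
    rng = np.random.default_rng(seed)
    # State: [x, y, vx, vy]; initialize particles around the first detection.
    first = next(m for m in measurements if m is not None)
    particles = np.zeros((n_particles, 4))
    particles[:, :2] = first + rng.normal(0, meas_std, (n_particles, 2))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []

    for z in measurements:
        # Predict: constant-velocity motion plus Gaussian process noise.
        particles[:, :2] += particles[:, 2:]
        particles += rng.normal(0, process_std, particles.shape)

        if z is not None:
            # Update: weight particles by the likelihood of the detection.
            d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
            weights *= np.exp(-0.5 * d2 / meas_std ** 2)
            weights += 1e-300                     # avoid an all-zero weight vector
            weights /= weights.sum()

        # Estimate: weighted mean of the particle positions.
        estimates.append(weights @ particles[:, :2])

        # Systematic resampling when the effective sample size drops.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            positions = (np.arange(n_particles) + rng.random()) / n_particles
            idx = np.minimum(np.searchsorted(np.cumsum(weights), positions),
                             n_particles - 1)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)

    return estimates
```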
Biomechanical analysis of yeast cell based a piezoresistive cantilever sensor
2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) Pub Date : 2016-06-06 DOI: 10.1109/RCAR.2016.7784058
Wenkui Xu, Liguo Chen, Haibo Huang, Leilei Zhang, Xiangpeng Li, Yadi Li, Lining Sun
{"title":"Biomechanical analysis of yeast cell based a piezoresistive cantilever sensor","authors":"Wenkui Xu, Liguo Chen, Haibo Huang, Leilei Zhang, Xiangpeng Li, Yadi Li, Lining Sun","doi":"10.1109/RCAR.2016.7784058","DOIUrl":"https://doi.org/10.1109/RCAR.2016.7784058","url":null,"abstract":"The biomechanical properties (Young's Modulus, stiffness, adhesion force) is important for the cell micro/nano manipulation. In this article, a piezoresistive cantilever sensor is suitable for measuring quantitatively Young's Modulus of a single yeast cell because of some advantage of high sensitivity, dynamic response, simple structure, low cost. For calibrating a piezoresistive cantilever's elastic coefficient, it is carried out through the setup designed. Between the force, the cantilever tip deflection and the output voltage follow a good linear relationship. Analyzing the recorded force curves by applying Hertz-Sneddon model allows the extraction of cell mechanical properties Young's Modulus. Young's Modulus is realized for measuring single yeast cell, yielding young's modulus values of 2.9 ± 2.2Mpa. The result indicates that Young's Modulus measured both by a piezoresistive cantilever and AFM precise can be in good consistency of order of magnitude. Investigations aiming a description of micro/nanoforce measuring of the piezoresistive cantilever sensor are conducted in micro/nano manipulation.","PeriodicalId":402174,"journal":{"name":"2016 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123762557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
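The force-curve analysis described above relies on the Hertz-Sneddon contact model to extract Young's modulus. A minimal sketch of that kind of fit is shown below, assuming a spherical tip of known radius and a Poisson ratio of 0.5; the paper's actual tip geometry, Poisson ratio, and fitting procedure are not given here, so treat the numbers as placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def hertz_sphere(delta, E, R=2.5e-6, nu=0.5):
    """Hertz model for a spherical indenter of radius R (m):
    F = (4/3) * E / (1 - nu^2) * sqrt(R) * delta^(3/2)."""
    return (4.0 / 3.0) * E / (1.0 - nu ** 2) * np.sqrt(R) * delta ** 1.5

def fit_youngs_modulus(indentation_m, force_N, tip_radius_m):
    """Fit Young's modulus (Pa) to an indentation-force curve."""
    model = lambda d, E: hertz_sphere(d, E, R=tip_radius_m)
    popt, _ = curve_fit(model, indentation_m, force_N, p0=[1e6])
    return popt[0]

# Synthetic example: a 3 MPa cell probed with a 2.5 um tip, plus sensor noise.
delta = np.linspace(0, 500e-9, 100)                      # indentation depth (m)
force = hertz_sphere(delta, 3e6, R=2.5e-6) \
        + np.random.normal(0, 2e-9, delta.size)          # force (N)
print(f"Fitted E = {fit_youngs_modulus(delta, force, 2.5e-6) / 1e6:.2f} MPa")
```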
Towards culturally aware robot navigation
2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) Pub Date : 2016-06-06 DOI: 10.1109/RCAR.2016.7784002
Xuan-Tung Truong, Y. Ou, T. Ngo
{"title":"Towards culturally aware robot navigation","authors":"Xuan-Tung Truong, Y. Ou, T. Ngo","doi":"10.1109/RCAR.2016.7784002","DOIUrl":"https://doi.org/10.1109/RCAR.2016.7784002","url":null,"abstract":"When we look towards the world of humans and robots harmonically working together in a social environment, the robots should behave in cultural norms. A culturally aware robot navigation is highly expected to enable mobile service robots to politely and respectfully navigate among humans in human-robot shared workspaces. In this paper, we present a foundation of culturally aware robot navigation for mobile service robots in a social environment. The culturally aware robot navigation system is developed by integrating extended personal spaces representing individual states and social interaction spaces representing human-robot interactions and human groups. The culturally aware robot navigation plays the role of human-aware decision making upon the conventional robot navigation system to ensure that a mobile service robot is capable of detecting and identifying social contexts and situations to culturally navigate in human appearances. Simulation results illustrate our methodological approach.","PeriodicalId":402174,"journal":{"name":"2016 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133991183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
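The navigation framework above integrates extended personal spaces and social interaction spaces into the planner. One common way to encode a personal space in a costmap is an asymmetric Gaussian centred on the person and elongated in the facing direction; the sketch below uses that generic formulation with assumed variances, not the paper's exact model.

```python
import numpy as np

def personal_space_cost(px, py, person_x, person_y, person_theta,
                        sigma_front=1.2, sigma_side=0.8, sigma_back=0.6):
    """Asymmetric Gaussian 'personal space' cost of a point (px, py)
    around a person at (person_x, person_y) facing person_theta (rad).
    The variance is larger in front of the person than behind or beside."""
    dx, dy = px - person_x, py - person_y
    # Rotate into the person's frame: x' points where the person faces.
    c, s = np.cos(-person_theta), np.sin(-person_theta)
    xr, yr = c * dx - s * dy, s * dx + c * dy
    sigma_x = sigma_front if xr >= 0 else sigma_back
    return np.exp(-(xr ** 2 / (2 * sigma_x ** 2) +
                    yr ** 2 / (2 * sigma_side ** 2)))

# Cost one metre directly in front of vs. behind a person facing +x.
print(personal_space_cost(1.0, 0.0, 0, 0, 0.0))   # ~0.71: strongly discouraged
print(personal_space_cost(-1.0, 0.0, 0, 0, 0.0))  # ~0.25: less discouraged
```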
Facial expression recognition with PCA and LBP features extracting from active facial patches
2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) Pub Date : 2016-06-06 DOI: 10.1109/RCAR.2016.7784056
Yanpeng Liu, Yu Cao, Yibin Li, Ming Liu, R. Song, Yafang Wang, Zhigang Xu, Xin Ma
{"title":"Facial expression recognition with PCA and LBP features extracting from active facial patches","authors":"Yanpeng Liu, Yu Cao, Yibin Li, Ming Liu, R. Song, Yafang Wang, Zhigang Xu, Xin Ma","doi":"10.1109/RCAR.2016.7784056","DOIUrl":"https://doi.org/10.1109/RCAR.2016.7784056","url":null,"abstract":"Facial expression recognition is an important part of Natural User Interface (NUI). Feature extraction is one important step which could contribute to fast and accurate expression recognition. In order to extract more effective features from the static images, this paper proposes an algorithm based on the combination of gray pixel value and Local Binary Patterns (LBP) features. Principal component analysis (PCA) is used to reduce dimensions of the features which are combined by the gray pixel value and Local Binary Patterns (LBP) features. All the features are extracted from the active facial patches. The active facial patches are these face regions which undergo a major change during different expressions. Softmax regression classifier is used to classify the six basic facial expressions, the experimental results on extended Cohn-Kanade (CK+) database gain an average recognition rate of 96.3% under leave-one-out cross validation method which validates every subject in the database.","PeriodicalId":402174,"journal":{"name":"2016 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134557658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
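The pipeline summarized above (gray pixel values plus LBP features from active facial patches, PCA for dimensionality reduction, softmax classification) can be sketched with standard libraries. The patch coordinates, LBP parameters, and PCA dimensionality below are illustrative placeholders rather than the paper's settings:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical active facial patches as (row, col, height, width) in a
# 96x96 aligned face image -- real patches would come from facial landmarks.
ACTIVE_PATCHES = [(20, 15, 24, 24), (20, 57, 24, 24), (60, 32, 28, 32)]

def extract_features(face_img):
    """Concatenate gray-pixel values and uniform-LBP histograms from each
    active patch of one aligned uint8 grayscale face image."""
    feats = []
    for r, c, h, w in ACTIVE_PATCHES:
        patch = face_img[r:r + h, c:c + w]
        feats.append(patch.ravel().astype(float) / 255.0)      # gray values
        lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats.append(hist)                                      # LBP histogram
    return np.concatenate(feats)

def train_classifier(face_images, labels):
    """Reduce the combined features with PCA, then fit softmax regression
    (multinomial logistic regression) over the six basic expressions."""
    X = np.array([extract_features(img) for img in face_images])
    clf = make_pipeline(PCA(n_components=50),
                        LogisticRegression(max_iter=1000))
    clf.fit(X, labels)
    return clf
```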
Integration of a stereo matching algorithm on chip for real-time 3-D sensing
2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) Pub Date : 2016-06-06 DOI: 10.1109/RCAR.2016.7784065
Baowen Chen, Jun Jiang, Jun Cheng, Jie Chen
{"title":"Integration of a stereo matching algorithm on chip for real-time 3-D sensing","authors":"Baowen Chen, Jun Jiang, Jun Cheng, Jie Chen","doi":"10.1109/RCAR.2016.7784065","DOIUrl":"https://doi.org/10.1109/RCAR.2016.7784065","url":null,"abstract":"Compared to active triangulation systems, the binocular vision ones have the superiority of low cost and small size for real-time 3-D sensing. The guided-filter driven stereo matching is state-of-the-art from the viewpoints of denseness and edge-preserving. However, as most of stereo matching algorithms, the high computation and storage prevent the guided-filter driven one from being integrated on chip. In order to overcome the engineering problem, this paper analyzes how to optimize the guided-filter driven algorithm, and compares the performances among the optimized method, its original and other most important ones. Thorough experimental results validate that the optimized method is most suitable to be integrated on chip as the core of a compact binocular vision product. Besides, this paper describes the design of the algorithm on the chip system briefly.","PeriodicalId":402174,"journal":{"name":"2016 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124887944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
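The abstract concerns guided-filter-driven cost aggregation for stereo matching. The sketch below shows the surrounding cost-volume pipeline (per-disparity matching cost, local aggregation, winner-take-all), but substitutes a plain box filter for the guided filter to keep it short; it illustrates the general approach, not the paper's optimized on-chip algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def stereo_disparity(left, right, max_disp=48, win=9):
    """Winner-take-all disparity from a filtered cost volume.

    The matching cost is the absolute gray-level difference; aggregation
    here uses a box filter for brevity, whereas a guided-filter-driven
    method applies an edge-preserving guided filter to the same cost volume.
    left, right : rectified float grayscale images of equal shape
                  (left is the reference view).
    """
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # Shift the right image by d pixels and compare with the left one.
        diff = np.abs(left[:, d:] - right[:, :w - d])
        cost[d, :, d:] = uniform_filter(diff, size=win)   # local aggregation
    return np.argmin(cost, axis=0)                        # disparity map
```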
A wearable sensor system for knee adduction moment measurement
2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) Pub Date : 2016-06-06 DOI: 10.1109/RCAR.2016.7783992
Yang Shen, Tao Liu, Qingguo Li, J. Yi, Xiaoyu Xie, Bo Wen, Y. Inoue
{"title":"A wearable sensor system for knee adduction moment measurement","authors":"Yang Shen, Tao Liu, Qingguo Li, J. Yi, Xiaoyu Xie, Bo Wen, Y. Inoue","doi":"10.1109/RCAR.2016.7783992","DOIUrl":"https://doi.org/10.1109/RCAR.2016.7783992","url":null,"abstract":"Knee adduction moment is a key parameter that links with the severity of knee osteoarthritis. However, assessment of the knee adduction moment is commonly implemented through the stationary measurement systems in a gait laboratory. The purpose of this study is to develop a wearable sensor system that can be used to estimate the knee adduction moment. A wearable sensor sock, composed of six pressure sensors, were developed using the pressure-sensitive electric conductive rubber. Based on the sensor sock measurements and the reference knee adduction moment obtained from the motion capture system (Vicon), we trained a neural network model to estimate the knee adduction moment. In our validation experiments on healthy subjects, the knee adduction moment can be accurately estimated with the trained neural network model.","PeriodicalId":402174,"journal":{"name":"2016 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132524520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
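The study above trains a neural network to map the six-channel pressure-sock readings to the reference knee adduction moment from motion capture. A minimal regression sketch under assumed network size and preprocessing (the paper does not specify either) could be:

```python
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def train_kam_model(pressure_readings, reference_kam):
    """Map six plantar-pressure channels to the knee adduction moment.

    pressure_readings : (n_samples, 6) array from the sensor sock.
    reference_kam     : (n_samples,) moments from the motion capture system.
    """
    model = make_pipeline(
        StandardScaler(),                      # sensors have different ranges
        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    )
    # Report cross-validated R^2 before fitting on all of the data.
    scores = cross_val_score(model, pressure_readings, reference_kam, cv=5)
    print(f"mean cross-validated R^2: {scores.mean():.3f}")
    return model.fit(pressure_readings, reference_kam)
```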
Finger-eye: A wearable text reading assistive system for the blind and visually impaired
2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) Pub Date : 2016-06-06 DOI: 10.1109/RCAR.2016.7784012
Zhiming Liu, Yudong Luo, José Cordero, Na Zhao, Yantao Shen
{"title":"Finger-eye: A wearable text reading assistive system for the blind and visually impaired","authors":"Zhiming Liu, Yudong Luo, José Cordero, Na Zhao, Yantao Shen","doi":"10.1109/RCAR.2016.7784012","DOIUrl":"https://doi.org/10.1109/RCAR.2016.7784012","url":null,"abstract":"This paper presents our recent research work in developing a portable and refreshable text reading system, called Finger-eye. In the system, a small camera is added to the fingertip-electrode interface of the current Electro-tactile Braille Display and placed on a blind person's finger to continuously process images using a developed rapid optical character recognition (OCR) method. This will allow translation of text to braille or audio with natural movement as if they were reading any Braille Display or book. The braille system that will be used is a portable electrical-based braille system that will eliminate the problems associated with refreshable mechanical braille displays. The goal of the research is to aid the blind and visually impaired (BVI) with a portable means to translate any text to braille, whether in the digital realm or physically, on any surface.","PeriodicalId":402174,"journal":{"name":"2016 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128748384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
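The Finger-eye pipeline couples a fingertip camera, rapid OCR, and braille or audio output. As a desktop stand-in for the OCR-to-audio path only (the paper's own OCR method and electro-tactile braille driver are not public), one might use off-the-shelf libraries such as pytesseract and pyttsx3:

```python
from PIL import Image
import pytesseract     # requires the Tesseract OCR engine to be installed
import pyttsx3

def read_text_aloud(image_path):
    """OCR the text in an image and speak it -- a desktop stand-in for the
    camera-to-audio path of a fingertip reading assistant."""
    text = pytesseract.image_to_string(Image.open(image_path)).strip()
    if not text:
        return ""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()
    return text

if __name__ == "__main__":
    # "page_snippet.png" is a hypothetical sample image of printed text.
    print(read_text_aloud("page_snippet.png"))
```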
Emotion recognition using fixed length micro-expressions sequence and weighting method
2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) Pub Date : 2016-06-06 DOI: 10.1109/RCAR.2016.7784067
Mengting Chen, H. T. Ma, Jie Li, Huanhuan Wang
{"title":"Emotion recognition using fixed length micro-expressions sequence and weighting method","authors":"Mengting Chen, H. T. Ma, Jie Li, Huanhuan Wang","doi":"10.1109/RCAR.2016.7784067","DOIUrl":"https://doi.org/10.1109/RCAR.2016.7784067","url":null,"abstract":"Facial micro-expressions are brief involuntary spontaneous facial expressions which can reveal suppressed affect. However, previous studies ignored that different face regions have different contributions to micro-expression recognition. In this study, we proposed a method which employs weight of feature and weighted fuzzy classification to enhance the effective information in micro-expression sequences. The proposed method achieved facial micro-expression recognition based on the combination of an outstanding spatio-temporal descriptor HOG3D for action classification and weighting method together. The results demonstrated that the spatio-temporal descriptor HOG3D and the proposed weighting method have superior performance and achieve very promising results in micro-expression recognition.","PeriodicalId":402174,"journal":{"name":"2016 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133605884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
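The key idea above is that face regions contribute unequally, so per-region descriptors are weighted before classification. A bare-bones version of that feature-weighting step (with made-up region weights, and omitting both the HOG3D extraction and the weighted fuzzy classifier) is:

```python
import numpy as np

def weight_region_descriptors(region_descriptors, region_weights):
    """Scale each facial-region descriptor by its weight and concatenate.

    region_descriptors : list of 1-D arrays, one spatio-temporal descriptor
                         (e.g. HOG3D) per facial region of a sequence.
    region_weights     : list of non-negative floats, one per region, with
                         larger values for regions (eyes, mouth) that
                         contribute more to micro-expression recognition.
    """
    w = np.asarray(region_weights, dtype=float)
    w = w / w.sum()                          # normalize so weights sum to 1
    return np.concatenate([wi * d for wi, d in zip(w, region_descriptors)])

# Example: three regions, eye regions weighted twice as much as the cheek.
descs = [np.random.rand(96), np.random.rand(96), np.random.rand(96)]
feature = weight_region_descriptors(descs, [2.0, 2.0, 1.0])
```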
The implementation of augmented reality in a robotic teleoperation system
2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) Pub Date : 2016-06-06 DOI: 10.1109/RCAR.2016.7784014
Yuan Lin, Shuang Song, M. Meng
{"title":"The implementation of augmented reality in a robotic teleoperation system","authors":"Yuan Lin, Shuang Song, M. Meng","doi":"10.1109/RCAR.2016.7784014","DOIUrl":"https://doi.org/10.1109/RCAR.2016.7784014","url":null,"abstract":"The Wheelchair Mounted Robotic Manipulators (WMRM) are prevalently used to help the elderly or people with spine injury to conduct Activities of Daily Living (ADL). In this project, we extended its working range by making such system teleoperatable. In addition, we enhanced its efficiency by providing more natural and intuitive method for manipulation. With Augmented Reality (AR) technology, our system could present the reconstructed 3D scene of remote area at local station in an interactive way. Besides adjusting perspective and scale of the display, the gesture operating on virtual object would also be converted into control commands of robotic arm in real-time. Series of experiments have been carried out to demonstrate the function of our system.","PeriodicalId":402174,"journal":{"name":"2016 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115141103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Optimal path planning for mobile manipulator based on manipulability and localizability
2016 IEEE International Conference on Real-time Computing and Robotics (RCAR) Pub Date : 2016-06-06 DOI: 10.1109/RCAR.2016.7784104
Cheng-liang Hu, Weidong Chen, Jingchuan Wang, Hesheng Wang
{"title":"Optimal path planning for mobile manipulator based on manipulability and localizability","authors":"Cheng-liang Hu, Weidong Chen, Jingchuan Wang, Hesheng Wang","doi":"10.1109/RCAR.2016.7784104","DOIUrl":"https://doi.org/10.1109/RCAR.2016.7784104","url":null,"abstract":"This paper presents an approach to uncertainty-optimal path planning for a mobile manipulator based on localizability and guarantee manipulability simultaneously. Information matrix is used to indicate the localizability or localization uncertainty in a known map and the cubic Bezier spline is used to represent the path for platform to check. Given a path of end-effector, this algorithm guarantees the manipulability greater than a given value, chooses the uncertainty of end-effector as the optimization index, uses particle swarm optimization algorithm to find an optimal path for platform to track. Simulations and experiments are presented that show the algorithm has the capability to reduce the uncertainty of platform and end-effector.","PeriodicalId":402174,"journal":{"name":"2016 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127563637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
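The planner above keeps manipulability above a threshold along a cubic Bezier platform path. The sketch below shows both ingredients for a planar two-link arm, using the standard Yoshikawa measure w = sqrt(det(J Jᵀ)) as the manipulability index (the paper does not spell out its exact formula, so this is an assumption) and a sampled cubic Bezier curve:

```python
import numpy as np

def manipulability(jacobian):
    """Yoshikawa's manipulability measure w = sqrt(det(J J^T)); a planner of
    the kind described above keeps this above a threshold along the path."""
    J = np.asarray(jacobian, dtype=float)
    return np.sqrt(np.linalg.det(J @ J.T))

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier curve defined by four 2-D control points,
    the representation used here for the platform path."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Planar 2-link arm with unit link lengths at q = (0.3, 0.8) rad.
q1, q2 = 0.3, 0.8
J = np.array([[-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
              [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)]])
print(f"manipulability: {manipulability(J):.3f}")      # equals |sin(q2)| here

path = cubic_bezier(np.array([0, 0]), np.array([1, 0]),
                    np.array([1, 1]), np.array([2, 1]))
```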