2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS): Latest Publications

Hardware Accelerated Inverse Kinematics for Low Power Surgical Manipulators
2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205769
Oleksii M. Tkachenko, K. Song
Robotic minimally invasive surgery (MIS) is performed through small incisions, reducing wound-healing time, associated pain, and the risk of infection. We refactor the control pipeline for robot-assisted MIS and accelerate its most time-consuming stage: the inverse kinematics (IK) calculation. Field-programmable gate array (FPGA) technology is used to develop a low-power hardware IK accelerator, and a set of optimization techniques reduces the design's size so that it fits onto real hardware. The accelerator executes IK in approximately 30 microseconds. The system architecture runs on a heterogeneous CPU-FPGA platform. Single-point and multi-point architectures are developed; the multi-point architecture overcomes the communication overhead between platforms and achieves a higher output rate. The implementation is tested with 16-, 24-, and 32-bit fixed-point numbers, with an average computation error of 0.07 millimeters for the 32-bit architecture. Experimental results validate and verify the proposed solution.
Citations: 0
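The abstract's trade-off between fixed-point word width and millimeter-scale position error can be illustrated with a toy example. This is a hedged sketch, not the authors' FPGA design: it uses an analytic 2-link planar IK solution with invented link lengths and assumes 4 integer bits per word, simply to show why 32-bit words shrink the error the paper reports.

```python
# Illustrative sketch (not the paper's implementation): quantize a 2-link
# planar IK computation to 16/24/32-bit fixed point and measure the
# resulting end-effector position error in millimeters.
import math

def to_fixed(x, frac_bits):
    """Round x onto a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def ik_2link(x, y, l1=0.3, l2=0.25, frac_bits=None):
    """Planar 2-link IK; optionally quantize every intermediate value."""
    q = (lambda v: to_fixed(v, frac_bits)) if frac_bits is not None else (lambda v: v)
    x, y = q(x), q(y)
    c2 = q((x*x + y*y - l1*l1 - l2*l2) / (2*l1*l2))
    t2 = q(math.acos(max(-1.0, min(1.0, c2))))      # elbow angle
    t1 = q(math.atan2(y, x) - math.atan2(l2*math.sin(t2), l1 + l2*math.cos(t2)))
    return t1, t2

def fk(t1, t2, l1=0.3, l2=0.25):
    """Forward kinematics, used to measure the IK error."""
    return (l1*math.cos(t1) + l2*math.cos(t1+t2),
            l1*math.sin(t1) + l2*math.sin(t1+t2))

target = (0.32, 0.21)                               # reachable point, metres
for bits in (16, 24, 32):
    frac = bits - 4                                 # assume 4 integer bits
    t1, t2 = ik_2link(*target, frac_bits=frac)
    px, py = fk(t1, t2)
    err_mm = math.hypot(px - target[0], py - target[1]) * 1000
    print(f"{bits}-bit: error = {err_mm:.4f} mm")
```

The wider words place intermediate values on a finer grid, so the position error drops by orders of magnitude between 16 and 32 bits, mirroring the sub-0.1 mm figure the paper reports for its 32-bit architecture.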
SLAM Configuration from Video Images for Remote Omni-direction Vehicle Platform
2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205779
P. I. Chang, Y. Shi, S. C. Fan-Chiang, C. Lan
This paper attempts to fully reconstruct a local map for robotic vehicle platforms using a commercial 3D camera. The reconstructed SLAM output is verified against the global positions of the surroundings, known a priori. The omni-directional vehicle itself is designed and built in-house to make maximal use of all signals available from the system. The 2D localization mapping error is estimated at 5%, showing promise for this approach.
Citations: 1
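A percentage mapping error like the 5% figure above is typically obtained by comparing estimated landmark positions against surveyed ground truth. The following sketch uses invented coordinates and a plausible (but assumed) metric, mean landmark error normalized by the map's diagonal extent; the paper may define its metric differently.

```python
# Hedged sketch (coordinates and metric are illustrative assumptions):
# compute a 2D mapping error percentage from estimated vs. a-priori
# surveyed landmark positions.
import math

ground_truth = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.5), (0.0, 1.5)]      # metres
estimated    = [(0.02, -0.01), (1.95, 0.04), (2.06, 1.47), (-0.03, 1.55)]

def mapping_error_pct(gt, est):
    """Mean per-landmark error, as a percentage of the map's diagonal."""
    errs = [math.hypot(gx - ex, gy - ey)
            for (gx, gy), (ex, ey) in zip(gt, est)]
    xs = [p[0] for p in gt]
    ys = [p[1] for p in gt]
    diag = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    return 100.0 * (sum(errs) / len(errs)) / diag

print(f"2D mapping error: {mapping_error_pct(ground_truth, estimated):.1f}%")
```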
Artificial Intelligence and Internet of Things for Robotic Disaster Response
2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205794
Min-Fan Ricky Lee, Tzu-Wei Chien
After the Fukushima nuclear disaster and the Wenchuan earthquake, the relevant government agencies recognized the urgent need for disaster-response robots. Taiwan experiences many natural and man-made disasters, and it is often impossible to dispatch personnel to search or explore immediately. This project proposes an Artificial Intelligence of Things (AIoT; Artificial Intelligence + Internet of Things) architecture that coordinates ground, surface, aerial, and underwater swarm robots for disaster response. The swarm robots collect environmental big data from the disaster site, which is transmitted through the Internet of Things from a field workstation to the cloud for deep learning model training and verification. The trained model is sent back via the Internet of Things to the field workstation and on to the swarm robots for continued on-site object classification, continuously verifying identifications against the environment and making the best response decisions. The related tasks include monitoring and search and rescue of targets.
Citations: 9
A Hybrid Network for Facial Age Progression and Regression Learning
2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205788
Rui-Cang Xie, G. Hsu
Facial age transformation is an attractive application for an entertainment or amusement robot: the robot can transform an input face into the same face at different ages. We propose a new algorithm for age transformation. Owing to recent progress in state-of-the-art deep learning, facial age progression and regression has become an attractive research topic in computer vision. Many existing approaches require paired data, i.e., face images of the same person at different ages. As collecting such paired datasets is expensive, emerging approaches have been proposed that learn the facial age manifold from unpaired data. However, the images generated by these approaches are weak at reproducing some age traits, for example wrinkles and creases. We propose a hybrid network composed of a generator and two discriminators. The generator is trained to disentangle age from the identity of the face, so that it can generate a face with the same identity as the input face but at a different age. One discriminator handles multiple tasks: distinguishing real from fake (generated) faces and classifying the identities and ages of faces. The other discriminator constrains the latent space so that the generated images are more realistic. Experiments show that the proposed network generates better facial age images, with more age traits, than other state-of-the-art approaches.
Citations: 0
Skeleton-based Hand Gesture Recognition for Assembly Line Operation
2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205781
Chao-Lung Yang, Wen-Ting Li, Shang-Che Hsu
This research develops hand gesture recognition (HGR) by combining OpenPose with the Spatial-Temporal Graph Convolutional Network (ST-GCN) to classify operators' assembly motions. With hand gestures defined by five types of therbligs, the network model was trained to recognize human hand gestures. Although the recognition accuracy in preliminary experiments is 78.3%, leaving room for improvement, the structure of the proposed network establishes a foundation for future work.
Citations: 0
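The core operation ST-GCN stacks over time is a spatial graph convolution on the skeleton. The following is a minimal sketch, not the authors' code: a toy 5-joint hand skeleton (wrist plus four fingertips, an invented topology) and a single row-normalized graph-convolution layer over per-joint features such as OpenPose's (x, y, confidence) output.

```python
# Minimal sketch (toy skeleton, random weights): one spatial graph
# convolution over a 5-joint hand graph, the building block ST-GCN
# repeats across time to classify gesture sequences.
import numpy as np

# Toy skeleton: wrist (joint 0) connected to four fingertip joints 1..4.
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
N = 5
A = np.eye(N)                          # adjacency with self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D_inv = np.diag(1.0 / A.sum(axis=1))
A_hat = D_inv @ A                      # row-normalized propagation matrix

X = np.random.default_rng(0).normal(size=(N, 3))   # per-joint (x, y, conf)
W = np.random.default_rng(1).normal(size=(3, 8))   # learned weights (random here)

# One GCN layer: aggregate each joint's neighbours, project, apply ReLU.
H = np.maximum(A_hat @ X @ W, 0.0)
print(H.shape)                         # 8-dim feature per joint
```

A full ST-GCN alternates this spatial step with temporal convolutions over the frame axis, which is how a sequence of skeletons becomes a gesture classification.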
An Automated Biometric Identification System Using CNN-Based Palm Vein Recognition
2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS) Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205778
Sin-Ye Jhong, Po-Yen Tseng, Natnuntnita Siriphockpirom, Chih-Hsien Hsia, Ming-Shih Huang, K. Hua, Yung-Yao Chen
Automated biometric identification systems (ABIS) have wide applications in automatic identification and data capture (AIDC), including automatic security checking and verifying personal identity to prevent information disclosure or identity fraud. With advances in biotechnology, identification systems based on biometrics have emerged in the market; these systems require high accuracy and ease of use. Palm vein identification is a type of biometric that identifies palm vein features. Compared with other features, palm vein recognition provides accurate results and has received considerable attention. We developed a novel high-performance, non-contact palm vein recognition system, using high-performance adaptive background filtering to obtain palm vein images of the region of interest. We then used a modified convolutional neural network to determine the best recognition model through training and testing. Finally, the developed system was implemented on the low-level embedded Raspberry Pi platform with cloud computing technology. The results show that the system achieves an accuracy of 96.54%.
Citations: 10
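The "adaptive background filtering" step the abstract mentions can be illustrated with a much simpler stand-in. This sketch is not the paper's method: it builds a synthetic bright palm ROI with a darker vein band (all values invented) and keeps pixels noticeably darker than the image mean, the kind of preprocessing that isolates vein-like structures before a CNN sees the ROI.

```python
# Hedged sketch (synthetic data, simplified filter): mark pixels darker
# than a mean-based threshold as vein candidates within a palm ROI.
import numpy as np

rng = np.random.default_rng(42)
roi = rng.uniform(120, 160, size=(16, 16))   # synthetic bright palm ROI
roi[6:10, 2:14] -= 60.0                      # darker "vein" band

def vein_mask(img, offset=10.0):
    """Keep pixels noticeably darker than the image mean."""
    return img < (img.mean() - offset)

mask = vein_mask(roi)
print(mask.sum(), "candidate vein pixels")
```

A real system would use a locally adaptive threshold and morphological cleanup rather than a single global mean, but the principle, separating dark vein structure from the brighter palm background before classification, is the same.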