2022 2nd International Conference on Robotics, Automation and Artificial Intelligence (RAAI): Latest Publications

General Purpose Task and Motion Planning for Human-Robot Teams
Liliana Antão, Nuno Costa, Gil Gonçalves
DOI: 10.1109/RAAI56146.2022.10092974
Published: 2022-12-09
Abstract: In the current industrial environment, product customization and process flexibility have taken a central role. Human-robot teams answer this demand by coupling human and robot skills. Recent developments in task planning often overlook its first step, the discretization and formalization of tasks, which is mostly performed manually. Furthermore, the resulting task plans alone may not translate into feasible solutions because of environment constraints, so motion planning is essential for evaluating the validity of tasks and obtaining appropriate outcomes. To address this problem, a task-motion planning framework is proposed. The implementation uses a bottom-up approach to formalize the task, based on an input that holds an abstraction of the desired outcome. Planning graphs are then generated from the different formalizations; the task plans obtained from these graphs are scrutinized by a motion planning module that simulates the robotic movements. The output includes the most time-efficient viable plans. This approach was tested on a furniture assembly case study, with results taken from two prototypical objects of different levels of complexity suggested by that case study.
Citations: 0
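The abstract above describes generating planning graphs and selecting the most time-efficient viable plan. The paper's own framework is not reproduced here; as an illustration of the kind of search involved, the sketch below runs Dijkstra's algorithm over a tiny hypothetical planning graph whose states, subtask names and edge durations are all invented for the example (in the paper, durations would come from the motion-planning module's simulation).

```python
import heapq

def best_plan(graph, start, goal):
    """Dijkstra over a planning graph whose edges are (next_state, duration).

    Returns (total_time, [states...]) for the most time-efficient plan,
    or (float('inf'), []) if the goal is unreachable.
    """
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, state, path = heapq.heappop(queue)
        if state == goal:
            return cost, path
        if state in seen:
            continue
        seen.add(state)
        for nxt, duration in graph.get(state, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + duration, nxt, path + [nxt]))
    return float('inf'), []

# Hypothetical furniture-assembly fragment: states stand for sets of
# completed subtasks; durations are illustrative, not from the paper.
graph = {
    'start':         [('legs_attached', 40.0), ('top_prepared', 25.0)],
    'legs_attached': [('assembled', 30.0)],
    'top_prepared':  [('legs_attached', 35.0)],
    'assembled':     [],
}
cost, plan = best_plan(graph, 'start', 'assembled')
```

In a full task-motion pipeline, each edge would only be kept if the motion planner confirms the corresponding robot movement is feasible.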
End-to-End Video Quality Assessment with Deep Neural Networks
Alejandro Villena-Rodríguez, Carlos Cárdenas-Angelat, M. Aguayo-Torres
DOI: 10.1109/RAAI56146.2022.10092980
Published: 2022-12-09
Abstract: The explosion of media consumption over the last years has created a highly competitive landscape that forces the companies involved to care about the end-user's quality of service. However, current methods fail to measure the quality of experience in a manner close to the end-user's. This work presents an artificial-intelligence-based system able to assess video quality end to end, in a way similar to what a user would do. For this purpose, a novel method for generating samples that reflect the quality of video signals under real network conditions has been implemented. Additionally, a hybrid neural network was developed, comprising convolutional and recurrent layers in charge of extracting spatial and temporal features, respectively. Results show that the proposed system generates reliable estimations: it reaches precision values of up to 80, compared with human precision of up to 89 on the same task. Moreover, such results can be achieved without long training processes or large datasets.
Citations: 0
Recognizing Phases in Batch Production via Interactive Feature Extraction
Nick Just, Chengru Song, E. Haffner, M. Gärtler
DOI: 10.1109/RAAI56146.2022.10092982
Published: 2022-12-09
Abstract: Batch production is a manufacturing process in which the components of a product are processed step by step. Each step can be considered a batch phase, and each batch phase can be distinguished from the process time-series data. Identifying batch phases from production signals reveals useful information for analyzing production quality and optimizing the control and monitoring of the process. In many cases, neither the start and end timestamps of batch phases nor the labels of batch operations are recorded in historical data. The conventional machine-learning approach to determining batch phases in such a situation involves three major steps: 1) segmenting the time-series data into samples that correspond to batch phases, 2) labeling the obtained samples, and 3) building a classifier with the labeled samples. Each step can be very tedious in real-world applications: segmentation algorithms often need parameter tuning, labeling industrial data requires domain knowledge, and model selection plus hyperparameter tuning make building a classifier time-consuming. In this study, we introduce a workflow for extracting phase segments directly from time-series data without following these three conventional steps. The proposed workflow starts by extracting distinctive shape features from the time series in a semi-automated manner; user-desired shapes can then be selected through an interactive interface, and the corresponding segments are identified and exported. The advantage of this method is that it requires limited human effort for data preparation and model building, and the workflow can also be used for batch-phase extraction, data exploration, and similar tasks.
Citations: 0
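The workflow above matches user-selected shapes against the raw time series. The paper's feature-extraction method is not shown here; a common minimal baseline for this kind of shape matching is a sliding window scored by z-normalized Euclidean distance, sketched below on a toy signal (the ramp "phase" shape and all lengths are invented for the example).

```python
import numpy as np

def znorm(x):
    """Z-normalize a window so matching is invariant to offset and scale."""
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def match_segments(series, template, top_k=1):
    """Start indices of the top_k windows most similar (z-normalized
    Euclidean distance) to the user-selected template shape."""
    m = len(template)
    t = znorm(np.asarray(template, dtype=float))
    dists = np.array([
        np.linalg.norm(znorm(series[i:i + m]) - t)
        for i in range(len(series) - m + 1)
    ])
    return np.argsort(dists)[:top_k]

# Toy signal: two ramp-shaped batch phases embedded in a flat baseline.
series = np.concatenate([np.zeros(10), np.arange(5.0), np.zeros(10),
                         np.arange(5.0), np.zeros(10)])
template = np.arange(5.0)          # the shape the user picked interactively
starts = sorted(match_segments(series, template, top_k=2))
```

Once the matching segments are found, exporting them as labeled phase samples replaces the manual segment-then-label steps the abstract describes as tedious.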
Kinematics and Fuzzy Control of Continuum Robot Based on Semi-closed Loop System
Chunxu Song, Guohua Gao, Pengyu Wang, Hao Wang
DOI: 10.1109/RAAI56146.2022.10092975
Published: 2022-12-09
Abstract: Improving the accuracy of continuum robots poses huge challenges for motion control and path planning, especially without an external sensory system such as stereovision or magnetic positioning. This paper therefore proposes an efficient method to analyze, establish and integrate the kinematic model into a fuzzy controller, in order to obtain precise motion performance from the continuum robot. First, by analyzing the forward and inverse kinematics of the tip, posture and driving parameters, the mutual mapping between the position, posture and driving spaces is obtained. Then, according to the kinematic model, the motion control system is designed and semi-closed-loop feedback based on fuzzy control is introduced to compensate for the error accumulation of the drive system. Circular-trajectory motion-control experiments are carried out under open-loop control and semi-closed-loop compensation, respectively, to explore the influence of drive-error accumulation on the posture parameters and tip motion accuracy of the continuum robot, and to analyze the compensation effect of the fuzzy-control-based semi-closed-loop feedback. The experimental results prove the effectiveness of the semi-closed-loop system; the actual trajectory error does not exceed 6.01%.
Citations: 0
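The paper's fuzzy rule base and membership functions are not given in the abstract. To show the general shape of such a compensator, here is a minimal Mamdani-style sketch: three triangular membership functions over a tip-position error, three hypothetical rules, and weighted-average defuzzification. The error ranges and output corrections are illustrative values, not the paper's.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_compensation(error):
    """Map a tip-position error (mm) to a drive-length correction (mm).

    Three hypothetical rules: Negative error -> extend, Zero -> hold,
    Positive -> retract. Weighted-average (centroid-style) defuzzification.
    """
    rules = [
        (tri(error, -10.0, -5.0, 0.0), +2.0),   # error negative: extend drive
        (tri(error,  -5.0,  0.0, 5.0),  0.0),   # error near zero: hold
        (tri(error,   0.0,  5.0, 10.0), -2.0),  # error positive: retract drive
    ]
    total = sum(w for w, _ in rules)
    if total == 0.0:
        return 0.0   # error outside all supports: no correction
    return sum(w * out for w, out in rules) / total

correction = fuzzy_compensation(-2.5)
```

In a semi-closed loop, this correction would be added to the drive command each cycle, so accumulated drive error is bled off without requiring an external vision or magnetic sensor.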
Optimising Faster R-CNN Training to Enable Video Camera Compression for Assisted and Automated Driving Systems
V. Donzella, P. H. Chan, A. Huggett
DOI: 10.1109/RAAI56146.2022.10092961
Published: 2022-12-09
Abstract: Advanced driving assistance systems based on a single camera or RADAR are evolving into the current assisted and automated driving functions delivering SAE Level 2 and above capabilities. A suite of environmental perception sensors is required to achieve safe and reliable planning and navigation in future vehicles equipped with these capabilities. The sensor suite, based on several cameras, LiDARs, RADARs and ultrasonic sensors, needs to provide sufficient (and, depending on the level of driving automation, redundant) spatial and temporal coverage of the environment around the vehicle. However, the amount of data produced by the sensor suite can easily exceed a few tens of Gb/s, with a single 'average' automotive camera producing more than 3 Gb/s. It is therefore important to leverage traditional video compression techniques, and to investigate novel ones, to reduce the amount of camera data transmitted to the vehicle's processing unit(s). In this paper, we demonstrate that lossy compression schemes with high compression ratios (up to 1:1,000) can be applied safely to the camera video stream when machine-learning-based object detection consumes the sensor data. We show that transfer learning can be used to re-train a deep neural network with H.264- and H.265-compliant compressed data, allowing the network's performance to be optimised for the compression level of the generated sensor data. Moreover, this form of transfer learning improves the network's performance on uncompressed data, increasing its robustness to real-world variations.
Citations: 0
A Modular Supervisory Control Scheme for the Safety of an Automated Manufacturing System
N. Kouvakas, F. Koumboulis, D. Fragkoulis, Konstantinos Markou
DOI: 10.1109/RAAI56146.2022.10093007
Published: 2022-12-09
Abstract: Analytic finite deterministic automaton models of the subsystems of an automated manufacturing system in circular mode are presented, and the total model of the manufacturing system is determined. The safety specifications of the system are formulated as rules, the rules are translated into appropriate desired regular languages, and the languages are realized in analytic form as finite deterministic supervisor automata. A modular supervisory control scheme based on these supervisor automata is proposed. The satisfactory performance of the controlled automaton is proven through the computation of its marked language, the proof of its nonblocking property, and the proof of the controllability of the languages realized by the supervisor automata. The complexity of the proposed supervisory scheme is also computed.
Citations: 0
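The controlled behaviour in such a scheme is the synchronous product of the plant automaton with the supervisor automata, and the nonblocking check asks whether every reachable state can still reach a marked state. The paper's manufacturing models are not available from the abstract, so the sketch below builds the product and runs the nonblocking check on a made-up two-state plant (load/unload machine) with a supervisor that forbids a second load before unloading; all state and event names are illustrative.

```python
def sync_product(p_init, p_delta, s_init, s_delta):
    """Reachable part of the synchronous product of plant and supervisor.

    Both automata are {(state, event): next_state}; the supervisor disables
    an event simply by not defining it in its current state.
    """
    init = (p_init, s_init)
    delta, seen, frontier = {}, {init}, [init]
    while frontier:
        p, s = frontier.pop()
        for (ps, ev), pn in p_delta.items():
            if ps == p and (s, ev) in s_delta:
                q = (pn, s_delta[(s, ev)])
                delta[((p, s), ev)] = q
                if q not in seen:
                    seen.add(q)
                    frontier.append(q)
    return init, seen, delta

def nonblocking(states, delta, marked):
    """True iff every reachable state can reach a marked state
    (backward reachability from the marked states)."""
    good = {m for m in marked if m in states}
    changed = True
    while changed:
        changed = False
        for (src, _), dst in delta.items():
            if dst in good and src not in good:
                good.add(src)
                changed = True
    return states <= good

# Toy plant: load 'l' then unload 'u'; loading twice jams the machine.
p_delta = {('idle', 'l'): 'busy', ('busy', 'u'): 'idle', ('busy', 'l'): 'jam'}
# Supervisor realizing the safety rule "no second load before unload".
s_delta = {('s0', 'l'): 's1', ('s1', 'u'): 's0'}
init, states, delta = sync_product('idle', p_delta, 's0', s_delta)
ok = nonblocking(states, delta, {('idle', 's0')})
```

The unsafe 'jam' state is unreachable in the product, and the controlled automaton is nonblocking, which is exactly the pair of properties the paper proves for its modular supervisors.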
Jetson Nano-Based Two-Way Communication System with Filipino Sign Language Recognition Using LSTM Deep Learning Model for Able and Deaf-Mute Persons
Rain Kristine B. Cabigting, Carl James U. Grantoza, Leonardo D. Valiente, Ericson D. Dimaunahan
DOI: 10.1109/RAAI56146.2022.10092971
Published: 2022-12-09
Abstract: Communication is the foundation of what it is to be human, and the majority of human communication relies on sound. Sound is not the sole natural means of communication, however; one alternative is the Deaf community's language. Communication between the Deaf-mute community and hearing individuals is one of the various challenges the two parties encounter. In the Philippines, around 70% of the Filipino Deaf community uses Filipino Sign Language (FSL) as their primary language, whereas some hearing persons may be illiterate in it. Given this situation, this study created a two-way communication device using a Jetson Nano, covering the translation of FSL into text and speech and the conversion of input speech into text. Ten dynamic FSL gestures are considered. The device uses an LSTM deep-learning model and MediaPipe to recognize the FSL gestures and converts them into speech through the Google Text-to-Speech (gTTS) API; it also converts speech to text using the Google Speech-to-Text (gSTT) API. Sixty trials of a two-way conversation between a deaf-mute and a hearing person were performed. A test of proportion for the two-way conversation revealed that the prototype exceeded the standard value of 91.11%, garnering an accuracy of 93.33% and rendering the device highly effective and reliable as a means of communication between a deaf-mute person and a fully able one.
Citations: 0
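The abstract reports a test of proportion comparing the observed 93.33% accuracy over 60 trials against a 91.11% benchmark. The paper's exact statistical procedure is not given; a standard one-sample z-test for a proportion is sketched below, assuming 56 successes out of 60 (93.33% of 60 trials), which is an inference from the reported percentages rather than a figure stated in the abstract.

```python
from math import sqrt, erf

def proportion_z_test(successes, n, p0):
    """One-sample z-test of an observed proportion against benchmark p0.

    Returns (p_hat, z, one_sided_p_value), where the p-value is P(Z > z)
    under the standard normal approximation.
    """
    p_hat = successes / n
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper-tail probability
    return p_hat, z, p_value

# Assumed 56 successful conversations out of 60 trials vs the 91.11% benchmark.
p_hat, z, p = proportion_z_test(56, 60, 0.9111)
```

Note that with only 60 trials the normal approximation is rough; the z statistic this yields is small, so "exceeded the standard value" in the abstract should be read as a point-estimate comparison rather than a strong significance claim.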
Floating Focusing System Based on Polymer Films: A New Example of a Smart Energy System
S. Kabdushev, K. Kadyrzhan, Y. Vitulyova, A. Bakirov, E. Kopishev, I. Suleimenov
DOI: 10.1109/RAAI56146.2022.10092955
Published: 2022-12-09
Abstract: A design for a floating focusing system based on polymer films is proposed, in which water is the material from which the lens is made. This makes it possible to implement a focusing system of high optical power with minimal material cost, as well as minimal energy consumption for pointing the focusing system at the Sun, which is achieved through the zero (or near-zero) buoyancy of the tunable element. This approach, among other things, fits the concept of small green energy, which focuses on maximizing the independence of households from centralized energy supplies. The energy consumption structure of a typical household is such that only a small proportion of consumption falls on appliances whose operation cannot exclude electricity consumption; most energy consumption can be redirected to sources that do not require the generation of electric current, which also determines the social significance of the proposed approach.
Citations: 0
A Human-Flow Analysis Based on PCA: A Case Study on Population Data Near Railway
S. Kim, T. Shibuya, Shingo Toride, Y. Endo
DOI: 10.1109/RAAI56146.2022.10092981
Published: 2022-12-09
Abstract: The damage caused by natural disasters and accidents increases every year. To keep such damage from spreading, it is important to detect an accident promptly. However, current sensing systems are difficult to use because they have narrow coverage and are specialized to a few detectable accident types. In this paper, we propose a method to detect disasters and accidents by computing the degree of anomaly in human flow, treating the common flow of people as one large sensor. Human flow can be assumed to have typical patterns in people's daily life, such as going to and leaving work, so an anomaly-detection method for human flow can lead to the discovery of hidden causes such as accidents and disasters. Taking the operational status of railways as an example, we confirm that our method can detect an actual suspension of operations.
Citations: 0
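The abstract names PCA but does not give details of the anomaly score. A common PCA-based scheme, sketched below on synthetic data, fits a principal subspace to normal daily flow profiles and scores a new day by its squared reconstruction error: a profile the subspace cannot explain (e.g. the flat flow of a suspended railway) gets a large score. The sinusoidal "commute" profile and all parameters are invented for the illustration.

```python
import numpy as np

def fit_pca(X_train, k=1):
    """Mean and top-k principal axes (right singular vectors) of the data."""
    mean = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    return mean, Vt[:k]

def anomaly_score(x, mean, P):
    """Squared reconstruction error of one daily profile; large values
    flag a flow pattern the principal subspace cannot explain."""
    xc = x - mean
    return float(((xc - xc @ P.T @ P) ** 2).sum())

rng = np.random.default_rng(42)
hours = np.linspace(0, np.pi, 24)
commute = np.sin(hours)                      # stylized daily flow profile
train = np.vstack([commute * rng.uniform(0.9, 1.1) + rng.normal(0, 0.01, 24)
                   for _ in range(30)])      # 30 ordinary days
mean, P = fit_pca(train, k=1)

normal_day = commute * 1.05                  # a slightly busier ordinary day
suspended = np.full(24, 0.5)                 # flat flow: operations suspended
score_normal = anomaly_score(normal_day, mean, P)
score_suspended = anomaly_score(suspended, mean, P)
```

Thresholding the score over time would then flag the day (or hour) when the railway suspension distorts the usual commuting pattern.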
Lessons Learned from Utilizing Guided Policy Search for Human-Robot Handovers with a Collaborative Robot
Alap Kshirsagar, Tair Faibish, G. Hoffman, A. Biess
DOI: 10.1109/RAAI56146.2022.10092989
Published: 2022-12-09
Abstract: We evaluate the performance of Guided Policy Search (GPS), a model-based reinforcement learning method, for generating the handover reaching motions of a collaborative robot arm. In previous work, we evaluated GPS for the same task but only in a simulated environment. This paper replicates those findings in simulation and adds new insights into GPS on a physical robot platform. First, a policy learned in simulation does not transfer readily to the physical robot, owing to differences in model parameters and the safety constraints on the real robot. Second, to train a GPS model successfully, the robot's workspace must be severely reduced because of the physical robot's joint-space limitations. Third, a policy trained with moving targets produces large worst-case errors even in regions spatially close to the training target locations. Our findings motivate further research on utilizing GPS in human-robot interaction settings, especially where safety constraints are imposed.
Citations: 0