2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Latest Articles

Master-Slave Guidewire and Catheter Robotic System for Cardiovascular Intervention
RO-MAN 2019 · Pub Date: 2019-10-01 · DOI: 10.1109/RO-MAN46459.2019.8956423
Yujia Xiang, Hao Shen, Le Xie, Hesheng Wang
Abstract: Cardiovascular disease remains a primary cause of morbidity globally, and percutaneous coronary intervention plays a crucial role in its treatment. Master-slave surgical robots can spare surgeons the radiation exposure incurred during cardiovascular intervention. This paper introduces a master-slave guidewire and catheter robotic system that protects surgeons from X-ray radiation to the greatest extent possible. Jitter in the master manipulators is mitigated by a Kalman filtering algorithm, and the use of two master manipulators helps retain the surgeon's traditional operating habits. A vascular model trial validated that the system can complete the alternating advancement and rotation of the interventional guidewire and catheter.
Citations: 6
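The abstract does not specify the filter design; as an illustration, a scalar constant-position Kalman filter of the kind commonly used to damp jitter in a master manipulator's commanded position might look like this (the process and measurement noise variances `q` and `r` are assumed values, not the authors'):

```python
import random

def kalman_smooth(measurements, q=1e-4, r=0.01):
    """Scalar constant-position Kalman filter.
    q: process noise variance, r: measurement noise variance (assumed)."""
    x, p = measurements[0], 1.0   # state estimate and its variance
    out = []
    for z in measurements:
        p += q                    # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # correct with the measurement residual
        p *= (1 - k)
        out.append(x)
    return out

# Hand held still at 5.0 with sensor jitter of std 0.1:
random.seed(0)
true_pos = 5.0
noisy = [true_pos + random.gauss(0, 0.1) for _ in range(200)]
smooth = kalman_smooth(noisy)
```

With these values the steady-state gain is about 0.1, so roughly ten samples of jitter are averaged into each commanded position.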
Investigation of the driver's seat that displays future vehicle motion
RO-MAN 2019 · Pub Date: 2019-10-01 · DOI: 10.1109/RO-MAN46459.2019.8956338
Y. Ishii, Tetsushi Ikeda, Toru Kobayashi, Y. Kato, A. Utsumi, Isamu Nagasawa, S. Iwaki
Abstract: Automated driving reduces the burden on the driver, but it also makes it difficult for the driver to understand the current situation and predict the future movement of the vehicle. When acceleration under automated driving occurs without such prediction, the driver's anxiety and discomfort increase compared to manual driving. To help the driver predict the future behavior of the vehicle, this paper designs and evaluates a haptic interface that actuates the vehicle seat, displaying to the driver the movement the vehicle will make a few seconds in the future so that the driver can make predictions and preparations. Using a driving simulator, we compared conditions in which the vehicle's movement was displayed in advance with different lead times. The drivers' subjective evaluations showed that the predictability of the vehicle's behavior increased significantly compared to the case without the display. The experiment also showed that comfort decreased significantly when the preceding display came too early.
Citations: 2
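The core preview idea, commanding the seat with the vehicle's planned motion shifted earlier in time, can be sketched as follows (the function, its sample-and-hold tail, and the parameter names are illustrative assumptions, not the authors' implementation):

```python
def seat_preview(plan, dt, lead):
    """Shift a planned acceleration profile earlier by `lead` seconds,
    so the seat displays motion the vehicle will make in the future.
    plan: acceleration samples spaced dt seconds apart."""
    shift = round(lead / dt)
    # hold the final planned value once the plan runs out
    return [plan[min(i + shift, len(plan) - 1)] for i in range(len(plan))]
```

For example, with a 2 s lead the seat begins a planned 1 m/s² maneuver two samples (at dt = 1 s) before the vehicle does.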
A Novel Image-based Path Planning Algorithm for Eye-in-Hand Visual Servoing of a Redundant Manipulator in a Human Centered Environment
RO-MAN 2019 · Pub Date: 2019-10-01 · DOI: 10.1109/RO-MAN46459.2019.8956330
Deepak Raina, P. Mithun, S. Shah, S. Kumar
Abstract: This paper presents a novel image-based path-planning and execution framework for vision-based control of a robot in a human-centered environment. The proposed method applies Rapidly-exploring Random Tree (RRT) exploration to perform Image-Based Visual Servoing (IBVS) while satisfying multiple task constraints by exploiting robot redundancy. The methodology incorporates a dataset of images of the robot's workspace for path planning and designs a controller based on the visual servoing framework. The method is generic enough to include constraints such as Field-of-View (FoV) limits, joint limits, obstacles, various singularities, and occlusions in the planning stage itself using the task-function approach, thereby avoiding them during execution. Path planning eliminates many of the inherent limitations of IBVS in the eye-in-hand configuration and makes visual servoing practical for dynamic and complex environments. Several experiments on a UR5 robotic manipulator demonstrate that this is an effective and robust way to guide a robot in such environments.
Citations: 1
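The paper's planner handles a rich constraint set via the task-function approach; as a minimal sketch, a plain RRT over 2-D image coordinates with the constraints reduced to image (FoV) bounds and a single circular occlusion could look like this (all parameters, the collision model, and the goal-bias value are illustrative, not the authors' formulation):

```python
import math, random

def rrt_plan(start, goal, obstacles, bounds, step=20.0, iters=5000, seed=1):
    """Plan a 2-D path of image-feature points with a basic RRT.
    bounds = (xmin, ymin, xmax, ymax) keep nodes inside the image
    (a stand-in for FoV limits); obstacles are (cx, cy, r) circles."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}

    def free(p):
        xmin, ymin, xmax, ymax = bounds
        if not (xmin <= p[0] <= xmax and ymin <= p[1] <= ymax):
            return False
        return all(math.dist(p, (cx, cy)) > r for cx, cy, r in obstacles)

    for _ in range(iters):
        # goal-biased sampling in image coordinates
        sample = goal if rng.random() < 0.1 else (
            rng.uniform(bounds[0], bounds[2]), rng.uniform(bounds[1], bounds[3]))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # steer one step toward the sample
        f = step / max(d, step)
        new = (near[0] + (sample[0] - near[0]) * f,
               near[1] + (sample[1] - near[1]) * f)
        if not free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step:
            path, k = [goal], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

In the full method the sampled configurations would additionally be rejected against joint limits, singularities, and occlusion constraints before being added to the tree.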
Communicating with SanTO – the first Catholic robot
RO-MAN 2019 · Pub Date: 2019-10-01 · DOI: 10.1109/RO-MAN46459.2019.8956250
G. Trovato, Franco Pariasca, R. Ramirez, Javier Cerna, V. Reutskiy, Laureano Rodriguez, F. Cuéllar
Abstract: In the 1560s, Philip II of Spain commissioned a "mechanical monk", a small humanoid automaton able to move and walk. Centuries later, we present a Catholic humanoid robot. With the appearance of a statue of a saint and a set of interactive features, it is designed for Christian Catholic users for a variety of purposes. Its creation offers new insights into the concept of sacredness applied to a robot and the role of automation in religion. In this paper we present its concept, its functioning, and a preliminary test. A dialogue system, integrated with multimodal communication consisting of vision, touch, voice, and lights, drives the interaction with users. We collected the first responses, focused particularly on the impression of the robot's sacredness, during an experiment that took place in a church in Peru.
Citations: 10
TeachMe: Three-phase learning framework for robotic motion imitation based on interactive teaching and reinforcement learning
RO-MAN 2019 · Pub Date: 2019-10-01 · DOI: 10.1109/RO-MAN46459.2019.8956326
Taewoo Kim, Joo-Haeng Lee
Abstract: Motion imitation is a fundamental communication skill for a robot, especially as a nonverbal interaction with a human. Owing to the differences in kinematic configuration between the human and the robot, determining an appropriate mapping between the two pose domains is challenging. Moreover, technical limitations in extracting 3D motion details, such as wrist joint movements, from human motion videos make motion retargeting significantly harder, and explicit mapping between different motion domains is considerably inefficient. To address these problems, we propose a three-phase reinforcement learning scheme that enables a NAO robot to learn motions from human pose skeletons extracted from video inputs. The scheme consists of (i) a learning-preparation phase, (ii) a simulation-based reinforcement learning phase, and (iii) a human-in-the-loop reinforcement learning phase. In phase one, an autoencoder learns embeddings of the human skeleton and robot motions. In phase two, the NAO robot learns a rough imitation skill through reinforcement learning that translates the learned embeddings. In the final phase, the robot learns the motion details missed in the previous phases through rewards set interactively by direct teaching. Notably, phase three requires relatively few interactive inputs for the motion details, compared with the large volume of training data required for the overall imitation in phase two. Experimental results demonstrate that the proposed method efficiently improves imitation skills for hand-waving and saluting motions obtained from NTU-DB.
Citations: 1
Audio-Visual SLAM towards Human Tracking and Human-Robot Interaction in Indoor Environments
RO-MAN 2019 · Pub Date: 2019-10-01 · DOI: 10.1109/RO-MAN46459.2019.8956321
Aaron D. Chau, Kouhei Sekiguchi, Aditya Arie Nugraha, Kazuyoshi Yoshii, Kotaro Funakoshi
Abstract: We propose a novel audio-visual simultaneous localization and mapping (SLAM) framework that exploits the pose and speech of human sound sources, allowing a robot equipped with a microphone array and a monocular camera to track, map, and interact with human partners in an indoor environment. Since human interaction is characterized by features perceived in the acoustic modality as well as the visual one, SLAM systems must utilize information from both. Using a state-of-the-art beamforming technique, we separate sound components corresponding to speech and noise, and obtain Direction-of-Arrival (DoA) estimates of active sound sources as representations of observed features in the acoustic modality. From human poses estimated with the monocular camera, we obtain the relative positions of humans as representations of observed features in the visual modality. With these techniques we attempt to overcome the restrictions imposed by intermittent speech, noisy and reverberant periods, triangulation of sound-source range, and the limited visual field of view, and subsequently perform early fusion on these representations. The resulting system allows complementary action between the audio-visual sensor modalities in the simultaneous mapping of multiple human sound sources and the localization of the observer position.
Citations: 4
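The paper derives DoA estimates from array beamforming; as a much-simplified stand-in, the classic two-microphone approach recovers DoA from the time difference of arrival (TDOA) that maximizes the cross-correlation between channels (far-field, free-field assumptions; the function and its parameters are illustrative, not the authors' method):

```python
import math, random

def doa_two_mics(sig_l, sig_r, mic_dist, fs, c=343.0):
    """Estimate direction-of-arrival (degrees from broadside) for a
    far-field source from the inter-microphone delay that maximizes
    the cross-correlation of the two channels."""
    max_lag = int(mic_dist / c * fs) + 1          # physically possible lags
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        v = sum(sig_l[i] * sig_r[i + lag]
                for i in range(max(0, -lag), min(len(sig_l), len(sig_r) - lag)))
        if v > best_val:
            best_val, best_lag = v, lag
    tdoa = best_lag / fs
    # clamp into the asin domain, then convert delay to angle
    s = max(-1.0, min(1.0, tdoa * c / mic_dist))
    return math.degrees(math.asin(s))

# Synthetic check: right channel is the left channel delayed by 5 samples.
random.seed(7)
sig = [random.gauss(0.0, 1.0) for _ in range(1000)]
delay = 5
sig_r = [0.0] * delay + sig[:-delay]
angle = doa_two_mics(sig, sig_r, mic_dist=0.2, fs=16000)
```

A real array with more microphones (as in the paper) would combine many such pairwise cues, or steer a beamformer over candidate directions, to resolve multiple simultaneous speakers.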
Designing an Experimental and a Reference Robot to Test and Evaluate the Impact of Cultural Competence in Socially Assistive Robotics
RO-MAN 2019 · Pub Date: 2019-10-01 · DOI: 10.1109/RO-MAN46459.2019.8956440
C. Recchiuto, C. Papadopoulos, Tetiana Hill, Nina Castro, Barbara Bruno, I. Papadopoulos, A. Sgorbissa
Abstract: The article focuses on the work performed in preparation for an experimental trial evaluating the impact of a culturally competent robot for care-home assistance. It has been established that the user's cultural identity plays an important role in interaction with a robotic system, and cultural competence may be a key element for increasing the capabilities of socially assistive robots. Specifically, the paper describes part of the work carried out to define and implement two robotic systems for the care of older adults: a culturally competent robot, which shows awareness of the user's cultural identity, and a reference robot that is not culturally competent but has the same functionalities. The design of both robots is described here in detail, together with the key elements that make a socially assistive robot culturally competent and that should be absent in the non-culturally-competent counterpart. Examples from the experimental phase of the CARESSES project with a fictional user are reported, giving an indication of the validity of the proposed approach.
Citations: 4
Real-Time Gazed Object Identification with a Variable Point of View Using a Mobile Service Robot
RO-MAN 2019 · Pub Date: 2019-10-01 · DOI: 10.1109/RO-MAN46459.2019.8956451
Akishige Yuguchi, Tomoaki Inoue, G. A. G. Ricardez, Ming Ding, J. Takamatsu, T. Ogasawara
Abstract: As sensing and image-recognition technologies advance, the environments where service robots operate expand into human-centered environments. Since the roles of service robots depend on user situations, it is important for robots to understand human intentions. Gaze information, such as gazed objects (i.e., the objects humans are looking at), can help in understanding users' intentions. In this paper, we propose a real-time method for identifying gazed objects from RGB-D images captured by a camera mounted on a mobile service robot. First, we search for candidate gazed objects using state-of-the-art real-time object detection. Second, we estimate the direction of the human face using facial landmarks extracted by a real-time face-detection tool. Then, by searching for an object along the estimated face direction, we identify the gazed object. If identification fails even though the user is looking at an object (i.e., has a fixed gaze direction), the robot can determine from the face direction whether the object lies inside or outside the robot's view and then change its point of view to improve identification. Finally, through multiple evaluation experiments with the mobile service robot Pepper, we verified the effectiveness of the proposed identification and the improvement in identification accuracy obtained by changing the robot's point of view.
Citations: 2
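The identification step, searching for a detected object along the estimated face direction, can be sketched as a ray march over 2-D bounding boxes (the geometry is reduced to a plane for illustration; the function, step size, and range are assumptions, not the paper's implementation):

```python
def gazed_object(face_pos, gaze_dir, boxes, step=5.0, max_range=2000.0):
    """Walk along the estimated face-direction ray and return the first
    detected object whose bounding box the ray enters.
    boxes: {name: (xmin, ymin, xmax, ymax)} from an object detector."""
    x, y = face_pos
    dx, dy = gaze_dir
    n = (dx * dx + dy * dy) ** 0.5
    dx, dy = dx / n, dy / n          # normalize the gaze direction
    t = 0.0
    while t < max_range:
        px, py = x + dx * t, y + dy * t
        for name, (x0, y0, x1, y1) in boxes.items():
            if x0 <= px <= x1 and y0 <= py <= y1:
                return name
        t += step
    return None   # no box hit: a cue to change the robot's point of view
```

Returning `None` corresponds to the failure case in the abstract, where the robot uses the face direction to decide whether to move its viewpoint.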
Conflict Mediation in Human-Machine Teaming: Using a Virtual Agent to Support Mission Planning and Debriefing
RO-MAN 2019 · Pub Date: 2019-10-01 · DOI: 10.1109/RO-MAN46459.2019.8956414
Kerstin S Haring, Jessica Tobias, Justin Waligora, Elizabeth Phillips, N. Tenhundfeld, Gale M. Lucas, E. D. Visser, J. Gratch, Chad C. Tossell
Abstract: Socially intelligent artificial agents and robots are anticipated to become ubiquitous in home, work, and military environments. With the addition of such agents to human teams, it is crucial to evaluate their role in planning, decision making, and conflict mediation. We conducted a study evaluating the utility of a virtual agent that provided mission-planning support to a three-person human team in a military strategic mission-planning scenario. The team consisted of a human team lead, who made the final decisions, and three supporting roles: two humans and the artificial agent. The mission outcome was experimentally designed to fail, introducing a conflict between the human team members and the leader. The artificial agent mediated this conflict during the debriefing process through the "discuss or debate" and "open communication" strategies of conflict resolution [1]. Our results showed that the teams experienced conflict and responded socially to the virtual agent, although they did not find the agent beneficial to the mediation process. Teams nevertheless collaborated well together, and perceived task proficiency increased for team leaders. Socially intelligent agents show potential for conflict mediation but need careful design and implementation to improve team processes and collaboration.
Citations: 6
Optimal Feature Selection for EMG-Based Finger Force Estimation Using LightGBM Model
RO-MAN 2019 · Pub Date: 2019-10-01 · DOI: 10.1109/RO-MAN46459.2019.8956453
Yuhang Ye, Chao Liu, N. Zemiti, Chenguang Yang
Abstract: The electromyogram (EMG) signal has long been used in human-robot interfaces, especially in rehabilitation. Recent rapid development in artificial intelligence (AI) has provided powerful machine learning tools to better explore the rich information embedded in EMG signals. For the specific task in this work, estimating human finger force from EMG signals, a LightGBM (gradient boosting machine) model is used. The main contribution of this study is an objective, automatic optimal feature selection algorithm that minimizes the number of features used in the LightGBM model in order to simplify implementation, reduce the computational burden, and maintain estimation performance comparable to that obtained with the full feature set. The performance of the LightGBM model with the selected optimal features is compared with four other popular machine learning models on a dataset of 45 subjects to show the effectiveness of the developed feature-selection method.
Citations: 14
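The abstract does not detail the selection algorithm; one common automatic scheme it could resemble is backward elimination driven by model feature importance, sketched here with `importance_fn` and `score_fn` as hypothetical stand-ins for LightGBM gain importance and validation accuracy (the whole function is an illustrative assumption, not the authors' algorithm):

```python
def prune_features(features, importance_fn, score_fn, tol=0.01):
    """Greedy backward elimination: repeatedly drop the least-important
    feature as long as the score stays within `tol` of the full-feature
    baseline.  importance_fn(feats) -> {feat: importance};
    score_fn(feats) -> validation score (higher is better)."""
    baseline = score_fn(features)
    feats = list(features)
    while len(feats) > 1:
        imp = importance_fn(feats)
        weakest = min(imp, key=imp.get)
        candidate = [f for f in feats if f != weakest]
        if score_fn(candidate) >= baseline - tol:
            feats = candidate          # dropping it costs almost nothing
        else:
            break                      # further pruning would hurt accuracy
    return feats
```

In a real pipeline, `importance_fn` would retrain the LightGBM model on the candidate feature set and read off its importances, and `score_fn` would evaluate it on held-out data.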