2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Latest Publications

Toward a Wearable Affective Robot That Detects Human Emotions from Brain Signals by Using Deep Multi-Spectrogram Convolutional Neural Networks (Deep MS-CNN)
Pub Date: 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956382
Ker-Jiun Wang, C. Zheng
Abstract: A wearable robot that constantly monitors, adapts, and reacts to a human's needs is a promising way for technology to facilitate stress alleviation and contribute to mental health. Current means of supporting mental health include counseling, medication, and relaxation techniques such as meditation or breathing exercises. The finding that human touch causes the body to release the hormone oxytocin, effectively alleviating anxiety, points to a potential complement to these methods: wearable robots that generate affective touch could improve social bonds and help regulate emotion and cognitive function. In this study, we used a wearable robotic tactile stimulation device, AffectNodes2, to mimic human affective touch. The touch-stimulated brain waves were captured from 4 EEG electrodes placed over the parietal, prefrontal, and left and right temporal lobe regions of the brain. A novel Deep MS-CNN with an emotion-polling structure was developed to distinguish affective-touch, non-affective-touch, and relaxation stimuli with over 95% accuracy, allowing the robot to grasp the current human affective status. This sensing and decoding structure is our first step toward a self-adaptive robot that adjusts its touch stimulation patterns to help regulate affective status.
Citations: 8
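The abstract names the electrode placement and the three stimulus classes but not the network itself. As a rough, hypothetical sketch of the multi-spectrogram idea (the sampling rate, spectrogram windows, and layer sizes below are our assumptions, not the authors'), one could stack a per-electrode spectrogram as the CNN input:

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

FS = 256          # assumed EEG sampling rate (Hz); not stated in the abstract
N_CHANNELS = 4    # parietal, prefrontal, left and right temporal electrodes
N_CLASSES = 3     # affective touch, non-affective touch, relaxation

def eeg_to_multispectrogram(eeg: np.ndarray) -> torch.Tensor:
    """Turn (channels, samples) EEG into a (channels, freq, time) tensor."""
    specs = []
    for channel in eeg:
        _, _, sxx = spectrogram(channel, fs=FS, nperseg=128, noverlap=64)
        specs.append(np.log1p(sxx))  # log power compresses the dynamic range
    return torch.tensor(np.stack(specs), dtype=torch.float32)

class MultiSpectrogramCNN(nn.Module):
    """Small CNN over the stacked per-electrode spectrograms."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_CHANNELS, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, N_CLASSES)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage: 10 s of (synthetic) 4-channel EEG -> three class logits.
eeg = np.random.randn(N_CHANNELS, 10 * FS)
logits = MultiSpectrogramCNN()(eeg_to_multispectrogram(eeg).unsqueeze(0))
print(logits.shape)  # torch.Size([1, 3])
```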
A Brief Review of the Electronics, Control System Architecture, and Human Interface for Commercial Lower Limb Medical Exoskeletons Stabilized by Aid of Crutches
Pub Date: 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956311
Nahla Tabti, Mohamad Kardofaki, S. Alfayad, Y. Chitour, F. Ouezdou, Eric Dychus
Abstract: Research on powered orthoses, or exoskeletons, has expanded tremendously over the past years. Lower limb exoskeletons are widely used in robotic rehabilitation and are showing benefits for patients' quality of life. Many engineering reviews have been published about these devices, addressing their general aspects. To the best of our knowledge, however, no review has discussed in detail the control of the most commonly used devices, particularly the algorithms used to define the function state of the exoskeleton, such as walking or sit-to-stand. In this contribution, the control hardware and software, as well as the integrated sensors used for feedback, are thoroughly analyzed. We also discuss the importance of user-specific state definitions and customized control architectures. Although many prototypes are being developed nowadays, we chose to target medical lower limb exoskeletons that use crutches to keep balance and that are minimally actuated, as these are the most common systems now being commercialized and used worldwide. The outcome of such a review offers practical insight into the mechatronic design, system architecture, and control technology of these devices.
Citations: 1
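For context on the function-state algorithms the review surveys, here is a minimal, purely illustrative sketch of such a state machine; the states, events, and transitions are our assumptions, not taken from any commercial device:

```python
from enum import Enum, auto

class State(Enum):
    SIT = auto()
    STAND = auto()
    WALK = auto()

# Allowed (state, event) -> next-state transitions; illustrative only.
TRANSITIONS = {
    (State.SIT, "sit_to_stand"): State.STAND,
    (State.STAND, "stand_to_sit"): State.SIT,
    (State.STAND, "start_walking"): State.WALK,
    (State.WALK, "stop_walking"): State.STAND,
}

def step(state: State, event: str) -> State:
    """Apply an event, ignoring events that are invalid in the current state."""
    return TRANSITIONS.get((state, event), state)

s = State.SIT
for event in ["sit_to_stand", "start_walking", "stop_walking"]:
    s = step(s, event)
print(s)  # State.STAND
```

Gating transitions this way is one reason user-specific state definitions matter: the events that trigger a transition (crutch loading, torso lean, button press) differ per user and per device.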
Fatigue Estimation using Facial Expression features and Remote-PPG Signal
Pub Date: 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956411
Masaki Hasegawa, Kotaro Hayashi, J. Miura
Abstract: Research and development of robots that support daily life is currently very active, and healthcare is one such robot function. In this research, we develop a fatigue estimation system using a camera that can easily be mounted on a robot. Measurements taken in a real environment must cope with noise caused by changes in lighting and by the subject's movement, so the fatigue estimation system is based on a robust feature extraction method. As an indicator of fatigue, the LF/HF ratio is calculated from the power spectrum of the RR intervals in the electrocardiogram or of the blood volume pulse (BVP). The BVP can be detected from the fingertip using photoplethysmography (PPG); in this study, we used a contactless variant, remote PPG (rPPG), which detects the pulse from luminance changes in the face image. Prior studies have shown that facial expression features extracted from facial video are also useful for fatigue estimation, but the dimensionality reduction used in a previous method, based on locally linear embedding (LLE), discarded information carried by the high-dimensional features. We therefore developed a camera-based fatigue estimation method for healthcare robots that uses facial landmark points, the line-of-sight vector, and the sizes of ellipses fitted to the eye and mouth landmark points; that is, the proposed method simply uses time-varying shape information of the face, such as eye size or gaze direction. We verified the performance of the proposed features through fatigue state classification using a Support Vector Machine (SVM).
Citations: 1
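The LF/HF indicator mentioned above can be made concrete: it is the ratio of spectral power in the conventional low-frequency (0.04-0.15 Hz) and high-frequency (0.15-0.4 Hz) HRV bands of the interbeat-interval series. The sketch below uses those standard band limits, but the resampling rate and windowing are our assumptions rather than the paper's settings:

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_s: np.ndarray, fs: float = 4.0) -> float:
    """LF/HF ratio from successive RR (or pulse-to-pulse) intervals in seconds."""
    # Resample the unevenly spaced interval series onto a uniform time grid.
    t = np.cumsum(rr_s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    rr_uniform = np.interp(grid, t, rr_s)
    f, pxx = welch(rr_uniform - rr_uniform.mean(), fs=fs,
                   nperseg=min(256, len(grid)))
    lf_band = (f >= 0.04) & (f < 0.15)   # conventional low-frequency band
    hf_band = (f >= 0.15) & (f < 0.40)   # conventional high-frequency band
    return np.trapz(pxx[lf_band], f[lf_band]) / np.trapz(pxx[hf_band], f[hf_band])

# Example: a synthetic 4-minute recording at roughly 75 bpm.
rr = 0.8 + 0.05 * np.random.randn(300)
print(lf_hf_ratio(rr))
```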
On the Role of Trust in Child-Robot Interaction*
Pub Date: 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956400
Paulina Zguda, B. Sniezynski, B. Indurkhya, Anna Kolota, Mateusz Jarosz, Filip Sondej, Takamune Izui, Maria Dziok, A. Belowska, Wojciech Jędras, G. Venture
Abstract: In child-robot interaction, the element of trust towards the robot is critical. This is particularly important the first time the child meets the robot, as the trust gained during this interaction can play a decisive role in future interactions. We present an in-the-wild study in which Polish kindergartners interacted with a Pepper robot. The videos of this study were analyzed for issues of trust, anthropomorphization, and reaction to malfunction, with the assumption that the last two factors influence the children's trust towards Pepper. Our results reveal children's interest in the robot performing tasks specific to humans, highlight the importance of the conversation scenario and the need for an extended library of answers provided by the robot about its abilities or origin, and show how children tend to provoke the robot.
Citations: 8
Social and Entertainment Gratifications of Videogame Play Comparing Robot, AI, and Human Partners
Pub Date: 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956256
N. Bowman, J. Banks
Abstract: As social robots' and AI agents' roles become more diverse, these machines increasingly function as sociable partners. This trend raises the question of whether the social gaming gratifications known to emerge in human-human co-play may (or may not) also manifest in human-machine co-play. In the present study, we examined social outcomes of playing a videogame with a human partner as compared to an ostensible social robot or AI (i.e., computer-controlled player) partner. Participants (N = 103) were randomly assigned to three experimental conditions in which they played a cooperative video game with either a human, an embodied robot, or a non-embodied AI. Results indicated that few statistically significant or meaningful differences existed between the partner types on perceived closeness with the partner, relatedness need satisfaction, or entertainment outcomes. However, qualitative data suggested that human and robot partners were both seen as more sociable, while AI partners were seen as more functional.
Citations: 5
Establishing Human-Robot Trust through Music-Driven Robotic Emotion Prosody and Gesture
Pub Date: 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956386
Richard J. Savery, R. Rose, Gil Weinberg
Abstract: As human-robot collaboration opportunities continue to expand, trust becomes ever more important for the full engagement and utilization of robots. Affective trust, built on emotional relationships and interpersonal bonds, is particularly critical, as it is more resilient to mistakes and increases the willingness to collaborate. In this paper we present a novel model built on music-driven emotional prosody and gestures that encourages the perception of a robotic identity, designed to avoid the uncanny valley. Symbolic musical phrases were generated and tagged with emotional information by human musicians. These phrases controlled a synthesis engine playing back pre-rendered audio samples generated through interpolation of phonemes and electronic instruments. Gestures were also driven by the symbolic phrases, encoding the emotion of each musical phrase into low degree-of-freedom movements. Through a user study we showed that our system was able to accurately portray a range of emotions to the user. We also showed, with a significant result, that our non-linguistic audio generation achieved an 8% higher mean average-trust score than a state-of-the-art text-to-speech system.
Citations: 27
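The abstract says emotion-tagged symbolic phrases drive both the audio synthesis and the low degree-of-freedom gestures, but does not give the mapping. A hypothetical valence/arousal-style mapping might look like the following; the Phrase fields, parameter names, and ranges are entirely our assumptions:

```python
from dataclasses import dataclass

@dataclass
class Phrase:
    notes: list          # MIDI pitches of the symbolic musical phrase
    valence: float       # tagged emotion, in [-1, 1]
    arousal: float       # tagged emotion, in [0, 1]

def phrase_to_gesture(p: Phrase) -> dict:
    """Map emotion tags to parameters of a low degree-of-freedom gesture."""
    return {
        "speed_hz": 0.5 + 1.5 * p.arousal,               # faster when aroused
        "amplitude_deg": 10 + 25 * (p.valence + 1) / 2,  # wider when positive
        "contour": [n - p.notes[0] for n in p.notes],    # follow melodic shape
    }

print(phrase_to_gesture(Phrase(notes=[60, 64, 67], valence=0.8, arousal=0.6)))
```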
Ontologenius: A long-term semantic memory for robotic agents
Pub Date: 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956305
Guillaume Sarthou, A. Clodic, R. Alami
Abstract: In this paper we present Ontologenius, a semantic knowledge storage and reasoning framework for autonomous robots. More than classic ontology software for querying a knowledge base with first-order internal logic, as is done for the semantic web, Ontologenius offers features adapted to robotic use, including human-robot interaction. We introduce the ability to modify the knowledge base during execution, whether through dialogue or geometric reasoning, and to keep these changes even after the robot is powered off. Since Ontologenius was developed for robots that interact with humans, we have endowed the system with the ability to generalize attributes and properties, as well as the possibility to model and estimate the semantic memory of a human partner and thereby implement theory-of-mind processes. This paper presents the architecture and main features of Ontologenius, together with examples of its use in robotics applications.
Citations: 14
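Ontologenius's actual API is documented with the software itself; purely to illustrate the core idea described above, a run-time-modifiable semantic store whose changes survive a power cycle, here is a minimal toy sketch (not the Ontologenius interface):

```python
import json
import pathlib

class SemanticMemory:
    """Toy persistent triple store; not the actual Ontologenius interface."""
    def __init__(self, path: str = "memory.json"):
        self.path = pathlib.Path(path)
        stored = json.loads(self.path.read_text()) if self.path.exists() else []
        self.facts = set(map(tuple, stored))

    def add(self, subj: str, rel: str, obj: str) -> None:
        """Insert a fact at run time (e.g., from dialogue) and persist it."""
        self.facts.add((subj, rel, obj))
        self.path.write_text(json.dumps(sorted(self.facts)))

    def query(self, subj=None, rel=None, obj=None):
        """Return facts matching the given pattern; None acts as a wildcard."""
        return [f for f in self.facts
                if subj in (None, f[0]) and rel in (None, f[1])
                and obj in (None, f[2])]

mem = SemanticMemory()
mem.add("cup_1", "isA", "Cup")
mem.add("Cup", "isA", "Container")
print(mem.query(rel="isA"))  # both facts survive a restart via memory.json
```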
Surprise! Predicting Infant Visual Attention in a Socially Assistive Robot Contingent Learning Paradigm
Pub Date: 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956385
Lauren Klein, L. Itti, Beth A. Smith, Marcelo R. Rosales, S. Nikolaidis, M. Matarić
Abstract: Early intervention to address developmental disability in infants has the potential to promote improved outcomes in neurodevelopmental structure and function [1]. Researchers are starting to explore Socially Assistive Robotics (SAR) as a tool for delivering early interventions that are synergistic with and enhance human-administered therapy. For SAR to be effective, the robot must be able to consistently attract the attention of the infant in order to engage the infant in a desired activity. This work presents the analysis of eye gaze tracking data from five 6- to 8-month-old infants interacting with a Nao robot that kicked its leg as a contingent reward for infant leg movement. We evaluate a Bayesian model of low-level surprise on video data from the infants' head-mounted cameras and on the timing of robot behaviors as a predictor of infant visual attention. The results demonstrate that over 67% of infant gaze locations were in areas the model evaluated to be more surprising than average. We also present an initial exploration of using surprise to predict the extent to which the robot attracts infant visual attention during specific intervals in the study. This work is the first to validate the surprise model on infants; our results indicate the potential for using surprise to inform robot behaviors that attract infant attention during SAR interactions.
Citations: 4
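Bayesian surprise of this kind is typically computed as the KL divergence between posterior and prior beliefs over a local model of the input. The single-feature Gaussian example below is a generic illustration of that idea, not the paper's exact model; the prior and observation-noise settings are our assumptions:

```python
import numpy as np

def gaussian_surprise(prior_mu: float, prior_var: float,
                      x: float, obs_var: float = 1.0) -> float:
    """KL(posterior || prior) after one Gaussian observation x."""
    # Conjugate update of a Gaussian mean with known observation noise.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mu = post_var * (prior_mu / prior_var + x / obs_var)
    # Closed-form KL divergence between two one-dimensional Gaussians.
    return (0.5 * np.log(prior_var / post_var)
            + (post_var + (post_mu - prior_mu) ** 2) / (2 * prior_var) - 0.5)

# An expected observation is barely surprising; an outlier is very surprising.
print(gaussian_surprise(0.0, 1.0, x=0.1))  # ~0.10
print(gaussian_surprise(0.0, 1.0, x=5.0))  # ~3.22
```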
Verbal Explanations for Deep Reinforcement Learning Neural Networks with Attention on Extracted Features
Pub Date: 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956301
Xinzhi Wang, Shengcheng Yuan, Hui Zhang, M. Lewis, K. Sycara
Abstract: In recent years, there has been increasing interest in transparency in deep neural networks, and most of the work on transparency has been done for image classification. In this paper, we report on work on transparency in Deep Reinforcement Learning Networks (DRLNs), which have been extremely successful in learning action control in Atari games. We focus on generating verbal (natural language) descriptions and explanations of deep reinforcement learning policies. Successful generation of verbal explanations would allow people (e.g., users, debuggers) to better understand the inner workings of DRLNs, which could ultimately increase trust in these systems. We present a generation model that consists of three parts: an encoder for feature extraction, an attention structure for selecting features from the output of the encoder, and a decoder for generating the explanation in natural language. Four variants of the attention structure (full attention, global attention, adaptive attention, and object attention) are designed and compared. The adaptive attention structure performs best among all the variants, even though the object attention structure is given additional information on object locations. Additionally, our experimental results showed that the proposed encoder outperforms two baseline encoders (ResNet and VGG) in its ability to distinguish game state images.
Citations: 14
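Of the four variants compared, global (soft) attention is the most standard: the decoder state scores each encoder region and the context vector is the softmax-weighted sum. The sketch below shows that variant with illustrative dimensions; the paper's adaptive and object-attention variants add machinery not shown here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttention(nn.Module):
    """Soft attention over encoder region features at each decoding step."""
    def __init__(self, feat_dim: int, hid_dim: int):
        super().__init__()
        self.proj = nn.Linear(hid_dim, feat_dim, bias=False)

    def forward(self, features: torch.Tensor, hidden: torch.Tensor):
        # features: (B, N, feat_dim) encoder outputs over N image regions
        # hidden:   (B, hid_dim) current decoder state
        scores = torch.bmm(features, self.proj(hidden).unsqueeze(2))    # (B, N, 1)
        weights = F.softmax(scores.squeeze(2), dim=1)                   # (B, N)
        context = torch.bmm(weights.unsqueeze(1), features).squeeze(1)  # (B, feat_dim)
        return context, weights

# One decoding step attending over 49 regions of 512-d features.
attn = GlobalAttention(feat_dim=512, hid_dim=256)
context, weights = attn(torch.randn(2, 49, 512), torch.randn(2, 256))
print(context.shape, weights.shape)  # torch.Size([2, 512]) torch.Size([2, 49])
```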
Designing a Socially Assistive Robot for Long-Term In-Home Use for Children with Autism Spectrum Disorders
Pub Date: 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956468
Roxanna Pakkar, Caitlyn E. Clabaugh, Rhianna Lee, Eric Deng, M. Matarić
Abstract: Socially assistive robotics (SAR) research has shown great potential for supplementing and augmenting therapy for children with autism spectrum disorders (ASD). However, the vast majority of SAR research has been limited to short-term studies in highly controlled environments. The design and development of a SAR system capable of interacting autonomously in situ for long periods of time involves many engineering and computing challenges. This paper presents the design of a fully autonomous SAR system for long-term, in-home use with children with ASD. We address design decisions based on robustness and adaptability needs, discuss the development of the robot's character and interactions, and provide insights from the month-long, in-home data collections with children with ASD. This work contributes to a larger research program that is exploring how SAR can be used for enhancing the social and cognitive development of children with ASD.
Citations: 15