Latest Articles in ACM Transactions on Human-Robot Interaction

Field Trial of a Queue-Managing Security Guard Robot
ACM Transactions on Human-Robot Interaction Pub Date : 2024-07-25 DOI: 10.1145/3680292
Sachi Edirisinghe, S. Satake, Yuyi Liu, Takayuki Kanda
{"title":"Field Trial of a Queue-Managing Security Guard Robot","authors":"Sachi Edirisinghe, S. Satake, Yuyi Liu, Takayuki Kanda","doi":"10.1145/3680292","DOIUrl":"https://doi.org/10.1145/3680292","url":null,"abstract":"We developed a security guard robot that is specifically designed to manage queues of people and conducted a field trial at an actual public event to assess its effectiveness. However, the acceptance of robot instructions or admonishments poses challenges in real-world applications. Our primary objective was to achieve an effective and socially acceptable queue-management solution. To accomplish this, we took inspiration from human security guards whose role has already been well-received in society. Our robot, whose design embodied the image of a professional security guard, focused on three key aspects: duties, professional behavior, and appearance. To ensure its competence, we interviewed professional security guards to deepen our understanding of the responsibilities associated with queue management. Based on their insights, we incorporated features of ushering, admonishing, announcing, and question answering into the robot’s functionality. We also prioritized the modeling of professional ushering behavior. During a 10-day field trial at a children’s amusement event, we interviewed both the visitors who interacted with the robot and the event staff. The results revealed that visitors generally complied with its ushering and admonishments, indicating a positive reception. Both visitors and event staff expressed an overall favorable impression of the robot and its queue-management services. These findings suggest that our proposed security guard robot shows great promise as a solution for effective crowd handling in public spaces.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"92 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141802654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Introduction to the Special Issue on Artificial Intelligence for Human-Robot Interaction (AI-HRI)
ACM Transactions on Human-Robot Interaction Pub Date : 2024-07-20 DOI: 10.1145/3672535
Jivko Sinapov, Zhao Han, Shelly Bagchi, Muneeb Ahmad, Matteo Leonetti, Ross Mead, Reuth Mirsky, Emmanuel Senft
{"title":"Introduction to the Special Issue on Artificial Intelligence for Human-Robot Interaction (AI-HRI)","authors":"Jivko Sinapov, Zhao Han, Shelly Bagchi, Muneeb Ahmad, Matteo Leonetti, Ross Mead, Reuth Mirsky, Emmanuel Senft","doi":"10.1145/3672535","DOIUrl":"https://doi.org/10.1145/3672535","url":null,"abstract":"","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"73 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141818832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Understanding the Interaction between Delivery Robots and Other Road and Sidewalk Users: A Study of User-generated Online Videos
ACM Transactions on Human-Robot Interaction Pub Date : 2024-07-17 DOI: 10.1145/3677615
Xinyan Yu, Marius Hoggenmüller, Tram Thi Minh Tran, Yiyuan Wang, M. Tomitsch
{"title":"Understanding the Interaction between Delivery Robots and Other Road and Sidewalk Users: A Study of User-generated Online Videos","authors":"Xinyan Yu, Marius Hoggenmüller, Tram Thi Minh Tran, Yiyuan Wang, M. Tomitsch","doi":"10.1145/3677615","DOIUrl":"https://doi.org/10.1145/3677615","url":null,"abstract":"The deployment of autonomous delivery robots in urban environments presents unique challenges in navigating complex traffic conditions and interacting with diverse road and sidewalk users. Effective communication between robots and road and sidewalk users is crucial to address these challenges. This study investigates real-world encounter scenarios where delivery robots and road and sidewalk users interact, seeking to understand the essential role of communication in ensuring seamless encounters. Following an online ethnography approach, we collected 117 user-generated videos from TikTok and their associated 2067 comments. Our systematic analysis revealed several design opportunities to augment communication between delivery robots and road and sidewalk users, which include facilitating multi-party path negotiation, managing unexpected robot behaviour via transparency information, and expressing robot limitations to request human assistance. Moreover, the triangulation of video and comments analysis provides a set of design considerations to realise these opportunities. The findings contribute to understanding the operational context of delivery robots and offer insights for designing interactions with road and sidewalk users, facilitating their integration into urban spaces.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":" 25","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141830218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enacting Human-Robot Encounters with Theater Professionals on a Mixed Reality Stage
ACM Transactions on Human-Robot Interaction Pub Date : 2024-07-17 DOI: 10.1145/3678186
Marco C. Rozendaal, J. Vroon, M. Bleeker
{"title":"Enacting Human-Robot Encounters with Theater Professionals on a Mixed Reality Stage","authors":"Marco C. Rozendaal, J. Vroon, M. Bleeker","doi":"10.1145/3678186","DOIUrl":"https://doi.org/10.1145/3678186","url":null,"abstract":"In this paper, we report on methodological insights gained from a workshop in which we collaborated with theater professionals to enact situated encounters between humans and robots on a mixed reality stage combining VR with real-life interaction. We deployed the skills of theater professionals to investigate the behaviors of humans encountering robots to speculate about the kind of interactions that may result from encountering robots in supermarket settings. The mixed reality stage made it possible to adapt the robot’s morphology quickly, as well as its movement and perceptual capacities, to investigate how this together co-determines possibilities for interaction. This setup allowed us to follow the interactions simultaneously from different perspectives, including the robot’s, which provided the basis for a collective phenomenological analysis of the interactions. Our work contributes to approaches to HRI that do not work towards identifying communicative behaviors that can be universally applied but instead work towards insights that can be used to develop HRI that is emergent, and situation- and robot-specific. Furthermore, it supports a more-than-human-design approach that takes the fundamental differences between humans and robots as a starting point for the creative development of new kinds of communication and interaction.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":" 24","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141828773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Longitudinal Study of Mobile Telepresence Robots in Older Adults’ Homes: Uses, Social Connection, and Comfort with Technology
ACM Transactions on Human-Robot Interaction Pub Date : 2024-07-11 DOI: 10.1145/3674956
Jennifer Rheman, Rune P. Baggett, Martin Simecek, Marlena R. Fraune, Katherine M. Tsui
{"title":"Longitudinal Study of Mobile Telepresence Robots in Older Adults’ Homes: Uses, Social Connection, and Comfort with Technology","authors":"Jennifer Rheman, Rune P. Baggett, Martin Simecek, Marlena R. Fraune, Katherine M. Tsui","doi":"10.1145/3674956","DOIUrl":"https://doi.org/10.1145/3674956","url":null,"abstract":"\u0000 Mobile telepresence robots can help reduce loneliness by facilitating people to visit each other and have more social presence than visiting via video or audio calls. However, using new technology can be challenging for many older adults. In this paper, we examine how older adults use and want to use mobile telepresence robots, how these robots affect their social connection, and how they can be improved for older adults’ use. We placed a mobile telepresence robot in the home of older adult primary participants (\u0000 N\u0000 = 7; age 60+) for 7 months and facilitated monthly activities between them and a secondary participant (\u0000 N\u0000 = 8; age 18+) of their choice. Participants used the robots as they liked between monthly activities. We collected diary entries and monthly interviews from primary participants and a final interview from secondary participants. Results indicate that older adults found many creative uses for the robots, including conversations, board games, and hide ‘n’ seek. Several participants felt more socially connected with others and a few had improved their comfort with technology because of their use of the robot. They also suggested design recommendations and updates for the robots related to size, mobility, and more, which can help practitioners improve robots for older adults’ use.\u0000","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"138 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141834929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation
ACM Transactions on Human-Robot Interaction Pub Date : 2024-04-24 DOI: 10.1145/3660348
Ruixing Jia, Lei Yang, Ying Cao, Calvin Kalun Or, Wenping Wang, Jia Pan
{"title":"Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation","authors":"Ruixing Jia, Lei Yang, Ying Cao, Calvin Kalun Or, Wenping Wang, Jia Pan","doi":"10.1145/3660348","DOIUrl":"https://doi.org/10.1145/3660348","url":null,"abstract":"Teleoperation systems find many applications from earlier search-and-rescue to more recent daily tasks. It is widely acknowledged that using external sensors can decouple the view of the remote scene from the motion of the robot arm during manipulation, facilitating the control task. However, this design requires the coordination of multiple operators or may exhaust a single operator as s/he needs to control both the manipulator arm and the external sensors. To address this challenge, our work introduces a viewpoint prediction model, the first data-driven approach that autonomously adjusts the viewpoint of a dynamic camera to assist in telemanipulation tasks. This model is parameterized by a deep neural network and trained on a set of human demonstrations. We propose a contrastive learning scheme that leverages viewpoints in a camera trajectory as contrastive data for network training. We demonstrated the effectiveness of the proposed viewpoint prediction model by integrating it into a real-world robotic system for telemanipulation. User studies reveal that our model outperforms several camera control methods in terms of control experience and reduces the perceived task load compared to manual camera control. As an assistive module of a telemanipulation system, our method significantly reduces task completion time for users who choose to adopt its recommendation.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"36 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140662965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
What is Proactive Human-Robot Interaction? - A review of a progressive field and its definitions
ACM Transactions on Human-Robot Interaction Pub Date : 2024-04-23 DOI: 10.1145/3650117
Marike K. van den Broek, T. Moeslund
{"title":"What is Proactive Human-Robot Interaction? - A review of a progressive field and its definitions","authors":"Marike K. van den Broek, T. Moeslund","doi":"10.1145/3650117","DOIUrl":"https://doi.org/10.1145/3650117","url":null,"abstract":"During the last 15 years, an increasing amount of works have investigated proactive robotic behavior in relation to Human-Robot Interaction (HRI). The works engage with a variety of research topics and technical challenges. In this paper a review of the related literature identified through a structured block search is performed. Variations in the corpus are investigated, and a definition of Proactive HRI is provided. Furthermore, a taxonomy is proposed based on the corpus and exemplified through specific works. Finally, a selection of noteworthy observations is discussed.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"90 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140670483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance-Aware Trust Modeling Within a Human-Multi-Robot Collaboration Setting
ACM Transactions on Human-Robot Interaction Pub Date : 2024-04-22 DOI: 10.1145/3660648
Md Khurram Monir Rabby, M. Khan, Steven Xiaochun Jiang, A. Karimoddini
{"title":"Performance-Aware Trust Modeling Within a Human-Multi-Robot Collaboration Setting","authors":"Md Khurram Monir Rabby, M. Khan, Steven Xiaochun Jiang, A. Karimoddini","doi":"10.1145/3660648","DOIUrl":"https://doi.org/10.1145/3660648","url":null,"abstract":"In this study, a novel time-driven mathematical model for trust is developed considering human-multi-robot performance for a Human-robot Collaboration (HRC) framework. For this purpose, a model is developed to quantify human performance considering the effects of physical and cognitive constraints and factors such as muscle fatigue and recovery, muscle isometric force, human (cognitive and physical) workload and workloads due to the robots’ mistakes, and task complexity. The performance of multi-robot in the HRC setting is modeled based upon the rate of task assignment and completion as well as the mistake probabilities of the individual robots. The human trust in HRC setting with single and multiple robots are modeled over different operation regions, namely unpredictable region, predictable region, dependable region, and faithful region. The relative performance difference between the human operator and the robot is used to analyze the effect on the human operator’s trust in robots’ operation. The developed model is simulated for a manufacturing workspace scenario considering different task complexities and involving multiple robots to complete shared tasks. The simulation results indicate that for a constant multi-robot performance in operation, the human operator’s trust in robots’ operation improves whenever the comparative performance of the robots improves with respect to the human operator performance. The impact of robot hypothetical learning capabilities on human trust in the same HRC setting is also analyzed. The results confirm that a hypothetical learning capability allows robots to reduce human workloads, which improves human performance. The simulation result analysis confirms that the human operator’s trust in the multi-robot operation increases faster with the improvement of the multi-robot performance when the robots have a hypothetical learning capability. An empirical study was conducted involving a human operator and two collaborator robots with two different performance levels in a software-based HRC setting. The experimental results closely followed the pattern of the developed mathematical models when capturing human trust and performance in terms of human-multi-robot collaboration.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"19 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140673394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Dimensional Evaluation of an Augmented Reality Head-Mounted Display User Interface for Controlling Legged Manipulators
ACM Transactions on Human-Robot Interaction Pub Date : 2024-04-22 DOI: 10.1145/3660649
Rodrigo Chacón Quesada, Y. Demiris
{"title":"Multi-Dimensional Evaluation of an Augmented Reality Head-Mounted Display User Interface for Controlling Legged Manipulators","authors":"Rodrigo Chacón Quesada, Y. Demiris","doi":"10.1145/3660649","DOIUrl":"https://doi.org/10.1145/3660649","url":null,"abstract":"\u0000 Controlling assistive robots can be challenging for some users, especially those lacking relevant experience. Augmented Reality (AR) User Interfaces (UIs) have the potential to facilitate this task. Although extensive research regarding legged manipulators exists, comparatively little is on their UIs. Most existing UIs leverage traditional control interfaces such as joysticks, Hand-held (HH) controllers, and 2D UIs. These interfaces not only risk being unintuitive, thus discouraging interaction with the robot partner, but also draw the operator’s focus away from the task and towards the UI. This shift in attention raises additional safety concerns, particularly in potentially hazardous environments where legged manipulators are frequently deployed. Moreover, traditional interfaces limit the operators’ availability to use their hands for other tasks. Towards overcoming these limitations, in this article, we provide a user study comparing an AR Head Mounted Display (HMD) UI we developed for controlling a legged manipulator against off-the-shelf control methods for such robots. This user study involved 27 participants and 135 trials, from which we gathered over 405 completed questionnaires. These trials involved multiple navigation and manipulation tasks with varying difficulty levels using a Boston Dynamics (BD) Spot\u0000 ®\u0000 , a 7 DoF Kinova\u0000 ®\u0000 robot arm, and a Robotiq\u0000 ®\u0000 2F-85 gripper that we integrated into a legged manipulator. We made the comparison between UIs across multiple dimensions relevant to a successful human-robot interaction. These dimensions include cognitive workload, technology acceptance, fluency, system usability, immersion and trust. Our study employed a factorial experimental design with participants undergoing five different conditions, generating longitudinal data. Due to potential unknown distributions and outliers in such data, using parametric methods for its analysis is questionable, and while non-parametric alternatives exist, they may lead to reduced statistical power. Therefore, to analyse the data that resulted from our experiment, we chose Bayesian data analysis as an effective alternative to address these limitations. Our results show that AR UIs can outpace HH-based control methods and reduce the cognitive requirements when designers include hands-free interactions and cognitive offloading principles into the UI. Furthermore, the use of the AR UI together with our cognitive offloading feature resulted in higher usability scores and significantly higher fluency and Technology Acceptance Model (TAM) scores. Regarding immersion, our results revealed that the response values for the Augmented Reality Immersion (ARI) questionnaire associated with the AR UI are significantly higher than those associated with the HH UI, regardless of the main interaction method with the former, i.e., hand gestures or cognitive offloading. 
Derived from the participants’ qualitative answers, we believe this is due to a combination of facto","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"37 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140674546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
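The abstract motivates Bayesian analysis for longitudinal questionnaire data with unknown distributions and possible outliers. A minimal sketch of one such analysis, assuming hypothetical Likert-scale usability scores for the two UI conditions and a robust Student-t model (in the spirit of Kruschke's BEST estimation), is shown below using PyMC; the data, priors, and model structure are illustrative assumptions, not the authors' analysis.

```python
import numpy as np
import pymc as pm

# Hypothetical 1-7 Likert usability scores for the two UI conditions.
ar_scores = np.array([6, 7, 5, 6, 6, 7, 5, 6, 7, 6], dtype=float)
hh_scores = np.array([4, 5, 3, 5, 4, 4, 5, 3, 4, 5], dtype=float)

with pm.Model() as best_model:
    # Group means, shared scale; a Student-t likelihood is robust to outliers.
    mu_ar = pm.Normal("mu_ar", mu=ar_scores.mean(), sigma=2.0)
    mu_hh = pm.Normal("mu_hh", mu=hh_scores.mean(), sigma=2.0)
    sigma = pm.HalfNormal("sigma", sigma=2.0)
    nu = pm.Exponential("nu_minus_one", 1 / 30.0) + 1  # degrees of freedom

    pm.StudentT("obs_ar", nu=nu, mu=mu_ar, sigma=sigma, observed=ar_scores)
    pm.StudentT("obs_hh", nu=nu, mu=mu_hh, sigma=sigma, observed=hh_scores)

    diff = pm.Deterministic("diff", mu_ar - mu_hh)
    trace = pm.sample(2000, tune=1000, chains=4, random_seed=0)

# Posterior probability that the AR UI scores higher than the HH UI.
print((trace.posterior["diff"].values > 0).mean())
```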
Citations: 0
Designing Socially Assistive Robots
ACM Transactions on Human-Robot Interaction Pub Date : 2024-04-11 DOI: 10.1145/3657646
Ela Liberman-Pincu, Oliver Korn, Jonas Grund, Elmer D. Van Grondelle, T. Oron-Gilad
{"title":"Designing Socially Assistive Robots","authors":"Ela Liberman-Pincu, Oliver Korn, Jonas Grund, Elmer D. Van Grondelle, T. Oron-Gilad","doi":"10.1145/3657646","DOIUrl":"https://doi.org/10.1145/3657646","url":null,"abstract":"Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions of SARs' roles and appearance in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. The key insight is that although Israeli and German designers share similar perceptions of visual qualities for most of the robotics roles, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":"18 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140714718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0