2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Latest Publications

SMAK-Net: Self-Supervised Multi-level Spatial Attention Network for Knowledge Representation towards Imitation Learning
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956303
Kartik Ramachandruni, M. Vankadari, A. Majumder, S. Dutta, Swagat Kumar
{"title":"SMAK-Net: Self-Supervised Multi-level Spatial Attention Network for Knowledge Representation towards Imitation Learning","authors":"Kartik Ramachandruni, M. Vankadari, A. Majumder, S. Dutta, Swagat Kumar","doi":"10.1109/RO-MAN46459.2019.8956303","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956303","url":null,"abstract":"In this paper, we propose an end-to-end self-supervised feature representation network for imitation learning. The proposed network incorporates a novel multi-level spatial attention module to amplify the relevant and suppress the irrelevant information while learning task-specific feature embeddings. The multi-level attention module takes multiple intermediate feature maps of the input image at different stages of the CNN pipeline and results a 2D matrix of compatibility scores for each feature map with respect to the given task. The weighted combination of the feature vectors with the scores estimated from attention modules leads to a more task specific feature representation of the input images. We thus name the proposed network as SMAK-Net, abbreviated from Self-supervised Multi-level spatial Attention Knowledge representation Network. We have trained this network using a metric learning loss which aims to decrease the distance between the feature representations of simultaneous frames from multiple view points and increases the distance between the neighboring frames of the same view point. The experiments are performed on the publicly available Multi-View pouring dataset [1]. The outputs of the attention module are demonstrated to highlight the task specific objects while suppressing the rest of the background in the input image. The proposed method is validated by qualitative and quantitative comparisons with the state-of-the art technique TCN [1] along with intensive ablation studies. This method is shown to significantly outperform TCN by 6.5% in the temporal alignment error metric while reducing the total number of training steps by 155K.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123803982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluation of Robots that Signals a Pedestrian Using Face Orientation Based on Moving Trajectory Analysis
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956337
Shohei Yamashita, Tetsushi Ikeda, K. Shinozawa, S. Iwaki
{"title":"Evaluation of Robots that Signals a Pedestrian Using Face Orientation Based on Moving Trajectory Analysis","authors":"Shohei Yamashita, Tetsushi Ikeda, K. Shinozawa, S. Iwaki","doi":"10.1109/RO-MAN46459.2019.8956337","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956337","url":null,"abstract":"Robots that share daily environments with us are required to behave in a socially acceptable manner. There are two important approaches to this purpose: 1) robots model human behavior, understand it properly and behave appropriately 2) robots present their understanding and future behavior to surrounding people. In this paper, considering people present various cues to other people around them using gaze and face direction, we focus on the latter approach and propose a robot that presents cues to an opposing pedestrian by turning face. Another problem with the conventional research is that the evaluation of the pedestrian’s ease of passing with the robot depends only on the subjective impression, so it was difficult to design the robot’s behavior based on the temporal change of the ease of walking. In this paper, we evaluate the fluctuation of the pedestrian’s moving velocity vector as an index of the ease of walking and analyze the temporal change. We have conducted preliminary experiments in which 12 subjects passed by the robot and compared the three types of presentation methods using the face. By presenting information using a face, we confirmed that the subjects tended to have better impressions of walking based on subjective evaluation and that the walking was relatively easy to walk for several seconds while approaching the robot based on analyzing the fluctuation of the moving speed vector.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130498995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Vision-based Fractional Order Sliding Mode Control for Autonomous Vehicle Tracking by a Quadrotor UAV
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956317
Heera Lal Maurya, Archit Krishna Kamath, N. Verma, L. Behera
{"title":"Vision-based Fractional Order Sliding Mode Control for Autonomous Vehicle Tracking by a Quadrotor UAV","authors":"Heera Lal Maurya, Archit Krishna Kamath, N. Verma, L. Behera","doi":"10.1109/RO-MAN46459.2019.8956317","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956317","url":null,"abstract":"This paper proposes a vision-based sliding mode control technique for autonomous tracking of a moving vehicle by a quadrotor. The proposed vision algorithm estimates the quadrotor’s position relative to moving vehicle using an on-board monocular camera. The relative position is provided as an input to a Fractional Order Sliding mode Controller (FOSMC) which ensures the convergence of the relative position between the moving vehicle and the quadrotor thereby enabling it to track the vehicle effectively. In addition, the proposed controller guarantees robustness towards bounded external disturbances and modelling uncertainties. The proposed vision-based control scheme is implemented using numerical simulations and validated in real-time on the DJI Matrice 100. Theses validations help in gaining into the maximum allowable speed of the moving target for the quadrotor to successfully track the object. This plays a vital role in surveillance operations and intruder chase.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116588454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Identity, Gender, and Age Recognition Convergence System for Robot Environments
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956313
Jaeyoon Jang, Hosub Yoon, Jaehong Kim
{"title":"Identity, Gender, and Age Recognition Convergence System for Robot Environments","authors":"Jaeyoon Jang, Hosub Yoon, Jaehong Kim","doi":"10.1109/RO-MAN46459.2019.8956313","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956313","url":null,"abstract":"This paper proposes a new dentity, gender, and age recognition convergence system for robot environments. In a robot environment, it is difficult to apply deep learning based methods because of various limitations. To overcome the limitations, we propose a shallow deep-learning fusion model that can calculate identity, gender, and age at once, and a technique for improving recognition performance. Using convergence network, we can obtain three pieces of information from a single input through a single operation. In addition, we propose a 2D / 3D augmentation method to generate virtual additional datasets for learning data. The proposed method has a smaller model size and faster computation time than existing methods and uses a very small number of parameters. Through the proposed method, we finally achieved 99.35%, 90.0%, and 60.9% / 94.5% of performance in identity recognition, gender recognition, and age recognition. In all experiments, we did not exceed the state-of-the-art results, but compared to other studies, we obtained performance similar to the previous study using only less than 10% parameters. In some experiments, we also achieved state-of-the-art result.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122319767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Towards a Driver Monitoring System for Estimating Driver Situational Awareness
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956378
Ala'aldin Hijaz, W. Louie, Iyad Mansour
{"title":"Towards a Driver Monitoring System for Estimating Driver Situational Awareness","authors":"Ala'aldin Hijaz, W. Louie, Iyad Mansour","doi":"10.1109/RO-MAN46459.2019.8956378","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956378","url":null,"abstract":"Autonomous vehicle technology is rapidly developing but the current state-of-the-art still has limitations and requires frequent human intervention. However, handovers from an autonomous vehicle to a human driver are challenging because a human operator may be unaware of the vehicle surroundings during a handover which can lead to dangerous driving outcomes. There is presently an urgent need to develop advanced driver-assistance systems capable of monitoring driver situational awareness within an autonomous vehicle and intelligently handing-over control to a human driver in emergency situations. Towards this goal, in this paper we present the development and evaluation of a vision-based system that identifies visual cues of a driver’s situational awareness including their: head pose, eye pupil position, average head movement rate and visual focus of attention.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123107674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Analysis of factors influencing the impression of speaker individuality in android robots*
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956395
Ryusuke Mikata, C. Ishi, T. Minato, H. Ishiguro
{"title":"Analysis of factors influencing the impression of speaker individuality in android robots*","authors":"Ryusuke Mikata, C. Ishi, T. Minato, H. Ishiguro","doi":"10.1109/RO-MAN46459.2019.8956395","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956395","url":null,"abstract":"Humans use not only verbal information but also non-verbal information in daily communication. Among the non-verbal information, we have proposed methods for automatically generating hand gestures in android robots, with the purpose of generating natural human-like motion. In this study, we investigate the effects of hand gesture models trained/designed for different speakers on the impression of the individuality through android robots. We consider that it is possible to express individuality in the robot, by creating hand motion that are unique to that individual. Three factors were taken into account: the appearance of the robot, the voice, and the hand motion. Subjective evaluation experiments were conducted by comparing motions generated in two android robots, two speaker voices, and two motion types, to evaluate how each modality affects the impression of the speaker individuality. Evaluation results indicated that all these three factors affect the impression of speaker individuality, while different trends were found depending on whether or not the android is copy of an existent person.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"14 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134410170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Augmented Robotics for Learners: A Case Study on Optics
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956363
W. Johal, Olguta Robu, Amaury Dame, Stéphane Magnenat, F. Mondada
{"title":"Augmented Robotics for Learners: A Case Study on Optics","authors":"W. Johal, Olguta Robu, Amaury Dame, Stéphane Magnenat, F. Mondada","doi":"10.1109/RO-MAN46459.2019.8956363","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956363","url":null,"abstract":"In recent years, robots have been surfing on a trendy wave as standard devices for teaching programming. The tangibility of robotics platforms allows for collaborative and interactive learning. Moreover, with these robot platforms, we also observe the occurrence of a shift of visual attention from the screen (on which the programming is done) to the physical environments (i.e. the robot). In this paper, we describe an experiment aiming at studying the effect of using augmented reality (AR) representations of sensor data in a robotic learning activity. We designed an AR system able to display in real-time the data of the Infra-Red sensors of the Thymio robot. In order to evaluate the impact of AR on the learner’s understanding on how these sensors worked, we designed a pedagogical lesson that can run with or without the AR rendering. Two different age groups of students participated in this between-subject experiment, counting a total of 74 children. The tests were the same for the experimental (AR) and control group (no AR). The exercises differed only through the use of AR. Our results show that AR was worth being used for younger groups dealing with difficult concepts. We discuss our findings and propose future works to establish guidelines for designing AR robotic learning sessions.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133991118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Desk Organization: Effect of Multimodal Inputs on Spatial Relational Learning
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956243
Ryan Rowe, Shivam Singhal, Daqing Yi, T. Bhattacharjee, S. Srinivasa
{"title":"Desk Organization: Effect of Multimodal Inputs on Spatial Relational Learning","authors":"Ryan Rowe, Shivam Singhal, Daqing Yi, T. Bhattacharjee, S. Srinivasa","doi":"10.1109/RO-MAN46459.2019.8956243","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956243","url":null,"abstract":"For robots to operate in a three dimensional world and interact with humans, learning spatial relationships among objects in the surrounding is necessary. Reasoning about the state of the world requires inputs from many different sensory modalities including vision (V) and haptics (H). We examine the problem of desk organization: learning how humans spatially position different objects on a planar surface according to organizational “preference”. We model this problem by examining how humans position objects given multiple features received from vision and haptic modalities. However, organizational habits vary greatly between people both in structure and adherence. To deal with user organizational preferences, we add an additional modality, “utility” (U), which informs on a particular human’s perceived usefulness of a given object. Models were trained as generalized (over many different people) or tailored (per person). We use two types of models: random forests, which focus on precise multi-task classification, and Markov logic networks, which provide an easily interpretable insight into organizational habits. The models were applied to both synthetic data, which proved to be learnable when using fixed organizational constraints, and human-study data, on which the random forest achieved over 90% accuracy. Over all combinations of {H, U, V} modalities, UV and HUV were the most informative for organization. In a follow-up study, we gauged participants preference of desk organizations by a generalized random forest organization vs. by a random model. On average, participants rated the random forest models as 4.15 on a 5-point Likert scale compared to 1.84 for the random model.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121925080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Autonomous Generation of Robust and Focused Explanations for Robot Policies
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956323
Oliver Struckmeier, M. Racca, V. Kyrki
{"title":"Autonomous Generation of Robust and Focused Explanations for Robot Policies","authors":"Oliver Struckmeier, M. Racca, V. Kyrki","doi":"10.1109/RO-MAN46459.2019.8956323","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956323","url":null,"abstract":"Transparency of robot behaviors increases efficiency and quality of interactions with humans. To increase transparency of robot policies, we propose a method for generating robust and focused explanations that express why a robot chose a particular action. The proposed method examines the policy based on the state space in which an action was chosen and describes it in natural language. The method can generate focused explanations by leaving out irrelevant state dimensions, and avoid explanations that are sensitive to small perturbations or have ambiguous natural language concepts. Furthermore, the method is agnostic to the policy representation and only requires the policy to be evaluated at different samples of the state space. We conducted a user study with 18 participants to investigate the usability of the proposed method compared to a comprehensive method that generates explanations using all dimensions. We observed how focused explanations helped the subjects more reliably detect the irrelevant dimensions of the explained system and how preferences regarding explanation styles and their expected characteristics greatly differ among the participants.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125826158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Effect of Human Hand Dynamics on Haptic Rendering of Stiff Springs using Virtual Mass Feedback
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) Pub Date : 2019-10-01 DOI: 10.1109/RO-MAN46459.2019.8956422
Indrajit Desai, Abhishek Gupta, D. Chakraborty
{"title":"Effect of Human Hand Dynamics on Haptic Rendering of Stiff Springs using Virtual Mass Feedback","authors":"Indrajit Desai, Abhishek Gupta, D. Chakraborty","doi":"10.1109/RO-MAN46459.2019.8956422","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956422","url":null,"abstract":"Hard surfaces are typically simulated in a haptic interface as stiff springs. Stable interaction with these surfaces using force feedback is challenging due to the discrete nature of the controller. Previous research has shown that adding a virtual damping or virtual mass to the rendered surface helps to increase the stiffness of the surface for stable interaction. In this paper, we analyze the effect of adding virtual mass on the range of stiffness that can be stably rendered. The analysis is performed in the discrete time domain. Specifically, we study the coupled (with human hand dynamics) stability of the haptic interface. Stability, when the human interacts with the robot, is investigated by considering different human hand models. Our analysis shows that, when the human operator is coupled to an uncoupled stable system, an increase in the mass of a human hand decreases maximum renderable stiffness. Moreover, the increase in human hand damping increases the stably renderable stiffness.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124673636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1