Latest Articles in Autonomous Robots

Inverse reinforcement learning for autonomous navigation via differentiable semantic mapping and planning
IF 3.5 | CAS Tier 3 | Computer Science
Autonomous Robots Pub Date : 2023-07-06 DOI: 10.1007/s10514-023-10118-4
Tianyu Wang, Vikas Dhiman, Nikolay Atanasov
Abstract: This paper focuses on inverse reinforcement learning for autonomous navigation using distance and semantic category observations. The objective is to infer a cost function that explains demonstrated behavior while relying only on the expert's observations and state-control trajectory. We develop a map encoder, which infers semantic category probabilities from the observation sequence, and a cost encoder, defined as a deep neural network over the semantic features. Since the expert cost is not directly observable, the model parameters can only be optimized by differentiating the error between demonstrated controls and a control policy computed from the cost estimate. We propose a new model of expert behavior that enables error minimization using a closed-form subgradient computed only over a subset of promising states via a motion planning algorithm. Our approach allows generalizing the learned behavior to new environments with new spatial configurations of the semantic categories. We analyze the different components of our model in a minigrid environment. We also demonstrate that our approach learns to follow traffic rules in the CARLA autonomous driving simulator by relying on semantic observations of buildings, sidewalks, and road lanes.
Autonomous Robots 47(6): 809–830 | Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10118-4.pdf
Citations: 3
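The pipeline in the abstract — semantic class probabilities mapped through a learned cost function, then a motion planner whose output is compared against expert controls — can be illustrated with a minimal planning step. The sketch below is an illustrative reconstruction, not the authors' code; the grid, class costs, and Dijkstra planner are invented stand-ins for their differentiable planner:

```python
import heapq
import numpy as np

def cost_to_go(cost_map, goal):
    """Dijkstra cost-to-go over a 4-connected grid.

    cost_map[i, j] is the (learned) cost of entering cell (i, j);
    the goal cell has cost-to-go 0. In the paper's setting, cost_map
    itself would come from a cost encoder over semantic features,
    e.g. (hypothetically) cost_map = semantic_probs @ class_costs.
    """
    H, W = cost_map.shape
    V = np.full((H, W), np.inf)
    V[goal] = 0.0
    pq = [(0.0, goal)]
    while pq:
        v, (i, j) = heapq.heappop(pq)
        if v > V[i, j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W:
                nv = v + cost_map[ni, nj]
                if nv < V[ni, nj]:
                    V[ni, nj] = nv
                    heapq.heappush(pq, (nv, (ni, nj)))
    return V
```

The paper's contribution is making this planning step differentiable, so a subgradient of the control error can reach the cost encoder's parameters; the plain Dijkstra pass here only shows the forward computation.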
AGRI-SLAM: a real-time stereo visual SLAM for agricultural environment
IF 3.5 | CAS Tier 3 | Computer Science
Autonomous Robots Pub Date : 2023-07-04 DOI: 10.1007/s10514-023-10110-y
Rafiqul Islam, Habibullah Habibullah, Tagor Hossain
Abstract: In this research, we propose a stereo visual simultaneous localisation and mapping (SLAM) system that works efficiently in agricultural scenarios without compromising performance and accuracy in contrast to other state-of-the-art methods. The proposed system is equipped with an image enhancement technique for ORB point and LSD line feature recovery, which enables it to work in broader scenarios and extract extensive spatial information from low-light and hazy agricultural environments. Firstly, the method was tested on standard datasets, i.e., KITTI and EuRoC, to validate localisation accuracy by comparing it with other state-of-the-art methods, namely VINS-SLAM, PL-SLAM, and ORB-SLAM2. The experimental results show that the proposed method obtains superior localisation and mapping accuracy compared with the other visual SLAM methods. Secondly, the proposed method was tested on the ROSARIO dataset, our low-light agricultural dataset, and the O-HAZE dataset to validate its performance in agricultural environments. While other methods fail to operate in such complex agricultural environments, our method operates successfully with high localisation and mapping accuracy.
Autonomous Robots 47(6): 649–668 | Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10110-y.pdf
Citations: 1
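The abstract does not specify which enhancement is applied before ORB/LSD feature extraction; a common minimal choice for low-light frames is gamma correction, sketched here purely as an illustration (the gamma value is an assumption, not the paper's):

```python
import numpy as np

def enhance_low_light(img, gamma=0.5):
    """Brighten a low-light image with gamma correction.

    img: uint8 array in [0, 255]. A gamma < 1 lifts dark regions,
    which typically lets feature detectors (e.g. ORB points, LSD
    line segments) recover more features in underexposed frames.
    """
    norm = img.astype(np.float64) / 255.0
    return np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8)
```

In practice SLAM front-ends often use histogram-based methods (e.g. CLAHE) instead; this fixed-gamma version only shows the principle.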
On robot grasp learning using equivariant models
IF 3.5 | CAS Tier 3 | Computer Science
Autonomous Robots Pub Date : 2023-07-04 DOI: 10.1007/s10514-023-10112-w
Xupeng Zhu, Dian Wang, Guanang Su, Ondrej Biza, Robin Walters, Robert Platt
Abstract: Real-world grasp detection is challenging due to the stochasticity in grasp dynamics and the noise in hardware. Ideally, the system would adapt to the real world by training directly on physical systems. However, this is generally difficult due to the large amount of training data required by most grasp learning models. In this paper, we note that the planar grasp function is SE(2)-equivariant and demonstrate that this structure can be used to constrain the neural network used during learning. This creates an inductive bias that can significantly improve the sample efficiency of grasp learning and enable end-to-end training from scratch on a physical robot with as few as 600 grasp attempts. We call this method Symmetric Grasp learning (SymGrasp) and show that it can learn to grasp "from scratch" in less than 1.5 h of physical robot time. This paper is an expanded and revised version of the conference paper by Zhu et al. (2022).
Autonomous Robots 47(8): 1175–1193 | Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10112-w.pdf
Citations: 0
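The equivariance structure can be illustrated with its simplest discrete subgroup: averaging a score map over the four 90° rotations (the C4 group) makes any map exactly rotation-equivariant, so rotating the input rotates the predicted grasp-quality map the same way. This is an illustrative construction, not SymGrasp's actual equivariant-network implementation:

```python
import numpy as np

def symmetrize_c4(f):
    """Wrap a grasp-quality map f : image -> score map so that the
    result is equivariant to 90-degree rotations (C4, a discrete
    subgroup of SE(2)): rotating the input rotates the output.
    """
    def g(img):
        out = np.zeros_like(img, dtype=np.float64)
        for k in range(4):
            # rotate in, evaluate, rotate back, average
            out += np.rot90(f(np.rot90(img, k)), -k)
        return out / 4.0
    return g
```

Building the symmetry into the network (rather than averaging at the output, as here) is what gives the paper's sample-efficiency gains.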
TNES: terrain traversability mapping, navigation and excavation system for autonomous excavators on worksite
IF 3.5 | CAS Tier 3 | Computer Science
Autonomous Robots Pub Date : 2023-07-04 DOI: 10.1007/s10514-023-10113-9
Tianrui Guan, Zhenpeng He, Ruitao Song, Liangjun Zhang
Abstract: We present a terrain traversability mapping and navigation system (TNS) for autonomous excavator applications in unstructured environments. We use an efficient approach to extract terrain features from RGB images and 3D point clouds and incorporate them into a global map for planning and navigation. Our system can adapt to changing environments and update the terrain information in real time. Moreover, we present a novel dataset, the Complex Worksite Terrain dataset, which consists of RGB images from construction sites with seven categories based on navigability. Our novel algorithms improve the mapping accuracy over previous methods by 4.17–30.48% and reduce MSE on the traversability map by 13.8–71.4%. We have combined our mapping approach with planning and control modules in an autonomous excavator navigation system and observe a 49.3% improvement in the overall success rate. Based on TNS, we demonstrate the first autonomous excavator that can navigate through unstructured environments consisting of deep pits, steep hills, rock piles, and other complex terrain features. In addition, we combine the proposed TNS with the autonomous excavation system (AES) and deploy the new pipeline, TNES, on a more complex construction site. With minimal human intervention, we demonstrate autonomous navigation capability together with excavation tasks.
Autonomous Robots 47(6): 695–714
Citations: 0
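The step from seven navigability-based semantic categories to a traversability cost map can be sketched as a per-pixel lookup; the class order and cost weights below are invented for illustration (the abstract does not list them):

```python
import numpy as np

# Hypothetical costs for seven terrain classes ordered by
# navigability (0 = easiest, 6 = non-traversable, e.g. a deep pit
# or rock pile). These values are NOT from the paper.
CLASS_COST = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, np.inf])

def traversability_map(label_img):
    """Convert a per-pixel semantic label image (values 0..6)
    into a traversability cost map via a lookup table."""
    return CLASS_COST[label_img]
```

A planner then treats infinite-cost cells as obstacles and trades off path length against terrain difficulty elsewhere.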
Complex environment localization system using complementary ceiling and ground map information
IF 3.5 | CAS Tier 3 | Computer Science
Autonomous Robots Pub Date : 2023-06-28 DOI: 10.1007/s10514-023-10116-6
Chee-An Yu, Hao-Yun Chen, Chun-Chieh Wang, Li-Chen Fu
Abstract: This paper proposes a robust localization system using complementary information extracted from ceiling and ground plans, particularly applicable to dynamic and complex environments. The ceiling perception provides the robot with stable, time-invariant environmental features independent of dynamic changes on the ground, whereas the ground perception allows the robot to navigate in the ground plane while avoiding stationary obstacles. We propose an architecture that fuses ground 2D LiDAR scans and ceiling 3D LiDAR scans, with an enhanced mapping algorithm that efficiently associates perception from both sources. Localization ability and navigation performance are preserved even in harsh environments thanks to the complementary sensed information from the ground and ceiling. The salient feature of our work is that the system can simultaneously and efficiently map both the ceiling and the ground plane without the extra effort of deploying artificial landmarks, and can apply this hybrid information effectively, enabling the robot to travel through crowded indoor environments without getting lost.
Autonomous Robots 47(6): 669–683 | Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10116-6.pdf
Citations: 0
Event-based neural learning for quadrotor control
IF 3.5 | CAS Tier 3 | Computer Science
Autonomous Robots Pub Date : 2023-06-23 DOI: 10.1007/s10514-023-10115-7
Estéban Carvalho, Pierre Susbielle, Nicolas Marchand, Ahmad Hably, Jilles S. Dibangoye
Abstract: The design of a simple and adaptive flight controller is a real challenge in aerial robotics. A simple flight controller often yields poor tracking performance, while adaptive algorithms can be costly in time and resources, and deep-learning-based methods may cause instability problems, for instance in the presence of disturbances. In this paper, we propose an event-based neural learning control strategy that combines a standard cascaded flight controller with a deep neural network that learns the disturbances in order to improve tracking performance. The strategy relies on two events: one triggering learning updates that reduce the tracking error, and the second ensuring closed-loop system stability. After validation of the proposed strategy in a ROS/Gazebo simulation environment, its effectiveness is confirmed in real experiments in the presence of wind disturbance.
Autonomous Robots 47(8): 1213–1228
Citations: 0
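The two-event structure can be sketched as a per-step gating rule: apply the learned disturbance correction only when the tracking error warrants it, and only when the correction is small enough not to threaten stability. The thresholds and the gating form below are assumptions for illustration, not the paper's actual trigger conditions:

```python
def event_triggered_correction(error, correction,
                               err_threshold=0.5, corr_limit=1.0):
    """Decide, per control step, whether to apply a learned
    disturbance correction on top of a baseline cascaded controller.

    Two hedged triggers, mirroring the paper's two events in spirit:
    - learn_event: tracking error is large enough that the learned
      disturbance model should contribute;
    - safe_event: the correction magnitude stays within a bound,
      protecting closed-loop stability; otherwise fall back to the
      baseline controller (zero correction).
    """
    learn_event = abs(error) > err_threshold
    safe_event = abs(correction) <= corr_limit
    return correction if (learn_event and safe_event) else 0.0
```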
Learning latent representations to co-adapt to humans
IF 3.5 | CAS Tier 3 | Computer Science
Autonomous Robots Pub Date : 2023-06-17 DOI: 10.1007/s10514-023-10109-5
Sagar Parekh, Dylan P. Losey
Abstract: When robots interact with humans in homes, roads, or factories, the human's behavior often changes in response to the robot. Non-stationary humans are challenging for robot learners: actions the robot has learned to coordinate with the original human may fail after the human adapts to the robot. In this paper we introduce an algorithmic formalism that enables robots (i.e., ego agents) to co-adapt alongside dynamic humans (i.e., other agents) using only the robot's low-level states, actions, and rewards. A core challenge is that humans not only react to the robot's behavior, but the way in which humans react inevitably changes both over time and between users. To deal with this challenge, our insight is that, instead of building an exact model of the human, robots can learn and reason over high-level representations of the human's policy and policy dynamics. Applying this insight we develop RILI: Robustly Influencing Latent Intent. RILI first embeds low-level robot observations into predictions of the human's latent strategy and strategy dynamics. Next, RILI harnesses these predictions to select actions that influence the adaptive human towards advantageous, high-reward behaviors over repeated interactions. We demonstrate that, given RILI's measured performance with users sampled from an underlying distribution, we can probabilistically bound RILI's expected performance across new humans sampled from the same distribution. Our simulated experiments compare RILI to state-of-the-art representation and reinforcement learning baselines, and show that RILI better learns to coordinate with imperfect, noisy, and time-varying agents. Finally, we conduct two user studies where RILI co-adapts alongside actual humans in a game of tag and a tower-building task. Videos of the user studies: https://youtu.be/WYGO5amDXbQ
Autonomous Robots 47(6): 771–796 | Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10109-5.pdf
Citations: 4
A learning-based approach to surface vehicle dynamics modeling for robust multistep prediction
IF 3.5 | CAS Tier 3 | Computer Science
Autonomous Robots Pub Date : 2023-06-14 DOI: 10.1007/s10514-023-10114-8
Junwoo Jang, Changyu Lee, Jinwhan Kim
Abstract: Determining the dynamics of surface vehicles and marine robots is important for developing marine autopilot and autonomous navigation systems. However, this often requires extensive experimental data and intense effort because these dynamics are highly nonlinear and involve various uncertainties in real operating conditions. Herein, we propose an efficient data-driven approach for analyzing and predicting the motion of a surface vehicle in a real environment based on deep learning techniques. The proposed multistep model is robust to measurement uncertainty and overcomes compounding errors by eliminating the correlation between prediction results. Additionally, latent state representation and mixup augmentation are introduced to make the model more consistent and accurate. The performance analysis reveals that the proposed method outperforms conventional methods and is robust against environmental disturbances.
Autonomous Robots 47(6): 797–808
Citations: 0
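Of the ingredients named in the abstract, mixup augmentation is the most self-contained: two training pairs are blended with a Beta-distributed weight, applying the same convex combination to inputs and prediction targets, which regularizes a dynamics model toward consistent interpolation between observed states. A minimal sketch (the alpha value is an assumption):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup augmentation: return a convex combination of two
    training pairs, with the mixing weight lam ~ Beta(alpha, alpha).
    The same lam is applied to inputs and targets.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```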
Regulated pure pursuit for robot path tracking
IF 3.5 | CAS Tier 3 | Computer Science
Autonomous Robots Pub Date : 2023-06-10 DOI: 10.1007/s10514-023-10097-6
Steve Macenski, Shrijit Singh, Francisco Martín, Jonatan Ginés
Abstract: The accelerated deployment of service robots has spawned a number of algorithm variations to better handle real-world conditions. Many local trajectory planning techniques have been deployed successfully on practical robot systems. While most formulations of the Dynamic Window Approach and Model Predictive Control can progress along paths and optimize for additional criteria, pure path-tracking algorithms are still commonplace. Decades later, Pure Pursuit and its variants continue to be one of the most commonly utilized classes of local trajectory planners. However, few Pure Pursuit variants have been proposed with a scheme for variable linear velocities: they either assume a constant velocity or fail to address the point at all. This paper presents a variant of Pure Pursuit designed with additional heuristics to regulate linear velocities, built atop the existing Adaptive variant. The Regulated Pure Pursuit algorithm makes incremental improvements on the state of the art by adjusting linear velocities, with particular focus on safety in the constrained and partially observable spaces commonly negotiated by deployed robots. We present experiments with the Regulated Pure Pursuit algorithm on industrial-grade service robots. We also provide a high-quality reference implementation, freely included in the ROS 2 Nav2 framework at https://github.com/ros-planning/navigation2, for fast evaluation.
Autonomous Robots 47(6): 685–694
Citations: 1
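Classic Pure Pursuit steers along the circular arc through a lookahead point on the path; the "regulated" idea is to additionally scale the linear velocity down on high-curvature arcs. The sketch below is a hedged illustration of that idea, not the Nav2 reference implementation (the minimum-radius threshold is invented):

```python
def regulated_pure_pursuit(lookahead_pt, lookahead_dist,
                           v_max=0.5, min_radius=0.9):
    """Pure Pursuit curvature plus a simple curvature-based
    velocity regulation heuristic.

    lookahead_pt: (x, y) of the lookahead point in the robot frame
    (x forward, y left). Returns (linear_velocity, curvature).
    """
    x, y = lookahead_pt
    # Pure Pursuit: curvature of the arc through the lookahead point
    curvature = 2.0 * y / (lookahead_dist ** 2)
    v = v_max
    if curvature != 0.0:
        radius = 1.0 / abs(curvature)
        if radius < min_radius:
            # regulate: slow down proportionally on tight curves
            v = v_max * radius / min_radius
    return v, curvature
```

The published algorithm layers further heuristics (e.g. slowing near obstacles) on the same principle.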
An overview of space-variant and active vision mechanisms for resource-constrained human inspired robotic vision
IF 3.5 | CAS Tier 3 | Computer Science
Autonomous Robots Pub Date : 2023-06-09 DOI: 10.1007/s10514-023-10107-7
Rui Pimentel de Figueiredo, Alexandre Bernardino
Abstract: In order to explore and understand the surrounding environment efficiently, humans have developed a set of space-variant vision mechanisms that allow them to actively attend to different locations in the environment and compensate for memory, neuronal transmission bandwidth, and computational limitations in the brain. Similarly, humanoid robots deployed in everyday environments have limited on-board resources and face increasingly complex tasks that require interaction with objects arranged in many possible spatial configurations. The main goal of this work is to describe the benefits of biologically inspired, space-variant human visual mechanisms when combined with state-of-the-art algorithms for different visual tasks (e.g. object detection), ranging from low-level hardwired attention vision (i.e. foveal vision) to high-level visual attention mechanisms. We overview the state of the art in biologically plausible, space-variant, resource-constrained vision architectures, namely for active recognition and localization tasks.
Autonomous Robots 47(8): 1119–1135 | Open Access PDF: https://link.springer.com/content/pdf/10.1007/s10514-023-10107-7.pdf
Citations: 0
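Space-variant (foveal) vision can be illustrated by log-polar resampling: sampling density is highest at the image centre (the "fovea") and falls off geometrically toward the periphery, compressing the image the way foveal vision does. A minimal sketch (the ring and wedge counts are arbitrary choices, not from the survey):

```python
import numpy as np

def logpolar_sample(img, n_rings=8, n_wedges=16):
    """Space-variant (log-polar) resampling of a grayscale image.

    Ring radii grow geometrically from the image centre, so the
    fovea is sampled densely and the periphery sparsely; each of
    n_rings rings is sampled at n_wedges angular positions.
    """
    H, W = img.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    r_max = min(cy, cx)
    # geometric ring spacing, from ~1 px out to the image border
    radii = r_max ** (np.arange(1, n_rings + 1) / n_rings)
    thetas = np.linspace(0, 2 * np.pi, n_wedges, endpoint=False)
    out = np.empty((n_rings, n_wedges), dtype=img.dtype)
    for i, r in enumerate(radii):
        ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int), 0, H - 1)
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, W - 1)
        out[i] = img[ys, xs]
    return out
```

Note the compression: a 64×64 input (4096 pixels) reduces to an 8×16 representation, which is the resource-saving argument the overview makes.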