IEEE Transactions on Intelligent Vehicles: Latest Articles

Evaluation of Control Modalities in Highly Automated Vehicles: A Virtual Reality Simulation-Based Study
IF 14.3 · CAS Q1 · Engineering & Technology
IEEE Transactions on Intelligent Vehicles Pub Date : 2024-09-04 DOI: 10.1109/TIV.2024.3454608
Chongren Sun;Amandeep Singh;Siby Samuel
Abstract: The integration of effective control modalities is paramount for enhancing user experience and safety in autonomous vehicles. This study investigates the performance and user experience of three control modalities (voice, hand gesture, and physical button) in high-level autonomous vehicles (Levels 4 and 5), under both distraction and non-distraction conditions. Our objective was to evaluate error rates, physiological responses, and subjective workload across these control modalities. The results revealed that distraction significantly increases error rates and perceived workload across all modalities. Voice control exhibited the lowest error rates without distraction but was most affected by it, whereas hand gesture control showed the highest error rates and workload in both scenarios. Physical button control demonstrated moderate error rates and the least impact from distraction. Physiological data supported these findings, with significant increases in heart rate under distraction for all modalities, particularly voice control. The NASA Task Load Index scores indicated higher workload under distraction, with hand gesture control being the most demanding. Our findings suggest that a combination of physical button and voice control may offer the most effective solution, with recommendations for adaptive and multimodal interaction designs to mitigate distraction effects and enhance overall user satisfaction.
Vol. 10, No. 5, pp. 3494-3503
Citations: 0
Secure Distributed Model Predictive Control for Heterogeneous UAV-UGV Formation Under DoS Attacks
IF 14.3 · CAS Q1 · Engineering & Technology
IEEE Transactions on Intelligent Vehicles Pub Date : 2024-09-04 DOI: 10.1109/TIV.2024.3454712
Hui Tang;Yong Chen;Ikram Ali
Abstract: This study addresses the secure distributed model predictive control (SDMPC) challenge for a heterogeneous UAV-UGV formation system under malicious denial-of-service (DoS) attacks, utilizing a nonlinear discrete-time model to represent the system dynamics. It examines the scenario where DoS attacks obstruct communication between neighboring agents. A novel neighbor output prediction strategy is introduced to mitigate the impact of DoS attacks. Upon detecting a DoS attack, subsystems affected by the compromised channel predict the output sequences of their upstream counterparts, updating these predictions at each time step based on receiver buffer contents and attack duration. Subsequently, a cost function incorporating the predicted output sequences and a terminal constraint tailored to DoS conditions is formulated to maintain system stability during attacks. The analysis thoroughly explores recursive feasibility and input-to-state practical stability (ISpS). Comparative tests underscore the proposed SDMPC algorithm's effectiveness and enhanced security in maintaining stability amid DoS attacks.
Vol. 10, No. 5, pp. 3504-3516
Citations: 0
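The neighbor output prediction idea in the abstract can be sketched roughly: when the channel is jammed, an agent reuses the last output sequence received from its upstream neighbor, shifted by the time elapsed since the last successful reception. This is a minimal illustration only; the function name, the hold-last-value fallback, and the buffer semantics are assumptions, not the paper's actual update rule.

```python
import numpy as np

def predicted_neighbor_output(buffer_seq, steps_since_last_rx, horizon):
    """Sketch of a DoS fallback: replay the buffered neighbor output
    sequence, shifted by the elapsed attack duration, and hold the
    last known value once the stored horizon runs out."""
    shifted = np.asarray(buffer_seq, dtype=float)[steps_since_last_rx:]
    if len(shifted) >= horizon:
        return shifted[:horizon]
    # Pad by holding the last known value (a common, simple fallback).
    last = shifted[-1] if len(shifted) else np.asarray(buffer_seq)[-1]
    pad = np.full(horizon - len(shifted), last)
    return np.concatenate([shifted, pad])
```

In a real SDMPC loop, this predicted sequence would feed the neighbor-dependent terms of the local cost function until the channel recovers.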
AEFusion: An Attention-Based Ensemble Learning Approach for BEV Fusion Perception in Autonomous Modular Buses
IF 14.3 · CAS Q1 · Engineering & Technology
IEEE Transactions on Intelligent Vehicles Pub Date : 2024-09-04 DOI: 10.1109/TIV.2024.3454288
Hongyi Lin;Shouqun Ming;Yang Liu;Xiaobo Qu
Abstract: Autonomous modular buses (AMBs) are considered a promising solution to the challenges in public transportation, as they can reduce commute times, enhance transfer convenience, and address supply-demand imbalances in transportation systems. Nonetheless, current research mainly focuses on operational aspects, whereas the high precision required for in-transit docking remains a critical challenge for implementation. The accuracy of current autonomous driving perception systems is often limited by errors introduced by multi-sensor fusion methods. To address this issue, this paper introduces an attention-based ensemble learning fusion method (AEFusion) that includes a supervision module using the more accurate depth information from LiDAR to guide the generation of image depth information. Additionally, the fusion module incorporates two enhanced channel attention blocks and a spatial attention block to strengthen feature learning and integration. Experiments on both the nuScenes dataset and a self-collected dataset demonstrate that our method is suited for full-range docking perception in AMBs and is superior to existing approaches.
Vol. 10, No. 5, pp. 3468-3480
Citations: 0
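As a rough illustration of the channel- and spatial-attention gating the abstract mentions, here is a parameter-free sketch. Real attention blocks learn their gates through small MLPs or convolutions; the functions below are hypothetical simplifications that only show the rescaling pattern.

```python
import numpy as np

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """SE-style channel gating sketch for a (C, H, W) feature map:
    global average pool per channel -> sigmoid gate -> rescale channels."""
    gate = _sigmoid(feat.mean(axis=(1, 2)))          # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    """Spatial gating sketch: pool across channels to an (H, W) map,
    then gate each spatial location."""
    gate = _sigmoid(feat.mean(axis=0))               # (H, W)
    return feat * gate[None, :, :]
```

Channels (or locations) with stronger average activation receive gates near 1 and pass through almost unchanged, while weak ones are suppressed; a learned version replaces the raw means with trainable projections.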
Mobility AI Agents and Networks
IF 14.0 · CAS Q1 · Engineering & Technology
IEEE Transactions on Intelligent Vehicles Pub Date : 2024-09-04 DOI: 10.1109/TIV.2024.3454285
Haoxuan Ma;Yifan Liu;Qinhua Jiang;Brian Yueshuai He;Xishun Liao;Jiaqi Ma
Abstract: Intelligent vehicles and smart mobility systems are at the forefront of transportation evolution, yet effective management of these new mobility technologies and services is non-trivial. This perspective presents an Intelligent Mobility System Digital Twin (MSDT) framework as a solution. Our framework uniquely maps human beings and vehicles to AI agents, and mobility systems to AI networks, creating realistic digital simulacra of the physical mobility system. By integrating AI agents and AI networks, this framework offers unprecedented capabilities in prediction and automated simulation of entire mobility systems, thereby improving planning, operations, and decision-making in smart cities.
Vol. 9, No. 7, pp. 5124-5129
Citations: 0
Modeling and Control of a Coaxial Pendulum Drone
IF 14.3 · CAS Q1 · Engineering & Technology
IEEE Transactions on Intelligent Vehicles Pub Date : 2024-09-04 DOI: 10.1109/TIV.2024.3454340
Yifan Wang;Zhiyu Wang;Gaoran Wang;Liangming Chen
Abstract: Given the high energy-utilization efficiency of coaxial drones compared to quadrotors, an inverted-pendulum coaxial drone is designed with a focus on its modeling and control. Based on the Lagrangian modeling method, a 6-DoF dynamical model of the pendulum drone is established. The strong coupling and under-actuated nature of the model pose significant control challenges. Controllers suitable for such a system are proposed to stabilize the fully-actuated part and the two under-actuated parts of the dynamics, respectively. A theoretical stability analysis of the closed-loop dynamics is presented. Finally, in simulation examples under static reference, dynamic reference, impulse disturbance, and model uncertainties, the effectiveness of the proposed controller is verified, and its superior performance is demonstrated in comparative simulations.
Vol. 10, No. 5, pp. 3481-3493
Citations: 0
Locational Intelligence Using GPS Trajectory Records of Courier Motorcycles
IF 14.3 · CAS Q1 · Engineering & Technology
IEEE Transactions on Intelligent Vehicles Pub Date : 2024-09-03 DOI: 10.1109/TIV.2024.3453511
Yigit Cetinel;Ilgin Gokasar;Muhammet Deveci
Abstract: In recent years, the use of motorcycles has witnessed a remarkable surge in urban areas, paralleled by a growing demand for research and development in the motorcycle industry. Furthermore, the widespread adoption of GPS-enabled devices over the last few decades has opened up exciting possibilities, particularly in the realm of data analysis, where motorcycle GPS data has emerged as a valuable resource for various applications. This article presents a novel methodology for estimating the travel duration of powered two-wheelers (PTWs) in heterogeneous traffic using GPS data generated by motorcycles on urban road networks. The proposed methodology has the potential to offer valuable insights into the behavior of PTWs in heterogeneous traffic environments. By analyzing the big data generated by GPS-based trajectories, researchers can identify areas with high motorcycle density and pinpoint potential bottlenecks that impact travel times. Storing temporal data with bearing information in hexagonal shards called "bubbles" enables researchers to utilize this data more efficiently. Spatial transformation, Kalman filtering, and map-matching of the trajectory data significantly enhance data quality. In this study, a 10-minute aggregation interval is found to be optimal for estimating travel time, with a MAPE of 4.3%. Furthermore, combining historical bubble data with a 0.35 scale factor improves MAPE by 9.6%. Despite its limitations, the methodology is not only transferable but also opens the door to broader applications in diverse urban settings.
Vol. 10, No. 5, pp. 3434-3441
Citations: 0
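The two reported numbers are easy to reproduce in form: MAPE is the standard percentage-error metric, and the historical-bubble combination can be read as a convex blend using the 0.35 scale factor. The exact blending rule used in the paper is an assumption here; only the metric definition is standard.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, the metric reported in the paper."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def blend_with_history(current_est, historical_est, scale=0.35):
    """Hypothetical convex blend of a live bubble estimate with its
    historical counterpart; 0.35 is the abstract's scale factor."""
    return (1.0 - scale) * np.asarray(current_est, dtype=float) \
        + scale * np.asarray(historical_est, dtype=float)
```

For example, travel-time estimates of 90 s and 220 s against ground truths of 100 s and 200 s give a MAPE of 10%.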
OccFusion: Multi-Sensor Fusion Framework for 3D Semantic Occupancy Prediction
IF 14.3 · CAS Q1 · Engineering & Technology
IEEE Transactions on Intelligent Vehicles Pub Date : 2024-09-03 DOI: 10.1109/TIV.2024.3453293
Zhenxing Ming;Julie Stephany Berrio;Mao Shan;Stewart Worrall
Abstract: A comprehensive understanding of 3D scenes is crucial for autonomous vehicles (AVs), and recent models for 3D semantic occupancy prediction have successfully addressed the challenge of describing real-world objects with varied shapes and classes. However, existing methods for 3D semantic occupancy prediction heavily rely on surround-view camera images, making them susceptible to changes in lighting and weather conditions. This paper introduces OccFusion, a novel sensor fusion framework for predicting 3D semantic occupancy. By integrating features from additional sensors, such as lidar and surround-view radars, our framework enhances the accuracy and robustness of occupancy prediction, resulting in top-tier performance on the nuScenes benchmark. Furthermore, extensive experiments conducted on the nuScenes and SemanticKITTI datasets, including challenging night and rainy scenarios, confirm the superior performance of our sensor fusion strategy across various perception ranges.
Vol. 10, No. 5, pp. 3421-3433
Citations: 0
LightCast: Efficient Traffic Flow Forecasting via an Integrated Compression Framework
IF 14.3 · CAS Q1 · Engineering & Technology
IEEE Transactions on Intelligent Vehicles Pub Date : 2024-09-03 DOI: 10.1109/TIV.2024.3454177
Rui Zheng;Dalin Zhang;Chunjiao Dong;Shouyu Huang;Jing Wang
Abstract: Traffic forecasting plays a pivotal role in intelligent transportation systems. To enhance forecasting accuracy, existing deep learning models often feature complex structures with large computational demands for deployment. To achieve seamless traffic system operations and prompt communication between vehicles and roadway infrastructure, the best solution is to deploy models on locally operating edge devices, which have limited hardware resources. We therefore consider both efficiency and effectiveness in this paper with our newly proposed Lightweight Forecasting (LightCast) model. Specifically, we first design the Spatio-temporal Global-local Former (STGLFormer), which introduces various self-attention mechanisms that comprehensively consider both global and local spatio-temporal information in traffic data to offer state-of-the-art (SOTA) forecasting accuracy. Furthermore, LightCast involves a mix-granularity pruning strategy to remove redundant components in STGLFormer at different granularities and an automated layer-matching distillation scheme to effectively restore forecasting accuracy after pruning. The automated layer-matching distillation scheme resolves the layer-mismatch issues of the traditional feature distillation approach. Extensive experiments conducted on four real-world public transportation datasets demonstrate that our approach achieves near-SOTA performance at much higher computational efficiency.
Vol. 10, No. 5, pp. 3458-3467
Citations: 0
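The finest granularity of a mix-granularity pruning strategy is plain unstructured magnitude pruning, which can be sketched as follows. LightCast also prunes coarser structures such as attention heads and whole layers; `magnitude_prune` is an illustrative stand-in, not the paper's code.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-|w| fraction of a weight tensor.
    `sparsity` is the target fraction of weights to remove; ties at
    the threshold may prune slightly more than requested."""
    w = np.asarray(weights, dtype=float).copy()
    k = int(round(sparsity * w.size))
    if k > 0:
        thresh = np.sort(np.abs(w).ravel())[k - 1]   # k-th smallest magnitude
        w[np.abs(w) <= thresh] = 0.0
    return w
```

After pruning at, say, 50% sparsity, a distillation pass (layer-matched in the paper's scheme) would fine-tune the surviving weights against the dense teacher to recover accuracy.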
Two-Stream YOLOv8: Object and Motion Detection in Driving Videos
IF 14.3 · CAS Q1 · Engineering & Technology
IEEE Transactions on Intelligent Vehicles Pub Date : 2024-09-03 DOI: 10.1109/TIV.2024.3448631
Ozlem Okur;Mehmet Kilicarslan
Abstract: Object detection has numerous applications in intelligent vehicles, as it is crucial to quickly determine an object's location and movement for autonomous driving. Traditionally, most algorithms handle these tasks in sequential steps, detecting objects based on appearance features in video frames and then analyzing their behavior through frame tracking. This study presents a novel deep learning-based object and motion detection method that uniquely combines spatial and temporal information into a single framework. The motion pattern of objects is uniform across different object classes and appears as traces in the spatial-temporal domain. These object movements can be interpreted from motion profile images even in complex driving environments. Unlike two-stage methods that rely on detection and tracking, our approach directly learns object motion from a vast dataset of driving videos, demonstrating its efficiency and practicality. It is specifically designed to address the challenges encountered in dynamic driving scenarios, proving its effectiveness and relevance in practical applications. The goal is to quickly identify objects and their motion in the driving context. Our method excels in real-time performance with interpretable motion detection in the spatial-temporal domain. It also demonstrates a high mean average precision of 78% and a low mean average error of 3.09° on a publicly available dataset, further validating its effectiveness and reliability.
Vol. 10, No. 5, pp. 3166-3177
Citations: 0
Contrastive Late Fusion for 3D Object Detection
IF 14.3 · CAS Q1 · Engineering & Technology
IEEE Transactions on Intelligent Vehicles Pub Date : 2024-09-03 DOI: 10.1109/TIV.2024.3454085
Tingyu Zhang;Zhigang Liang;Yanzhao Yang;Xinyu Yang;Yu Zhu;Jian Wang
Abstract: In the field of autonomous driving, accurate and efficient 3D object detection is crucial for ensuring safe and reliable operation. This paper focuses on fusing camera and LiDAR data in a late-fusion manner for 3D object detection. The proposed approach, named the Contrastive Camera-LiDAR Object Candidates (C-CLOCs) fusion network, incorporates contrastive learning to enhance feature consistency between camera and LiDAR candidates, facilitating better fusion results. We delve into the label assignment aspect of late fusion methods and introduce a novel label assignment strategy to filter out irrelevant information. Additionally, a Multi-modality Ground-truth Sampling (MGS) method is introduced, which leverages the inclusion of point cloud information from LiDAR and corresponding images in training samples, resulting in improved performance. Experimental results demonstrate the effectiveness of the proposed method in achieving accurate 3D object detection in autonomous driving scenarios.
Vol. 10, No. 5, pp. 3442-3457
Citations: 0
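Contrastive consistency between camera and LiDAR candidate features is commonly trained with an InfoNCE-style objective. Below is a sketch under the assumption that row i of each feature matrix describes the same object candidate; the paper's exact loss formulation may differ.

```python
import numpy as np

def info_nce(cam_feats, lidar_feats, temperature=0.1):
    """InfoNCE-style loss sketch: pull matched camera/LiDAR candidate
    features together, push mismatched pairs apart. Matched pairs sit
    on the diagonal of the cosine-similarity matrix."""
    c = cam_feats / np.linalg.norm(cam_feats, axis=1, keepdims=True)
    li = lidar_feats / np.linalg.norm(lidar_feats, axis=1, keepdims=True)
    logits = c @ li.T / temperature                    # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                 # cross-entropy on diagonal
```

When the two modalities' features for the same candidate already agree, the loss is near zero; misaligned pairings drive it up, which is the gradient signal that enforces cross-modal consistency.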