Latest Articles in Drones

Integration of Unmanned Aerial Vehicle Imagery and Machine Learning Technology to Map the Distribution of Conifer and Broadleaf Canopy Cover in Uneven-Aged Mixed Forests
IF 4.8 | CAS Tier 2 | Earth Sciences
Drones Pub Date : 2023-12-13 DOI: 10.3390/drones7120705
Nyo Me Htun, T. Owari, Satoshi Tsuyuki, Takuya Hiroshima
Uneven-aged mixed forests have been recognized as important contributors to biodiversity conservation, ecological stability, carbon sequestration, the provisioning of ecosystem services, and sustainable timber production. Recently, numerous studies have demonstrated the applicability of integrating remote sensing datasets with machine learning for forest management purposes, such as forest type classification and the identification of individual trees. However, studies focusing on the integration of unmanned aerial vehicle (UAV) datasets with machine learning for mapping tree species groups in uneven-aged mixed forests remain limited. Thus, this study explored the feasibility of integrating UAV imagery with semantic segmentation-based machine learning classification algorithms to describe conifer and broadleaf species canopies in uneven-aged mixed forests. The study was conducted in two sub-compartments of the University of Tokyo Hokkaido Forest in northern Japan. We analyzed UAV images using the semantic segmentation-based U-Net and random forest (RF) classification models. The results indicate that the integration of UAV imagery with the U-Net model generated reliable conifer and broadleaf canopy cover classification maps in both sub-compartments, while the RF model often failed to distinguish conifer crowns. Moreover, our findings demonstrate the potential of this method to detect dominant tree species groups in uneven-aged mixed forests.
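Applying patch-based segmentation to a large UAV orthomosaic involves tiling the image and stitching the predicted masks back together. A minimal NumPy sketch of those two bookkeeping steps (the patch size and zero-padding scheme are illustrative choices, not details from the paper):

```python
import numpy as np

def tile_orthomosaic(image, patch=256, stride=256):
    """Split a UAV orthomosaic (H, W, C) into fixed-size patches.

    Patches are what a U-Net-style segmentation model consumes; edges
    that do not fill a full patch are zero-padded.
    """
    h, w, c = image.shape
    ph = int(np.ceil(h / patch)) * patch
    pw = int(np.ceil(w / patch)) * patch
    padded = np.zeros((ph, pw, c), dtype=image.dtype)
    padded[:h, :w] = image
    tiles = []
    for y in range(0, ph, stride):
        for x in range(0, pw, stride):
            tiles.append(padded[y:y + patch, x:x + patch])
    return np.stack(tiles)

def stitch_masks(masks, out_hw, patch=256):
    """Reassemble per-patch class masks (N, patch, patch) into one map."""
    h, w = out_hw
    cols = int(np.ceil(w / patch))
    rows = int(np.ceil(h / patch))
    canvas = np.zeros((rows * patch, cols * patch), dtype=masks.dtype)
    for i, m in enumerate(masks):
        y, x = divmod(i, cols)       # tiles were emitted row-major
        canvas[y * patch:(y + 1) * patch, x * patch:(x + 1) * patch] = m
    return canvas[:h, :w]
```

With non-overlapping stride the two functions are exact inverses on a single channel, which makes per-pixel class maps easy to reassemble at full resolution.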
Citations: 0
Optimal Model-Free Finite-Time Control Based on Terminal Sliding Mode for a Coaxial Rotor
IF 4.8 | CAS Tier 2 | Earth Sciences
Drones Pub Date : 2023-12-13 DOI: 10.3390/drones7120706
Hossam-Eddine Glida, C. Sentouh, J. Rath
This study focuses on addressing the tracking control problem for a coaxial unmanned aerial vehicle (UAV) without any prior knowledge of its dynamic model. To overcome the limitations of model-based control, a model-free approach based on terminal sliding mode control is proposed for achieving precise position and rotation tracking. The terminal sliding mode technique is utilized to approximate the unknown nonlinear model of the system, while the global stability with finite-time convergence of the overall system is guaranteed using the Lyapunov theory. Additionally, the selection of control parameters is addressed by incorporating the accelerated particle swarm optimization (APSO) algorithm. Finally, numerical simulation tests demonstrate the effectiveness and feasibility of the proposed design approach, confirming that accurate tracking control is achieved even without prior knowledge of the system's dynamic model.
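The core mechanics of a terminal sliding mode controller can be sketched on a double integrator standing in for one translational axis (the plant, gains, and surface exponent are illustrative; the paper's controller additionally estimates the unknown dynamics model-free and tunes its gains with APSO):

```python
import numpy as np

# Nonsingular terminal sliding mode control of x'' = u, driving the
# tracking error x to zero in finite time. Gains are illustrative.
beta, gamma, K = 1.0, 1.5, 5.0   # surface gain, exponent in (1,2), reaching gain
dt, steps = 1e-3, 8000
x, v = 1.0, 0.0                  # initial tracking error and error rate

for _ in range(steps):
    # nonsingular terminal surface: s = e + |e'|^gamma * sgn(e') / beta
    s = x + (abs(v) ** gamma) * np.sign(v) / beta
    # equivalent term cancels e' on the surface; switching term drives s -> 0
    u = (-(beta / gamma) * (abs(v) ** (2.0 - gamma)) * np.sign(v)
         - K * np.sign(s))
    v += u * dt                  # forward-Euler integration of the plant
    x += v * dt
```

On the surface the error obeys ẋ = −(β|x|)^(1/γ)·sgn(x)-type dynamics, which reach zero in finite time rather than asymptotically; that is the property the abstract's "finite-time convergence" refers to.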
Citations: 0
SiamMAN: Siamese Multi-Phase Aware Network for Real-Time Unmanned Aerial Vehicle Tracking
IF 4.8 | CAS Tier 2 | Earth Sciences
Drones Pub Date : 2023-12-13 DOI: 10.3390/drones7120707
Faxue Liu, Xuan Wang, Qiqi Chen, Jinghong Liu, Chenglong Liu
In this paper, we address aerial tracking tasks by designing multi-phase aware networks to obtain rich long-range dependencies. Existing methods are prone to tracking drift in scenarios that demand multi-layer long-range feature dependencies, such as viewpoint changes caused by the UAV shooting perspective and low resolution. In contrast to previous works that only used multi-scale feature fusion to obtain contextual information, we designed a new architecture that adapts to the characteristics of different levels of features in challenging scenarios and adaptively integrates regional features with their corresponding global dependency information. Specifically, for the proposed tracker (SiamMAN), we first propose a two-stage aware neck (TAN), in which a cascaded splitting encoder (CSE) first obtains the distributed long-range relevance among the sub-branches by splitting feature channels, and a multi-level contextual decoder (MCD) then achieves further global dependency fusion. Finally, we design a response map context encoder (RCE) that utilizes long-range contextual information in backpropagation to accomplish pixel-level updating of the deeper features and better balance semantic and spatial information. Several experiments on well-known tracking benchmarks illustrate that the proposed method outperforms SOTA trackers, which results from the effective utilization of the proposed multi-phase aware network for different levels of features.
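The response map at the heart of Siamese trackers is a cross-correlation of the template features over the search-region features; a naive NumPy sketch of that single core operation (SiamMAN's contribution is the attention machinery layered on top of it):

```python
import numpy as np

def response_map(search_feat, template_feat):
    """Slide a template feature patch over a search-region feature map
    and score every offset by cross-correlation. The argmax of the
    resulting map is the tracker's estimate of the target location.
    (Real trackers do this over many channels on GPU; this is a
    single-channel illustration of the operation itself.)"""
    H, W = search_feat.shape
    h, w = template_feat.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(search_feat[y:y + h, x:x + w] * template_feat)
    return out
```

When the template reappears inside the search region, the response peaks at the matching offset, which is why degraded or ambiguous features (low resolution, viewpoint change) translate directly into drift of the peak.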
Citations: 0
Imitation Learning of Complex Behaviors for Multiple Drones with Limited Vision
IF 4.8 | CAS Tier 2 | Earth Sciences
Drones Pub Date : 2023-12-13 DOI: 10.3390/drones7120704
Yu Wan, Jun Tang, Zipeng Zhao
Navigating multiple drones autonomously in complex and unpredictable environments, such as forests, poses a significant challenge typically addressed by wireless communication for coordination. However, this approach falls short in situations with limited central control or blocked communications. Addressing this gap, our paper explores the learning of complex behaviors by multiple drones with limited vision. Drones in a swarm rely on onboard sensors, primarily forward-facing stereo cameras, for environmental perception and neighbor detection. They learn complex maneuvers through imitation of a privileged expert system, which involves finding the optimal set of neural network parameters to enable the most effective mapping from sensory perception to control commands. The training process adopts the DAgger algorithm within a framework of centralized training with decentralized execution. Using this technique, drones rapidly learn complex behaviors, such as avoiding obstacles, coordinating movements, and navigating to specified targets, all in the absence of wireless communication. This paper details the construction of a distributed multi-UAV cooperative motion model under limited vision, emphasizing the autonomy of each drone in achieving coordinated flight and obstacle avoidance. Our methodological approach and experimental results validate the effectiveness of the proposed vision-based end-to-end controller, paving the way for more sophisticated applications of multi-UAV systems in intricate, real-world scenarios.
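The DAgger training loop can be illustrated on a toy 1-D environment with a linear learner (the environment, expert gain, and simplified relabeling scheme are assumptions for illustration; full DAgger also mixes expert and learner actions during early rollouts):

```python
import numpy as np

rng = np.random.default_rng(0)
K_EXPERT = 1.5                            # privileged expert's gain (illustrative)

def expert(states):
    """Privileged expert: labels every visited state with its action."""
    return -K_EXPERT * states

def rollout(policy_gain, steps=50, dt=0.1):
    """Roll the learner's current policy out in a toy 1-D point-mass
    environment x_{t+1} = x_t + u_t * dt (plus small process noise)."""
    x, xs = rng.normal(), []
    for _ in range(steps):
        xs.append(x)
        x += (-policy_gain * x) * dt + 0.01 * rng.normal()
    return np.array(xs)

# DAgger loop: the expert relabels the states the *learner* visits,
# so the training distribution matches the learner's own trajectories
# instead of the expert's -- the key difference from plain cloning.
data_x, data_u, gain = [], [], 0.0        # learner starts untrained
for _ in range(5):
    xs = rollout(gain)                    # states visited by current policy
    data_x.append(xs)
    data_u.append(expert(xs))             # expert supervision on those states
    X, U = np.concatenate(data_x), np.concatenate(data_u)
    gain = -(X @ U) / (X @ X)             # least-squares fit of u = -gain * x
```

Because the aggregated dataset always covers the learner's own state distribution, the fitted policy does not suffer the compounding distribution shift of behavior cloning.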
Citations: 0
Air-to-Ground Path Loss Model at 3.6 GHz under Agricultural Scenarios Based on Measurements and Artificial Neural Networks
IF 4.8 | CAS Tier 2 | Earth Sciences
Drones Pub Date : 2023-12-11 DOI: 10.3390/drones7120701
Hanpeng Li, Kai Mao, Xuchao Ye, Taotao Zhang, Qiuming Zhu, Manxi Wang, Yurao Ge, Hangang Li, Farman Ali
Unmanned aerial vehicles (UAVs) have found expanding utilization in smart agriculture. Path loss (PL) is of significant importance in the link budget of UAV-aided air-to-ground (A2G) communications. This paper proposes a machine-learning-based PL model for A2G communication in agricultural scenarios. On this basis, a double-weight neurons-based artificial neural network (DWN-ANN) is proposed, which can strike a balance between the amount of measurement data and the accuracy of predictions by using ray tracing (RT) simulation data for pre-training and measurement data for optimization training. Moreover, an RT pre-correction module is introduced into the DWN-ANN to account for the impact of varying farmland materials on the accuracy of RT simulation, thereby improving the accuracy of the RT simulation data. Finally, channel measurement campaigns were carried out over a farmland area at 3.6 GHz, and the measurement data were used for the training and validation of the proposed DWN-ANN. The predictions of the proposed PL model agree well with the measurement data and outperform the traditional empirical models.
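For context, the empirical baseline that ANN-based PL models are usually compared against is the log-distance model PL(d) = PL0 + 10·n·log10(d/d0). A small sketch of fitting it to measurements by least squares (synthetic data, not the paper's measurements):

```python
import numpy as np

def fit_log_distance_pl(d, pl, d0=1.0):
    """Least-squares fit of the log-distance path loss model
    PL(d) = PL0 + 10 n log10(d / d0) to measured losses in dB.

    Returns (PL0, n): the intercept at reference distance d0 and the
    path loss exponent n (n = 2 in free space; higher over cluttered
    or vegetated ground).
    """
    x = 10.0 * np.log10(np.asarray(d, float) / d0)
    A = np.column_stack([np.ones_like(x), x])     # [1, 10 log10(d/d0)]
    (pl0, n), *_ = np.linalg.lstsq(A, np.asarray(pl, float), rcond=None)
    return pl0, n
```

Once PL0 and n are fitted, the model predicts the mean loss at any link distance; the residual spread around that line is what data-driven models such as the DWN-ANN aim to capture.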
Citations: 0
Analysis of the Impact of Structural Parameter Changes on the Overall Aerodynamic Characteristics of Ducted UAVs
IF 4.8 | CAS Tier 2 | Earth Sciences
Drones Pub Date : 2023-12-11 DOI: 10.3390/drones7120702
Huarui Xv, Lei Zhao, Mingjian Wu, Kun Liu, Hongyue Zhang, Zhilin Wu
Ducted UAVs have attracted much attention because the duct structure can reduce the propeller tip vortices and thus increase the effective lift area of the lower propeller. This paper investigates the effects of structural parameters, such as the coaxial twin-propeller configuration and the duct geometry, on the aerodynamic characteristics of ducted UAVs. The aerodynamic characteristics of the UAV were analyzed using CFD methods, and the sensitivity of the simulation results to each parameter was ranked using the orthogonal test method. The results indicate that, while maintaining overall strength, increasing the propeller spacing by about 0.055 times the duct chord length can increase the lift of the upper propeller by approximately 1.3%. Reducing the distance between the propeller and the top surface of the duct by about 0.5 times the duct chord length can increase the lift of the lower propeller by approximately 7.7%. Increasing the chord length of the duct cross-section by about 35.3% can increase the lift contributed by the duct structure and the total lift of the UAV by approximately 150.6% and 15.7%, respectively. This research provides valuable guidance and reference for the subsequent overall design of ducted UAVs.
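The range analysis used to rank factor influence in orthogonal tests works as follows (shown on the smallest L4(2³) array with a synthetic response; the paper's design and responses are larger):

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs, 3 two-level factors (coded 0/1).
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def range_analysis(array, response):
    """Range (R) analysis for orthogonal tests: for each factor j,
    R_j = max(level mean) - min(level mean) of the response. Larger
    R_j means the factor has more influence on the response, which is
    how simulation results are ranked by sensitivity."""
    R = []
    for j in range(array.shape[1]):
        means = [response[array[:, j] == lvl].mean()
                 for lvl in np.unique(array[:, j])]
        R.append(max(means) - min(means))
    return np.array(R)
```

Because each level of each factor appears equally often against all levels of the others, the level means isolate each factor's effect without running the full 2³ factorial.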
Citations: 0
An Attention-Based Odometry Framework for Multisensory Unmanned Ground Vehicles (UGVs)
IF 4.8 | CAS Tier 2 | Earth Sciences
Drones Pub Date : 2023-12-09 DOI: 10.3390/drones7120699
Zhiyao Xiao, Guobao Zhang
Recently, deep learning methods and multisensory fusion have been applied to address odometry challenges in unmanned ground vehicles (UGVs). In this paper, we propose an end-to-end visual-lidar-inertial odometry framework to enhance the accuracy of pose estimation. Grayscale images, 3D point clouds, and inertial data are used as inputs to overcome the limitations of a single sensor. Convolutional neural network (CNN) and recurrent neural network (RNN) are employed as encoders for different sensor modalities. In contrast to previous multisensory odometry methods, our framework introduces a novel attention-based fusion module that remaps feature vectors to adapt to various scenes. Evaluations on the Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (KITTI) odometry benchmark demonstrate the effectiveness of our framework.
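The idea behind attention-based fusion, reweighting per-sensor features by scene-dependent relevance scores, can be reduced to a few lines (the scoring vector below is a stand-in for the learned scorer; the paper's module is trained end-to-end):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_fuse(features, w_score):
    """Fuse per-sensor feature vectors with attention weights.

    features: (M, D) array, one row per modality (image / lidar / IMU).
    w_score:  (D,) scoring vector standing in for the learned scorer.
    Returns a (D,) fused feature: a convex combination of the modality
    features whose weights adapt to the current input, so e.g. visual
    features can dominate in texture-rich scenes and lidar elsewhere.
    """
    scores = features @ w_score           # (M,) relevance per modality
    weights = softmax(scores)             # normalize to a convex combination
    return weights @ features             # (D,) weighted sum
```

The fused vector stays in the same feature space as the inputs, so downstream pose-regression layers are unchanged; only the mixing weights vary per scene.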
Citations: 0
Fixed-Time Extended Observer-Based Adaptive Sliding Mode Control for a Quadrotor UAV under Severe Turbulent Wind
IF 4.8 | CAS Tier 2 | Earth Sciences
Drones Pub Date : 2023-12-09 DOI: 10.3390/drones7120700
Armando Miranda-Moya, H. Castañeda, Hesheng Wang
This paper presents a fixed-time extended state observer-based adaptive sliding mode controller evaluated in a quadrotor unmanned aerial vehicle subject to severe turbulent wind while executing a desired trajectory. Since both the state and model of the system are assumed to be partially known, the observer, whose convergence is independent from the initial states of the system, estimates the full state, model uncertainties, and the effects of turbulent wind in fixed time. Such information is then compensated via feedback control conducted by a class of adaptive sliding mode controller, which is robust to perturbations and reduces the chattering effect by non-overestimating its adaptive gain. Furthermore, the stability of the closed-loop system is analyzed by means of the Lyapunov theory. Finally, simulation results validate the feasibility and advantages of the proposed strategy, where the observer enhances performance. For further demonstration, a comparison with an existent approach is provided.
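An extended state observer treats the lumped disturbance (here, turbulent wind plus model uncertainty) as an extra state to be estimated. A linear, bandwidth-parameterized ESO, the simplest member of the family and not the paper's fixed-time design, can be simulated in a few lines:

```python
import numpy as np

def simulate_eso(steps=3000, dt=1e-3, wo=30.0):
    """Linear extended state observer for the plant x'' = u + d.

    The unknown lumped disturbance d is appended as a third observer
    state; gains l1..l3 place all observer poles at -wo (bandwidth
    parameterization). Illustrative stand-in for the paper's observer,
    whose convergence time is additionally independent of the initial
    estimation error (fixed-time property)."""
    l1, l2, l3 = 3 * wo, 3 * wo ** 2, wo ** 3
    x, v, u, d = 0.0, 0.0, 0.0, 1.0     # plant runs open loop, d unknown
    z = np.zeros(3)                     # [x_hat, v_hat, d_hat]
    for _ in range(steps):
        x += v * dt                     # true plant
        v += (u + d) * dt
        e = x - z[0]                    # measured output estimation error
        z[0] += (z[1] + l1 * e) * dt    # observer copies the plant...
        z[1] += (u + z[2] + l2 * e) * dt
        z[2] += l3 * e * dt             # ...and integrates e into d_hat
    return z, (x, v, d)
```

Once d_hat converges, a controller can simply subtract it from the control input, which is the "compensation via feedback control" step the abstract describes.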
Citations: 0
Commonality Evaluation and Prediction Study of Light and Small Multi-Rotor UAVs
IF 4.8 | CAS Tier 2 | Earth Sciences
Drones Pub Date : 2023-12-08 DOI: 10.3390/drones7120698
Yongjie Zhang, Yongqi Zeng, K. Cao
Light, small-sized multi-rotor UAVs, with their notable advantages of portability, intelligence, and low cost, occupy a significant share of the civilian UAV market. To further reduce full-lifecycle product costs, shorten development cycles, and increase market share, some manufacturers of these UAVs have adopted a series development strategy based on the concept of commonality in design. However, there is currently a lack of effective methods to quantify the commonality of UAV designs, which is key to guiding commonality design. In view of this, our study proposes a new UAV commonality evaluation model based on the basic composition of light, small-sized multi-rotor UAVs and design structure matrix theory. Through cross-evaluations of four models, the model has been confirmed to comprehensively quantify the degree of commonality between models. To achieve commonality prediction in the early stages of multi-rotor UAV design, we constructed a commonality prediction dataset centered around the commonality evaluation model using data from typical light, small-sized multi-rotor UAV models. After training on this dataset with convolutional neural networks, we developed an effective predictive model for the commonality of new light, small-sized multi-rotor UAV models and verified the feasibility and effectiveness of this method through a case application in UAV design. The commonality evaluation and prediction models established in this study not only provide strong decision-making support for the series design and commonality design of UAV products but also offer new perspectives and tools for strategic development in this field.
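The notion of quantifying commonality as a 0-1 ratio over shared components can be sketched as follows (a Jaccard-style toy metric; the paper's design-structure-matrix model is considerably richer, weighting components and their interactions):

```python
def commonality_score(parts_a, parts_b):
    """Toy commonality metric between two UAV models' part sets.

    parts_a / parts_b map component name -> variant id (e.g. which
    motor or ESC variant the model uses). A component counts as shared
    only when both models use the same variant. Returns the fraction
    of all components, across both models, that are shared: 1.0 means
    fully common designs, 0.0 means no overlap.
    """
    keys = set(parts_a) | set(parts_b)
    shared = sum(1 for k in keys
                 if k in parts_a and k in parts_b
                 and parts_a[k] == parts_b[k])
    return shared / len(keys) if keys else 1.0
```

A scalar like this is what makes commonality comparable across model pairs and usable as a training target for a learned predictor.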
Citations: 0
A Novel Adversarial Detection Method for UAV Vision Systems via Attribution Maps
IF 4.8 | CAS Tier 2 | Earth Sciences
Drones Pub Date : 2023-12-07 DOI: 10.3390/drones7120697
Zhun Zhang, Qihe Liu, Chunjiang Wu, Shijie Zhou, Zhangbao Yan
With the rapid advancement of unmanned aerial vehicles (UAVs) and the Internet of Things (IoTs), UAV-assisted IoTs has become integral in areas such as wildlife monitoring, disaster surveillance, and search and rescue operations. However, recent studies have shown that these systems are vulnerable to adversarial example attacks during data collection and transmission. These attacks subtly alter input data to trick UAV-based deep learning vision systems, significantly compromising the reliability and security of IoTs systems. Consequently, various methods have been developed to identify adversarial examples within model inputs, but they often lack accuracy against complex attacks like C&W and others. Drawing inspiration from model visualization technology, we observed that adversarial perturbations markedly alter the attribution maps of clean examples. This paper introduces a new, effective detection method for UAV vision systems that uses attribution maps created by model visualization techniques. The method differentiates between genuine and adversarial examples by extracting their unique attribution maps and then training a classifier on these maps. Validation experiments on the ImageNet dataset showed that our method achieves an average detection accuracy of 99.58%, surpassing the state-of-the-art methods.
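The detection signal the method relies on, namely that adversarial perturbations markedly alter attribution maps, can be demonstrated with a toy linear model whose gradient-times-input attributions are exact (the weights and the FGSM-style step below are illustrative, not the paper's setup):

```python
import numpy as np

# Toy linear "vision model": class score s_c = w_c . x. Its gradient
# w.r.t. the input is w_c, so the gradient-x-input attribution of
# feature i for class c is simply w_c[i] * x[i] -- a linear stand-in
# for the CNN attribution maps the detector classifies.
W = np.array([[ 1.0, -0.5,  0.2],
              [-0.3,  0.8, -0.1],
              [ 0.4,  0.1,  0.9],
              [ 0.2, -0.2,  0.5]])     # 4 features x 3 classes

def attribution_map(x):
    cls = int(np.argmax(W.T @ x))      # predicted class
    return W[:, cls] * x               # gradient-x-input attribution

x = np.array([1.0, 0.5, -0.2, 0.3])
clean = attribution_map(x)

# An FGSM-style step toward a wrong class perturbs the input slightly...
x_adv = x + 0.4 * np.sign(W[:, 1])
adv = attribution_map(x_adv)

# ...but shifts the attribution map markedly -- the detection signal
# a classifier trained on attribution maps can pick up.
shift = float(np.linalg.norm(adv - clean))
```

A detector never needs to know the attack: it only needs attribution maps of clean and attacked inputs to separate the two populations.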
Citations: 0