ISPRS Open Journal of Photogrammetry and Remote Sensing — Latest Articles

Results of the ISPRS benchmark on indoor modelling
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2021-12-01 DOI: 10.1016/j.ophoto.2021.100008
Kourosh Khoshelham , Ha Tran , Debaditya Acharya , Lucia Díaz Vilariño , Zhizhong Kang , Sagi Dalyot
This paper reports the results of the ISPRS benchmark on indoor modelling. Reconstructed models submitted by 11 participating teams are evaluated on a dataset comprising 6 point clouds representing indoor environments of different complexity. The evaluation is based on measuring the completeness, correctness, and accuracy of the reconstructed wall elements through comparison with manually generated reference models. The results show that the performance of the methods varies across different datasets, but generally the reconstruction methods achieve better results for the point clouds with higher accuracy and density and fewer gaps, as well as the point clouds representing less complex environments. Filtering clutter points in a pre-processing step contributes to higher correctness, and making strong assumptions on the shape of the reconstructed elements contributes to higher completeness and accuracy for models of Manhattan World environments. (Volume 2, Article 100008)
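The completeness and correctness measures used in the evaluation are the standard detection-quality ratios over matched elements; a minimal sketch with hypothetical wall-element counts (the benchmark's exact matching procedure is not reproduced here):

```python
def completeness(tp, fn):
    """Fraction of reference elements that were reconstructed: TP / (TP + FN)."""
    return tp / (tp + fn)

def correctness(tp, fp):
    """Fraction of reconstructed elements that exist in the reference: TP / (TP + FP)."""
    return tp / (tp + fp)

# Hypothetical counts for one submitted model: 18 wall elements matched
# to the reference, 2 spurious reconstructions, 4 reference walls missed.
print(completeness(18, 4))  # ≈ 0.818
print(correctness(18, 2))   # 0.9
```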
Citations: 13
Monitoring aseismic fault creep using persistent urban geodetic markers generated from mobile laser scanning
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2021-12-01 DOI: 10.1016/j.ophoto.2021.100009
Xinxiang (Sean) Zhu, Craig L. Glennie, Benjamin A. Brooks, Todd L. Ericksen
High resolution and high accuracy distributed detection of fault creep deformation remains challenging given limited observations and associated change detection strategies. A mobile laser scanning-based change detection method that is capable of measuring centimeter-level near-field (<150 m from fault) deformation is described. The methodology leverages the use of man-made features in the built environment as geodetic markers that can be temporally tracked. The proposed framework consists of a RANSAC-based corresponding plane detector and a combined least squares displacement estimator. Using repeat mobile laser scanning data collected in 2015 and 2017 on a 2 km segment of the Hayward fault, near-field fault creep displacement and non-linear creep deformation are estimated. The detection results reveal 2.5 ± 1.5 cm of accumulated fault-parallel creep displacement in the far field. The laser scanning estimates of displacement match collocated alinement array observations at the 4 mm level in the near field. The proposed change detection framework is shown to be accurate and practical for fault creep displacement detection in the near field, and the detected non-linear creep displacement patterns will help elucidate the complex physics of surface faulting. (Volume 2, Article 100009)
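The abstract names a RANSAC-based plane detector as one component of the framework; a minimal NumPy sketch of generic RANSAC plane fitting (parameters such as `n_iter` and `threshold` are illustrative choices, not the paper's):

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.02, rng=None):
    """Fit a plane (unit normal n, offset d with n·x = d) to a 3D point set
    by RANSAC: sample 3 points, build a candidate plane, count inliers
    within `threshold` (in the units of the cloud), keep the best model."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = normal @ sample[0]
        inliers = np.abs(points @ normal - d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

In a change-detection pipeline, planes detected in the two epochs would then be put into correspondence and their offsets fed to a displacement estimator.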
Citations: 2
Efficient coarse registration method using translation- and rotation-invariant local descriptors towards fully automated forest inventory
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2021-12-01 DOI: 10.1016/j.ophoto.2021.100007
Eric Hyyppä , Jesse Muhojoki , Xiaowei Yu , Antero Kukko , Harri Kaartinen , Juha Hyyppä
In this paper, we present a simple, efficient, and robust algorithm for 2D coarse registration of two point clouds. In the proposed algorithm, the locations of some distinct objects are detected from the point cloud data, and a rotation- and translation-invariant feature descriptor vector is computed for each of the detected objects based on the relative locations of the neighboring objects. Subsequently, the feature descriptors obtained for the different point clouds are compared against one another by using the Euclidean distance in the feature space as the similarity criterion. By using the nearest neighbor distance ratio, the most promising matching object pairs are found and further used to fit the optimal Euclidean transformation between the two point clouds. Importantly, the time complexity of the proposed algorithm scales quadratically in the number of objects detected from the point clouds. We demonstrate the proposed algorithm in the context of forest inventory by performing coarse registration between terrestrial and airborne point clouds. To this end, we use trees as the objects and perform the coarse registration by using no other information than the locations of the detected trees. We evaluate the performance of the algorithm using both simulations and three test sites located in a boreal forest. We show that the algorithm is fast and performs well for a large range of stem densities and for test sites with up to 10,000 trees. Additionally, we show that the algorithm works reliably even in the case of moderate errors in the tree locations, commission and omission errors in the tree detection, and partial overlap of the data sets. We also demonstrate that additional tree attributes can be incorporated into the proposed feature descriptor to improve the robustness of the registration algorithm, provided that reliable information on these additional tree attributes is available. Furthermore, we show that the registration accuracy between the terrestrial and airborne point clouds can be significantly improved if stem positions estimated from the terrestrial data are matched to stem positions obtained from the airborne data instead of matching them to tree top positions estimated from the airborne data. Even though the 2D coarse registration algorithm is demonstrated in the context of forestry, it is not restricted to forest data and may potentially be utilized in other applications in which efficient 2D point set registration is needed. (Volume 2, Article 100007)
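The idea of a translation- and rotation-invariant descriptor built from the relative locations of neighboring objects, matched with the nearest-neighbor distance ratio, can be sketched as follows. Sorted distances to the k nearest neighbors are the simplest invariant choice and stand in here for the paper's (richer) descriptor:

```python
import numpy as np

def local_descriptor(positions, idx, k=4):
    """Translation- and rotation-invariant descriptor for object `idx`:
    the sorted distances to its k nearest neighboring objects.
    Distances are preserved under any rigid motion of the whole cloud."""
    d = np.linalg.norm(positions - positions[idx], axis=1)
    return np.sort(d)[1:k + 1]          # drop the zero self-distance

def match_by_ratio(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor distance-ratio matching in feature space: keep a
    pair (i, j) only if the best match is clearly better than the second
    best, which suppresses ambiguous correspondences."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - da, axis=1)
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, order[0]))
    return matches
```

The surviving pairs would then be used to fit the 2D Euclidean transformation, e.g. with a robust least-squares estimator.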
Citations: 6
Pose-aware monocular localization of occluded pedestrians in 3D scene space
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2021-12-01 DOI: 10.1016/j.ophoto.2021.100006
Mohammad Masoud Rahimi , Kourosh Khoshelham , Mark Stevenson , Stephan Winter
Localization of pedestrians in 3D scene space from single RGB images is critical for various downstream applications. Current monocular approaches employ either the bounding box of pedestrians or the visible parts of their bodies for localization. Both approaches introduce additional error to the location estimation in real-world scenarios: crowded environments with multiple occluded pedestrians. To overcome this limitation, this paper proposes a novel human pose-aware pedestrian localization framework that models the poses of occluded pedestrians, enabling accurate localization in image and ground space. This is achieved with a lightweight neural network architecture that ensures fast and accurate prediction of missing body parts for downstream applications. Comprehensive experiments on two real-world datasets demonstrate the effectiveness of the framework compared to the state of the art in predicting pedestrians' missing body parts as well as in pedestrian localization. (Volume 2, Article 100006)
Citations: 0
The Hessigheim 3D (H3D) benchmark on semantic segmentation of high-resolution 3D point clouds and textured meshes from UAV LiDAR and Multi-View-Stereo
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2021-10-01 DOI: 10.1016/j.ophoto.2021.100001
Michael Kölle , Dominik Laupheimer , Stefan Schmohl , Norbert Haala , Franz Rottensteiner , Jan Dirk Wegner , Hugo Ledoux
Automated semantic segmentation and object detection are of great importance in geospatial data analysis. However, supervised machine learning systems such as convolutional neural networks require large corpora of annotated training data. Especially in the geospatial domain, such datasets are quite scarce. Within this paper, we aim to alleviate this issue by introducing a new annotated 3D dataset that is unique in three ways: i) The dataset consists of both an Unmanned Aerial Vehicle (UAV) laser scanning point cloud and a 3D textured mesh. ii) The point cloud features a mean point density of about 800 pts/m² and the oblique imagery used for 3D mesh texturing realizes a ground sampling distance of about 2–3 cm. This enables the identification of fine-grained structures and represents the state of the art in UAV-based mapping. iii) Both data modalities will be published for a total of three epochs, allowing applications such as change detection. The dataset depicts the village of Hessigheim (Germany), henceforth referred to as H3D, either represented as the 3D point cloud H3D(PC) or the 3D mesh H3D(Mesh). It is designed to promote research in the field of 3D data analysis on one hand and to evaluate and rank existing and emerging approaches for semantic segmentation of both data modalities on the other hand. Ultimately, we hope that H3D will become a widely used benchmark dataset in company with the well-established ISPRS Vaihingen 3D Semantic Labeling Challenge benchmark (V3D). The dataset can be downloaded from https://ifpwww.ifp.uni-stuttgart.de/benchmark/hessigheim/default.aspx. (Volume 1, Article 100001)
Citations: 47
Towards spherical robots for mobile mapping in human made environments
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2021-10-01 DOI: 10.1016/j.ophoto.2021.100004
Fabian Arzberger, Anton Bredenbeck, Jasper Zevering, Dorit Borrmann, Andreas Nüchter
Spherical robots are a format that has not been thoroughly explored for the application of mobile mapping. In contrast to other designs, it provides some unique advantages. Among those is a spherical shell that protects internal sensors and actuators from possible harsh environments, as well as an inherent rotation for locomotion that enables measurements in all directions. Mobile mapping always requires a high-precise pose knowledge to obtain consistent and correct environment maps. This is typically done by a combination of external reference sensors such as Global Navigation Satellite System (GNSS) measurements and inertial measurements, or by coarsely estimating the pose using inertial measurement units (IMUs) and post processing the data by registering the different measurements to each other. In indoor environments, the GNSS reference is not an option, hence many mobile mapping applications turn to the second option. An advantage of indoor environments is that human-made environments usually have a certain structure, such as parallel and perpendicular planes. We propose a registration procedure that exploits this structure by minimizing the distance of measured points to a corresponding plane. Further, we evaluate the procedure on a simulated dataset of an ideal corridor and on an experimentally acquired dataset with different motion profiles. We show that we nearly reproduce the ground truth for the simulated dataset and improve the average point-to-point distance to a reference scan in the experimental dataset. The presented algorithms are required to work completely autonomously. (Volume 1, Article 100004)
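The core registration idea of minimizing the distance of measured points to a corresponding plane can be illustrated for the translation-only case; a least-squares sketch that assumes point-to-plane correspondences (e.g. to the corridor's parallel and perpendicular walls) are already known:

```python
import numpy as np

def point_to_plane_translation(points, normals, ds):
    """Least-squares translation t minimizing the point-to-plane residuals
    n_i · (p_i + t) - d_i over all correspondences. Rows of `normals` are
    unit plane normals, `ds` the plane offsets (n·x = d). Directions not
    constrained by any normal get the minimum-norm component 0."""
    # Residual for candidate t is N @ t - b, with b_i = d_i - n_i · p_i
    b = ds - np.einsum('ij,ij->i', normals, points)
    t, *_ = np.linalg.lstsq(normals, b, rcond=None)
    return t
```

In a full pipeline this step would alternate with re-assigning points to planes, much like the point-to-plane variant of ICP; the rotational component is omitted here for brevity.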
Citations: 3
A decision-level fusion approach to tree species classification from multi-source remotely sensed data
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2021-10-01 DOI: 10.1016/j.ophoto.2021.100002
Baoxin Hu , Qian Li , G. Brent Hall
In this study, an object-oriented, decision-level fusion method is proposed for tree species classification based on spectral, textural, and structural features derived from multi-spectral and panchromatic imagery and Light Detection And Ranging (LiDAR) data. Murphy's average method based on Dempster-Shafer theory (DST) was used to calculate the combined mass function for decision making purposes. For individual feature groups, the mass functions were calculated using the support vector machine (SVM) classification method. The species examined included Norway maple, honey locust, Austrian pine, blue spruce, and white spruce. In addition to these species, a two- or three-species compound class was included in the decision process based on the normalized entropy in the presence of conflict, which was itself determined according to whether individual groups of features were consistent. The developed method provided a mechanism to identify tree crowns that could not be classified to one single species with high confidence due to the conflict among feature groups. Data used in this study were obtained for the Keele Campus of York University, Toronto, Ontario. Among the 223 test crowns, 204 crowns were assigned to one single species, and the overall classification accuracy was 0.89. A decision could not be made for 19 crowns with confidence, and as a result, a two- or three-species compound class was assigned. The classification accuracy was higher than that obtained using SVM classification based on individual and combined spectral, structural, and textural features. (Volume 1, Article 100002)
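Murphy's average method combines evidence by averaging the mass functions and then fusing the average with itself via Dempster's rule, which avoids the counter-intuitive results of plain Dempster combination under high conflict. A minimal sketch over singleton hypotheses (the species masses below are hypothetical, not values from the study):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over the same frame.
    Focal elements are frozensets; mass on empty intersections is the
    conflict, which is removed and the remainder renormalized."""
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

def murphy_average(masses):
    """Murphy's approach: average the n mass functions, then combine the
    average with itself n-1 times using Dempster's rule."""
    keys = set().union(*masses)
    n = len(masses)
    avg = {key: sum(m.get(key, 0.0) for m in masses) / n for key in keys}
    result = avg
    for _ in range(n - 1):
        result = dempster_combine(result, avg)
    return result

# Two feature groups (e.g. spectral and structural) voting on two species:
maple, spruce = frozenset({'maple'}), frozenset({'spruce'})
fused = murphy_average([{maple: 0.8, spruce: 0.2},
                        {maple: 0.6, spruce: 0.4}])
```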
Citations: 12
Mapping sugarcane in Thailand using transfer learning, a lightweight convolutional neural network, NICFI high resolution satellite imagery and Google Earth Engine
ISPRS Open Journal of Photogrammetry and Remote Sensing Pub Date : 2021-10-01 DOI: 10.1016/j.ophoto.2021.100003
Ate Poortinga , Nyein Soe Thwal , Nishanta Khanal , Timothy Mayer , Biplov Bhandari , Kel Markert , Andrea P. Nicolau , John Dilger , Karis Tenneson , Nicholas Clinton , David Saah
Air pollution from burning sugarcane is an important environmental issue in Thailand. Knowing the location and extent of sugarcane plantations would help in formulating effective strategies to reduce burning. High resolution satellite imagery combined with deep-learning technologies can be effective to map sugarcane with high precision. However, land cover mapping using high resolution data and computationally intensive deep-learning networks can be computationally costly. In this study, we used high resolution satellite imagery from Planet that has been made available to the public through Norway's International Climate and Forest Initiative (NICFI). We tested a U-Net deep-learning algorithm with a lightweight MobileNetV2 network as the encoder branch using the Google Earth Engine computational platform. We trained a model using the RGB channels with a pre-trained network (RGBt), an RGB model with randomly initialized weights (RGBr), and a model with randomly initialized weights including the NIR channel (RGBN). We found an F1-score of 0.9550, 0.9262 and 0.9297 for the RGBt, RGBr and RGBN models, respectively. For an independent model evaluation we found F1-scores of 0.9141, 0.8681 and 0.8911. We also found a discrepancy in the recall values reported by the model and those from the independent validation. We found that lightweight deep-learning models produce satisfactory results while providing effective means to apply mapping efforts at scale with reduced computational costs. We highlight the importance of central data repositories with labeled data, as pre-trained networks were found to be effective in improving the accuracy. (Volume 1, Article 100003)
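The reported F1-scores are the harmonic mean of precision and recall over the per-pixel confusion counts; a one-function sketch with hypothetical counts:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn)).
    A drop in recall alone, as seen between model-reported and independent
    validation figures, lowers F1 even when precision is unchanged."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical sugarcane pixel counts: 90 true positives, 10 false
# positives, 10 false negatives → precision = recall = F1 = 0.9.
print(f1_score(90, 10, 10))
```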
Citations: 10