Latest Articles: IEEE Geoscience and Remote Sensing Letters (a publication of the IEEE Geoscience and Remote Sensing Society)

Enhanced Landslide Detection Using a Swin Transformer With Multiscale Feature Fusion and Local Information Aggregation Modules
Saied Pirasteh; Muhammad Yasir; Hong Fan; Fernando J. Aguilar; Md Sakaouth Hossain; Huxiong Li
DOI: 10.1109/LGRS.2025.3560990 | Published: 2025-04-22 | Volume 22, pp. 1-5
Abstract: In recent years, detecting and monitoring landslides have become increasingly critical for disaster management and mitigation efforts. Here, we propose a model for landslide detection that combines the Swin Transformer architecture with a multiscale feature fusion lateral connection module (MFFLCM) and a local information aggregation module (LIAM). The Swin Transformer, known for its effectiveness in image understanding tasks, serves as the backbone of our detection system. By leveraging its hierarchical self-attention mechanism, the Swin Transformer effectively captures both local and global contextual information from input images, facilitating accurate feature representation. To improve the Swin Transformer's performance specifically for landslide detection, we introduce two additional modules. The MFFLCM integrates features across multiple scales, allowing the model to capture both fine-grained details and the broader contextual information relevant to landslide characteristics. Meanwhile, the LIAM aggregates local information within regions of interest, further refining the model's ability to discriminate between landslide and non-landslide areas. Through extensive testing and evaluation on benchmark datasets, the proposed method demonstrates promising results, achieving mIoU, F1 score, kappa, precision, and recall of 84.2%, 90.7%, 82.6%, 89.9%, and 91.9%, respectively. Moreover, its robustness to variations in terrain and environmental conditions suggests its potential for real-world applications in landslide monitoring and early warning systems. Overall, our study highlights the effectiveness of integrating advanced transformer architectures with tailored modules to address complex geospatial challenges like landslide detection.
Citations: 0
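The abstract describes the MFFLCM only at a high level. Below is a minimal PyTorch sketch of an FPN-style lateral-connection fusion over multiscale backbone features in that spirit; the channel widths (typical Swin-T stage dimensions) and the top-down add-and-smooth structure are illustrative assumptions, not the authors' exact design.

```python
# Sketch: lateral-connection fusion of multiscale features (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LateralFusion(nn.Module):
    def __init__(self, in_channels=(96, 192, 384, 768), out_channels=128):
        super().__init__()
        # 1x1 convs project each backbone stage to a common width
        self.laterals = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        # 3x3 conv smooths the fused map
        self.smooth = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: list of maps ordered fine (high-res) to coarse (low-res)
        laterals = [l(f) for l, f in zip(self.laterals, feats)]
        # top-down pathway: upsample the coarser map and add to the finer one
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:],
                mode="bilinear", align_corners=False)
        return self.smooth(laterals[0])

x = [torch.randn(1, c, s, s) for c, s in zip((96, 192, 384, 768), (64, 32, 16, 8))]
print(LateralFusion()(x).shape)  # torch.Size([1, 128, 64, 64])
```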
CaPaT: Cross-Aware Paired-Affine Transformation for Multimodal Data Fusion Network
Jinping Wang; Hao Chen; Xiaofei Zhang; Weiwei Song
DOI: 10.1109/LGRS.2025.3560931 | Published: 2025-04-22 | Volume 22, pp. 1-5
Abstract: This letter proposes a cross-aware paired-affine transformation (CaPaT) network for multimodal data fusion tasks. Unlike existing networks that employ weight-sharing or indirect interaction strategies, the CaPaT introduces a direct feature interaction paradigm that significantly improves the transfer efficiency of feature fusion while reducing the number of model parameters. Specifically, the method splits each modality's data along the channel domain and synthesizes specific group channels and opposite residual channels as data pairs to generate refined features, achieving direct interaction among multimodal features. Next, a scaling attention module is applied to each refined feature pair to generate a confidence map. The confidence maps are then multiplied by their corresponding feature pairs, yielding a more reasonable and discriminative margin feature representation. Finally, a classifier is applied to the transformed features to output the final class labels. Experimental results demonstrate that the CaPaT achieves superior classification performance with fewer parameters than state-of-the-art methods.
Citations: 0
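As a rough illustration of the paired-channel interaction described above, here is a minimal sketch: each modality's features are split along the channel axis and paired with the other modality's residual half, then weighted by a confidence map from a small attention head. All layer shapes and the shared refinement conv are illustrative assumptions.

```python
# Sketch: paired channel splitting with confidence-weighted fusion (assumed design).
import torch
import torch.nn as nn

class PairedAffine(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # scaling-attention head producing a per-pixel confidence map
        self.conf = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, a, b):
        # split both modalities along the channel domain
        a1, a2 = a.chunk(2, dim=1)
        b1, b2 = b.chunk(2, dim=1)
        # pair each group with the opposite modality's residual channels
        pair_a = self.refine(torch.cat([a1, b2], dim=1))
        pair_b = self.refine(torch.cat([b1, a2], dim=1))
        # weight each refined pair by its confidence map and merge
        return pair_a * self.conf(pair_a) + pair_b * self.conf(pair_b)

fused = PairedAffine()(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(fused.shape)  # torch.Size([2, 64, 32, 32])
```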
Low-Overhead Compression-Aware Channel Filtering for Hyperspectral Image Compression
Wei Zhang; Jiayao Xu; Yueru Chen; Dingquan Li; Wen Gao
DOI: 10.1109/LGRS.2025.3562933 | Published: 2025-04-21 | Volume 22, pp. 1-5
Abstract: Both traditional and learning-based hyperspectral image (HSI) compression methods suffer from significant quality loss at high compression ratios. To address this, we propose a low-overhead, compression-aware channel filtering method. The encoder derives channel filters via least squares regression (LSR) between the lossy compressed and original images. The bitstream, containing the compressed image and the filters, is sent to the decoder, where the filters enhance image quality. This simple, compression-aware approach is compatible with any existing framework, enhancing quality while introducing only a negligible increase in bitstream size and decoding time, thereby achieving low overhead. Experimental results show consistent rate-distortion gains, reducing compression rates by 10.51% to 39.81% on the GF-5 dataset with minimal decoding and storage overhead.
Citations: 0
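The LSR step is concrete enough to sketch. Below, a per-channel correction filter is fit by least squares between the lossy-decoded channel and the original, then applied at the decoder. The 3x3 FIR filter form is an illustrative assumption; the paper's actual filter structure may differ.

```python
# Sketch: fit and apply a least-squares channel filter (assumed 3x3 FIR form).
import numpy as np

def fit_channel_filter(original, decoded, k=3):
    """Fit a k x k filter mapping the decoded channel toward the original."""
    pad = k // 2
    padded = np.pad(decoded, pad, mode="edge")
    H, W = decoded.shape
    # design matrix: one column per neighbor offset, one row per pixel
    cols = [padded[dy:dy + H, dx:dx + W].ravel()
            for dy in range(k) for dx in range(k)]
    A = np.stack(cols, axis=1)                       # (H*W, k*k)
    coef, *_ = np.linalg.lstsq(A, original.ravel(), rcond=None)
    return coef.reshape(k, k)

def apply_filter(decoded, coef):
    k = coef.shape[0]
    pad = k // 2
    padded = np.pad(decoded, pad, mode="edge")
    H, W = decoded.shape
    out = np.zeros_like(decoded, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += coef[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out

rng = np.random.default_rng(0)
orig = rng.random((64, 64))
dec = orig + 0.05 * rng.standard_normal((64, 64))    # stand-in for lossy output
f = fit_channel_filter(orig, dec)
restored = apply_filter(dec, f)
print(np.mean((dec - orig) ** 2), np.mean((restored - orig) ** 2))  # MSE drops
```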
Building Extraction From Multi-View RGB-H Images With General Instance Segmentation Networks and a Grouping Optimization Algorithm
Dawen Yu; Hao Cheng
DOI: 10.1109/LGRS.2025.3562892 | Published: 2025-04-21 | Volume 22, pp. 1-5
Abstract: Bird's-eye-view (BEV) building mapping from remote sensing images is a research hotspot with broad applications. In recent years, deep learning (DL) has significantly advanced the development of automatic building extraction methods. However, most existing research focuses on segmenting buildings from a single perspective, such as orthophotos, overlooking the rich information in multi-view images. In surveying and mapping, individual building instances need to be separated even when they are adjacent or touching. Since orthophotos cannot capture building walls due to self-occlusion, distinguishing between closely connected buildings in densely built areas becomes challenging. To tackle this issue, we propose a multi-view collaborative pipeline for instance-level building segmentation. The pipeline uses a grouping optimization algorithm to merge segmentation results from multiple views, which are predicted by general instance segmentation networks and projected onto the BEV, to produce the final building instance polygons. Both qualitative and quantitative results show that the proposed multi-view collaborative pipeline significantly outperforms the popular orthophoto-based pipeline on the InstanceBuilding dataset.
Citations: 0
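A minimal sketch of the grouping idea: per-view instance masks, once projected to the BEV, are merged when they overlap strongly, so one building seen from several views becomes a single instance. The greedy IoU threshold and union-merge rule are assumptions standing in for the paper's grouping optimization.

```python
# Sketch: greedy IoU-based grouping of BEV masks from multiple views (assumed rule).
import numpy as np

def mask_iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def group_masks(masks, thr=0.5):
    """Greedily merge BEV masks whose IoU exceeds thr into one instance."""
    groups = []
    for m in masks:
        for g in groups:
            if mask_iou(m, g) > thr:
                np.logical_or(g, m, out=g)   # merge into the existing instance
                break
        else:
            groups.append(m.copy())
    return groups

base = np.zeros((32, 32), bool); base[8:20, 8:20] = True
shifted = np.roll(base, 1, axis=0)           # same building seen from view 2
other = np.zeros((32, 32), bool); other[24:30, 24:30] = True
print(len(group_masks([base, shifted, other])))  # 2 building instances
```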
Shape Embedding and Knowledge Mining Network for Generalized Few-Shot Remote Sensing Segmentation
Zifeng Qiu; Hongyu Liu; Hang Xiong; Chengliang Di; Hao Fang; Runmin Cong
DOI: 10.1109/LGRS.2025.3562894 | Published: 2025-04-21 | Volume 22, pp. 1-5
Abstract: In recent years, generalized few-shot segmentation (GFSS) has received widespread attention by virtue of its strengths in low-data regimes. Most existing research focuses on natural image processing, and few studies have been devoted to the practical but challenging topic of remote sensing image (RSI) understanding. In this letter, we propose a shape embedding and knowledge mining network (SKNet) for generalized few-shot RSI segmentation. The framework is divided into two key stages: 1) in the base class learning stage, shape representation embedding is introduced to enhance the network's ability to perceive remote sensing objects, and a self-reconstruction constraint (SRC) is introduced to prevent new unseen classes from merging, thereby improving the representation uniqueness of these classes; and 2) in the novel class learning stage, a base class knowledge mining (BCKM) mechanism is designed to update the prototypes of the novel classes using the prototype representations of the base classes, enhancing the discrimination ability of the network. We validated our methods on adapted versions of the OpenEarthMap and iSAID datasets. In comparison to existing GFSS methods, the proposed approach demonstrates an improvement.
Citations: 0
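To make the BCKM idea concrete, here is a minimal sketch of refreshing a novel-class prototype with similarity-weighted base-class prototypes. The softmax attention over base classes and the mixing weight alpha are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: update a novel prototype from base prototypes (assumed mixing rule).
import torch
import torch.nn.functional as F

def mine_base_knowledge(novel_proto, base_protos, alpha=0.3):
    """Blend a novel prototype with similarity-weighted base prototypes."""
    sims = F.cosine_similarity(novel_proto[None, :], base_protos, dim=1)
    weights = torch.softmax(sims, dim=0)              # attention over base classes
    mined = (weights[:, None] * base_protos).sum(0)   # aggregated base knowledge
    return F.normalize((1 - alpha) * novel_proto + alpha * mined, dim=0)

base = F.normalize(torch.randn(15, 256), dim=1)   # 15 base-class prototypes
novel = F.normalize(torch.randn(256), dim=0)      # one novel-class prototype
print(mine_base_knowledge(novel, base).shape)     # torch.Size([256])
```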
DeepPSF: On-Orbit Point Spread Function Estimation of Space Camera With Deep Learning
Bo Wang; Hongyu Chen; Ying Lu; Jiantao Peng
DOI: 10.1109/LGRS.2025.3562763 | Published: 2025-04-21 | Volume 22, pp. 1-5
Abstract: Obtaining an on-orbit space camera's point spread function (PSF) is challenging but necessary for remote sensing image (RSI) restoration. Current methods rely on regular ground targets, so determining the PSF at an arbitrary position on the camera sensor involves significant human effort and is highly inefficient. To reduce the difficulty and improve the precision of estimating the PSFs of an on-orbit space camera, this letter proposes DeepPSF, a novel PSF prediction method based on deep learning and the Fourier transform. DeepPSF employs a dual-stream convolutional neural network (CNN) to extract multiscale features from blurred and reference images, introduces a channel-wise Wiener filtering block for PSF feature calculation in the frequency domain, and reconstructs a high-precision PSF through a CNN. Experiments demonstrate that: 1) on synthetic datasets, DeepPSF achieves PSF prediction with 58.2 dB PSNR (SSIM > 0.64), significantly outperforming Wiener filtering and the phase-only image (POI)-based kernel estimation method; 2) combined with the nonblind deblurring algorithm DWDN, it delivers 26.1 dB restoration PSNR, surpassing comparative methods; and 3) real RSI tests validate its adaptability to complex scenarios. This method provides an efficient solution for full field-of-view PSF modeling of on-orbit cameras.
Citations: 0
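The classical operation that the channel-wise Wiener block builds on can be shown directly: given a sharp reference image and its blurred observation, the PSF is recovered by a Wiener-regularized spectral division. The noise regularizer eps below is an illustrative assumption; DeepPSF learns this step rather than applying it in closed form.

```python
# Sketch: classical Wiener-filter PSF estimation in the frequency domain.
import numpy as np

def wiener_psf(reference, blurred, eps=1e-3):
    """Estimate the PSF linking a reference image to its blurred version."""
    R = np.fft.fft2(reference)
    B = np.fft.fft2(blurred)
    H = B * np.conj(R) / (np.abs(R) ** 2 + eps)   # Wiener-regularized division
    psf = np.real(np.fft.ifft2(H))
    return np.fft.fftshift(psf)                   # center the kernel for viewing

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
true_psf = np.zeros((64, 64)); true_psf[0, 0] = 0.6; true_psf[0, 1] = 0.4
blur = np.real(np.fft.ifft2(np.fft.fft2(ref) * np.fft.fft2(true_psf)))
est = wiener_psf(ref, blur)
print(est.shape, float(est.max()))                # peak value near 0.6
```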
Boundary SAM: Improved Parcel Boundary Delineation Using SAM's Image Embeddings and Detail Enhancement Filters
Bahaa Awad; Isin Erer
DOI: 10.1109/LGRS.2025.3563023 | Published: 2025-04-21 | Volume 22, pp. 1-5
Abstract: Accurate agricultural parcel boundary delineation is essential in remote sensing applications, yet traditional supervised methods require extensively annotated datasets and often fail to generalize across diverse landscapes. The segment anything model (SAM), a foundation model for zero-shot segmentation, provides scalability but struggles with certain remote sensing challenges, particularly agricultural parcels. In this letter, we propose a novel approach that enhances SAM's performance by leveraging its embeddings to extract meaningful features. Our method applies principal component analysis (PCA) for dimensionality reduction, high-frequency decomposition, and guided filtering to enhance the input images, aligning them better with SAM's strengths. By refining the input data through these steps, we improve SAM's ability to delineate parcel boundaries effectively. Experimental results demonstrate consistent improvements across SAM backbone sizes and parameter settings, achieving higher accuracy in segmentation metrics such as under-segmentation (US) rate, over-segmentation (OS) rate, intersection over union (IoU), and false negative (FN) rate.
Citations: 0
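A minimal sketch of the feature-extraction step described above: reduce a SAM-style image embedding (C x H x W) to its leading principal components, then keep the high-frequency residual of the first component as a boundary cue. Using a Gaussian blur as the low-pass stage is an assumption standing in for the paper's guided filtering.

```python
# Sketch: PCA over embedding channels + high-frequency residual (assumed pipeline).
import numpy as np
from scipy.ndimage import gaussian_filter

def embedding_edge_map(embedding, n_components=3):
    """PCA across channels, then the high-frequency part of component 1."""
    C, H, W = embedding.shape
    X = embedding.reshape(C, -1).T                # (H*W, C): pixels as samples
    X = X - X.mean(0, keepdims=True)
    # principal axes of the channel covariance via SVD
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pc = (X @ Vt[:n_components].T).T.reshape(n_components, H, W)
    first = pc[0]
    return first - gaussian_filter(first, sigma=2)  # high-frequency detail

emb = np.random.default_rng(3).random((256, 64, 64))  # stand-in SAM embedding
print(embedding_edge_map(emb).shape)  # (64, 64)
```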
GPR Bscan Imaging Enhancement Method for Rebar Occlusion
Qiguo Xu; Tao Zhang; Zebang Pang; Wentai Lei
DOI: 10.1109/LGRS.2025.3562426 | Published: 2025-04-18 | Volume 22, pp. 1-5
Abstract: When ground-penetrating radar (GPR) is used to detect targets below a shallow rebar mesh in reinforced concrete structures, the strong scattering of the rebar mesh distorts and interferes with the target echoes, leading to imaging artifacts and degradation. This letter proposes a coarse-scale and fine-scale dual-branch imaging enhancement network (CFD-IENet) that achieves target imaging under the rebar mesh by combining Bscan echo data enhancement with back projection (BP) imaging result enhancement. First, a residual U (Res-U) network suppresses complex background clutter in the Bscan data to improve the signal-to-noise ratio. Then, a coarse-scale and fine-scale dual-branch network is constructed to enhance both the Bscan data and the BP imaging. In the Bscan enhancement stage, strong and weak signals are trained separately, suppressing surface rebar echo interference while reconstructing the weak target signals beneath the rebar mesh. In the BP imaging enhancement stage, artifacts and multipath ghosts are suppressed to enhance imaging of the occluded targets. A bilinear fusion module (BFM) is designed to facilitate global feature interaction, promoting the fusion of Bscan and BP imaging features across scales and thereby improving reconstruction and enhancement accuracy. Experimental results on cracks occluded by rebar mesh demonstrate the method's effectiveness, showing a 4.73-dB improvement in peak signal-to-noise ratio (PSNR) and a 0.16 improvement in the structural similarity (SSIM) index compared to the RNMF + BP + Unet enhancement method.
Citations: 0
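As a rough illustration of bilinear fusion between Bscan-domain and BP-imaging features, the sketch below forms a channel-wise outer product of the two feature maps at every spatial location and projects it back to the working width. The dimensions and the 1x1 projection are illustrative assumptions about the BFM, not its published structure.

```python
# Sketch: bilinear (outer-product) fusion of two feature maps (assumed design).
import torch
import torch.nn as nn

class BilinearFusion(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # project the C*C outer-product channels back to C
        self.proj = nn.Conv2d(channels * channels, channels, kernel_size=1)

    def forward(self, bscan_feat, bp_feat):
        B, C, H, W = bscan_feat.shape
        # channel-wise outer product at every spatial location
        outer = torch.einsum("bchw,bdhw->bcdhw", bscan_feat, bp_feat)
        return self.proj(outer.reshape(B, C * C, H, W))

f = BilinearFusion()(torch.randn(1, 32, 16, 16), torch.randn(1, 32, 16, 16))
print(f.shape)  # torch.Size([1, 32, 16, 16])
```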
Conditional Brownian Bridge Diffusion Model for VHR SAR to Optical Image Translation
Seon-Hoon Kim; Daewon Chung
DOI: 10.1109/LGRS.2025.3562401 | Published: 2025-04-18 | Volume 22, pp. 1-5
Abstract: Synthetic aperture radar (SAR) imaging technology offers the unique advantage of collecting data regardless of weather conditions and time of day. However, SAR images exhibit complex backscatter patterns and speckle noise, which require expertise to interpret. Research on translating SAR images into optical-like representations has been conducted to aid the interpretation of SAR data. Nevertheless, existing studies have predominantly utilized low-resolution satellite imagery datasets and have largely been based on generative adversarial networks (GANs), which are known for training instability and low fidelity. To overcome these limitations of low-resolution data usage and GAN-based approaches, this letter introduces a conditional image-to-image translation approach based on the Brownian bridge diffusion model (BBDM). We conducted comprehensive experiments on the MSAW dataset, a collection of paired SAR and optical images at 0.5 m very high resolution (VHR). The experimental results indicate that our method surpasses both conditional diffusion models (CDMs) and GAN-based models in diverse perceptual quality metrics.
Citations: 0
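The Brownian-bridge forward process that BBDM-style translation relies on is simple to state: the state drifts from the target image x0 toward the conditioning image y, with a variance that vanishes at both endpoints. The schedule below follows the published BBDM form (m_t = t/T, delta_t = 2(m_t - m_t^2)); treat it as an illustrative sketch rather than this letter's exact conditional variant.

```python
# Sketch: Brownian-bridge forward sampling between two images (BBDM-style schedule).
import torch

def bb_forward(x0, y, t, T=1000, s=1.0):
    """Sample x_t on a Brownian bridge from x0 (t=0) to y (t=T)."""
    m = t / T                              # mixing coefficient in [0, 1]
    delta = 2.0 * s * (m - m ** 2)         # bridge variance, zero at both ends
    eps = torch.randn_like(x0)
    return (1 - m) * x0 + m * y + delta ** 0.5 * eps

x0 = torch.randn(1, 3, 64, 64)   # optical target image
y = torch.randn(1, 3, 64, 64)    # SAR condition mapped to image space
print(bb_forward(x0, y, t=500).shape)  # torch.Size([1, 3, 64, 64])
```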
Fine-Tuning SAM for Forward-Looking Sonar With Collaborative Prompts and Embedding
Jiayuan Li; Zhen Wang; Nan Xu; Zhuhong You
DOI: 10.1109/LGRS.2025.3562182 | Published: 2025-04-18 | Volume 22, pp. 1-5
Abstract: The segment anything model (SAM) represents a significant advancement in semantic segmentation, particularly for natural images, but encounters notable limitations when applied to forward-looking sonar (FLS) images. The primary challenges lie in the inherent boundary ambiguity of FLS images, which complicates the use of prompt strategies for accurate boundary delineation, and in the lack of effective interaction between prompts and image features. In this letter, we introduce a collaborative prompting (CP) strategy that addresses these issues by generating dense prompt embeddings and sonar tokens focused on contour and boundary features, replacing the original dense prompt embedding and intersection over union (IoU) token. To further enhance segmentation, we use embedding compensation techniques based on Mamba and the Kolmogorov–Arnold network (KAN), which add boundary information to the image embeddings and improve the fusion of prompts within them. We conducted comprehensive experiments, including comparative analyses and ablation studies, to validate the superiority of the proposed approach. Results show that our method significantly improves segmentation performance for FLS images, effectively addressing boundary ambiguity and optimizing prompt utilization. The source code and dataset will be available at https://github.com/darkseid-arch/FLSSAM.
Citations: 0
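Purely as an illustration of generating dense, boundary-focused prompt embeddings from an image embedding, here is a small sketch using fixed gradient filters as the contour cue. The Sobel-style branch, layer sizes, and head structure are hypothetical; SAM's real prompt-encoder interface and the paper's Mamba/KAN compensation are not reproduced here.

```python
# Sketch: a hypothetical dense prompt head driven by embedding gradients.
import torch
import torch.nn as nn

class DensePromptHead(nn.Module):
    def __init__(self, prompt_dim=256):
        super().__init__()
        # fixed horizontal/vertical gradient filters (Sobel) as contour extractors
        sobel = torch.tensor([[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]])
        self.register_buffer("kx", sobel[None])                  # (1,1,3,3)
        self.register_buffer("ky", sobel.transpose(1, 2)[None])  # (1,1,3,3)
        self.mix = nn.Conv2d(2, prompt_dim, kernel_size=1)

    def forward(self, image_embedding):
        # collapse channels, then extract gradients as a boundary cue
        g = image_embedding.mean(1, keepdim=True)
        gx = nn.functional.conv2d(g, self.kx, padding=1)
        gy = nn.functional.conv2d(g, self.ky, padding=1)
        return self.mix(torch.cat([gx, gy], dim=1))

emb = torch.randn(1, 256, 64, 64)        # SAM-style image embedding
print(DensePromptHead()(emb).shape)      # torch.Size([1, 256, 64, 64])
```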