Journal of Electronic Imaging: Latest Articles

Yarn hairiness measurement based on multi-camera system and perspective maximization model
IF 1.1 | CAS Zone 4 | Computer Science
Journal of Electronic Imaging | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.043043
Hongyan Cao, Zhenze Chen, Haihua Hu, Xiangbing Huai, Hao Zhu, Zhongjian Li
Abstract: Accurate measurement and identification of the number and length of yarn hairiness are crucial for spinning process optimization and product quality control. However, existing methods suffer from low detection accuracy and efficiency and from incomplete detection. To overcome these defects, an image acquisition device based on a multi-camera system is built to accurately capture hairiness images from multiple perspectives. An automatic threshold segmentation method based on local bimodality is proposed, combining image differencing, convolution kernel enhancement, and histogram equalization. Clear and unbroken yarn hairiness segmentation images are then obtained with a hairiness edge extraction method. Finally, a perspective maximization model is proposed to compute the hairiness H value and the number of hairs per length interval. Six kinds of cotton ring-spun yarn with different linear densities are tested using the proposed method, the YG133B/M instrument, a manual method, and a single-perspective method. The results show that the proposed multi-camera method can reliably measure yarn hairiness indices.
Citations: 0
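The local bimodal thresholding step lends itself to a short illustration. Below is a minimal Python sketch of a generic bimodal valley threshold on a grayscale hairiness image; the paper's exact local rule is not public, so the smoothing loop and peak/valley selection here are illustrative assumptions.

```python
# Bimodal valley thresholding sketch, assuming a uint8 grayscale image.
import numpy as np

def bimodal_valley_threshold(gray: np.ndarray) -> int:
    """Smooth the histogram until at most two peaks remain, return the valley."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    kernel = np.ones(5) / 5.0
    for _ in range(1000):  # iterative smoothing until the histogram is bimodal
        peaks = [i for i in range(1, 255) if hist[i - 1] < hist[i] >= hist[i + 1]]
        if len(peaks) <= 2:
            break
        hist = np.convolve(hist, kernel, mode="same")
    if len(peaks) < 2:
        return int(np.argmax(hist))  # degenerate histogram: fall back to the mode
    p1, p2 = peaks[0], peaks[-1]
    return p1 + int(np.argmin(hist[p1:p2 + 1]))  # valley between the two peaks

# Usage: hairiness_mask = gray > bimodal_valley_threshold(gray)
```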
Video frame interpolation based on depthwise over-parameterized recurrent residual convolution
IF 1.1 | CAS Zone 4 | Computer Science
Journal of Electronic Imaging | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.043036
Xiaohui Yang, Weijing Liu, Shaowen Wang
Abstract: To effectively address the challenges of large motions, complex backgrounds, and large occlusions in videos, we introduce an end-to-end video frame interpolation method based on recurrent residual convolution and depthwise over-parameterized convolution. Specifically, we devise a U-Net architecture utilizing recurrent residual convolution to enhance the quality of interpolated frames. First, the recurrent residual U-Net feature extractor extracts features from the input frames, yielding a kernel for each pixel. Subsequently, an adaptive collaboration of flows is used to warp the input frames, which are then fed into the frame synthesis network to generate initial interpolated frames. Finally, the network incorporates depthwise over-parameterized convolution to further enhance interpolation quality. Experimental results on various datasets demonstrate the superiority of our method over state-of-the-art techniques in both objective and subjective evaluations.
Citations: 0
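Depthwise over-parameterized convolution (DO-Conv) is the distinctive building block here. The sketch below shows the general idea in PyTorch: an extra depthwise kernel D over-parameterizes the standard kernel W during training and is folded into a single kernel in the forward pass, so inference costs no more than a plain convolution. Shapes and initialization follow the published DO-Conv idea, not necessarily the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DOConv2d(nn.Module):
    def __init__(self, c_in, c_out, k=3, padding=1):
        super().__init__()
        self.k, self.padding = k, padding
        d_mul = k * k
        # Standard kernel W plus the extra depthwise kernel D that
        # over-parameterizes it during training.
        self.W = nn.Parameter(torch.randn(c_out, c_in, d_mul) * 0.02)
        self.D = nn.Parameter(torch.eye(d_mul).repeat(c_in, 1, 1))  # identity init

    def forward(self, x):
        c_out, c_in, d_mul = self.W.shape
        # Fold W and D into one kernel: W'[o,c,k] = sum_j W[o,c,j] * D[c,j,k]
        w = torch.einsum("ocj,cjk->ock", self.W, self.D)
        w = w.reshape(c_out, c_in, self.k, self.k)
        return F.conv2d(x, w, padding=self.padding)

# Usage: y = DOConv2d(64, 64)(torch.randn(1, 64, 32, 32))
```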
Additive cosine margin for unsupervised softmax embedding
IF 1.1 | CAS Zone 4 | Computer Science
Journal of Electronic Imaging | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.040501
Dan Wang, Jianwei Yang, Cailing Wang
Abstract: Unsupervised embedding learning aims to learn highly discriminative image features without using class labels. Existing instance-wise softmax embedding methods treat each instance as a distinct class and explore the underlying instance-to-instance visual similarity relationships. However, overfitting the instance features leads to insufficient discriminability and poor generalizability of the networks. To tackle this issue, we introduce an instance-wise softmax embedding with cosine margin (SEwCM), which for the first time adds a margin to the unsupervised instance softmax classification function from the cosine perspective. The cosine margin separates the classification decision boundaries between instances. SEwCM explicitly optimizes the feature mapping of the networks by maximizing the cosine similarity between instances, thus learning a highly discriminative model. Exhaustive experiments on three fine-grained image datasets demonstrate the effectiveness of the proposed method over existing methods.
Citations: 0
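The additive cosine margin can be made concrete in a few lines of PyTorch. This is a minimal sketch assuming L2-normalized features and an instance memory bank; the scale s and margin m values are illustrative defaults, not the paper's tuned settings.

```python
import torch
import torch.nn.functional as F

def sewcm_loss(feats, memory, pos_idx, s=10.0, m=0.35):
    """feats: (B, D) embeddings of augmented views; memory: (N, D) instance
    bank; pos_idx: (B,) index of each instance's own slot in the bank."""
    feats = F.normalize(feats, dim=1)
    memory = F.normalize(memory, dim=1)
    cos = feats @ memory.t()                      # (B, N) cosine similarities
    onehot = F.one_hot(pos_idx, memory.size(0)).float()
    logits = s * (cos - m * onehot)               # margin only on the positive
    return F.cross_entropy(logits, pos_idx)
```

Subtracting m from the positive logit before scaling forces each instance to be recognized with a fixed cosine gap over all other instances, which is what pushes the decision boundaries apart.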
Multi-scale adaptive low-light image enhancement based on deep learning
IF 1.1 | CAS Zone 4 | Computer Science
Journal of Electronic Imaging | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.043033
Taotao Cao, Taile Peng, Hao Wang, Xiaotong Zhu, Jia Guo, Zhen Zhang
Abstract: Existing low-light image enhancement (LLIE) technologies have difficulty balancing image quality and computational efficiency, and they amplify the noise and artifacts of the original image when enhancing very dark images. This study therefore proposes a multi-scale adaptive low-light image enhancement method based on deep learning, with dedicated feature extraction and noise reduction modules. First, a more effective low-light enhancement is achieved by extracting the details of the dark areas of an image: a residual attention mechanism and a non-local neural network in the UNet model extract deep dark-area details and produce a visual-attention map of the dark areas. Second, the designed noise network obtains the real noise map of the low-light image. The enhancement network then takes the dark-area visual-attention map and the noise map, together with the original low-light image, as inputs to adaptively realize LLIE. The results of the proposed network achieve excellent performance in color, tone, contrast, and detail. Finally, quantitative and visual experiments on multiple benchmark test datasets demonstrate that the proposed method is superior to current state-of-the-art methods in dark-area detail, image quality enhancement, and image noise reduction. These results can help address real-world challenges of low-light image quality, such as low contrast, poor visibility, and high noise levels.
Citations: 0
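A minimal sketch of the adaptive fusion stage follows, assuming the dark-area attention map and the noise map have already been predicted by the upstream modules; the layer sizes and channel counts are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class EnhanceNet(nn.Module):
    """Fuse the low-light RGB image, its dark-area attention map, and its
    noise map into an enhanced image (channel counts are assumptions)."""
    def __init__(self):
        super().__init__()
        # 3 (low-light RGB) + 1 (attention map) + 3 (noise map) channels in
        self.body = nn.Sequential(
            nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, low, attn, noise):
        return self.body(torch.cat([low, attn, noise], dim=1))

# Usage: out = EnhanceNet()(low_rgb, attention_map, noise_map)
```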
Alternative evaluation of industrial surface defect synthesis data based on analytic hierarchy process
IF 1.1 | CAS Zone 4 | Computer Science
Journal of Electronic Imaging | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.043055
Yang Lu, Hang Hao, Linhui Chen, Longfei Yang, Xiaoheng Jiang
Abstract: Deep-learning-based defect detection methods require a large amount of high-quality defect data, but the defect samples obtained in practical production are relatively expensive and lack diversity. Data generation based on generative adversarial networks (GANs) can address the shortage of defect samples at lower cost; however, the training of GAN-based generation algorithms is affected by many factors, making it difficult to guarantee stable quality of the synthesized defect data. Since high-quality defect data determine the performance and representation of the detection model, the synthesized defect data must be evaluated before use. We comprehensively consider the indicators that affect generated defect data and propose an alternative evaluation method for synthesized industrial surface defect data. First, an evaluation index system is constructed from the attributes of the defect data. Then, an alternative evaluation model for surface defect data is built using a multi-level quantitative analytic hierarchy process. Finally, we verify the effectiveness of the evaluation model through comparative experiments with three advanced defect detection networks. The approach provides an effective solution for screening high-quality generated defect data and improves the performance of downstream defect detection models.
Citations: 0
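The analytic hierarchy process step at the core of the model is compact enough to sketch. The Python below derives indicator weights from a pairwise comparison matrix via the principal eigenvector and checks the consistency ratio; the example matrix and indicator names are made up for illustration and do not reproduce the paper's index system.

```python
import numpy as np

# Saaty's random consistency index for matrix sizes 1..7
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(A: np.ndarray):
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                        # principal eigenvector -> weights
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)   # consistency index
    cr = ci / RI[n] if RI[n] else 0.0   # consistency ratio; accept if < 0.1
    return w, cr

# Hypothetical 3-indicator comparison matrix (diversity, fidelity, sharpness)
A = np.array([[1, 3, 5], [1 / 3, 1, 2], [1 / 5, 1 / 2, 1]])
w, cr = ahp_weights(A)
print(w, cr)  # weights sum to 1; cr < 0.1 means the judgments are consistent
```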
Enhancing hyperspectral image classification with graph attention neural network
IF 1.1 | CAS Zone 4 | Computer Science
Journal of Electronic Imaging | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.043052
Niruban Rathakrishnan, Deepa Raja
Abstract: With the rapid advancement of hyperspectral remote sensing technology, classification methods based on hyperspectral images (HSIs) have gained increasing significance in target identification, mineral mapping, and environmental management, because HSIs offer a more comprehensive understanding of a target's composition. However, the high dimensionality and redundancy of HSI sets, coupled with potential class imbalances in hyperspectral datasets, remain a complex challenge. Both convolutional neural networks (CNNs) and graph convolutional networks (GCNs) have demonstrated promising results in HSI classification in recent years. Nonetheless, CNNs struggle to attain high accuracy with limited sample sizes, whereas GCNs demand substantial computational resources, and oversmoothing remains a persistent problem with conventional GCNs. In response to these issues, a graph attention neural network for remote target classification (GATN-RTC) is proposed. GATN-RTC employs a spectral filter and an autoregressive moving average filter to classify distant targets, addressing datasets both with and without labeled samples. To evaluate its performance, we conducted a comparative analysis against state-of-the-art methods using overall accuracy (OA), per-class accuracy, and Cohen's Kappa statistic (KC). The findings reveal that GATN-RTC outperforms existing approaches, achieving improvements of 5.95% in OA, 5.33% in per-class accuracy, and 8.28% in Cohen's KC on the Salinas dataset, and enhancements of 6.05% and 6.4% in OA, 6.56% and 5.89% in per-class accuracy, and 6.71% and 6.23% in Cohen's KC on the Pavia University dataset.
Citations: 0
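A single-head graph attention layer, the aggregation mechanism a network like GATN-RTC builds on, can be sketched briefly in PyTorch; the ARMA spectral filtering stage mentioned in the abstract is omitted here, and the layer assumes the adjacency matrix includes self-loops so every node has at least one neighbor.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttention(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)
        self.a = nn.Linear(2 * d_out, 1, bias=False)

    def forward(self, x, adj):
        """x: (N, d_in) node features; adj: (N, N) 0/1 adjacency with self-loops."""
        h = self.W(x)                                         # (N, d_out)
        n = h.size(0)
        # Pairwise concatenation of sender/receiver features for scoring
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1), 0.2)      # (N, N) scores
        e = e.masked_fill(adj == 0, float("-inf"))            # keep edges only
        alpha = torch.softmax(e, dim=1)                       # attention weights
        return alpha @ h                                      # aggregate neighbors
```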
D2Net: discriminative feature extraction and details preservation network for salient object detection
IF 1.1 | CAS Zone 4 | Computer Science
Journal of Electronic Imaging | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.043047
Qianqian Guo, Yanjiao Shi, Jin Zhang, Jinyu Yang, Qing Zhang
Abstract: Convolutional neural networks (CNNs), with their powerful feature extraction ability, have raised the performance of salient object detection (SOD) to a new level, and effectively decoding the rich features from a CNN is key to improving SOD performance. Some previous works ignore the differences between high-level and low-level features and neglect the information loss during feature processing, so they fail in some challenging scenes. To solve this problem, we propose a discriminative feature extraction and details preservation network (D2Net) for SOD. According to the different characteristics of high-level and low-level features, we design a residual optimization module that filters complex background noise from shallow features and a pyramid feature extraction module that eliminates the information loss caused by atrous convolution in high-level features. Furthermore, we design a feature aggregation module that aggregates the elaborately processed high-level and low-level features, fully considering the roles of features at different levels and preserving the delicate boundaries of salient objects. Comparisons with 17 state-of-the-art SOD methods on five popular datasets demonstrate the superiority of the proposed D2Net, and the effectiveness of each proposed module is verified through extensive ablation experiments.
Citations: 0
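The pyramid feature extraction idea, countering the information loss of a single large-dilation atrous convolution with parallel dilated branches, can be sketched as an ASPP-style module; the branch count and dilation rates below are illustrative guesses, not D2Net's exact design.

```python
import torch
import torch.nn as nn

class PyramidFeatures(nn.Module):
    def __init__(self, c, rates=(1, 2, 4, 8)):
        super().__init__()
        # Parallel dilated 3x3 branches with increasing receptive fields
        self.branches = nn.ModuleList(
            nn.Conv2d(c, c, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(c * len(rates), c, 1)  # merge all scales

    def forward(self, x):
        # Each branch sees a different context size; concatenating them keeps
        # the detail a single large-dilation conv would skip over.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Usage: y = PyramidFeatures(256)(high_level_features)
```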
Edge-oriented unrolling network for infrared and visible image fusion
IF 1.1 | CAS Zone 4 | Computer Science
Journal of Electronic Imaging | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.043051
Tianhui Yuan, Zongliang Gan, Changhong Chen, Ziguan Cui
Abstract: Under unfavorable conditions, fused infrared and visible images often lack edge contrast and details. To address this issue, we propose an edge-oriented unrolling network comprising a feature extraction network and a feature fusion network. In our approach, the original infrared/visible image pair and their separately enhanced versions are combined as the input to provide more prior information. First, the feature extraction network consists of four independent iterative edge-oriented unrolling feature extraction branches based on an edge-oriented deep unrolling residual module (EURM), in which the convolutions are replaced with edge-oriented convolution blocks to strengthen edge features. Then, a convolutional feature fusion network with a differential structure produces the final fusion result, using concatenation to map multidimensional features. In addition, the loss function of the fusion network is optimized to balance multiple features with significant differences and achieve a better visual effect. Experimental results on multiple datasets demonstrate that the proposed method produces competitive fusion images in both subjective and objective evaluations, with balanced luminance, sharper edges, and better details.
Citations: 0
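An edge-oriented convolution block can be sketched by running fixed Sobel filters in parallel with a learnable convolution so that explicit edge responses are injected into the features; this is an assumption about what such a block looks like, not a reproduction of the paper's module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeConvBlock(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.conv = nn.Conv2d(c, c, 3, padding=1)  # learnable branch
        sx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        # One horizontal/vertical Sobel pair per channel, applied depthwise
        # and kept fixed (registered as a buffer, not a parameter).
        self.register_buffer("sobel", torch.stack([sx, sx.t()])
                             .repeat(c, 1, 1).unsqueeze(1))
        self.mix = nn.Conv2d(3 * c, c, 1)

    def forward(self, x):
        edges = F.conv2d(x, self.sobel, padding=1, groups=x.size(1))
        return self.mix(torch.cat([self.conv(x), edges], dim=1))

# Usage: y = EdgeConvBlock(64)(features)
```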
Precipitation nowcasting based on ConvLSTM-UNet deep spatiotemporal network
IF 1.1 | CAS Zone 4 | Computer Science
Journal of Electronic Imaging | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.043053
Xiangming Zheng, Huawang Qin, Haoran Chen, Weixi Wang, Piao Shi
Abstract: The primary objective of precipitation nowcasting is to accurately predict short-term precipitation at high resolution within a specific area, which is a significant and intricate challenge. Traditional models often struggle to capture the multidimensional characteristics of precipitation clouds in both time and space, leading to imprecise predictions as the clouds expand, dissipate, and deform. Recognizing this limitation, we introduce ConvLSTM-UNet, which leverages spatiotemporal feature extraction from meteorological images. ConvLSTM-UNet is an efficient convolutional neural network (CNN) based on the classical UNet architecture, equipped with ConvLSTM and improved depthwise separable convolutions. We evaluate our approach on the generic time-series dataset Moving MNIST and a regional precipitation dataset of the Netherlands. The experimental results show that the proposed method has better spatiotemporal prediction skill than the other tested models, reducing the mean squared error by more than 7.2%. In addition, visualizations of the precipitation forecasts show that the approach captures heavy precipitation better and that the texture details of its forecasts are closer to the ground truth.
Citations: 0
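The ConvLSTM cell that ConvLSTM-UNet embeds in the UNet is standard enough to sketch in PyTorch; the gate layout below is the common formulation, not necessarily the authors' exact variant.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, c_in, c_hidden, k=3):
        super().__init__()
        # One conv produces all four gates at once from [input, hidden]
        self.conv = nn.Conv2d(c_in + c_hidden, 4 * c_hidden, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, o, g = gates.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # convolutional cell-state update
        h = o * torch.tanh(c)
        return h, (h, c)

# Usage over a radar sequence (shapes are placeholders):
# h = c = torch.zeros(batch, hidden, H, W)
# for frame in frames: h, (h, c) = cell(frame, (h, c))
```

Replacing the matrix multiplications of a plain LSTM with convolutions is what lets the cell track how precipitation cells move and deform across the image plane.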
Squeeze-and-excitation attention and bi-directional feature pyramid network for filter screens surface detection
IF 1.1 | CAS Zone 4 | Computer Science
Journal of Electronic Imaging | Pub Date: 2024-08-01 | DOI: 10.1117/1.jei.33.4.043044
Junpeng Xu, Xiangbo Zhu, Lei Shi, Jin Li, Ziman Guo
Abstract: Based on an enhanced YOLOv5, a deep learning defect detection technique is presented to address the inadequate effectiveness of manually inspecting filter screen surfaces. In the last layer of the backbone network, the method integrates a squeeze-and-excitation (SE) attention module, which assigns weights to image locations from the channel-domain perspective to obtain more feature information. The results are also compared with a simple, parameter-free attention module (SimAM), an attention mechanism without a channel domain, and are 0.7% higher than with SimAM. In addition, the neck network replaces the basic PANet structure with a bi-directional feature pyramid network (BiFPN) module, which introduces multi-scale feature fusion. Experimental results show that the improved YOLOv5 algorithm achieves an average defect detection accuracy of 97.7% on the dataset, which is 11.3%, 12.8%, 2%, 7.8%, 5.1%, and 1.3% higher than YOLOv3, Faster R-CNN, YOLOv5, SSD, YOLOv7, and YOLOv8, respectively. It can quickly and accurately identify various defects on the filter surface, making an outstanding contribution to the filter manufacturing industry.
Citations: 0
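The squeeze-and-excitation (SE) block added to the backbone is a well-known module and easy to sketch; the reduction ratio r=16 below is the common default from the original SE paper, not necessarily this paper's setting.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, c, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(c, c // r), nn.ReLU(inplace=True),
            nn.Linear(c // r, c), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pooling
        return x * w.view(b, c, 1, 1)     # excite: per-channel reweighting

# Usage: y = SEBlock(1024)(features)  # e.g. on the last backbone stage
```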