IET Image Processing: Latest Articles

Gaussian Process-Driven Semi-Supervised Single-Image Rain Removal: Enhancing Real-Scene Generalizability
IF 2.0 | CAS Q4 | Computer Science
IET Image Processing Pub Date: 2025-03-10 DOI: 10.1049/ipr2.70040
Lisha Liu, Peiquan Xiong, Fei Liu
This paper proposes a semi-supervised single-image rain removal method that uses Gaussian processes to decouple rain components from background features. Existing methods often fail to generalize to real scenes because synthetic data offers limited diversity in rain direction and density. To address this, we integrate synthetic and real rainy images: Gaussian processes model intermediate features of synthetic images to generate pseudo-labels that supervise the real images. A two-stage encoder–decoder architecture with squeeze-and-excitation residual and context feature fusion modules enhances feature disentanglement. Experiments on both synthetic and real datasets demonstrate superior performance, achieving a peak signal-to-noise ratio of 26.11 dB and structural similarity of 0.89 on synthetic images, while preserving more background detail and effectively supporting downstream tasks such as object segmentation.
Citations: 0
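The 26.11 dB figure above is a peak signal-to-noise ratio. As a reminder of what that metric measures, here is a minimal pure-Python PSNR on greyscale images represented as lists of rows (generic metric code, not the paper's implementation):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-size greyscale images.

    Higher is better; identical images give infinity.
    """
    flat_ref = [p for row in ref for p in row]
    flat_test = [p for row in test for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_test)) / len(flat_ref)
    if mse == 0:
        return math.inf
    return 10.0 * math.log10(max_val ** 2 / mse)
```

An off-by-one-grey-level image (MSE = 1) scores roughly 48 dB, so the 26 dB reported here corresponds to much larger residual differences.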
Enhancing Self-Supervised Monocular Depth Estimation in Endoscopy via Feature-Based Perceptual Loss
IET Image Processing Pub Date: 2025-03-09 DOI: 10.1049/ipr2.70035
Kejin Zhu, Li Cui
In recent years, self-supervised learning methods for monocular depth estimation have garnered significant attention because they can learn from large amounts of unlabelled data. In this study, we propose further improvements for endoscopic scenes based on existing self-supervised monocular depth estimation methods. The previous method introduces an appearance flow to address brightness inconsistencies caused by lighting changes and uses a unified self-supervised framework to estimate depth and camera motion simultaneously. To further strengthen the model's supervisory signals, we introduce a new feature-based perceptual loss: a pre-trained encoder extracts features from both the synthesized and target frames, and their cosine dissimilarity serves as an additional source of supervision. In this way, we aim to improve the model's robustness to the complex lighting and surface-reflection conditions of endoscopic scenes. We compare two pre-trained CNN-based models and four foundation models as encoders. Experimental results show that our improved method further enhances depth-estimation accuracy in medical imaging, and that features extracted by CNN-based models, which are sensitive to local detail, outperform those of foundation models. This suggests that encoders for extracting medical-image features may not require extensive pre-training; relatively simple traditional convolutional neural networks can suffice.
Citations: 0
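The perceptual loss is described as the cosine dissimilarity between encoder features of the synthesized and target frames. A toy sketch of that quantity on flat feature vectors (an illustration of the formula, not the authors' code):

```python
import math

def cosine_dissimilarity(feat_a, feat_b, eps=1e-8):
    """1 - cosine similarity between two flattened feature vectors.

    Returns ~0 for parallel features, ~1 for orthogonal ones.
    """
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    norm_a = math.sqrt(sum(a * a for a in feat_a))
    norm_b = math.sqrt(sum(b * b for b in feat_b))
    return 1.0 - dot / (norm_a * norm_b + eps)
```

In training this would be averaged over feature maps from the pre-trained encoder and added to the photometric loss.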
Flood-MATE: A Flood Segmentation Model in Urban Regions through Adaptation of Mean Teacher and Ensemble Approach
IET Image Processing Pub Date: 2025-03-09 DOI: 10.1049/ipr2.70023
Bella Septina Ika Hartanti, Adila Alfa Krisnadhi, Laksmita Rahadianti, Wiwiek Dwi Susanti, Achmad Fakhrus Shomim
Flood disasters remain among the most frequently recurring natural phenomena worldwide, occurring when excessive water flow submerges land for an extended period. The escalating occurrence of floods, particularly in urban areas, can be attributed to climate change, extreme weather patterns, uncontrolled urbanization, and complex geographical conditions. To mitigate destructive impacts such as loss of life and economic damage, automatic flood analysis and remote-sensing image segmentation offer valuable decision-making insights. However, segmentation for flood detection is challenged by the scarcity of labelled data and by diverse resolutions, including medium-resolution data. In response, the authors propose Flood-MATE, a novel semi-supervised learning approach based on the mean-teacher model. The approach leverages a deep learning architecture and introduces a new loss-function scenario for training. The dataset used in this study comprises thoroughly processed Sentinel-1 C-band SAR images. Promisingly, the results demonstrate a 4% improvement in the IoU metric over the baseline method employing pseudo-labelling.
Citations: 0
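Flood-MATE builds on the mean-teacher model, whose core mechanic is an exponential moving average (EMA) of the student's weights into the teacher after each step. A minimal sketch of that update on a flat parameter dictionary (generic mean-teacher machinery, not the paper's code):

```python
def ema_update(teacher, student, alpha=0.99):
    """One mean-teacher step: teacher <- alpha*teacher + (1-alpha)*student.

    `teacher` and `student` map parameter names to scalar weights here;
    in practice the same rule is applied tensor-wise.
    """
    return {name: alpha * teacher[name] + (1 - alpha) * student[name]
            for name in teacher}
```

Because the teacher averages many student snapshots, its predictions on unlabelled images are smoother and serve as consistency targets for the student.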
Image Segmentation Refinement Based on Region Expansion and Minor Contour Adjustments
IET Image Processing Pub Date: 2025-03-08 DOI: 10.1049/ipr2.70017
Li-yue Yan, Xing Zhang, Kafeng Wang, Siting Xiong, De-jin Zhang
In high-precision image segmentation tasks, even slight deviations in the segmentation results can have significant consequences, especially in application areas such as medical imaging and remote-sensing image classification. Segmentation precision has become the main factor limiting progress. Researchers typically refine image segmentation algorithms to enhance accuracy, but it is difficult for any single improvement strategy to apply effectively to images of different objects and scenes. To address this, we propose a two-step refinement method for image segmentation comprising region expansion and minor contour adjustments. First, we design an adaptive gradient-thresholding module that provides gradient-based constraints for the refinement process. Next, the region expansion module iteratively refines each segmented region based on colour differences and gradient thresholds. Finally, the minor contour adjustments module leverages local strong-gradient features to further refine contour positions. The method integrates region-level and pixel-level information and can refine a variety of image segmentation results. Applied to the BSDS500, Cells, and WHU Building datasets, the refined closed contours align more closely with the ground truth, with the most notable improvement at contour inflection points (corners). The Cells dataset showed the largest gain in segmentation accuracy, with the F-score increasing from 87.51% to 89.73% and IoU from 86.83% to 88.40%.
Citations: 0
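The reported gains are in F-score and IoU. For reference, here are both metrics computed on binary masks represented as sets of foreground pixel indices (standard definitions, not the authors' evaluation code):

```python
def iou_and_fscore(pred, gt):
    """IoU and F-score between two binary masks given as pixel-index sets."""
    pred, gt = set(pred), set(gt)
    inter = len(pred & gt)
    union = len(pred | gt)
    iou = inter / union if union else 1.0
    precision = inter / len(pred) if pred else 0.0
    recall = inter / len(gt) if gt else 0.0
    denom = precision + recall
    fscore = 2 * precision * recall / denom if denom else 0.0
    return iou, fscore
```

Note that IoU is always the stricter of the two: for the same prediction it is never larger than the F-score.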
Underwater Image Enhancement Based on Depth and Light Attenuation Estimation
IET Image Processing Pub Date: 2025-03-06 DOI: 10.1049/ipr2.70037
Lianjun Zhang, Tingna Liu, Qichao Shi, Fen Chen
Light attenuation and complex water environments seriously degrade underwater imaging quality, and current underwater image restoration algorithms struggle with low-quality, colour-distorted images. This study proposes a novel underwater image processing algorithm based on a light attenuation estimation model and a depth estimation network. First, a pseudo-depth-map strategy is used to train the underwater image depth estimation network. Second, the attenuation coefficient of the current image is estimated from the background light using a light attenuation model. Finally, the image is restored using an underwater imaging model. The proposed algorithm outperforms state-of-the-art underwater image processing algorithms in both subjective and objective quality.
Citations: 0
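The restoration step relies on the standard underwater imaging model I = J·t + B·(1 − t), with transmission t = exp(−β·d) for attenuation coefficient β and depth d; inverting it recovers the scene radiance J. A per-pixel, per-channel sketch (the model is standard; the parameter values in the test are illustrative):

```python
import math

def transmission(depth, beta):
    """Fraction of scene light surviving the water column."""
    return math.exp(-beta * depth)

def degrade(radiance, depth, beta, backlight):
    """Forward underwater imaging model: I = J*t + B*(1 - t)."""
    t = transmission(depth, beta)
    return radiance * t + backlight * (1 - t)

def restore(observed, depth, beta, backlight):
    """Invert the model: J = (I - B*(1 - t)) / t."""
    t = transmission(depth, beta)
    return (observed - backlight * (1 - t)) / t
```

The paper's contribution is estimating d (via the depth network) and β (via the light attenuation model) per image; given those, restoration is this closed-form inversion.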
Structure-Aware Transformer for Shadow Detection
IET Image Processing Pub Date: 2025-03-04 DOI: 10.1049/ipr2.70031
Wanlu Sun, Liyun Xiang, Wei Zhao
Shadow detection helps reduce ambiguity in object detection and tracking. However, existing shadow detection methods tend to misidentify complex shadows and visually similar patterns, such as soft shadow regions and shadow-like regions, because they treat all cases equally, leading to incomplete structure in the detected shadow regions. To alleviate this, we propose a structure-aware transformer network (STNet) for robust shadow detection. Specifically, we first develop a transformer-based shadow detection network to learn significant contextual interactions; a context-aware enhancement (CaE) block in the backbone expands the receptive field and enhances semantic interaction. We then design an edge-guided multi-task learning framework that produces structurally rich intermediate and main predictions; fusing these two complementary predictions yields an edge-preserving refined shadow map. Finally, we introduce an auxiliary semantic-aware learning task that overcomes interference from complex scenes, helping the model distinguish shadow from non-shadow regions via a semantic affinity loss. Together, these components predict high-quality shadow maps across different scenarios. Experimental results demonstrate that our method reduces the balance error rate (BER) by 4.53%, 2.54%, and 3.49% compared with state-of-the-art (SOTA) methods on the SBU, ISTD, and UCF benchmarks, respectively.
Citations: 0
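The balance error rate (BER) used above averages the error rates of the shadow and non-shadow classes, so it is not dominated by the (usually much larger) non-shadow area. Its standard definition from confusion-matrix counts (generic metric code, not the authors' evaluation script):

```python
def balance_error_rate(tp, tn, fp, fn):
    """BER (%) = 50 * (FN/(TP+FN) + FP/(TN+FP)).

    tp/fn count shadow pixels (hit/missed); tn/fp count non-shadow
    pixels (hit/false-alarmed).
    """
    shadow_error = fn / (tp + fn)
    nonshadow_error = fp / (tn + fp)
    return 50.0 * (shadow_error + nonshadow_error)
```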
SF-YOLO: A Novel YOLO Framework for Small Object Detection in Aerial Scenes
IET Image Processing Pub Date: 2025-03-03 DOI: 10.1049/ipr2.70027
Meng Sun, Le Wang, Wangyu Jiang, Fayaz Ali Dharejo, Guojun Mao, Radu Timofte
Object detection models are widely applied in fields such as video surveillance and unmanned aerial vehicles to identify and monitor objects against diverse backgrounds. General CNN-based detectors rely primarily on downsampling and pooling operations, often struggle with small, low-resolution objects, and fail to fully exploit the contextual information that differentiates objects from complex backgrounds. To address these problems, we propose SF-YOLO, a novel YOLO framework for small object detection. First, a spatial information perception (SIP) module extracts contextual features for different objects by combining a space-to-depth operation with a large selective kernel module, dynamically adjusting the backbone's receptive field and producing enhanced features that better distinguish objects from background. Furthermore, we design a novel multi-scale weighted feature-fusion strategy that combines the fast normalized fusion method with the CARAFE operation to weight feature maps, accurately assessing the importance of each feature and enhancing the representation of small objects. Extensive experiments on the VisDrone2019, Tiny-Person, and PESMOD datasets demonstrate detection performance comparable to state-of-the-art detectors.
Citations: 0
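The fusion strategy combines fast normalized fusion (popularised by EfficientDet's BiFPN) with CARAFE. The fast-normalized-fusion part — ReLU'd learnable scalar weights normalised by their sum plus a small epsilon — can be sketched as follows (an illustration of the general technique, not SF-YOLO's code; feature maps are flat lists here):

```python
def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse same-shape feature maps with ReLU'd, sum-normalised weights.

    features: list of equal-length feature vectors.
    weights: one learnable scalar per feature map.
    """
    w = [max(0.0, x) for x in weights]   # ReLU keeps weights non-negative
    total = sum(w) + eps                 # eps avoids division by zero
    length = len(features[0])
    return [sum(w[i] * features[i][j] for i in range(len(features))) / total
            for j in range(length)]
```

Compared with softmax-based weighting, this normalisation is cheaper while keeping the fused output bounded by the inputs.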
Deformable Attention Network for Efficient Space-Time Video Super-Resolution
IET Image Processing Pub Date: 2025-03-03 DOI: 10.1049/ipr2.70026
Hua Wang, Rapeeporn Chamchong, Phatthanaphong Chomphuwiset, Pornntiwa Pawara
Space-time video super-resolution (STVSR) aims to construct high space-time-resolution video sequences from low-frame-rate, low-resolution input. While recent STVSR works combine temporal interpolation and spatial super-resolution in a unified framework, they face computational-complexity challenges in both the temporal and spatial dimensions, particularly in achieving accurate intermediate-frame interpolation and efficient use of temporal information. To address these, we propose a deformable attention network for efficient STVSR. Specifically, we introduce a deformable interpolation block that employs hierarchical feature fusion to handle complex inter-frame motion at multiple scales, enabling more accurate intermediate-frame generation. To fully utilise temporal information, we design a temporal feature shuffle block (TFSB) that efficiently exchanges complementary information across multiple frames. Additionally, we develop a motion feature enhancement block incorporating a channel attention mechanism to selectively emphasise motion-related features, further boosting the TFSB's effectiveness. Experimental results on benchmark datasets demonstrate that our method achieves competitive performance on STVSR tasks.
Citations: 0
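The abstract does not spell out the TFSB's exact form, but a channel shuffle applied along the temporal axis — interleaving channels across frames so each frame's features mix information from every frame — conveys the general flavour of exchanging complementary temporal information (a hypothetical toy, not the authors' block):

```python
def temporal_channel_shuffle(features):
    """Interleave channels across frames.

    features: list of T per-frame channel lists, each of length C.
    Returns T lists of length C where consecutive channels come from
    different input frames, so every output frame mixes all inputs.
    """
    num_frames, num_channels = len(features), len(features[0])
    # Flatten channel-major: channel 0 of every frame, then channel 1, ...
    flat = [features[t][c] for c in range(num_channels) for t in range(num_frames)]
    # Regroup into per-frame lists.
    return [flat[f * num_channels:(f + 1) * num_channels]
            for f in range(num_frames)]
```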
Multiclassification Tampering Detection Algorithm Based on Spatial-Frequency Fusion and Swin-T
IET Image Processing Pub Date: 2025-02-28 DOI: 10.1049/ipr2.70007
Li Li, Kejia Zhang, Jianfeng Lu, ShanQing Zhang, Ning Chu
Deep learning methods for image forgery detection often lack robustness to compression attacks. This paper proposes a novel multi-class forgery detection framework that combines spatial-frequency fusion with the Swin Transformer and outperforms existing methods under compression attack. The approach integrates a frequency-domain perception module built on quantization tables, a spatial-domain perception module using multi-strategy convolutions, and a dual-attention mechanism combining spatial and channel attention for feature fusion. Experimental results demonstrate superior performance, with an F1 score of 87% under JPEG compression (q = 75), surpassing current state-of-the-art methods by an average of 15% in compression resistance while maintaining high detection accuracy.
Citations: 0
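The frequency-domain module works with JPEG quantization tables. For context, this is how libjpeg-style encoders scale a base table by the quality factor q, which is why table contents are a strong fingerprint of compression history (standard JPEG machinery, not the paper's module):

```python
def scale_quant_table(base_table, quality):
    """libjpeg-style scaling of a base quantisation table by quality 1-100.

    Lower quality -> larger divisors -> coarser DCT coefficients.
    """
    quality = max(1, min(100, quality))
    if quality < 50:
        scale = 5000 // quality
    else:
        scale = 200 - 2 * quality
    # Round, then clamp each entry to the valid 1..255 range.
    return [max(1, min(255, (q * scale + 50) // 100)) for q in base_table]
```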
Learning Discriminative Palmprint Anti-Spoofing Features via High-Frequency Spoofing Regions Adaptation
IET Image Processing Pub Date: 2025-02-28 DOI: 10.1049/ipr2.70029
Chengcheng Liu, Huikai Shao, Dexing Zhong
Recently, most palmprint recognition studies have focused on feature extraction while neglecting security. Among attack types, spoofing poses a significant threat because of its high success rate and minimal technical requirements. In this study, we examine the differences between real and fake palmprint images and, based on them, propose the concept of 'high-frequency spoofing regions' to capture key discriminative spoofing clues. Specifically, we propose the high-frequency spoofing regions adaptation (HFSRA) model for palmprint anti-spoofing. HFSRA consists of two key modules: a texture analysis module (TAM) and a spoofing attention module (SAM). The TAM divides the input feature map into patches and evaluates the texture distribution within each patch; the SAM then dynamically constructs an attention map by mapping the texture distribution to an attention-weight matrix. This adaptive structure forces the model to focus on high-frequency spoofing regions, improving its ability to extract meaningful spoofing clues. Furthermore, we establish three experimental protocols for evaluating palmprint anti-spoofing models, providing a standardized evaluation framework for future studies. Extensive experiments under these protocols demonstrate the effectiveness and competitiveness of HFSRA.
Citations: 0
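The TAM/SAM pipeline scores texture per patch and maps the scores to attention weights. A crude stand-in, using per-patch variance as the texture score and a softmax as the weighting (a toy illustration; the actual modules are learned):

```python
import math

def patch_variance_attention(feature_map, patch):
    """Split a 2-D map into patch-by-patch blocks, score each by variance
    (a simple texture proxy), and softmax the scores into weights."""
    height, width = len(feature_map), len(feature_map[0])
    scores = []
    for i in range(0, height, patch):
        for j in range(0, width, patch):
            vals = [feature_map[y][x]
                    for y in range(i, min(i + patch, height))
                    for x in range(j, min(j + patch, width))]
            mean = sum(vals) / len(vals)
            scores.append(sum((v - mean) ** 2 for v in vals) / len(vals))
    # Softmax over patch scores: high-variance (textured) patches dominate.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

High-variance, high-frequency patches receive larger weights, which is the intuition behind steering the model toward likely spoofing regions.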