Pattern Recognition Letters: Latest Articles

DLR: Adversarial examples detection and label recovery for deep neural networks
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-02-01 DOI: 10.1016/j.patrec.2024.12.009
Keji Han, Yao Ge, Ruchuan Wang, Yun Li
{"title":"DLR: Adversarial examples detection and label recovery for deep neural networks","authors":"Keji Han ,&nbsp;Yao Ge ,&nbsp;Ruchuan Wang ,&nbsp;Yun Li","doi":"10.1016/j.patrec.2024.12.009","DOIUrl":"10.1016/j.patrec.2024.12.009","url":null,"abstract":"<div><div>Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples crafted by adversaries to deceive the target model. Two popular approaches to mitigate this issue are adversarial training and adversarial example detection. Adversarial training aims to enable the target model to accurately recognize adversarial examples in image classification tasks; however, it often lacks generalizability. Conversely, adversarial detection demonstrates good generalization but does not assist the target model in recognizing adversarial examples. In this paper, we first define the label recovery task to address the adversarial challenges faced by DNNs. We then propose a novel generative classifier specifically for the adversarial example label recovery task. This method is termed <strong>D</strong>etection and <strong>Label R</strong>ecovery (DLR), which comprises two components: Detector and Recover. The Detector processes both legitimate and adversarial examples, while the Recover component seeks to ascertain the ground-truth label of the detected adversarial example. DLR effectively combines the strengths of adversarial training and adversarial example detection. 
Experimental results demonstrate that our method outperforms several state-of-the-art approaches.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"188 ","pages":"Pages 133-139"},"PeriodicalIF":3.9,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CamoEnv: Transferable and environment-consistent adversarial camouflage in autonomous driving
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-02-01 DOI: 10.1016/j.patrec.2024.12.003
Zijian Zhu, Xiao Yang, Hang Su, Shibao Zheng
{"title":"CamoEnv: Transferable and environment-consistent adversarial camouflage in autonomous driving","authors":"Zijian Zhu ,&nbsp;Xiao Yang ,&nbsp;Hang Su ,&nbsp;Shibao Zheng","doi":"10.1016/j.patrec.2024.12.003","DOIUrl":"10.1016/j.patrec.2024.12.003","url":null,"abstract":"<div><div>Adversarial camouflage has garnered significant attention in the security literature on autonomous driving. The ability to adapt to various angles makes adversarial camouflage important in autonomous driving attack. Traditional adversarial camouflages often exhibit unnatural and conspicuous appearances due to lacking consistency with the surrounding environment. They also have limited black-box transferability since the high-dimensional space of their explicit 3D object modeling induces overfitting problem. In this paper, we propose CamoEnv, a novel approach for creating environment-consistent and transferable adversarial camouflage. It not only maintains consistency as the object and viewpoint move, but also evades detection by various black-box models. Specifically, we present an object-environment integration method that generates object-environment-aligned images across varying viewpoints and maximizes their consistency. Additionally, we introduce an implicit color module that effectively reduces the parameter dimensionality, thus mitigating the overfitting problem and improving black-box transferability. 
Experimental results demonstrate that CamoEnv not only achieves superior environment consistency but also outperforms existing methods in black-box transferability by margins of 18.62% and 5.54% average attack success rate in digital and simulated attack experiments respectively.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"188 ","pages":"Pages 95-102"},"PeriodicalIF":3.9,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Transformer with token attention and attribute prediction for image captioning
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-02-01 DOI: 10.1016/j.patrec.2024.11.022
Lifei Song, Ying Wang, Linsu Shi, Jiazhong Yu, Fei Li, Shiming Xiang
{"title":"Transformer with token attention and attribute prediction for image captioning","authors":"Lifei Song ,&nbsp;Ying Wang ,&nbsp;Linsu Shi ,&nbsp;Jiazhong Yu ,&nbsp;Fei Li ,&nbsp;Shiming Xiang","doi":"10.1016/j.patrec.2024.11.022","DOIUrl":"10.1016/j.patrec.2024.11.022","url":null,"abstract":"<div><div>Recently, Vision Transformers (ViTs) have become the mainstream models in image captioning tasks. ViTs take all image tokens as inputs to extract visual features, which may cause concerns about worthless tokens, and meanwhile lead to a huge amount of computation. This paper proposes a novel token reduction module to remedy this drawback. Specifically, the module employs ViTs to embed the input tokens, and adaptively learns informative visual tokens in way of token attention on the channel-spatial granularity. Furthermore, an attribute prediction module is designed to strengthen the relationship between vision and language. Technically, the attribute prediction is achieved via a classifier in form of Multi-Layer Perceptron (MLP). Both the visual representations and attribute representations are obtained by Transformers, which are then combined as the input of the Transformer decoder for caption generation. All of the modules are constructed in an encoder–decoder framework and support the end-to-end learning. Experiment results have shown that our approach can effectively reduce the computational cost of ViTs while maintaining comparable performance on the MS COCO and NoCaps datasets. 
For example, by pruning more than 70% of the input tokens, our approach greatly reduces GFLOPs by 41% <span><math><mo>∼</mo></math></span> 47%, while preserving its accuracy of a 142.1 CIDEr score on the MS COCO dataset.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"188 ","pages":"Pages 74-80"},"PeriodicalIF":3.9,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
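The token-reduction idea described in the captioning paper above can be illustrated with a generic top-k token pruning step. This is a hedged sketch, not the authors' implementation: the saliency score (mean absolute activation, standing in for learned token attention) and the keep ratio are assumptions.

```python
import numpy as np

def prune_tokens(tokens: np.ndarray, keep_ratio: float = 0.3) -> np.ndarray:
    """Keep only the most informative visual tokens.

    tokens: (N, D) array of token embeddings from a ViT encoder.
    Scores each token by a simple channel-aggregated saliency
    (mean absolute activation) and keeps the top `keep_ratio`
    fraction, preserving the original spatial order.
    """
    n_keep = max(1, int(round(len(tokens) * keep_ratio)))
    scores = np.abs(tokens).mean(axis=1)                    # (N,) saliency per token
    keep_idx = np.sort(np.argsort(scores)[::-1][:n_keep])   # top-scoring, in order
    return tokens[keep_idx]

# Example: prune 196 ViT patch tokens down to ~30% before the decoder.
rng = np.random.default_rng(0)
x = rng.standard_normal((196, 768))
pruned = prune_tokens(x, keep_ratio=0.3)
print(pruned.shape)  # (59, 768)
```

In a real captioning model the score would come from a learned channel-spatial attention head rather than raw activation magnitude, but the selection mechanics are the same.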
A small object detection method with context information for high altitude images
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-02-01 DOI: 10.1016/j.patrec.2024.11.027
Zhengkai Ma, Linli Zhou, Di Wu, Xianliu Zhang
{"title":"A small object detection method with context information for high altitude images","authors":"Zhengkai Ma ,&nbsp;Linli Zhou ,&nbsp;Di Wu ,&nbsp;Xianliu Zhang","doi":"10.1016/j.patrec.2024.11.027","DOIUrl":"10.1016/j.patrec.2024.11.027","url":null,"abstract":"<div><div>Detection of small objects stands as a pivotal and difficult task because of their low resolution and lack of visualization features. Though achieving some promising results, recent detection methods utilize the context information insufficiently, leading to inadequate small object feature representation and increasing the misdetection and omission rates. We propose a method named Context Information Enhancement YOLO(CIE-YOLO) for small object detection. CIE-YOLO mainly includes a Context Reinforcement Module(CRM), a Channel Spatial Joint Attention(CSJA) module, and a Pixel Feature Enhancement Module(PFEM). The CRM module extracts and enhances the context information to mitigate the confusion between small objects and the background in the network. Then CSJA suppresses the background noise to highlight important small object features. Finally, PFEM reduces the small object feature losses in up-sampling via feature enhancement and pixel resolution enhancement. The effectiveness of the proposed CIE-YOLO in small object detection is demonstrated by extensive experiments.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"188 ","pages":"Pages 22-28"},"PeriodicalIF":3.9,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Clinical knowledge aware synthesized CT image-based framework for improved detection and segmentation of hemorrhages
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-02-01 DOI: 10.1016/j.patrec.2024.11.028
Chitimireddy Sindhura, Subrahmanyam Gorthi
{"title":"Clinical knowledge aware synthesized CT image-based framework for improved detection and segmentation of hemorrhages","authors":"Chitimireddy Sindhura,&nbsp;Subrahmanyam Gorthi","doi":"10.1016/j.patrec.2024.11.028","DOIUrl":"10.1016/j.patrec.2024.11.028","url":null,"abstract":"<div><div>Intracranial hemorrhage (ICH) is a life-threatening condition characterized by bleeding within the brain tissue, necessitating immediate diagnosis and treatment to improve survival rates. CT imaging is the most commonly used modality for ICH diagnosis. Current methods typically depend on extensive annotated datasets and complex networks, and do not explicitly utilize the patient-specific clinical insights, which are crucial for precise diagnoses. In this paper, we introduce a novel deep-learning framework that utilizes synthesized CT images infused with clinical brain information to enhance the detection and segmentation of hemorrhages. This approach enhances data by synthesizing CT images based on the midsagittal plane and creates an asymmetry map that highlights the differences between the left and right halves of the CT image. We evaluated the performance of this approach using state-of-the-art deep learning architectures on two public datasets, INSTANCE and BHSD data sets, comprising around 300 CT scans with various types of haemorrhages. Results show that incorporating anatomical information improves the Dice Similarity Coefficient (DSC) for ICH segmentation by 7%–12% and increases detection accuracy by 4%–8%. 
Our findings suggest that incorporating prior anatomical knowledge can significantly enhance automated ICH diagnosis systems, paving the way for more reliable diagnostic solutions, even with limited data availability.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"188 ","pages":"Pages 46-52"},"PeriodicalIF":3.9,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
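The asymmetry map used in the hemorrhage paper above exploits the near left/right symmetry of the healthy brain. A minimal sketch, assuming the midsagittal plane has already been aligned with the vertical image midline (the alignment/registration step, which the paper performs, is omitted here):

```python
import numpy as np

def asymmetry_map(ct_slice: np.ndarray) -> np.ndarray:
    """Highlight left/right differences in an axial CT slice.

    Assumes the head is already aligned so the midsagittal plane
    coincides with the vertical image midline. The map is the
    absolute difference between the slice and its horizontal
    mirror; unilateral findings such as hemorrhages light up,
    while symmetric anatomy cancels out.
    """
    mirrored = ct_slice[:, ::-1]  # reflect across the vertical midline
    return np.abs(ct_slice.astype(np.float32) - mirrored.astype(np.float32))

# A symmetric slice yields a zero map; a one-sided bright lesion does not.
slice_ = np.zeros((4, 4), dtype=np.float32)
slice_[1, 0] = 100.0              # simulated lesion on the left side
amap = asymmetry_map(slice_)
print(amap[1, 0], amap[1, 3])     # 100.0 100.0
```

Note the mirror image makes the lesion appear at both mirrored positions in the map; the map is a saliency cue for the network, not a segmentation by itself.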
A cross-feature interaction network for 3D human pose estimation
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-02-01 DOI: 10.1016/j.patrec.2025.01.016
Jihua Peng, Yanghong Zhou, P.Y. Mok
{"title":"A cross-feature interaction network for 3D human pose estimation","authors":"Jihua Peng ,&nbsp;Yanghong Zhou ,&nbsp;P.Y. Mok","doi":"10.1016/j.patrec.2025.01.016","DOIUrl":"10.1016/j.patrec.2025.01.016","url":null,"abstract":"<div><div>The task of estimating 3D human poses from single monocular images is challenging because, unlike video sequences, single images can hardly provide any temporal information for the prediction. Most existing methods attempt to predict 3D poses by modeling the spatial dependencies inherent in the anatomical structure of the human skeleton, yet these methods fail to capture the complex local and global relationships that exist among various joints. To solve this problem, we propose a novel Cross-Feature Interaction Network to effectively model spatial correlations between body joints. Specifically, we exploit graph convolutional networks (GCNs) to learn the local features between neighboring joints and the self-attention structure to learn the global features among all joints. We then design a cross-feature interaction (CFI) module to facilitate cross-feature communications among the three different features, namely the local features, global features, and initial 2D pose features, aggregating them to form enhanced spatial representations of human pose. Furthermore, a novel graph-enhanced module (GraMLP) with parallel GCN and multi-layer perceptron is introduced to inject the skeletal knowledge of the human body into the final representation of 3D pose. Extensive experiments on two datasets (Human3.6M (Ionescu et al., 2013) and MPI-INF-3DHP (Mehta et al., 2017)) show the superior performance of our method in comparison to existing state-of-the-art (SOTA) models. 
The code and data are shared at <span><span>https://github.com/JihuaPeng/CFI-3DHPE</span><svg><path></path></svg></span></div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"189 ","pages":"Pages 175-181"},"PeriodicalIF":3.9,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143335308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
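The local/global split in the pose paper above (GCN over the skeleton graph for local features, self-attention over all joints for global features, then fusion with the initial 2D features) can be sketched in plain numpy. All dimensions and weights here are hypothetical, and the fusion is reduced to concatenate-and-project; the paper's actual CFI module is more elaborate.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def local_global_fuse(x, adj, Wg, Wq, Wk, Wv, Wf):
    """x: (J, D) per-joint 2D-pose features; adj: (J, J) skeleton adjacency.

    Local branch: one GCN layer over the skeleton graph.
    Global branch: one single-head self-attention layer over all joints.
    Fusion: concatenate both branches with the input and project.
    """
    # Local: symmetrically normalized adjacency with self-loops.
    a_hat = adj + np.eye(len(adj))
    d_inv = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv[:, None] * d_inv[None, :]
    local = np.maximum(a_norm @ x @ Wg, 0.0)          # ReLU(GCN layer)

    # Global: scaled dot-product self-attention among all joints.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    global_ = attn @ v

    # Fuse local, global, and initial 2D-pose features.
    return np.concatenate([local, global_, x], axis=-1) @ Wf

J, D = 17, 32                                          # 17 joints, toy width
rng = np.random.default_rng(1)
x = rng.standard_normal((J, D))
adj = np.zeros((J, J)); adj[0, 1] = adj[1, 0] = 1      # toy skeleton edge
Ws = [rng.standard_normal(s) * 0.1 for s in [(D, D)] * 4 + [(3 * D, D)]]
out = local_global_fuse(x, adj, *Ws)
print(out.shape)  # (17, 32)
```

The design point is that the GCN branch only mixes information along skeleton edges, while the attention branch lets every joint attend to every other joint; fusing both with the raw input preserves all three views.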
Global–local feature-mixed network with template update for visual tracking
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-02-01 DOI: 10.1016/j.patrec.2024.11.034
Li Zhao, Chenxiang Fan, Min Li, Zhonglong Zheng, Xiaoqin Zhang
{"title":"Global–local feature-mixed network with template update for visual tracking","authors":"Li Zhao ,&nbsp;Chenxiang Fan ,&nbsp;Min Li ,&nbsp;Zhonglong Zheng ,&nbsp;Xiaoqin Zhang","doi":"10.1016/j.patrec.2024.11.034","DOIUrl":"10.1016/j.patrec.2024.11.034","url":null,"abstract":"<div><div>Deep learning trackers have succeeded with a powerful local and global feature extraction capacity. However, both Siamese-based trackers with local convolution and Transformer-based trackers with global Transformer do not fully utilize frames. These trackers cannot obtain accurate tracking when they are faced with target appearance changes. This paper proposes a global–local features mixed tracker named GLT to complement the advantages of global and local frame features. GLT uses depth-wise convolution with dynamic weight to get local features and residual Transformer to get global features. Owing to global and local details, our method can perform accurate and robust tracking. Meanwhile, GLT has a template update strategy based on the key frame to face long-term tracking challenge. Numerous experiments show that our GLT achieves excellent performance on short-term and long-term benchmarks, including GOT-10k, TrackingNet and LaSOT. Furthermore, without many attention operations like other Transformer-based trackers, our GLT has fewer parameters and runs in real-time.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"188 ","pages":"Pages 111-116"},"PeriodicalIF":3.9,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143149700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SP-Det: Anchor-based lane detection network with structural prior perception
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-02-01 DOI: 10.1016/j.patrec.2024.11.030
Libo Sun, Hangyu Zhu, Wenhu Qin
{"title":"SP-Det: Anchor-based lane detection network with structural prior perception","authors":"Libo Sun,&nbsp;Hangyu Zhu,&nbsp;Wenhu Qin","doi":"10.1016/j.patrec.2024.11.030","DOIUrl":"10.1016/j.patrec.2024.11.030","url":null,"abstract":"<div><div>Effective perception and accurate localization of lane lines are the key points for intelligent vehicles to plan local driving paths and realize lane keeping and departure warning. However, the elongated structure of lane lines makes the performance of detectors degrade significantly when visual cues are scarce. The continuity of lane lines also puts forward higher requirements for the ability of algorithms to model long-range dependencies. In this paper, we propose a novel anchor-based lane detection network (SP-Det) combining the unique structural characteristics and pixel distribution of lane lines. Specifically, we introduce a Semantic-Guided Feature Calibration Unit (SG-FCU) to semantically calibrate and refine features from different layers and to narrow the semantic gap during fusion. Additionally, we propose a Spatial-aware Context Aggregation Block (S-CAB) and a Lane-aware Information Enhancement Module (LIEM) to improve the prediction accuracy of horizontal offsets of line anchors through global feature encoding and row-wise information sharing. 
The results of quantitative and qualitative experiments show that SP-Det achieves state-of-the-art performance on CULane and Tusimple benchmark datasets.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"188 ","pages":"Pages 60-66"},"PeriodicalIF":3.9,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An asymmetric heuristic for trained ternary quantization based on the statistics of the weights: An application to medical signal classification
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-02-01 DOI: 10.1016/j.patrec.2024.11.016
Yamil Vindas, Emmanuel Roux, Blaise Kévin Guépié, Marilys Almar, Philippe Delachartre
{"title":"An asymmetric heuristic for trained ternary quantization based on the statistics of the weights: An application to medical signal classification","authors":"Yamil Vindas ,&nbsp;Emmanuel Roux ,&nbsp;Blaise Kévin Guépié ,&nbsp;Marilys Almar ,&nbsp;Philippe Delachartre","doi":"10.1016/j.patrec.2024.11.016","DOIUrl":"10.1016/j.patrec.2024.11.016","url":null,"abstract":"<div><div>One of the main challenges in the field of deep learning and embedded systems is the mismatch between the memory, computational and energy resources required by the former for good performance and the resource capabilities offered by the latter. It is therefore important to find a good trade-off between performance and computational resources used. In this study, we propose a novel ternarization heuristic based on the statistics of the weights, in addition to asymmetric pruning. Our approach involves the computation of two asymmetric thresholds based on the mean and standard deviation of the weights. This allows us to distinguish between positive and negative values prior to ternarization. Two hyperparameters are introduced into these thresholds, which permit the user to control the trade-off between compression and classification performance. Following thresholding, ternarization is carried out in accordance with the methodology of trained ternary quantization (TTQ). The efficacy of the method is evaluated on three datasets, two of which are medical: a cerebral emboli (HITS) dataset, an epileptic seizure recognition (ESR) dataset, and the MNIST dataset. Two types of deep learning models were tested: 2D convolutional neural networks (CNNs) and 1D CNN-transformers. The results demonstrate that our approach, aTTQ, achieves a superior trade-off between classification performance and compression rate compared with TTQ, for all the models and datasets. 
In fact, our method is capable of reducing the memory requirements of a 1D CNN-transformer model for the ESR dataset by over 21% compared to TTQ, while maintaining a Matthews correlation coefficient of 95%. The code is available at: <span><span>https://github.com/yamilvindas/aTTQ</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"188 ","pages":"Pages 37-45"},"PeriodicalIF":3.9,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
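The asymmetric statistics-based thresholding in the aTTQ paper above is concrete enough to sketch. This is an illustration only: the exact threshold formula is an assumption (here, each side's own mean plus/minus a scaled standard deviation), and the learned TTQ scaling factors W_p/W_n are replaced by plain signs.

```python
import numpy as np

def asymmetric_ternarize(w: np.ndarray, alpha: float = 0.7, beta: float = 0.7):
    """Ternarize weights with asymmetric, statistics-based thresholds.

    Positive and negative weights get separate thresholds derived from
    their own mean and standard deviation, controlled by the two
    hyperparameters `alpha` and `beta`; weights between the thresholds
    are pruned to zero. In trained ternary quantization the surviving
    weights would be mapped to learned scales W_p / W_n; this sketch
    keeps only the sign.
    """
    pos, neg = w[w > 0], w[w < 0]
    t_pos = pos.mean() + alpha * pos.std() if pos.size else np.inf
    t_neg = neg.mean() - beta * neg.std() if neg.size else -np.inf
    tern = np.zeros_like(w, dtype=np.int8)
    tern[w >= t_pos] = 1
    tern[w <= t_neg] = -1
    return tern

# Larger alpha/beta prune more weights, trading accuracy for compression.
rng = np.random.default_rng(42)
w = rng.normal(0.0, 0.05, size=1000)
tern = asymmetric_ternarize(w)
kept = np.count_nonzero(tern) / tern.size
print(sorted(set(tern.tolist())), f"kept {kept:.0%}")
```

Because the two thresholds are decoupled, a skewed weight distribution prunes its positive and negative tails at different rates, which is the point of the asymmetry.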
New advances in body composition assessment with ShapedNet: A single image deep regression approach
IF 3.9, CAS Q3, Computer Science
Pattern Recognition Letters Pub Date: 2025-02-01 DOI: 10.1016/j.patrec.2024.11.029
Navar Medeiros M. Nascimento, Pedro Cavalcante de Sousa Junior, Pedro Yuri Rodrigues Nunes, Suane Pires Pinheiro da Silva, Luiz Lannes Loureiro, Victor Zaban Bittencourt, Valden Luis Matos Capistrano Junior, Pedro Pedrosa Rebouças Filho
{"title":"New advances in body composition assessment with ShapedNet: A single image deep regression approach","authors":"Navar Medeiros M. Nascimento ,&nbsp;Pedro Cavalcante de Sousa Junior ,&nbsp;Pedro Yuri Rodrigues Nunes ,&nbsp;Suane Pires Pinheiro da Silva ,&nbsp;Luiz Lannes Loureiro ,&nbsp;Victor Zaban Bittencourt ,&nbsp;Valden Luis Matos Capistrano Junior ,&nbsp;Pedro Pedrosa Rebouças Filho","doi":"10.1016/j.patrec.2024.11.029","DOIUrl":"10.1016/j.patrec.2024.11.029","url":null,"abstract":"<div><div>We introduce a novel technique called ShapedNet to enhance body composition assessment. This method employs a deep neural network capable of estimating Body Fat Percentage (BFP), performing individual identification, and enabling localization using a single photograph. The accuracy of ShapedNet is validated through comprehensive comparisons against the gold standard method, Dual-Energy X-ray Absorptiometry (DXA), utilizing 1273 healthy adults spanning various ages, sexes, and BFP levels. The results demonstrate that ShapedNet outperforms in 19.5% state of the art computer vision-based approaches for body fat estimation, achieving a Mean Absolute Percentage Error (MAPE) of 4.91% and Mean Absolute Error (MAE) of 1.42. The study evaluates both gender-based and Gender-neutral approaches, with the latter showcasing superior performance. The method estimates BFP with 95% confidence within an error margin of 4.01% to 5.81%. 
This research advances multi-task learning and body composition assessment theory through ShapedNet.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"188 ","pages":"Pages 88-94"},"PeriodicalIF":3.9,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143150820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0