Pattern Recognition Letters: Latest Publications

Choroid plexus segmentation in MRI using the novel T1×FLAIR modality and PSU-Mamba: projective scan U-Mamba approach
IF 3.3 | CAS Zone 3, Computer Science
Pattern Recognition Letters | Pub Date: 2026-04-01 | Epub Date: 2026-01-25 | DOI: 10.1016/j.patrec.2026.01.024 | Volume 202, Pages 1-7
Lia Schmid, Giuseppe M. Facchi, Francesco Agnelli, Giorgio Bocca, Luca Sacchi, Raffaella Lanzarotti
Abstract: The Choroid Plexus (CP) is emerging as a biomarker for neurodegenerative diseases (NDDs) such as Alzheimer’s Disease and its precursor pathologies. However, segmentation remains challenging, especially without Contrast-Enhanced T1-weighted (CE-T1w) imaging, which is invasive and rarely used in NDDs. To address these challenges, we present three key contributions. First, we propose and validate T1×FLAIR, a novel, non-invasive modality created by gamma-corrected voxelwise multiplication of coregistered T1w and FLAIR images. Expert visual inspection confirmed that this choice enhances CP visibility while preserving standard resolution. Second, we release ChP-MRI, a high-quality MRI dataset of 168 patients with NDDs or Multiple Sclerosis, including T1w, FLAIR, and T1×FLAIR images with expert-verified CP segmentations. The dataset is multi-pathology and accompanied by demographic details to support benchmarking. Third, we propose PSU-Mamba (Projective Scan U-Mamba), an adaptation of the U-Mamba segmentation model in which the first encoder block is a Mamba layer equipped with a PCA-based scan path derived from anatomical priors. This design enhances segmentation accuracy, maintains linear complexity, and converges faster with fewer training epochs. Experiments on ChP-MRI confirm that T1×FLAIR is a more faithful substitute for CE-T1w than T1w, and that PSU-Mamba offers systematic robustness in segmenting the CP. The source code and the dataset are available at https://github.com/phuselab/PSU_Mamba.
Citations: 0
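The key operation behind the first contribution, gamma-corrected voxelwise multiplication of coregistered T1w and FLAIR volumes, is simple enough to sketch. Below is a minimal illustration using nibabel and NumPy; the intensity normalization and the gamma value are assumptions (the abstract does not specify them), and the file paths are placeholders.

```python
import numpy as np
import nibabel as nib

def t1_x_flair(t1w_path, flair_path, gamma=0.5):
    """Voxelwise product of coregistered T1w and FLAIR with gamma correction.

    Sketch only: the [0, 1] rescaling and the gamma value are assumptions,
    not taken from the paper.
    """
    t1w = nib.load(t1w_path)
    flair = nib.load(flair_path)

    a = np.asarray(t1w.get_fdata(), dtype=np.float32)
    b = np.asarray(flair.get_fdata(), dtype=np.float32)

    # Rescale both volumes to [0, 1] so the product stays in a comparable range.
    a = (a - a.min()) / (a.max() - a.min() + 1e-8)
    b = (b - b.min()) / (b.max() - b.min() + 1e-8)

    fused = (a * b) ** gamma  # gamma-corrected voxelwise multiplication
    return nib.Nifti1Image(fused, affine=t1w.affine, header=t1w.header)

# Usage: nib.save(t1_x_flair("t1w.nii.gz", "flair.nii.gz"), "t1_x_flair.nii.gz")
```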
Underwater image color correction via global-local collaborative strategy
IF 3.3 | CAS Zone 3, Computer Science
Pattern Recognition Letters | Pub Date: 2026-03-01 | Epub Date: 2026-01-23 | DOI: 10.1016/j.patrec.2026.01.022 | Volume 201, Pages 160-167
Ling Zhou, Baiqiang Yu, Hengyu Li, Wenyi Zhao, Weidong Zhang
Abstract: Underwater images often suffer from color distortion, blur, and low contrast due to light scattering and absorption. To this end, we propose a color correction method for underwater images called GLCS, which leverages a global-local collaborative strategy to mitigate color distortion effectively. Specifically, we construct a weight matrix to guide the channel with minimal attenuation in performing global compensation for the other channels. Following this, we design a local feedback strategy that dynamically adjusts the weight matrix based on the image’s local color bias, enabling collaborative correction between the global and local components. Finally, we design a loss function that combines color difference, mean, and standard deviation disparities to control the iteration process and optimize the compensation. Extensive experiments reveal that GLCS, as a preprocessing step, effectively alleviates color distortion in underwater images and significantly enhances the visual quality and performance of subsequent image enhancement methods.
Citations: 0
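The abstract only sketches the global step, so the snippet below illustrates the general idea of letting the least-attenuated channel compensate the others via per-channel statistics, in the spirit of classical underwater color compensation. It is an illustrative stand-in, not GLCS itself: the paper's weight matrix, local feedback, and iterative loss-driven optimization are omitted.

```python
import numpy as np

def global_compensation(img):
    """Compensate attenuated channels using the least-attenuated channel as a guide.

    `img` is an RGB float array in [0, 1]. Illustrative sketch only; the actual
    GLCS weight matrix and local feedback are not reproduced here.
    """
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel mean intensity
    ref = int(np.argmax(means))              # channel with minimal attenuation

    out = img.copy()
    for c in range(3):
        if c == ref:
            continue
        # Push the attenuated channel toward the reference, scaled by both channels.
        out[..., c] = img[..., c] + (means[ref] - means[c]) * img[..., ref] * img[..., c]
    return np.clip(out, 0.0, 1.0)
```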
Hybrid attention triple branch transformer net for underwater image enhancement
IF 3.3 | CAS Zone 3, Computer Science
Pattern Recognition Letters | Pub Date: 2026-03-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.patrec.2026.01.014 | Volume 201, Pages 95-102
Shaohui Jin, Guangpeng Li, Ziqin Xu, Yanxin Zhang, Zhengguang Qin, Hao Liu, Mingliang Xu
Abstract: In real underwater scenes, the complexity of the environment leads to issues such as light attenuation, scattering, and color distortion, resulting in reduced image quality and loss of detail. To resolve these problems, we propose a hybrid attention triple branch transformer network (HATBformer). The backbone network adopts a three-layer encoder-decoder structure, making full use of the spatial and channel feature information of underwater images and improving the network’s focus on color information and on spatial regions with higher levels of attenuation. The detail enhancement branch incorporates a coordinate information perception mechanism and a feature integration strategy through three consecutive feature enhancement blocks, aiming to deeply repair and optimize image details and effectively improve reconstruction quality. In addition, we establish an underwater image dataset, NLOS-TW, that covers different optical thicknesses and includes rich targets and various underwater scenes. Extensive experiments demonstrate that our method significantly enhances image quality and surpasses current state-of-the-art methods both qualitatively and quantitatively.
Citations: 0
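The architectural description above is high level, so the block below is only a generic hybrid channel-plus-spatial attention module in PyTorch, meant to illustrate the kind of "focus on color information and spatial regions" mentioned in the abstract. It does not reproduce any actual HATBformer component, and all layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Generic channel + spatial attention block (illustrative only, not HATBformer)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)  # reweight channels (e.g. color information)
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)  # reweight spatial positions

# x = torch.randn(1, 32, 64, 64); y = HybridAttention(32)(x)  # output keeps x's shape
```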
Clustering criteria: What defines a good cluster?
IF 3.3 | CAS Zone 3, Computer Science
Pattern Recognition Letters | Pub Date: 2026-03-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.patrec.2026.01.011 | Volume 201, Pages 103-108
Jinli Yao, Yong Zeng
Abstract: Clustering is a fundamental technique in unsupervised learning, enabling the discovery of patterns and natural groupings in data without prior labels. Despite its widespread applications across domains, the field of clustering faces persistent challenges, including a lack of universally accepted definitions, inconsistent classification criteria, and varying evaluation metrics. This review addresses these gaps by exploring the core question: what defines a good cluster? We investigate and summarize the induction principles behind clustering problems, clustering algorithms, and evaluation indices. The paper classifies clustering algorithms based on their criteria and principles, providing a structured understanding of their methodologies. It further categorizes datasets into synthetic and real-world examples, identifying the challenges posed by diverse cluster characteristics, such as varying shapes, densities, sizes, and overlapping cases, alongside high dimensionality. A comprehensive review of evaluation indices, grouped into compactness, connectedness, and separation types, highlights their importance in assessing clustering quality. By consolidating these aspects, this review provides a cohesive framework for understanding clustering principles and their applications.
Citations: 0
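As a concrete anchor for the compactness and separation families of indices mentioned in the review, here is a small NumPy sketch that computes mean distance to the cluster centroid (compactness) and the minimum centroid-to-centroid distance (separation) for a hard clustering. These are deliberately simple summaries; the specific indices surveyed in the paper are not reproduced.

```python
import numpy as np

def compactness_and_separation(X, labels):
    """Simple compactness/separation summaries for a hard clustering.

    X: (n_samples, n_features) array; labels: integer cluster assignments.
    Assumes at least two clusters. Illustrative only.
    """
    clusters = np.unique(labels)
    centroids = np.stack([X[labels == c].mean(axis=0) for c in clusters])

    # Compactness: average distance of points to their own centroid (lower is better).
    compact = np.mean([
        np.linalg.norm(X[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(clusters)
    ])

    # Separation: smallest distance between any two centroids (higher is better).
    dists = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    separation = dists[np.triu_indices(len(clusters), k=1)].min()
    return compact, separation
```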
FHPG: A unified framework for transformer with pruning and quantization
IF 3.3 | CAS Zone 3, Computer Science
Pattern Recognition Letters | Pub Date: 2026-03-01 | Epub Date: 2026-01-22 | DOI: 10.1016/j.patrec.2026.01.020 | Volume 201, Pages 174-179
Ruiguo Ren
Abstract: Vision transformers (ViTs) have demonstrated strong performance across various vision tasks; however, their high computational demands limit practical deployment. Although unified post-training frameworks for pruning and quantization have been applied to deep neural networks, existing methods do not explicitly integrate Fisher–Hessian information for structured pruning and quantization. To address this limitation, we propose the Fisher Hessian particle swarm optimization–gravitational search algorithm (FHPG), a unified framework that jointly performs structured pruning and quantization to improve compression efficiency and accuracy. FHPG leverages Fisher–Hessian metrics to generate pruning masks and quantization intervals, reducing parameter redundancy and guiding quantization more effectively. In addition, a hybrid particle swarm optimization and gravitational search (PSO–GSA) strategy is incorporated to enhance optimization stability and avoid local minima. Experiments on standard vision benchmarks with transformer architectures, including DeiT and Swin, demonstrate that FHPG achieves substantial reductions in model size and inference latency while keeping the accuracy loss within approximately 1%.
Citations: 0
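The abstract does not spell out how the Fisher–Hessian scores are computed, so the sketch below uses a common diagonal-Fisher approximation (accumulated squared gradients) to rank weights and build a pruning mask, as a rough stand-in for that kind of importance metric. The PSO–GSA search and the quantization-interval selection are not shown.

```python
import torch

def fisher_pruning_mask(model, loss_fn, data_loader, sparsity=0.5, device="cpu"):
    """Score weights by a diagonal Fisher approximation (mean squared gradient)
    and mask out the least important fraction. Illustrative sketch only."""
    model.to(device).train()
    scores = {n: torch.zeros_like(p)
              for n, p in model.named_parameters() if p.requires_grad}

    for inputs, targets in data_loader:
        model.zero_grad()
        loss = loss_fn(model(inputs.to(device)), targets.to(device))
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.detach() ** 2  # accumulate squared gradients

    flat = torch.cat([s.flatten() for s in scores.values()])
    threshold = torch.quantile(flat, sparsity)  # prune the lowest-scoring weights
    return {n: (s > threshold).float() for n, s in scores.items()}
```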
Generalization performance distributions along learning curves
IF 3.3 | CAS Zone 3, Computer Science
Pattern Recognition Letters | Pub Date: 2026-03-01 | Epub Date: 2026-01-03 | DOI: 10.1016/j.patrec.2026.01.003 | Volume 201, Pages 29-36
O. Taylan Turan, Marco Loog, David M.J. Tax
Abstract: Learning curves show the expected performance with respect to training set size. They are often used to evaluate and compare models, tune hyper-parameters, and determine how much data is needed for a specific performance. However, the distributional properties of performance along learning curves are frequently overlooked; generally, only an average with standard error or standard deviation is reported. In this paper, we analyze the distributions of generalization performance on learning curves. We compile a high-fidelity learning curve database, both with respect to training set size and repetitions of the sampling for a fixed training set size. Our investigation reveals that generalization performance rarely follows a Gaussian distribution for classical classifiers, regardless of dataset balance, loss function, sampling method, or hyper-parameter tuning along learning curves. Furthermore, we show that the choice of statistical summary, mean versus measures such as quantiles, affects the top model rankings. Our findings highlight the importance of considering different statistical measures and of using non-parametric approaches when evaluating and selecting machine learning models with learning curves.
Citations: 0
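The paper's point that the choice of summary statistic can change model rankings is easy to demonstrate with synthetic numbers. The snippet below uses made-up repeated error measurements for two hypothetical models at a single training set size (placeholders, not the paper's learning-curve database): the mean favors the stable model, while the median favors the other, so the ranking flips depending on the summary used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic repeated error rates for two hypothetical models at one training set size.
# Model A: usually strong but with a heavy right tail; Model B: slightly worse but stable.
model_a = np.concatenate([rng.normal(0.18, 0.01, 90), rng.normal(0.60, 0.05, 10)])
model_b = rng.normal(0.21, 0.01, 100)

for name, errs in [("A", model_a), ("B", model_b)]:
    print(f"model {name}: mean={errs.mean():.3f}  "
          f"median={np.median(errs):.3f}  q90={np.quantile(errs, 0.9):.3f}")

# Here the mean ranks B ahead of A (A's tail inflates its mean), while the median
# ranks A ahead of B, showing how the summary statistic can flip model rankings.
```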
LIFR-Net: A lightweight hybrid neural network with feature grouping for efficient food image recognition
IF 3.3 | CAS Zone 3, Computer Science
Pattern Recognition Letters | Pub Date: 2026-03-01 | Epub Date: 2025-12-25 | DOI: 10.1016/j.patrec.2025.12.011 | Volume 201, Pages 22-28
Qingshuo Sun, Guorui Sheng, Xiangyi Zhu, Jingru Song, Yongqiang Song, Tao Yao, Haiyang Wang, Lili Wang
Abstract: Food image recognition based on deep learning plays a crucial role in food computing. However, its high demand for computing resources limits deployment on end devices and hinders intelligent diet and nutrition management. To address this issue, we aim to balance computational efficiency with recognition accuracy and propose a compact food image recognition model named Lightweight Inter-Group Food Recognition Net (LIFR-Net) that combines a Convolutional Neural Network (CNN) and a Vision Transformer (ViT). In LIFR-Net, a lightweight ViT module called the Lightweight Inter-group Transformer (LIT) is designed, and a lightweight component named the Feature Grouping Transformer is constructed, which efficiently extracts local and global features of food images while keeping the parameter count and computational complexity low. In addition, by shuffling and fusing irregularly grouped feature maps, information exchange among channels is enhanced and recognition accuracy is improved. Extensive experiments on three commonly used public food image recognition datasets, ETHZ Food–101, Vireo Food–172, and UEC Food–256, show that LIFR-Net achieves recognition accuracies of 90.49%, 91.04%, and 74.23%, respectively, with fewer parameters and less computation.
Citations: 0
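The shuffling of grouped feature maps that LIFR-Net uses to exchange information across channels follows the same idea as the standard channel-shuffle operation; a regular-group PyTorch version is sketched below for illustration. The paper's irregular grouping and fusion strategy are not reproduced here.

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so information flows between groups.

    Regular (equal-size) grouping shown for clarity; LIFR-Net describes
    irregular grouping, which this sketch does not attempt to replicate.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by the group count"
    x = x.view(n, groups, c // groups, h, w)  # split channels into groups
    x = x.transpose(1, 2).contiguous()        # swap group and per-group dimensions
    return x.view(n, c, h, w)                 # flatten back: channels interleaved

# x = torch.randn(2, 8, 16, 16); y = channel_shuffle(x, groups=4)  # shape unchanged
```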
DBASNet: A double-branch adaptive segmentation network for remote sensing image
IF 3.3 | CAS Zone 3, Computer Science
Pattern Recognition Letters | Pub Date: 2026-03-01 | Epub Date: 2025-11-30 | DOI: 10.1016/j.patrec.2025.11.043 | Volume 201, Pages 9-14
Bo Huang, Yiwei Lu, Changsheng Yin, Ruopeng Yang, Yu Tao, Yongqi Shi, Shijie Wang, Qian Zhao
Abstract: With the rapid development of artificial intelligence technology, deep learning has been widely applied to the semantic segmentation of remote sensing images. Current methods mainly employ architectures based on convolutional neural networks and Transformers, achieving good performance in segmentation tasks. However, existing approaches fail to optimize segmentation for diverse terrain characteristics, limiting segmentation accuracy in complex scenes. To address this, we propose a novel network called DBASNet, which consists of two decoding branches: road topology and terrain classification. The former focuses on the integrity of the topological structure of roads, while the latter emphasizes the accuracy of other terrain segmentations. Experiments demonstrate that DBASNet achieves state-of-the-art semantic segmentation results by balancing terrain segmentation accuracy with road connectivity on the LoveDA and LandCover.ai datasets.
Citations: 0
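The abstract names the two decoding branches but gives no layer details, so the skeleton below only illustrates the overall shape of a shared encoder feeding a road-topology head and a terrain-classification head in PyTorch. Every layer size is a placeholder and none of DBASNet's actual modules are reproduced.

```python
import torch
import torch.nn as nn

class DualBranchSegNet(nn.Module):
    """Shared encoder with two decoder heads (illustrative skeleton, not DBASNet)."""

    def __init__(self, in_ch=3, terrain_classes=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Branch 1: binary road mask (topology-oriented supervision would attach here).
        self.road_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),
        )
        # Branch 2: multi-class terrain segmentation.
        self.terrain_head = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, terrain_classes, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.road_head(feats), self.terrain_head(feats)

# road_logits, terrain_logits = DualBranchSegNet()(torch.randn(1, 3, 256, 256))
```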
From-scratch dexterous grasp type annotation with SAM and lightweight vision-language models
IF 3.3 | CAS Zone 3, Computer Science
Pattern Recognition Letters | Pub Date: 2026-03-01 | Epub Date: 2026-01-17 | DOI: 10.1016/j.patrec.2026.01.018 | Volume 201, Pages 145-151
Yifan Wang, Long Cheng
Abstract: Dexterous robotic hands enable versatile manipulation but require large annotated datasets for training, which are costly to obtain. This work presents a framework that integrates the Segment Anything Model (SAM) and small-scale vision-language models (VLMs) to automatically generate annotations from RGB-D images. Guided by the Fugl-Meyer grasp taxonomy and prompt engineering, the system produces labeled data from scratch, including object segmentation masks, semantic categories, and grasp type labels. Experimental results demonstrate that the proposed framework can successfully generate labeled RGB-D grasp data while enhancing the performance of lightweight VLMs on relevant task-specific submodules, underscoring its potential to accelerate research in dexterous manipulation.
Citations: 0
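A rough sketch of the kind of annotation loop described above: SAM proposes object masks, and a vision-language model assigns a semantic category and a Fugl-Meyer grasp type per object. The SAM calls follow the public segment_anything package, but query_vlm is a hypothetical placeholder, the prompts are invented, and depth handling from the RGB-D input is omitted.

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

def query_vlm(crop: np.ndarray, prompt: str) -> str:
    """Hypothetical stand-in for a lightweight VLM call; replace with a real model.
    The paper's prompt engineering and VLM choice are not specified here."""
    return "unknown"  # placeholder answer

def annotate_grasps(rgb: np.ndarray, checkpoint: str = "sam_vit_b.pth"):
    """From-scratch annotation sketch: SAM proposes masks, a VLM names the object
    and assigns a Fugl-Meyer grasp type. Illustrative pipeline only."""
    sam = sam_model_registry["vit_b"](checkpoint=checkpoint)
    masks = SamAutomaticMaskGenerator(sam).generate(rgb)  # list of mask dicts

    annotations = []
    for m in masks:
        x, y, w, h = (int(v) for v in m["bbox"])  # crop the proposed object region
        crop = rgb[y:y + h, x:x + w]
        annotations.append({
            "mask": m["segmentation"],
            "category": query_vlm(crop, "What object is this?"),
            "grasp_type": query_vlm(crop, "Which Fugl-Meyer grasp type fits this object?"),
        })
    return annotations
```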
DMAGaze: Gaze estimation using feature disentanglement and multi-scale attention
IF 3.3 | CAS Zone 3, Computer Science
Pattern Recognition Letters | Pub Date: 2026-03-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.patrec.2026.01.013 | Volume 201, Pages 109-116
Haohan Chen, Hongjia Liu, Shiyong Lan, Wenwu Wang, Yixin Qiao, Yao Li, Guonan Deng
Abstract: Gaze estimation, which predicts gaze direction, commonly faces interference from complex gaze-irrelevant information in face images, a key bottleneck limiting its accuracy in real-world scenarios. In this work, we propose DMAGaze, a novel gaze estimation framework that exploits information from facial images in three respects: gaze-relevant global features (disentangled from the facial image), local eye features (extracted from cropped eye patches), and head-pose-related features. First, we design a new continuous-mask-based Disentangler that separates gaze-relevant and gaze-irrelevant information in facial images by reconstructing the eye and non-eye regions with a dual-branch architecture. Furthermore, we introduce a new attention module, the Multi-Scale Global Local Attention Module (MS-GLAM), to fuse global and local information at multiple scales via a customized attention structure, further enhancing the information from the Disentangler. Finally, we combine the global gaze-relevant features with head pose and local eye features and pass them through the detection head for high-precision gaze estimation. DMAGaze has been evaluated extensively on two widely used public datasets, achieving a gaze estimation error of 3.74° on MPIIFaceGaze and 6.17° on RT-GENE, outperforming state-of-the-art methods.
Citations: 0
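The reported 3.74° and 6.17° figures are angular errors between predicted and ground-truth gaze directions. A minimal sketch of that standard metric is given below: pitch/yaw angles are mapped to unit vectors and the angle between them is computed. The (pitch, yaw)-to-vector convention varies between datasets; the one used here is a common choice, not necessarily the paper's.

```python
import numpy as np

def gaze_to_vector(pitch: float, yaw: float) -> np.ndarray:
    """Map (pitch, yaw) in radians to a 3D gaze vector (one common convention)."""
    return np.array([
        -np.cos(pitch) * np.sin(yaw),
        -np.sin(pitch),
        -np.cos(pitch) * np.cos(yaw),
    ])

def angular_error_deg(pred: tuple, true: tuple) -> float:
    """Angle in degrees between predicted and ground-truth gaze directions."""
    a, b = gaze_to_vector(*pred), gaze_to_vector(*true)
    cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0))))

# angular_error_deg((0.05, 0.10), (0.02, 0.08))  # a few degrees of error
```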