Pattern Analysis and Applications: Latest Articles

MSU-Net: the multi-scale supervised U-Net for image splicing forgery localization
IF 3.9 · CAS Tier 4 · Computer Science
Pattern Analysis and Applications Pub Date: 2024-07-20 DOI: 10.1007/s10044-024-01305-9
Hao Yu, Lichao Su, Chenwei Dai, Jinli Wang
{"title":"MSU-Net: the multi-scale supervised U-Net for image splicing forgery localization","authors":"Hao Yu, Lichao Su, Chenwei Dai, Jinli Wang","doi":"10.1007/s10044-024-01305-9","DOIUrl":"https://doi.org/10.1007/s10044-024-01305-9","url":null,"abstract":"<p>Image splicing forgery, that is, copying some parts of an image into another image, is one of the frequently used tampering methods in image forgery. As a research hotspot in recent years, deep learning has been used in image forgery detection. However, current deep learning methods have two drawbacks: first, they are too simple in feature fusion; second, they rely only on a single cross-entropy loss as the loss function, leading to models prone to overfitting. To address these issues, a image splicing forgery localization method based on multi-scale supervised U-shaped network, named MSU-Net, is proposed in this paper. First, a triple-stream feature extraction module is designed, which combines the noise view and edge information of the input image to extract semantic-related and semantic-agnostic features. Second, a feature hierarchical fusion mechanism is proposed that introduces a channel attention mechanism layer by layer to perceive multi-level manipulation trajectories, avoiding the loss of information in semantic-related and semantic-agnostic shallow features during the convolution process. Finally, a strategy for multi-scale supervision is developed, a boundary artifact localization module is designed to compute the edge loss, and a contrastive learning module is introduced to compute the contrastive loss. Through extensive experiments on several public datasets, MSU-Net demonstrates high accuracy in localizing tampered regions and outperforms state-of-the-art methods. Additional attack experiments show that MSU-Net exhibits good robustness against Gaussian blur, Gaussian noise, and JPEG compression attacks. 
Besides, MSU-Net is superior in terms of model complexity and localization speed.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"70 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141744688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
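The layer-by-layer channel attention in the fusion mechanism can be illustrated with a squeeze-and-excitation style gate over concatenated streams. The sketch below is a generic NumPy rendering; the function names, shapes, reduction ratio, and random weights are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def channel_attention(features, reduction=4):
    """SE-style channel attention: global-average-pool, two FC layers,
    sigmoid gates, then reweight each channel. Weights are random here;
    in a trained network they are learned."""
    c, h, w = features.shape
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    squeeze = features.mean(axis=(1, 2))             # global average pool -> (C,)
    excite = np.maximum(w1 @ squeeze, 0.0)           # FC + ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ excite)))     # FC + sigmoid -> (C,)
    return features * gates[:, None, None]

def fuse(shallow, deep):
    """Hierarchical fusion step: attend over concatenated shallow/deep streams."""
    return channel_attention(np.concatenate([shallow, deep], axis=0))

fused = fuse(np.ones((8, 16, 16)), np.ones((8, 16, 16)))
```

With unit inputs, the output simply exposes the per-channel gates, each strictly between 0 and 1.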
A Judicious way to restore random impulse noise using iterative weighted total variation diffusion technique
IF 3.9 · CAS Tier 4 · Computer Science
Pattern Analysis and Applications Pub Date: 2024-07-09 DOI: 10.1007/s10044-024-01296-7
Keisham Pritamdas
{"title":"A Judicious way to restore random impulse noise using iterative weighted total variation diffusion technique","authors":"Keisham Pritamdas","doi":"10.1007/s10044-024-01296-7","DOIUrl":"https://doi.org/10.1007/s10044-024-01296-7","url":null,"abstract":"<p>Various types of pixel candidates are available in the literature to replace impulse noise after effective detection. However, using them in the correct location and preserving the signal content, structural similarity, and image details is a task that draws attention, especially in a highly corrupted image. Non-linear Diffusion-based restoration is an efficient solution since it can iteratively update corrupted pixels without diffusing the edge. This work assigns the iterative weighted total variation diffusion technique only for the possibly noisy pixels in high noise ratio processing windows where the windows are pre-classified as low or high noise ratio by a custom CNN classifier. The work, called as CNN-based locally adapting filter (CNN-LAF), can achieve a high structural similarity of .9167 by maintaining a PSNR of 24.01 dB at a 0.8 noise ratio.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"366 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141568746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
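The core idea, diffusing only the detected impulse pixels with a total-variation flow, can be sketched as follows. This is a minimal unweighted TV step under assumed step size and iteration count, not the paper's exact weighted scheme or CNN window classifier.

```python
import numpy as np

def tv_diffuse(img, noisy_mask, iters=500, step=0.2, eps=1e-6):
    """Iteratively update only the flagged noisy pixels with a
    total-variation (edge-preserving) flow; clean pixels stay untouched."""
    out = img.astype(float).copy()
    for _ in range(iters):
        gx = np.diff(out, axis=1, append=out[:, -1:])   # forward differences
        gy = np.diff(out, axis=0, append=out[-1:, :])
        mag = np.sqrt(gx**2 + gy**2) + eps
        gxn, gyn = gx / mag, gy / mag                   # normalized gradient field
        div = (gxn - np.roll(gxn, 1, axis=1)
               + gyn - np.roll(gyn, 1, axis=0))         # divergence
        out[noisy_mask] += step * div[noisy_mask]
    return out

clean = np.tile(np.arange(8.0), (8, 1))    # smooth ramp image
corrupted = clean.copy()
corrupted[4, 4] = 255.0                    # a single impulse
mask = np.zeros_like(clean, bool)
mask[4, 4] = True
restored = tv_diffuse(corrupted, mask)
```

On this toy ramp the impulse is pulled back toward its smooth surroundings while every unmasked pixel is left bit-exact.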
A novel two-stage omni-supervised face clustering algorithm
IF 3.9 · CAS Tier 4 · Computer Science
Pattern Analysis and Applications Pub Date: 2024-07-09 DOI: 10.1007/s10044-024-01298-5
Sing Kuang Tan, Xiu Wang
{"title":"A novel two-stage omni-supervised face clustering algorithm","authors":"Sing Kuang Tan, Xiu Wang","doi":"10.1007/s10044-024-01298-5","DOIUrl":"https://doi.org/10.1007/s10044-024-01298-5","url":null,"abstract":"<p>Face clustering has applications in organizing personal photo album, video understanding and automatic labeling of data for semi-supervised learning. Many existing methods cannot cluster millions of faces. They are either too slow, inaccurate, or need a lot memory. In our paper, we proposed a two stage unsupervised clustering algorithm which can cluster millions of faces in minutes. A rough clustering using greedy Transitive Closure (TC) algorithm to separate the easy to locate clusters, then a more precise non-greedy clustering algorithm is used to split the clusters into smaller clusters. We also developed a set of omni-supervised transformations that can produce multiple embeddings using a single trained model as if there are multiple models trained. These embeddings are combined using simple averaging and normalization. We carried out extensive experiments with multiple datasets of different sizes comparing with existing state of the art clustering algorithms to show that our clustering algorithm is robust to differences between datasets, efficient and outperforms existing methods. We also carried out further analysis on number of singleton clusters and variations of our model using different non-greedy clustering algorithms. 
We did trained our semi-supervised model using the cluster labels and shown that our clustering algorithm is effective for semi-supervised learning.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"15 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141568744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
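The first, rough stage, transitive closure over a similarity graph, amounts to taking connected components of pairs above a cosine threshold. A small union-find sketch under an assumed threshold (the paper's threshold and similarity measure may differ):

```python
import numpy as np

def transitive_closure(embeddings, threshold=0.7):
    """Greedy rough clustering: link every pair whose cosine similarity
    exceeds the threshold, then return connected-component labels."""
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T
    parent = list(range(len(x)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            if sim[i, j] >= threshold:
                parent[find(i)] = find(j)  # union the two components
    return [find(i) for i in range(len(x))]

emb = np.array([[1, 0], [0.99, 0.1], [0, 1], [0.1, 0.99]])
labels = transitive_closure(emb)
```

The two near-duplicate directions land in one component each; the second, non-greedy stage would then split any over-merged components.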
Exploring methods for the generation of visual counterfactuals in the latent space
IF 3.9 · CAS Tier 4 · Computer Science
Pattern Analysis and Applications Pub Date: 2024-07-08 DOI: 10.1007/s10044-024-01299-4
David Morales, Manuel P. Cuéllar, Diego P. Morales
{"title":"Exploring methods for the generation of visual counterfactuals in the latent space","authors":"David Morales, Manuel P. Cuéllar, Diego P. Morales","doi":"10.1007/s10044-024-01299-4","DOIUrl":"https://doi.org/10.1007/s10044-024-01299-4","url":null,"abstract":"<p>In the field of eXplainable Artificial Intelligence (XAI), the generation of counterfactuals is a promising method for human-interpretable explanations. A counterfactual explanation describes a causal situation in the form: “If X had not occurred, Y would not have occurred”. In this work, we study the generation of visual counterfactuals in the latent space for deep learning image classification models. We explore how to adapt the training environment to facilitate the generation of counterfactuals, combining ideas coming from different fields such as multitasking or generative learning, with the aim of developing more interpretable models. We study well-known counterfactual methods and how to apply them in the latent space. Furthermore, we propose a new way of generating counterfactuals working in the latent space and compare it with the other studied approaches, achieving competitive results.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"35 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141568748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
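A latent-space counterfactual search can be reduced to its simplest form: nudge the latent code until a classifier head flips to the target class. The toy below uses a linear head and a hand-rolled update; all names, the step size, and the classifier are illustrative stand-ins (real pipelines decode the edited latent back to an image).

```python
import numpy as np

def latent_counterfactual(z, w, b, target, step=0.1, max_iter=200):
    """Move latent code z toward the target class of a linear classifier:
    a minimal 'what change would flip the prediction?' search."""
    z = z.astype(float).copy()
    for _ in range(max_iter):
        scores = w @ z + b
        if int(np.argmax(scores)) == target:
            break
        other = int(np.argmax(np.delete(scores, target)))
        other += other >= target          # undo the index shift from delete
        # ascend the margin (target score minus best competing score)
        z += step * (w[target] - w[other])
    return z

w = np.array([[1.0, 0.0], [0.0, 1.0]])    # two-class linear head
b = np.zeros(2)
z0 = np.array([1.0, 0.0])                 # currently classified as class 0
zcf = latent_counterfactual(z0, w, b, target=1)
```

The returned `zcf` is a minimally shifted latent that the head now assigns to class 1.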
Boosting person ReID feature extraction via dynamic convolution
IF 3.9 · CAS Tier 4 · Computer Science
Pattern Analysis and Applications Pub Date: 2024-07-08 DOI: 10.1007/s10044-024-01294-9
Elif Ecem Akbaba, Filiz Gurkan, Bilge Gunsel
{"title":"Boosting person ReID feature extraction via dynamic convolution","authors":"Elif Ecem Akbaba, Filiz Gurkan, Bilge Gunsel","doi":"10.1007/s10044-024-01294-9","DOIUrl":"https://doi.org/10.1007/s10044-024-01294-9","url":null,"abstract":"<p>Extraction of discriminative features is crucial in person re-identification (ReID) which aims to match a query image of a person to her/his images, captured by different cameras. The conventional deep feature extraction methods on ReID employ CNNs with static convolutional kernels, where the kernel parameters are optimized during the training and remain constant in the inference. This approach limits the network's ability to model complex contents and decreases performance, particularly when dealing with occlusions or pose changes. In this work, to improve the performance without a significant increase in parameter size, we present a novel approach by utilizing a channel fusion-based dynamic convolution backbone network, which enables the kernels to change adaptively based on the input image, within two existing ReID network architectures. We replace the backbone network of two ReID methods to investigate the effect of dynamic convolution on both simple and complex networks. The first one called Baseline, is a simpler network with fewer layers, while the second, CaceNet represents a more complex architecture with higher performance. Evaluation results demonstrate that both of the designed dynamic networks improve identification accuracy compared to the static counterparts. A significant increase in accuracy is reported under occlusion tested on Occluded-DukeMTMC. Moreover, our approach achieves a performance comparable to the state-of-the-art on Market1501, DukeMTMC-reID, and CUHK03 with a limited computational load. 
These findings validate the effectiveness of the dynamic convolution in enhancing the person ReID networks and push the boundaries of performance in this domain.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"40 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141568747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
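Dynamic convolution is commonly realized as an input-conditioned softmax mixture over K candidate kernels, so the effective kernel changes per image. The sketch below shows that aggregation step only; the pooling, projection, and shapes are assumptions for illustration, not this paper's channel-fusion design.

```python
import numpy as np

def dynamic_kernel(x, kernels, proj):
    """Compute a per-image kernel as an attention-weighted sum of K
    candidate kernels, conditioned on a global pooling of the input."""
    pooled = x.mean(axis=(1, 2))                   # (C_in,) global context
    logits = proj @ pooled                         # (K,) attention logits
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()                             # softmax over K kernels
    return np.tensordot(attn, kernels, axes=1)     # (C_out, C_in, k, k)

rng = np.random.default_rng(1)
kernels = rng.standard_normal((4, 8, 3, 3, 3))    # K=4 candidates
proj = rng.standard_normal((4, 3))                # context -> K logits
x = rng.standard_normal((3, 32, 32))              # one input image
k_dyn = dynamic_kernel(x, kernels, proj)
```

A different input yields a different attention vector and hence a different aggregated kernel, which is exactly what the static-kernel baseline cannot do.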
EdgeNet: a low-power image recognition model based on small sample information
IF 3.9 · CAS Tier 4 · Computer Science
Pattern Analysis and Applications Pub Date: 2024-07-08 DOI: 10.1007/s10044-024-01289-6
Weiyue Bao, Hong Zhang, Yaoyao Ding, Fangzhou Shen, Liujun Li
{"title":"EdgeNet: a low-power image recognition model based on small sample information","authors":"Weiyue Bao, Hong Zhang, Yaoyao Ding, Fangzhou Shen, Liujun Li","doi":"10.1007/s10044-024-01289-6","DOIUrl":"https://doi.org/10.1007/s10044-024-01289-6","url":null,"abstract":"<p>Existing deep convolutional neural networks that rely on large datasets typically require images with high resolution and deep neural network models trained and called upon to improve accuracy of image recognition and classification. It is needed to use lightweight model to adapt to such low-power devices. However, lightweight small models are limited in their ability to classify and recognize small-sized images with low-resolution and are constrained by the number of parameters in the model and unable to perform deep-level feature extraction, since the low-resolution indicates small sample information. In the intelligent interaction in digital media, capturing, storing, transmitting, and computing high-resolution, high-precision images incur high power consumption and operating costs. When deploying an image recognition system on the client-side of IoT devices, it is difficult to meet the hardware requirements of high storage space and fast computation speed. It is also challenging to directly use high-resolution image data for model fine-tuning and training, and the size and parameter updates of the model are also limited by the storage and operating capacity of the hardware facilities. We proposed a low-power image recognition framework consists data pre-processing part and lightweight modeling architecture part. The data pre-processing method for image data based on an Auto-Encoder that filters R, G, B color channel data using a resolution filter to realize data compression, that is Downscaling large input data to a smaller size, thus to address the limitations of low-power deep learning model deployment and training. 
Based on the resolution filter, a channel normalization method is proposed to perform batch normalization on each channel dimension to encode the original image data at the same size and improve the mean squared error discrimination of the image data. And the lightweight model uses a depth-separable convolutional neural network and two kinds of blocks: one with batch normalization and the other without, EdgeNet. The architecture makes it possible to deploy more suitable for IoT device. The proposed framework achieves only a small precision loss within permission, but improves the forward inference speed of the model, and reduce the memory storage to 8.7 MB.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"24 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141568749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
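The parameter saving that makes depthwise-separable convolutions attractive for IoT deployment is easy to verify by counting weights: a depthwise k x k pass plus a 1 x 1 pointwise pass replaces one full k x k convolution. The channel counts below are arbitrary examples, not EdgeNet's actual layer sizes.

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k conv (c_in filters) + 1 x 1 pointwise conv."""
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)        # 64*128*9
separable = separable_params(64, 128, 3)  # 64*9 + 64*128
```

For a 64-to-128-channel 3 x 3 layer this is 73,728 versus 8,768 weights, roughly an 8.4x reduction, which compounds over a whole backbone.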
An improved classification diagnosis approach for cervical images based on deep neural networks
IF 3.9 · CAS Tier 4 · Computer Science
Pattern Analysis and Applications Pub Date: 2024-07-03 DOI: 10.1007/s10044-024-01300-0
Juan Wang, Mengying Zhao, Chengyi Xia
{"title":"An improved classification diagnosis approach for cervical images based on deep neural networks","authors":"Juan Wang, Mengying Zhao, Chengyi Xia","doi":"10.1007/s10044-024-01300-0","DOIUrl":"https://doi.org/10.1007/s10044-024-01300-0","url":null,"abstract":"<p>In order to enhance the speed and performance of cervical diagnosis, we propose an improved Residual Network (ResNet) by combining pyramid convolution with depth-wise separable convolution to obtain the high-quality cervical classification. Since most of cervical images from patients are not in the center of colposcopy images, we devise the segmentation and extraction algorithm of the center movement of the region of interest (ROI), which will further enhance the classification performance. Extensive experiments indicate that our model can not only achieve the lightweight network model, but also fulfil the classification prediction, such as for three-classification of cervical lesions, the classification accuracy is as high as 91.29<span>(%)</span>, the precision is 89.70<span>(%)</span>, the sensitivity is 88.75<span>(%)</span>, the specificity is 94.98<span>(%)</span>, the rate of missed diagnosis is 11.25<span>(%)</span> and the rate of misdiagnosis is 5.02<span>(%)</span>. Finally, after dividing the colposcopy images into four categories, it is shown that our results are still better than those obtained from many previous works as far as the cervical image classification is concerned. 
The current work can not only assist doctors to quickly diagnose cervical diseases, but also the classification performance can meet some clinical requirements in practice.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"9 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141552652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
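The ROI center-movement step can be pictured as cropping a fixed-size window around the centroid of a lesion mask, so the region of interest lands at the center of the network input. This is an illustrative stand-in, with assumed names and a toy image, not the paper's segmentation algorithm.

```python
import numpy as np

def center_roi(img, mask, size):
    """Crop a size x size window centered on the mask centroid,
    clamped so the window stays inside the image."""
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    half = size // 2
    cy = min(max(cy, half), img.shape[0] - half)
    cx = min(max(cx, half), img.shape[1] - half)
    return img[cy - half:cy + half, cx - half:cx + half]

img = np.arange(100.0).reshape(10, 10)
mask = np.zeros((10, 10), bool)
mask[7:9, 7:9] = True                      # lesion sits off-center
roi = center_roi(img, mask, size=4)
```

The off-center lesion pixels (e.g. value 77.0 at row 7, column 7) end up inside the centered crop that is fed to the classifier.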
Dual model knowledge distillation for industrial anomaly detection
IF 3.9 · CAS Tier 4 · Computer Science
Pattern Analysis and Applications Pub Date: 2024-07-02 DOI: 10.1007/s10044-024-01295-8
Simon Thomine, Hichem Snoussi
{"title":"Dual model knowledge distillation for industrial anomaly detection","authors":"Simon Thomine, Hichem Snoussi","doi":"10.1007/s10044-024-01295-8","DOIUrl":"https://doi.org/10.1007/s10044-024-01295-8","url":null,"abstract":"<p>Unsupervised anomaly detection holds significant importance in large-scale industrial manufacturing. Recent methods have capitalized on the benefits of employing a classifier pretrained on natural images to extract representative features from specific layers, which are subsequently processed using various techniques. Notably, memory bank-based methods, which have demonstrated exceptional accuracy, often incur a trade-off in terms of latency, posing a challenge in real-time industrial applications where prompt anomaly detection and response are crucial. Indeed, alternative approaches such as knowledge distillation and normalized flow have demonstrated promising performance in unsupervised anomaly detection while maintaining low latency. In this paper, we aim to revisit the concept of knowledge distillation in the context of unsupervised anomaly detection, emphasizing the significance of feature selection. By employing distinctive features and leveraging different models, we intend to highlight the importance of carefully selecting and utilizing relevant features specifically tailored for the task of anomaly detection. 
This article presents a novel approach for anomaly detection, which employs dual model knowledge distillation and incorporates various types of semantic information by leveraging high and low-level semantic information.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"183 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141517457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
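The standard knowledge-distillation scoring rule behind such methods: a student trained only on normal data imitates the teacher's features, so teacher-student disagreement flags anomalies. Below is the generic single-pair version of that score (cosine distance per spatial location), not the paper's dual-model formulation.

```python
import numpy as np

def anomaly_map(teacher_feats, student_feats, eps=1e-8):
    """Per-location anomaly score: 1 - cosine similarity between
    teacher and student feature vectors across the channel axis."""
    t, s = teacher_feats, student_feats
    num = (t * s).sum(axis=0)
    den = np.linalg.norm(t, axis=0) * np.linalg.norm(s, axis=0) + eps
    return 1.0 - num / den

t = np.ones((16, 8, 8))        # teacher features (C, H, W)
s = np.ones((16, 8, 8))        # student matches the teacher...
s[:, 4, 4] = -1.0              # ...except at one "anomalous" location
amap = anomaly_map(t, s)
```

The score stays near zero wherever the student mimics the teacher and peaks where the features diverge, giving a localization heatmap at a single forward pass's latency.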
Discriminative binary pattern descriptor for face recognition
IF 3.9 · CAS Tier 4 · Computer Science
Pattern Analysis and Applications Pub Date: 2024-07-02 DOI: 10.1007/s10044-024-01293-w
Shekhar Karanwal
{"title":"Discriminative binary pattern descriptor for face recognition","authors":"Shekhar Karanwal","doi":"10.1007/s10044-024-01293-w","DOIUrl":"https://doi.org/10.1007/s10044-024-01293-w","url":null,"abstract":"<p>Among several local descriptors invented in literature, the local binary pattern (LBP) is the prolific one. Despite its advantages like low computational complexity and monotonic gray invariance property, there are various demerits are observed in LBP and these are limited spatial patch, high dimension feature, noisy thresholding function and un-affective in harsh illumination variations. To overcome these issues presented work introduces the novel local descriptor called as discriminative binary pattern (DBP). Precisely two descriptors are introduced under DBP so-called Radial orthogonal binary pattern (ROBP) and radial variance binary pattern (RVBP). In former proposed descriptor, for neighborhood comparison, the center pixel is replaced by mean of medians computed from [orthogonal pixels + center pixel] of two 3 × 3 pixel window, formed from radius S1 and S2 of the 5 × 5 image patch. In latter proposed descriptor, the radial variances generated from 8 pair of two pixels are utilized for comparison with their mean value. In case of the both proposed descriptors, the sub-region wise histograms are extracted and fused to develop the entire feature size. Further the feature length of ROBP and RVBP are merged to form the size of the DBP descriptor. The compression is conducted by principal component analysis (PCA) and Fishers linear discriminant analysis). For matching support vector machines is used. 
Experiments conducted on 8 benchmark datasets reveals the effectiveness of the proposed DBP as compared to the other state of art benchmark methods.</p>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"16 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141517458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
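For readers unfamiliar with the baseline operator that ROBP/RVBP modify, here is classic 3 x 3 LBP: threshold the 8 neighbours of each interior pixel against the centre and pack the bits into an 8-bit code. The proposed descriptors replace the centre value with median- or variance-based statistics; this sketch shows only the standard comparison.

```python
import numpy as np

def lbp_3x3(img):
    """Classic local binary pattern over a 3 x 3 neighbourhood,
    returning one code in [0, 255] per interior pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    return codes

flat = np.full((5, 5), 7.0)     # a flat patch: every neighbour >= centre
codes = lbp_3x3(flat)
```

Sub-region histograms of these codes, concatenated, form the kind of feature vector the abstract describes before PCA/FLDA compression.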
Computer-aided diagnosis of Alzheimer’s disease and neurocognitive disorders with multimodal Bi-Vision Transformer (BiViT)
IF 3.9 · CAS Tier 4 · Computer Science
Pattern Analysis and Applications Pub Date: 2024-07-01 DOI: 10.1007/s10044-024-01297-6
S. Muhammad Ahmed Hassan Shah, Muhammad Qasim Khan, Atif Rizwan, Sana Ullah Jan, Nagwan Abdel Samee, Mona M. Jamjoom
{"title":"Computer-aided diagnosis of Alzheimer’s disease and neurocognitive disorders with multimodal Bi-Vision Transformer (BiViT)","authors":"S. Muhammad Ahmed Hassan Shah, Muhammad Qasim Khan, Atif Rizwan, Sana Ullah Jan, Nagwan Abdel Samee, Mona M. Jamjoom","doi":"10.1007/s10044-024-01297-6","DOIUrl":"https://doi.org/10.1007/s10044-024-01297-6","url":null,"abstract":"<p>Cognitive disorders affect various cognitive functions that can have a substantial impact on individual’s daily life. Alzheimer’s disease (AD) is one of such well-known cognitive disorders. Early detection and treatment of cognitive diseases using artificial intelligence can help contain them. However, the complex spatial relationships and long-range dependencies found in medical imaging data present challenges in achieving the objective. Moreover, for a few years, the application of transformers in imaging has emerged as a promising area of research. A reason can be transformer’s impressive capabilities of tackling spatial relationships and long-range dependency challenges in two ways, i.e., (1) using their self-attention mechanism to generate comprehensive features, and (2) capture complex patterns by incorporating global context and long-range dependencies. In this work, a Bi-Vision Transformer (BiViT) architecture is proposed for classifying different stages of AD, and multiple types of cognitive disorders from 2-dimensional MRI imaging data. More specifically, the transformer is composed of two novel modules, namely Mutual Latent Fusion (MLF) and Parallel Coupled Encoding Strategy (PCES), for effective feature learning. Two different datasets have been used to evaluate the performance of proposed BiViT-based architecture. The first dataset contain several classes such as mild or moderate demented stages of the AD. The other dataset is composed of samples from patients with AD and different cognitive disorders such as mild, early, or moderate impairments. 
For comprehensive comparison, a multiple transfer learning algorithm and a deep autoencoder have been each trained on both datasets. The results show that the proposed BiViT-based model achieves an accuracy of 96.38% on the AD dataset. However, when applied to cognitive disease data, the accuracy slightly decreases below 96% which can be resulted due to smaller amount of data and imbalance in data distribution. Nevertheless, given the results, it can be hypothesized that the proposed algorithm can perform better if the imbalanced distribution and limited availability problems in data can be addressed.</p><h3 data-test=\"abstract-sub-heading\">Graphical abstract</h3>","PeriodicalId":54639,"journal":{"name":"Pattern Analysis and Applications","volume":"60 1","pages":""},"PeriodicalIF":3.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141505827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
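The self-attention mechanism the abstract credits for capturing long-range dependencies can be written in a few lines. This is generic single-head attention over patch tokens with assumed dimensions, not BiViT's MLF or PCES modules.

```python
import numpy as np

def self_attention(tokens, wq, wk, wv):
    """Single-head self-attention: every token attends to every other,
    so distant patches can influence each other in one step."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])               # scaled dot products
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(0)
tokens = rng.standard_normal((6, 16))    # 6 patch tokens, dim 16
wq, wk, wv = (rng.standard_normal((16, 16)) for _ in range(3))
out, attn = self_attention(tokens, wq, wk, wv)
```

Each row of `attn` is a probability distribution over all patches, which is the global-context property that distinguishes this from a local convolution.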