Finger Knuckle Print classification: Leveraging vision Mamba for low complexity and high accuracy

Chiron Bang, Ali Salem Altaher, Ahmed Altaher, Hawraa Moamin, Thamer Alshammari, Basmh Alkanjr, Hasan Altaher, Mohammed G. Al-Jassani, Hanqi Zhuang

Franklin Open, Volume 11, Article 100261 (published 2025-04-23). DOI: 10.1016/j.fraope.2025.100261. Available at: https://www.sciencedirect.com/science/article/pii/S2773186325000519
Although the Finger Knuckle Print (FKP) has been recognized as a viable alternative to other biometric modalities, its adoption remains in the early stages because its accuracy has lagged behind more competitive options. This research aims to bridge that performance gap by advancing the classification of FKPs with the Vision Mamba (ViM) model. The experimental study, conducted on a Hong Kong Polytechnic University dataset of 7,920 images from 165 individuals, evaluated the ViM model against several pretrained classification models. ViM achieved an accuracy of 99.1%, outperforming models such as AlexNet (96.2%), SCNN (98.3%), and EfficientNet (98.0%), highlighting its superior capability in FKP classification. With around 7 million parameters, ViM balances complexity and performance and is engineered specifically to capture fine-grained FKP features such as texture and line patterns. Its use of weight decay mitigates overfitting, and it remains robust under occlusion, maintaining performance even when parts of the FKP are missing. Spatial attention mechanisms further improve classification accuracy by prioritizing the most informative regions. Seventeen pretrained deep neural networks were evaluated for their effectiveness in classifying FKPs, and the experimental results consistently demonstrated the superior performance of the ViM model. ViM exemplifies an advanced deep learning approach for biometric applications, combining high precision with efficient resource usage. Its versatile design, which incorporates bidirectional SSMs for global context modeling and position embeddings for spatial awareness, extends its applicability to visual tasks beyond FKP identification. However, practitioners should weigh its complexity and resource requirements before deploying it in practice.
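To make the abstract's architectural description concrete, the sketch below is a toy, illustrative PyTorch model, not the authors' Vision Mamba implementation: every name, layer size, and the simplified Python-loop recurrent scan are assumptions chosen only to mirror the ingredients the abstract names (patch and position embeddings, a bidirectional SSM-style scan for global context, weight decay applied through the optimizer, and a 165-class FKP output).

# Illustrative sketch only; NOT the authors' ViM code. All names and sizes
# here are assumptions made to echo the abstract's description.
import torch
import torch.nn as nn


class ToySSMBlock(nn.Module):
    """Minimal linear recurrence: h_t = sigmoid(A) * h_{t-1} + B x_t, y_t = C h_t."""
    def __init__(self, dim: int, state: int = 16):
        super().__init__()
        self.A = nn.Parameter(torch.randn(state) * 0.01)  # diagonal transition,
        self.B = nn.Linear(dim, state, bias=False)        # squashed by a sigmoid
        self.C = nn.Linear(state, dim, bias=False)        # below for stability

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        h = torch.zeros(x.size(0), self.A.numel(), device=x.device)
        decay = torch.sigmoid(self.A)
        outs = []
        for t in range(x.size(1)):                        # sequential scan over patches
            h = decay * h + self.B(x[:, t])
            outs.append(self.C(h))
        return torch.stack(outs, dim=1)                   # (batch, seq, dim)


class BidirectionalSSMClassifier(nn.Module):
    """Patchify -> add position embeddings -> forward and backward scans -> classify."""
    def __init__(self, num_classes: int = 165, dim: int = 192,
                 patch: int = 16, img: int = 224):
        super().__init__()
        n_patches = (img // patch) ** 2
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))  # spatial awareness
        self.fwd = ToySSMBlock(dim)
        self.bwd = ToySSMBlock(dim)                       # reversed scan for global context
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.patch_embed(x).flatten(2).transpose(1, 2)  # (batch, seq, dim)
        x = x + self.pos_embed
        x = self.fwd(x) + self.bwd(x.flip(1)).flip(1)       # bidirectional combination
        return self.head(x.mean(dim=1))                     # pooled features -> logits


model = BidirectionalSSMClassifier()
# Weight decay, which the abstract credits with mitigating overfitting,
# is applied through the optimizer; the value 0.05 is an assumption.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)

# Crude occlusion probe in the spirit of the abstract's robustness claim:
images = torch.randn(2, 3, 224, 224)
images[:, :, :, 112:] = 0.0          # zero out the right half of each image
logits = model(images)               # still yields (2, 165) class scores

A real implementation would replace the Python-loop scan with Mamba's hardware-efficient selective scan; the loop above is kept only so the bidirectional state-space idea is visible at a glance.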