M2auth: A multimodal behavioral biometric authentication using feature-level fusion
Ahmed Mahfouz, Hebatollah Mostafa, Tarek M. Mahmoud, Ahmed Sharaf Eldin
Neural Computing and Applications, published 2024-09-14
DOI: 10.1007/s00521-024-10403-y (https://doi.org/10.1007/s00521-024-10403-y)
Abstract
Conventional authentication methods, such as passwords and PINs, are vulnerable to multiple threats, from sophisticated hacking attempts to the inherent weaknesses of human memory. This highlights a critical need for a more secure, convenient, and user-friendly approach to authentication. This paper introduces M2auth, a novel multimodal behavioral biometric authentication framework for smartphones. M2auth leverages a combination of multiple authentication modalities, including touch gestures, keystrokes, and accelerometer data, with a focus on capturing high-quality, intervention-free data. To validate the efficacy of M2auth, we conducted a large-scale field study involving 52 participants over two months, collecting data from touch gestures, keystrokes, and smartphone sensors. The resulting dataset, comprising over 5.5 million action points, serves as a valuable resource for behavioral biometric research. Our evaluation involved two fusion scenarios, feature-level fusion and decision-level fusion, which play a pivotal role in elevating authentication performance. These fusion approaches effectively mitigate challenges associated with noise and variability in behavioral data, enhancing the robustness of the system. We found that decision-level fusion outperforms feature-level fusion, reaching a 99.98% authentication success rate and reducing the equal error rate (EER) to 0.84%, highlighting the robustness of M2auth in real-world scenarios.
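To make the two fusion scenarios concrete, the sketch below contrasts feature-level fusion (concatenating per-modality features before a single classifier) with decision-level fusion (training one classifier per modality and combining their scores), and reports an EER for each. This is a minimal illustration, not the authors' implementation: the synthetic touch/keystroke/accelerometer features, the feature counts, the random-forest classifier, and score averaging for decision fusion are all assumptions made here for demonstration with scikit-learn.

```python
# Minimal sketch of feature-level vs decision-level fusion for a
# genuine-user-vs-impostor task, on synthetic data (not the paper's dataset).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Hypothetical per-modality feature matrices; shapes/feature counts are illustrative.
n = 1000
y = rng.integers(0, 2, n)                        # 1 = genuine user, 0 = impostor
touch = rng.normal(y[:, None], 1.0, (n, 8))      # 8 touch-gesture features
keys  = rng.normal(y[:, None], 1.2, (n, 6))      # 6 keystroke-dynamics features
accel = rng.normal(y[:, None], 1.5, (n, 4))      # 4 accelerometer features

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

def eer(y_true, scores):
    """Equal error rate: operating point where false-accept rate == false-reject rate."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    fnr = 1 - tpr
    i = np.nanargmin(np.abs(fpr - fnr))
    return (fpr[i] + fnr[i]) / 2

# Feature-level fusion: concatenate modality features, train one classifier.
X_feat = np.hstack([touch, keys, accel])
clf_feat = RandomForestClassifier(random_state=0).fit(X_feat[idx_tr], y[idx_tr])
scores_feat = clf_feat.predict_proba(X_feat[idx_te])[:, 1]

# Decision-level fusion: one classifier per modality, then average their scores.
scores_dec = np.zeros(len(idx_te))
for X in (touch, keys, accel):
    clf = RandomForestClassifier(random_state=0).fit(X[idx_tr], y[idx_tr])
    scores_dec += clf.predict_proba(X[idx_te])[:, 1]
scores_dec /= 3

print(f"feature-level fusion EER:  {eer(y[idx_te], scores_feat):.3f}")
print(f"decision-level fusion EER: {eer(y[idx_te], scores_dec):.3f}")
```

In this toy setup, score averaging stands in for whatever decision-combination rule the paper uses; the point is only to show where the two fusion strategies diverge in the pipeline and how an EER is read off the ROC curve.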