Fadi Farag, Sophia Fu, Aashika Jagadeesh, Aashi Mishra, Andrew Noviello, Yingying Chen, Yilin Yang
A Novel Approach to Secure Smartwatch Authentication: Structure-Borne Sound Identification and Gesture Recognition
2022 IEEE MIT Undergraduate Research Technology Conference (URTC), 2022-09-30
DOI: 10.1109/URTC56832.2022.10002176
Citations: 0
Abstract
With the need to conveniently secure devices, manufacturers have pushed to explore new methods of authentication. We propose a smartwatch authentication system based on structure-borne sound emitted from the contact between a user's wrist and smartwatch. Audio recordings were collected from users in loud and quiet settings, with and without hand movements ('gestures'). After extracting relevant features from the data, numerous machine learning models, including Support Vector Machines (SVMs), K-Nearest Neighbors (KNNs), and linear discriminants, were tested for authentication accuracy. Among these models, the linear discriminant model had the highest identification accuracy for recordings without gestures, and the K-Nearest Neighbors model performed the best for gesture-based authentication. The relative simplicity and accuracy of the linear discriminant models, compared with more complex architectures, demonstrate the computational efficiency of structure-borne sound authentication.
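To make the classification step concrete, the following is a minimal, self-contained sketch of K-Nearest Neighbors user identification over audio-derived feature vectors. It is not the authors' implementation: the feature values, user labels, and the choice of Euclidean distance with k=3 are all illustrative assumptions; in the paper, features would be extracted from structure-borne sound recordings.

```python
import math
from collections import Counter

# Hypothetical per-user feature vectors (stand-ins for features
# extracted from structure-borne sound recordings). Values are
# illustrative only, not from the paper's dataset.
train = [
    ((0.2, 1.1, 0.5), "user_a"),
    ((0.3, 1.0, 0.6), "user_a"),
    ((1.4, 0.2, 2.0), "user_b"),
    ((1.5, 0.3, 2.1), "user_b"),
]

def knn_predict(x, samples, k=3):
    """Label feature vector x by majority vote among its k nearest
    training samples under Euclidean distance."""
    nearest = sorted(samples, key=lambda s: math.dist(x, s[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# A query close to user_a's enrollment samples is attributed to user_a.
print(knn_predict((0.25, 1.05, 0.55), train))  # → user_a
```

A linear discriminant classifier would replace the distance-and-vote step with a learned linear decision boundary, which is why it can be cheaper at inference time, one reason the abstract highlights its computational efficiency.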