Philipp Mock, Jörg Edelmann, A. Schilling, W. Rosenstiel
{"title":"User identification using raw sensor data from typing on interactive displays","authors":"Philipp Mock, Jörg Edelmann, A. Schilling, W. Rosenstiel","doi":"10.1145/2557500.2557503","DOIUrl":null,"url":null,"abstract":"Personalized soft-keyboards which adapt to a user's individual typing behavior can reduce typing errors on interactive displays. In multi-user scenarios a personalized model has to be loaded for each participant. In this paper we describe a user identification technique that is based on raw sensor data from an optical touch screen. For classification of users we use a multi-class support vector machine that is trained with grayscale images from the optical sensor. Our implementation can identify a specific user from a set of 12 users with an average accuracy of 97.51% after one keystroke. It can be used to automatically select individual typing models during free-text entry. The resulting authentication process is completely implicit. We furthermore describe how the approach can be extended to automatic loading of personal information and settings.","PeriodicalId":287073,"journal":{"name":"Proceedings of the 19th international conference on Intelligent User Interfaces","volume":"80 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 19th international conference on Intelligent User Interfaces","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2557500.2557503","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 16
Abstract
Personalized soft keyboards that adapt to a user's individual typing behavior can reduce typing errors on interactive displays. In multi-user scenarios, a personalized model has to be loaded for each participant. In this paper, we describe a user identification technique based on raw sensor data from an optical touch screen. To classify users, we use a multi-class support vector machine trained on grayscale images from the optical sensor. Our implementation can identify a specific user from a set of 12 users with an average accuracy of 97.51% after one keystroke, and it can be used to automatically select individual typing models during free-text entry. The resulting authentication process is completely implicit. We furthermore describe how the approach can be extended to the automatic loading of personal information and settings.
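The core pipeline the abstract describes — flattening grayscale keystroke images from the touch sensor and feeding them to a multi-class SVM — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the image size, number of users, and the synthetic per-user "fingertip" data are all assumptions made up for the example, and scikit-learn's `SVC` stands in for whatever SVM library the paper used.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical setup: 3 users, 20 keystrokes each, 8x8 grayscale sensor patches.
n_users, n_keystrokes, side = 3, 20, 8
n_pixels = side * side

X, y = [], []
for user in range(n_users):
    # Stand-in for a user's characteristic touch footprint: a fixed pattern
    # per user, perturbed by per-keystroke sensor noise.
    footprint = rng.normal(size=n_pixels)
    for _ in range(n_keystrokes):
        X.append(footprint + 0.1 * rng.normal(size=n_pixels))
        y.append(user)

X = np.asarray(X)  # each row is one flattened grayscale keystroke image
y = np.asarray(y)

# Multi-class SVM (one-vs-rest decision function) over raw pixel vectors.
clf = SVC(kernel="linear", decision_function_shape="ovr")
clf.fit(X, y)

# A single new keystroke is classified to a user identity.
pred = clf.predict(X)
acc = (pred == y).mean()
```

In a real deployment, each prediction would run on one keystroke's sensor frame, and the predicted identity would select that user's personalized typing model.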