Authors: Wenhao Yang, Xiwen Dengxiong, Xueting Wang, Yidan Hu, Yunbo Zhang
DOI: 10.1115/1.4062658
Journal: Journal of Computing and Information Science in Engineering (Impact Factor 2.6, JCR Q2, Computer Science, Interdisciplinary Applications)
Published: 2023-06-01 (Journal Article)
“I can see your password”: A case study about cybersecurity risks in mid-air interactions of mixed reality-based smart manufacturing applications
This paper presents a potential cybersecurity risk in Mixed Reality (MR)-based smart manufacturing applications: digital passwords can be deciphered from the user's mid-air gestures captured by a single RGB camera. We first created a testbed, an MR-based smart factory management system consisting of mid-air gesture-based user interfaces (UIs) on a video see-through MR head-mounted display (HMD). To interact with the UIs and input information, the user's hand movements and gestures are tracked by the MR system. The experiment estimates passwords entered by users through mid-air hand gestures on a virtual numeric keypad. To achieve this goal, we developed a lightweight machine learning-based hand position tracking and gesture recognition method. This method takes either a live video stream or recorded video clips (captured by a single RGB camera in front of the user) as input, where the videos record the users' hand movements and gestures but not the virtual UIs. Assuming the size, position, and layout of the keypad are known, the machine learning method estimates the password through hand gesture recognition and finger position detection. The evaluation demonstrates the effectiveness of the proposed method, with accuracies of 97.03%, 94.06%, and 83.83% for 2-digit, 4-digit, and 6-digit passwords, respectively, using real-time video streaming as input.
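The attack's final step can be illustrated with a minimal sketch: once the fingertip position at the moment of a "press" gesture has been estimated, and the keypad's size, position, and layout are known (the paper's stated assumption), each press reduces to a nearest-key lookup. The layout, coordinates, and function names below are illustrative stand-ins, not the authors' actual testbed values or implementation.

```python
# Hypothetical sketch of the keypad-inference step. The keypad geometry
# (origin, key size) and layout below are illustrative assumptions; the
# fingertip position would come from the paper's hand tracking stage.

KEYPAD_LAYOUT = [
    ["1", "2", "3"],
    ["4", "5", "6"],
    ["7", "8", "9"],
    ["",  "0", ""],
]

def key_centers(origin=(0.0, 0.0), key_size=(0.04, 0.04)):
    """Compute the (x, y) center of each key, given the keypad's top-left
    origin and per-key width/height (arbitrary units)."""
    centers = {}
    w, h = key_size
    for row, keys in enumerate(KEYPAD_LAYOUT):
        for col, key in enumerate(keys):
            if key:  # skip the blank corner slots
                cx = origin[0] + (col + 0.5) * w
                cy = origin[1] + (row + 0.5) * h
                centers[key] = (cx, cy)
    return centers

def estimate_pressed_key(fingertip_xy, centers):
    """Return the key whose center is nearest to the estimated index-finger
    tip position (a stand-in for the finger position detection stage)."""
    def dist2(key):
        cx, cy = centers[key]
        dx = fingertip_xy[0] - cx
        dy = fingertip_xy[1] - cy
        return dx * dx + dy * dy
    return min(centers, key=dist2)

centers = key_centers()
# A fingertip detected near the middle of the second row, middle column:
print(estimate_pressed_key((0.06, 0.06), centers))  # "5"
```

Chaining one such lookup per detected press gesture yields the candidate password, which is why recording only the hand (and not the virtual UI) suffices once the keypad geometry is known.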
Journal description:
The ASME Journal of Computing and Information Science in Engineering (JCISE) publishes articles related to algorithms, computational methods, computing infrastructure, computer-interpretable representations, human-computer interfaces, information science, and/or system architectures that aim to improve some aspect of the product and system lifecycle (e.g., design, manufacturing, operation, maintenance, disposal, recycling, etc.). Applications considered in JCISE manuscripts should be relevant to the mechanical engineering discipline. Papers can focus on fundamental research leading to new methods, or on the adaptation of existing methods to new applications.
Scope: Advanced Computing Infrastructure; Artificial Intelligence; Big Data and Analytics; Collaborative Design; Computer Aided Design; Computer Aided Engineering; Computer Aided Manufacturing; Computational Foundations for Additive Manufacturing; Computational Foundations for Engineering Optimization; Computational Geometry; Computational Metrology; Computational Synthesis; Conceptual Design; Cybermanufacturing; Cyber Physical Security for Factories; Cyber Physical System Design and Operation; Data-Driven Engineering Applications; Engineering Informatics; Geometric Reasoning; GPU Computing for Design and Manufacturing; Human Computer Interfaces/Interactions; Industrial Internet of Things; Knowledge Engineering; Information Management; Inverse Methods for Engineering Applications; Machine Learning for Engineering Applications; Manufacturing Planning; Manufacturing Automation; Model-based Systems Engineering; Multiphysics Modeling and Simulation; Multiscale Modeling and Simulation; Multidisciplinary Optimization; Physics-Based Simulations; Process Modeling for Engineering Applications; Qualification, Verification and Validation of Computational Models; Symbolic Computing for Engineering Applications; Tolerance Modeling; Topology and Shape Optimization; Virtual and Augmented Reality Environments; Virtual Prototyping