Gesture-controlled omnidirectional autonomous vehicle: A web-based approach for gesture recognition
Huma Zia, Bara Fteiha, Maha Abdulnasser, Tafleh Saleh, Fatima Suliemn, Kawther Alagha, Jawad Yousaf, Mohammed Ghazal
Array, Volume 26, Article 100408 (published 2025-05-19)
DOI: 10.1016/j.array.2025.100408
URL: https://www.sciencedirect.com/science/article/pii/S2590005625000359
Abstract
This paper presents a novel web-based hand/thumb gesture recognition model, validated through the implementation of a gesture-controlled omnidirectional autonomous vehicle. Utilizing a custom-trained YOLOv5s model, the system efficiently translates user gestures into precise control signals, facilitating real-time vehicle operation with five commands: forward, backward, left, right, and stop. Integration with Raspberry Pi hardware, including a camera and peripherals, enables rapid live video processing with a latency of 150–300 ms and stable frame rates of 12–18 FPS. The system demonstrates reliable performance with a classification accuracy of 94.2%, validated across multiple gesture classes through statistical analysis, including confusion matrices and ANOVA testing. A user-friendly web interface, built using TensorFlow.js, Node.js, and WebSocket, enhances usability by providing seamless live video streaming and real-time, device-agnostic control directly in the browser without requiring wearable sensors or external processing. The system’s key contributions include: (1) robust real-time hand gesture recognition using YOLOv5s; (2) seamless Raspberry Pi–Arduino integration; (3) a browser-based interface enabling accessible, scalable deployment; and (4) empirical validation across functional, environmental, and statistical performance metrics. This innovation marks a significant advancement in the practical application of hand gesture control within robotics. It offers a flexible and cost-effective alternative to sensor-based systems and serves as a foundation for future developments in autonomous vehicles, human-machine interaction, assistive technologies, automation, and AI-driven interfaces. By eliminating the need for wearable technology, specialized hardware, or complex setups required by existing systems, this work expands the potential for deploying intuitive, touch-free control systems across diverse real-world domains.
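As a rough illustration of the control path the abstract describes (detected gesture class → one of five drive commands → microcontroller), the following Python sketch maps YOLOv5 detections to command codes and forwards them to an Arduino over serial. It is a minimal sketch under assumed details only: the class names, checkpoint filename, serial port, baud rate, and command encoding are illustrative choices, not taken from the paper, and the authors' deployed interface runs in the browser via TensorFlow.js, Node.js, and WebSocket rather than through a Python loop like this.

```python
# Hypothetical sketch of the gesture -> command -> serial path described in the
# abstract. Class names, checkpoint path, serial port, and baud rate are
# assumptions; the paper's actual interface is browser-based (TensorFlow.js +
# Node.js + WebSocket), not this direct Python loop.
import cv2
import serial
import torch

# Load a custom-trained YOLOv5s checkpoint via torch.hub (assumed filename).
model = torch.hub.load("ultralytics/yolov5", "custom", path="gestures_yolov5s.pt")
model.conf = 0.5  # confidence threshold for accepting a detection

# Five drive commands from the paper, keyed by assumed gesture class names.
COMMANDS = {
    "thumb_up": "F",      # forward
    "thumb_down": "B",    # backward
    "thumb_left": "L",    # left
    "thumb_right": "R",   # right
    "fist": "S",          # stop
}

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed port/baud
cap = cv2.VideoCapture(0)  # Raspberry Pi camera exposed as a V4L2 device

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame)  # run YOLOv5s inference on the current frame
        detections = results.pandas().xyxy[0]
        if len(detections):
            # Treat the highest-confidence detection as the active gesture.
            top = detections.sort_values("confidence", ascending=False).iloc[0]
            command = COMMANDS.get(top["name"])
            if command:
                arduino.write(command.encode())  # forward one-byte drive command
finally:
    cap.release()
    arduino.close()
```

On the Arduino side, a matching sketch would read the single-byte command and drive the omnidirectional wheels accordingly; the one-byte encoding is chosen here only to keep the serial protocol trivial.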