{"title":"实时手语检测","authors":"Sangeeta Kurundkar, Arya Joshi, Aryan Thaploo, Sarthak Auti, Anish Awalgaonkar","doi":"10.1109/ViTECoN58111.2023.10157784","DOIUrl":null,"url":null,"abstract":"Lack of communication or miscommunication brought on by linguistic problems can lead to awkward situations in today's culture. Those who are deaf or have trouble hearing use sign language, a visual form of communication, to communicate. It communicates meaning through hand gestures and body language. Yet not everyone is able to interpret sign language, which might result in miscommunication. A system was developed employing cutting-edge technologies to address this problem, including deep learning, machine learning, convolutional neural networks, computer vision, TensorFlow, and Python. This technology is made to accurately detect and identify sign language motions in real-time. The system develops a real-time sign language recognition tool using OpenCV. The 26 letters in American Sign were categorised using a CNN classifier. The use of technology to bridge communication gaps and create inclusive environments is crucial in our society. This system's high accuracy rate is an excellent indication of its reliability, providing a promising solution for individuals who use sign language to communicate. It would be interesting to learn more about the specific CNN classifier used in the project and how it was trained to recognize ASL gestures. 
Overall, the implementation of this technology can create more inclusive and accessible environments, ensuring that everyone can communicate effectively, regardless of their abilities or differences.","PeriodicalId":407488,"journal":{"name":"2023 2nd International Conference on Vision Towards Emerging Trends in Communication and Networking Technologies (ViTECoN)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Real-Time Sign Language Detection\",\"authors\":\"Sangeeta Kurundkar, Arya Joshi, Aryan Thaploo, Sarthak Auti, Anish Awalgaonkar\",\"doi\":\"10.1109/ViTECoN58111.2023.10157784\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Lack of communication or miscommunication brought on by linguistic problems can lead to awkward situations in today's culture. Those who are deaf or have trouble hearing use sign language, a visual form of communication, to communicate. It communicates meaning through hand gestures and body language. Yet not everyone is able to interpret sign language, which might result in miscommunication. A system was developed employing cutting-edge technologies to address this problem, including deep learning, machine learning, convolutional neural networks, computer vision, TensorFlow, and Python. This technology is made to accurately detect and identify sign language motions in real-time. The system develops a real-time sign language recognition tool using OpenCV. The 26 letters in American Sign were categorised using a CNN classifier. The use of technology to bridge communication gaps and create inclusive environments is crucial in our society. This system's high accuracy rate is an excellent indication of its reliability, providing a promising solution for individuals who use sign language to communicate. 
It would be interesting to learn more about the specific CNN classifier used in the project and how it was trained to recognize ASL gestures. Overall, the implementation of this technology can create more inclusive and accessible environments, ensuring that everyone can communicate effectively, regardless of their abilities or differences.\",\"PeriodicalId\":407488,\"journal\":{\"name\":\"2023 2nd International Conference on Vision Towards Emerging Trends in Communication and Networking Technologies (ViTECoN)\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 2nd International Conference on Vision Towards Emerging Trends in Communication and Networking Technologies (ViTECoN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ViTECoN58111.2023.10157784\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 2nd International Conference on Vision Towards Emerging Trends in Communication and Networking Technologies (ViTECoN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ViTECoN58111.2023.10157784","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
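The abstract does not describe the CNN classifier's architecture (it notes itself that those details would be valuable). As a rough illustration of the kind of system described — a TensorFlow CNN classifying frames into the 26 ASL letters — here is a minimal sketch. The layer sizes and the 64×64 grayscale input are assumptions for illustration, not details from the paper.

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 26  # letters A-Z of the ASL fingerspelling alphabet
IMG_SIZE = 64     # assumed input resolution; the paper does not specify one


def build_model() -> tf.keras.Model:
    # A small, generic CNN: two conv/pool blocks followed by a
    # dense softmax head over the 26 letter classes.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])


model = build_model()
# One grayscale frame (e.g. a cropped hand region captured with OpenCV's
# cv2.VideoCapture in the real-time loop) maps to a probability
# distribution over the 26 letters.
frame = np.zeros((1, IMG_SIZE, IMG_SIZE, 1), dtype=np.float32)
probs = model.predict(frame, verbose=0)
print(probs.shape)  # (1, 26)
```

In a real-time pipeline, the same `model.predict` call would run on each preprocessed webcam frame, with the argmax over `probs` giving the predicted letter.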