{"title":"Computer Vision and Neural Networks for Libras Recognition","authors":"Silas Luiz Furtado, Jauvane de Oliveira","doi":"10.5753/wvc.2021.18903","DOIUrl":null,"url":null,"abstract":"In recent years, one can find several efforts to increase the inclusion of people with some type of disability. As a result, the global study of sign language has become an important research area. Therefore, this project aims at developing an information system for the automatic recognition of the Brazilian Sign Language (LIBRAS). The recognition shall be done through the processing of videos, without relying on support hardware. Given the great difficulty of creating a system for this purpose, an approach was developed by dividing the process into stages. In addition to dynamically identifying signs and context, neural network concepts and tools were used to extract the characteristics of interest and classify them accordingly. In addition, a dataset of signs, referring to the alphabet in LIBRAS, was built as well as a tool to interpret, with the aid of a webcam, the signal executed by a user, transcribing it on the screen.","PeriodicalId":311431,"journal":{"name":"Anais do XVII Workshop de Visão Computacional (WVC 2021)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Anais do XVII Workshop de Visão Computacional (WVC 2021)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5753/wvc.2021.18903","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In recent years, several efforts have been made to increase the inclusion of people with disabilities. As a result, the study of sign language has become an important research area worldwide. This project aims to develop an information system for the automatic recognition of Brazilian Sign Language (LIBRAS). Recognition is performed by processing video, without relying on auxiliary hardware. Given the difficulty of building such a system, the process was divided into stages. In addition to dynamically identifying signs and their context, neural network concepts and tools were used to extract the features of interest and classify them. A dataset of signs covering the LIBRAS alphabet was also built, along with a tool that, using a webcam, interprets the sign performed by a user and transcribes it on the screen.
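The abstract outlines a webcam-based pipeline: capture frames, extract features with a neural network, classify the sign, and transcribe the result on screen. The paper does not publish its code, so the sketch below is only an illustration of that general loop under stated assumptions; the model file, input size, and label set are hypothetical, not the authors' implementation.

```python
# Illustrative sketch only (assumptions: a pre-trained Keras classifier for the
# LIBRAS manual alphabet saved as "libras_alphabet_cnn.h5", a 64x64 RGB input,
# and a simple A-Z label set). It mirrors the loop the abstract describes:
# webcam frame -> neural-network feature extraction/classification -> on-screen text.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("libras_alphabet_cnn.h5")  # assumed file name
labels = [chr(c) for c in range(ord("A"), ord("Z") + 1)]      # assumed label set

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Preprocess the frame to the (assumed) input size of the network.
    roi = cv2.resize(frame, (64, 64))
    x = roi.astype("float32")[np.newaxis] / 255.0

    # Classify the current frame and overlay the predicted letter.
    probs = model.predict(x, verbose=0)[0]
    letter = labels[int(np.argmax(probs))]
    cv2.putText(frame, letter, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (0, 255, 0), 2)
    cv2.imshow("LIBRAS recognition (sketch)", frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A per-frame classifier like this only handles static alphabet signs; the dynamic sign and context identification mentioned in the abstract would require reasoning over sequences of frames rather than single images.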