{"title":"AzSLD:阿塞拜疆手语数据集,用于手指拼写,单词和句子翻译与基线软件。","authors":"Nigar Alishzade , Jamaladdin Hasanov","doi":"10.1016/j.dib.2024.111230","DOIUrl":null,"url":null,"abstract":"<div><div>Advancements in sign language processing technology hinge on the availability of extensive, reliable datasets, comprehensive instructions, and adherence to ethical guidelines. To facilitate progress in gesture recognition and translation systems and to support the Azerbaijani sign language community we present the Azerbaijani Sign Language Dataset (AzSLD). This comprehensive dataset was collected from a diverse group of sign language users, encompassing a range of linguistic parameters. Developed within the framework of a vision-based Azerbaijani Sign Language translation project, AzSLD includes recordings of the fingerspelling alphabet, individual words, and sentences. The data acquisition process involved recording signers across various age groups, genders, and proficiency levels to ensure broad representation. Sign language sentences were captured using two cameras from different angles, providing comprehensive visual coverage of each gesture. This approach enables robust training and evaluation of gesture recognition algorithms. The dataset comprises 30,000 meticulously annotated videos, each labeled with precise gesture identifiers and corresponding linguistic translations. To facilitate efficient usage of the dataset, we provide technical instructions and source code for a data loader. Researchers and developers working on sign language recognition, translation, and synthesis systems will find AzSLD invaluable, as it offers a rich repository of labeled data for training and evaluation purposes.</div></div>","PeriodicalId":10973,"journal":{"name":"Data in Brief","volume":"58 ","pages":"Article 111230"},"PeriodicalIF":1.0000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730573/pdf/","citationCount":"0","resultStr":"{\"title\":\"AzSLD: Azerbaijani sign language dataset for fingerspelling, word, and sentence translation with baseline software\",\"authors\":\"Nigar Alishzade , Jamaladdin Hasanov\",\"doi\":\"10.1016/j.dib.2024.111230\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Advancements in sign language processing technology hinge on the availability of extensive, reliable datasets, comprehensive instructions, and adherence to ethical guidelines. To facilitate progress in gesture recognition and translation systems and to support the Azerbaijani sign language community we present the Azerbaijani Sign Language Dataset (AzSLD). This comprehensive dataset was collected from a diverse group of sign language users, encompassing a range of linguistic parameters. Developed within the framework of a vision-based Azerbaijani Sign Language translation project, AzSLD includes recordings of the fingerspelling alphabet, individual words, and sentences. The data acquisition process involved recording signers across various age groups, genders, and proficiency levels to ensure broad representation. Sign language sentences were captured using two cameras from different angles, providing comprehensive visual coverage of each gesture. This approach enables robust training and evaluation of gesture recognition algorithms. The dataset comprises 30,000 meticulously annotated videos, each labeled with precise gesture identifiers and corresponding linguistic translations. 
To facilitate efficient usage of the dataset, we provide technical instructions and source code for a data loader. Researchers and developers working on sign language recognition, translation, and synthesis systems will find AzSLD invaluable, as it offers a rich repository of labeled data for training and evaluation purposes.</div></div>\",\"PeriodicalId\":10973,\"journal\":{\"name\":\"Data in Brief\",\"volume\":\"58 \",\"pages\":\"Article 111230\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2025-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11730573/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Data in Brief\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2352340924011922\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data in Brief","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352340924011922","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
AzSLD: Azerbaijani sign language dataset for fingerspelling, word, and sentence translation with baseline software
Advancements in sign language processing technology hinge on the availability of extensive, reliable datasets, comprehensive instructions, and adherence to ethical guidelines. To facilitate progress in gesture recognition and translation systems and to support the Azerbaijani sign language community, we present the Azerbaijani Sign Language Dataset (AzSLD). This comprehensive dataset was collected from a diverse group of sign language users and encompasses a range of linguistic parameters. Developed within the framework of a vision-based Azerbaijani Sign Language translation project, AzSLD includes recordings of the fingerspelling alphabet, individual words, and sentences. The data acquisition process involved recording signers across various age groups, genders, and proficiency levels to ensure broad representation. Sign language sentences were captured by two cameras from different angles, providing comprehensive visual coverage of each gesture and enabling robust training and evaluation of gesture recognition algorithms. The dataset comprises 30,000 meticulously annotated videos, each labeled with a precise gesture identifier and the corresponding linguistic translation. To facilitate efficient use of the dataset, we provide technical instructions and source code for a data loader. Researchers and developers working on sign language recognition, translation, and synthesis systems will find AzSLD invaluable, as it offers a rich repository of labeled data for training and evaluation.
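The data loader distributed with the dataset is not reproduced in this summary, so the following is only a minimal sketch of how such a loader might be structured. It assumes a hypothetical annotations.csv that maps each video file to its gesture identifier and translation, and it uses PyTorch/torchvision for video decoding; the file layout, column names, class names, and framework choice are illustrative assumptions, not the authors' actual implementation.

# Hypothetical sketch of a loader for an AzSLD-style layout (assumed file names and columns).
import csv
from pathlib import Path

from torch.utils.data import Dataset, DataLoader
from torchvision.io import read_video


class SignLanguageVideoDataset(Dataset):
    """Yields (frames, gesture_id, translation) for each annotated clip."""

    def __init__(self, root, annotation_file="annotations.csv"):
        self.root = Path(root)
        # Assumed annotation format: CSV with video_path, gesture_id, translation columns.
        with open(self.root / annotation_file, newline="", encoding="utf-8") as f:
            self.samples = list(csv.DictReader(f))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        row = self.samples[idx]
        # read_video returns (video[T, H, W, C], audio, info); only the frames are kept here.
        frames, _, _ = read_video(str(self.root / row["video_path"]), pts_unit="sec")
        return frames, row["gesture_id"], row["translation"]


if __name__ == "__main__":
    dataset = SignLanguageVideoDataset("AzSLD")  # hypothetical dataset root folder
    loader = DataLoader(dataset, batch_size=1, shuffle=True)
    frames, gesture_id, translation = next(iter(loader))
    print(frames.shape, gesture_id, translation)

Because clip lengths vary, batch_size=1 is used in this sketch; batching several clips together would require a custom collate_fn that pads or subsamples frames to a fixed length.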
Journal introduction:
Data in Brief provides a way for researchers to easily share and reuse each other's datasets by publishing data articles that:
- Thoroughly describe your data, facilitating reproducibility.
- Make your data, which is often buried in supplementary material, easier to find.
- Increase traffic towards associated research articles and data, leading to more citations.
- Open up doors for new collaborations.
Because you never know what data will be useful to someone else, Data in Brief welcomes submissions that describe data from all research areas.