{"title":"An innovative approach to advanced voice classification of sacred Quranic recitations through multimodal fusion","authors":"Esraa Hassan , Abeer Saber , Omar Alqahtani , Nora El-Rashidy , Samar Elbedwehy","doi":"10.1016/j.eij.2025.100640","DOIUrl":null,"url":null,"abstract":"<div><div>The Quran is the most important book we have ever read or recited. Perfecting recitation of the Holy Quran is challenging. In this paper, we integrate the use of multimodal fusion to result in advanced voice classification of sacred Quranic recitations. The proposed work called Voice Shortcut Connection Fusion (VSCF) architecture also targets restrictions corresponding to the dataset size and reciters’ variations into which Residual Neural Network (ResNet50) with the Fusion Layer incorporated in voice classification is integrated. The VSCF architecture is designed in a highly complex manner and is designed to be highly sophisticated about the extent to which it can approximate high-level features as well as higher-level features within a wide range of acoustic signals. The Fusion Layer proves to be an important layer that combines the ResNet50 model’s final layer with the Global Average Pooling of the raw MFCC features of the audios. This synergistic fusion enhances the ability of the model by a vast extent to identify the underlying stylistic features inherent in each reciter’s performance. The dataset consists of a Quranic Recitation Dataset having 7144 WAV format audio files from 12 Quran reciters. Compared with the traditional voice classification strategies, VSCF aims at solving issues regarding limitations of the adopted datasets and variations among different reciters. The results from our experiment showcase the effectiveness of the VSCF architecture, achieving an accuracy of 0.97683%. 
Further metrics include sensitivity at 0.9752, specificity at 0.9785, precision at 0.9875, and an F1 score of 0.9813.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"30 ","pages":"Article 100640"},"PeriodicalIF":5.0000,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Egyptian Informatics Journal","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1110866525000337","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
The Quran is the most important book Muslims read and recite, and perfecting its recitation is challenging. In this paper, we apply multimodal fusion to achieve advanced voice classification of sacred Quranic recitations. The proposed Voice Shortcut Connection Fusion (VSCF) architecture integrates a Residual Neural Network (ResNet50) with a Fusion Layer for voice classification, and targets the restrictions imposed by dataset size and variation among reciters. The architecture is designed to approximate both high-level and higher-level features across a wide range of acoustic signals. The Fusion Layer is the key component: it combines the final layer of the ResNet50 model with the Global Average Pooling of the raw MFCC features of the audio. This fusion substantially improves the model's ability to identify the stylistic features inherent in each reciter's performance. The dataset is a Quranic Recitation Dataset of 7144 WAV-format audio files from 12 Quran reciters. Compared with traditional voice classification strategies, VSCF addresses the limitations of the adopted datasets and the variation among different reciters. Our experimental results demonstrate the effectiveness of the VSCF architecture, which achieves an accuracy of 0.97683. Further metrics include sensitivity of 0.9752, specificity of 0.9785, precision of 0.9875, and an F1 score of 0.9813.
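The fusion idea described in the abstract can be sketched in plain NumPy. This is not the authors' code: the embedding size, MFCC shape, and classifier head below are illustrative assumptions. It shows only the structural idea — global average pooling (GAP) collapses the raw MFCC matrix over time, and the result is concatenated with a CNN embedding (a stand-in for ResNet50's final layer) before classification over the 12 reciter classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes (hypothetical): 13 MFCC coefficients over 100 frames,
# and a 2048-d vector standing in for ResNet50's pooled final layer.
n_mfcc, n_frames = 13, 100
mfcc = rng.normal(size=(n_mfcc, n_frames))
resnet_embedding = rng.normal(size=2048)

# Global Average Pooling of the raw MFCCs: average over the time axis,
# giving one value per coefficient.
gap_mfcc = mfcc.mean(axis=1)                 # shape: (13,)

# Fusion Layer sketched as concatenation of the two feature views.
fused = np.concatenate([resnet_embedding, gap_mfcc])   # shape: (2061,)

# A linear softmax head over the 12 reciter classes (random placeholder weights).
W = rng.normal(size=(12, fused.size)) * 0.01
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
predicted_reciter = int(np.argmax(probs))
print(fused.shape, predicted_reciter)
```

In a trained model the concatenated vector would feed learned dense layers rather than random weights; the point here is only that the two modalities meet by concatenation after pooling.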
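The reported metrics (sensitivity, specificity, precision, F1) all derive from confusion-matrix counts. As a reminder of the definitions — using made-up one-vs-rest counts, since the paper's actual confusion matrix is not reproduced here:

```python
# Hypothetical one-vs-rest counts for a single reciter class.
tp, fn, fp, tn = 790, 20, 10, 8324

sensitivity = tp / (tp + fn)   # recall / true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
precision   = tp / (tp + fp)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
accuracy = (tp + tn) / (tp + fn + fp + tn)
print(sensitivity, specificity, precision, f1, accuracy)
```

For the 12-class problem these would typically be computed per class and then averaged, though the abstract does not state which averaging the authors used.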
Journal description:
The Egyptian Informatics Journal is published by the Faculty of Computers and Artificial Intelligence, Cairo University. The Journal provides a forum for state-of-the-art research and development in the fields of computing, including computer science, information technologies, information systems, operations research, and decision support. Submissions of innovative, previously unpublished work in the subjects covered by the Journal are encouraged, whether from academic, research, or commercial sources.