{"title":"Resource management optimisation of OFDM-IDMA system for 5G multi-tier backhaul networks","authors":"Altaf Osman Mulani , Nilima S. Warade , Osamah Ibrahim Khalaf , Qusay Bsoul , Dattatray Waghole , Makarand Jadhav , Vaishali Satish Jadhav , Akram Bennour , Firas Zawaideh , Deema Mohammed Alsekait , Diaa Salama AbdElminaam","doi":"10.1016/j.eij.2025.100756","DOIUrl":"10.1016/j.eij.2025.100756","url":null,"abstract":"<div><div>In anticipation of 5G, there has been a surge in demand lately for the quickest wireless networks. Radio interference and resource control are the primary problems in heterogeneous and multi-tier 5G networks. The capacity of a 5G network has been enhanced by the suggested resource and interference control technique. The QPSK modulation technique is used in the simulation on a MIMO channel. In addition to achieving the maximum system throughput, the suggested power control and joint distributed cell association approach lowered energy usage, delayed traffic, and latency.</div><div>Furthermore, a reduced signal-to-interference ratio is necessary for high-priority consumers. Comparing the simulation results to the current resource-aware and distance-aware approaches, it is evident that an improvement in total data rate performance is achieved. As a result, the suggested strategy may effectively manage radio resources for 5G networks. Ultimately, there is a 35 % increase in QoS performance in terms of throughput, a 32 % reduction in delay, and a 28 % reduction in jitter.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"31 ","pages":"Article 100756"},"PeriodicalIF":4.3,"publicationDate":"2025-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144826606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A semantic-based model for the management of people with reduced mobility in airport facilities","authors":"Juan José Herrera-Martín , Gonçal Costa , Iván Castilla-Rodríguez , Evelio José González","doi":"10.1016/j.eij.2025.100747","DOIUrl":"10.1016/j.eij.2025.100747","url":null,"abstract":"<div><div>The growing need for inclusive transportation systems has emphasized the importance of addressing the challenges faced by people with reduced mobility (PRM) in airport environments. Nonetheless, its correct performance is subject to different threats and uncontrollable aspects: flight delays, no-show passengers, gate changes, or a high volume of non-registered last-minute PRM passengers, among others. Increasingly, PRM service providers try to deal with such problems by relying on software tools connected to airport information systems to obtain up-to-date data (e.g., flight status, estimated time of arrival or departure of flights, etc.). However, there is no standard representation of the data within this domain that may support the development of aiding tools and enhance their features. To respond to this need, in this article we present an ontology to represent the data domain of PRM services management in airport facilities. The aim is to facilitate the access and combination of the data from different sources under a standardized approach and common understanding of the terminology. The article describes the steps that have been followed to develop the ontology, as well as some usage examples.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"31 ","pages":"Article 100747"},"PeriodicalIF":4.3,"publicationDate":"2025-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144809434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"State-aware access control for cyber-physical-social space: Model and policy security assurance","authors":"Yan Cao , Changbo Ke , Dajuan Fan , Yuan Ping , Quanxin Yang , MengKe Yao","doi":"10.1016/j.eij.2025.100749","DOIUrl":"10.1016/j.eij.2025.100749","url":null,"abstract":"<div><div>The cyber–physical-social space provides its users with a comfortable and convenient environment for work or living, achieved through the integration of state information from the cyber world, the physical world and the social world. While this integration creates a smart environment, it also presents a significant challenge to access control methods. This paper addresses the evolving access control requirements in the cyber–physical-social space and introduces a state-aware access control model along with an access control policy security assurance mechanism. To articulate the contextual state and state transformations in the cyber, physical, and social worlds, we propose the cyber–physical-social state description method. Building upon this method, we construct a state-aware access control model to precisely define the security requirements of the cyber–physical-social space. Additionally, we introduce a liveness requirement-oriented access control policy generation method and a safety requirement verification method to analyze how changes in the state of humans, cyber data, and physical entities impact authorization. Through a case study involving an intelligent hospital, we demonstrate that the proposed model possesses rich semantics and effectively conveys the security requirements of the cyber–physical-social space. The access control policy set, generated using our proposed methods, successfully avoids issues of missing and incorrect authorizations, ensuring a robust and reliable security system.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"31 ","pages":"Article 100749"},"PeriodicalIF":4.3,"publicationDate":"2025-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144779773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ActivFairNet: A novel framework for mitigating bias in deep learning networks using activation map-based fairness regularization","authors":"Asmaa AbdulQawy , Elsayed Sallam , Amr Elkholy","doi":"10.1016/j.eij.2025.100739","DOIUrl":"10.1016/j.eij.2025.100739","url":null,"abstract":"<div><div>The rapid advancements in Artificial Intelligence underscore the pressing need to address fairness in deep learning, particularly in critical fields like healthcare where decisions have direct and significant impacts on lives. Despite considerable progress, biases associated with sensitive attributes such as gender, race, and age remain pervasive, presenting substantial challenges to achieving equitable and reliable outcomes. This paper introduces ActivFairNet, a novel bias mitigation framework that integrates activation maps as a fairness regularizer. The framework ensures unbiased representation learning across demographic groups while maintaining or enhancing predictive accuracy, making it both effective and practical for real-world applications. The ActivFairNet is evaluated in a COVID-19 detection case study using a chest X-ray dataset collected from five public repositories. Its effectiveness was tested on three models with varying gender distributions, employing two widely recognized deep learning architectures, DenseNet121 and Xception. The results demonstrate that the ActivFairNet Regularizer consistently outperforms three established bias mitigation techniques, significantly reducing bias across key fairness metrics. Specifically, the method achieves substantial improvements, including a Statistical Parity Difference (SPD) of 0.003 (down from 0.162), an Equal Opportunity Difference (EOD) of 0.000 (down from 0.276), and an Average Odds Difference (AOD) of 0.002 (down from 0.185). The ActivFairNet Regularizer offers a practical, scalable, and ethically aligned solution for mitigating demographic bias in medical imaging, contributing to the advancement of fair and reliable AI systems in real-world healthcare environments.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"31 ","pages":"Article 100739"},"PeriodicalIF":4.3,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144772531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ensuring quality in software requirement engineering process: A comparative study","authors":"Sadia Khalid , Uzair Rasheed , Uzair Khaleeq uz Zaman , Majed Alfayad , Mohammed Assiri , Wasi Haider Butt , Mamoona Humayun , Mahmood Niazi","doi":"10.1016/j.eij.2025.100754","DOIUrl":"10.1016/j.eij.2025.100754","url":null,"abstract":"<div><div>Poorly managed requirements can lead to a software failure; hence, qualitative Requirement Engineering (RE) is essential for the success of software. Researchers have pointed out numerous ways to aid the RE process. This study aims to find practices deemed fit and unfit for quality improvement of the Software Requirement Engineering (SRE) process from the literature and verify them by software professionals. The study selects 57 articles published in journals and conferences of IEEE Xplore, ScienceDirect, and ACM Digital Library, from 2018 to 2023 to answer three research questions, yielding 8 quality practices. An industrial survey is then formulated to find the trends against those practices from the software industry. The findings from the literature and industrial survey are then compared. The comparison between literature and professional views proved ambiguous requirements to be the top cause of prolonged analysis and project failure. Also, requirement elicitation and analysis are the toughest RE activities. The quality practices pointed out by the literature make a positive difference in the quality of the developmental process of software and, if not followed, result in poorly managed or low-quality software products. Insufficient investment of time in engineering requirements can lead to cost and budget overruns, ultimately culminating in software failure.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"31 ","pages":"Article 100754"},"PeriodicalIF":4.3,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144779772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing cancer detection in medical imaging through federated learning and explainable artificial intelligence: A hybrid approach for optimized diagnostics","authors":"B. Karthiga , K.R. Praneeth , V. Saravanan , T.K. Rama Krishna Rao","doi":"10.1016/j.eij.2025.100751","DOIUrl":"10.1016/j.eij.2025.100751","url":null,"abstract":"<div><div>The diagnosis of cancer are crucial medical responsibilities that assist medical practitioners correctly classify and treat them accordingly. Machine learning applications are widely used in medical field as they identify patterns from clinical data. Traditional machine learning approaches often struggle with accurately identifying malignancies due to the complexity and variability of medical data. This study aims to enhance the accuracy and interpretability of cancer detection models by integrating LightGBM with SHAP (SHapley Additive exPlanations) within a federated learning framework. The innovation of this research lies in the combination of LightGBM’s ability in handling high dimensional feature of large data size with SHAP’s detailed interpretability metrics. This integration not only facilitates accurate cancer detection but also provides insights into the contributing factors of the model’s predictions, making it easier for healthcare professionals to trust and utilize these models. The federated learning approach allows multiple institutions to collaborate in training the model without sharing raw patient data, ensuring data privacy while benefiting from diverse datasets. The integrated framework achieved a remarkable accuracy of 98.3% in cancer detection, with precision, recall, and F1 scores of 97.8%, 97.2%, and 95%, respectively. These results indicate that the proposed method effectively identifies cancer cases while maintaining high interpretability, allowing for better decision-making in clinical settings. The integration of LightGBM with SHAP within a federated learning framework provides a powerful and effective solution for cancer detection.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"31 ","pages":"Article 100751"},"PeriodicalIF":4.3,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144772541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An analysis of acoustic features for accented speech classification","authors":"Apar Garg , Yassine Aribi , Turke Althobaiti , Tanmay Bhowmik","doi":"10.1016/j.eij.2025.100743","DOIUrl":"10.1016/j.eij.2025.100743","url":null,"abstract":"<div><div>Spoken language is a topic which lured researchers for a long duration. Due to the variety of different voice-based products, the application of spoken language can be observed in various places. Several home assistant systems have become an integral part of our lives as they make mundane tasks such as setting up reminders and checking emails easy. However, non-native English speakers frequently face problems in using automated assistants because of accented speech. This study presents an analysis of speech accent features for accented speech classification. The aim is to identify which speech features are the most important for accurately classifying accents in spoken language. We collected a dataset of accented speech samples and used various feature extraction techniques to extract relevant features from the speech signal. These features included mel frequency cepstral coefficients, zero-crossing rate, spectral features, chroma features, etc. Machine learning algorithms are used to classify the accents based on the extracted features and achieve an overall accuracy of 86.67%. This research work is prompted by the increasing need to develop robust speech recognition systems that can generalize across regional accents. The performance of standard automatic speech recognition systems drops very often due to accented speech. Several studies tend towards deep learning-based solutions; however, there is a lack of focused analysis of the performance of traditional acoustic features in accent discrimination tasks. This study targets to bridge that gap by performing a comparative study on selected acoustic features. The analysis of speech accent features presented in this study can be useful to develop robust speech accent classification systems for applications such as language learning, speech recognition, and accent identification.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"31 ","pages":"Article 100743"},"PeriodicalIF":5.0,"publicationDate":"2025-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144678822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A deep fake detection approach for cyber security threat based on deep learning and diffusion-osmosis model","authors":"Basim Najim Al-din Abed, J. Karimpour, Farnaz Mahan","doi":"10.1016/j.eij.2025.100748","DOIUrl":"10.1016/j.eij.2025.100748","url":null,"abstract":"<div><div>Advancements in the realm of digital image alteration have resulted in the widespread adoption of deep fake technology, presenting notable obstacles to the field of digital forensics and cyber security. This article suggests an original methodology that combines hybrid anisotropic diffusion-variational osmosis for image filtration, U-Net for segmentation and edge detection, and a hybrid graph convolutional network (GCN) employing separable convolution for categorization in deep fake detection. The amalgamated filtration technique improves image fidelity while conserving crucial particulars, facilitating accurate segmenting and edge detection essential for recognizing altered areas. The GCN framework utilizes graph-oriented learning to extract intricate features, supported by effective separable convolutions for precise categorization of genuine and fake images. The suggested approach is assessed using extensive datasets, showcasing superior performance in detecting deep fake images compared to conventional techniques. Besides technological advancements, this study highlights the broader repercussions of deep fake detection in cyber security. Deep fakes, adept at duping automated systems and human perception, present substantial risks encompassing misinformation, identity theft, financial deception, and breaches in national security. Efficient detection techniques, such as those outlined here, play a crucial role in mitigating these dangers and upholding digital trust and integrity. The proposed methodology is positioned within the wider framework of cyber security, accentuating its significance in combating the escalating threats and offenses linked to deep fake technology. It accentuates the technical progressions while emphasizing the vital necessity of robust identification mechanisms in tackling cyber security challenges arising from digital alterations. The empirical results illustrate that the suggested technique outperforms other deep fake image detectors and achieves a remarkable accuracy rate of 99.71% in the dataset utilized in this study.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"31 ","pages":"Article 100748"},"PeriodicalIF":5.0,"publicationDate":"2025-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144663360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anomaly detection in graph databases using graph neural networks: Identifying unusual patterns in graphs","authors":"Ismail Chetoui , Essaid El Bachari , Mohamed El Adnani , Mohamed Ouhssini","doi":"10.1016/j.eij.2025.100735","DOIUrl":"10.1016/j.eij.2025.100735","url":null,"abstract":"<div><div>Anomaly detection in graph-structured data is a critical task in various applications, including social networks, fraud detection, and educational platforms. This paper introduces a novel hybrid architecture that leverages Graph Convolutional Networks (GCN), Graph Attention Networks (GAT), and Graph Autoencoders (GAEs) to detect anomalies across nodes, edges, and subgraphs. The proposed model combines the strengths of GCNs for extracting local structural features, GATs for adaptive attention-based neighborhood aggregation, and GAEs for unsupervised graph reconstruction. By integrating these components, our approach generates robust embeddings that are used to calculate anomaly scores based on reconstruction errors, density estimation, and embedding distances. These scores are then aggregated using a weighted hybrid function, enabling adaptive and flexible anomaly scoring. Experimental results on benchmark datasets demonstrate that the hybrid model significantly improves the detection of anomalous nodes, edges, and subgraphs compared to existing methods. This work provides a scalable and effective framework for anomaly detection in graphs, offering insights into the interpretability and adaptability of GNN-based anomaly scoring functions.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"31 ","pages":"Article 100735"},"PeriodicalIF":5.0,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144633404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimization of real-time transmission and coding algorithm for high quality film and television content based on 6G wireless communication technology","authors":"Jiabin Fu, Shu Zhang","doi":"10.1016/j.eij.2025.100745","DOIUrl":"10.1016/j.eij.2025.100745","url":null,"abstract":"<div><div>Although existing video coding standards, such as H.264/AVC, H.265/HEVC, and H.266/VVC, have made some progress in compression efficiency, they still have limitations such as high computational complexity and poor adaptability to different content types. In terms of AI-based coding methods, there is still a lack of systematic research in fully leveraging the potential of 6G networks and achieving real-time transmission of high-quality film and television content. In this paper, we propose an innovative video coding framework that aims to achieve efficient and adaptive video transmission by combining traditional video coding techniques with deep learning models. The core of the framework lies in the use of convolutional neural network (CNN) to enhance the motion estimation accuracy and optimize the residual information by adaptive loop filter. In the motion estimation stage, the CNN-based model generates a high-precision motion vector field, and trains the model by minimizing the mean square error between the predicted and true values. Meanwhile, a multi-layer coding technique is introduced to adapt to different network conditions, where each layer represents a different bit rate and quality level, enabling the end device to select the appropriate decoding layer according to the current network conditions. In addition, the adaptive loop filter dynamically adjusts its parameters according to the video content to reduce compression artifacts and maintain image details. To evaluate the performance of the framework, we conduct experiments on multiple publicly available datasets, including Vid4, UHD-TEST, and HEVC standard test sequences. The experimental results show that our framework significantly outperforms conventional H.264/AVC, H.265/HEVC, VP9 and next-generation VVC coding standards, as well as deep learning-based DVC and DLVC methods in objective metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). In particular, our framework achieves a PSNR of 42.5 dB and a SSIM value of 0.98 at 1000 kbps bit rate. In addition, our framework also performs well in 6G wireless communication environments in terms of transmission delay and packet loss, with an average transmission delay of 100 ms and a packet loss rate of 1.5 %.</div></div>","PeriodicalId":56010,"journal":{"name":"Egyptian Informatics Journal","volume":"31 ","pages":"Article 100745"},"PeriodicalIF":5.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144633409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}