Anonymous wireless body area networks authentication protocol based on biometrics and asymmetric key cryptography
Keyan Abdul-Aziz Mutlaq, Mushtaq A. Hasson, Zaid Ameen Abduljabbar, Vincent Omollo Nyangaresi, Mustafa A. Al Sibahee, Junchao Ma, Samir M. Umran, Ali Hasan Ali, Abdulla J.Y. Aldarwish, Husam A. Neamah
Array, Volume 27, Article 100463. Published 2025-07-14. DOI: 10.1016/j.array.2025.100463

Wireless Body Area Networks (WBANs) have been extensively deployed to offer remote patient monitoring that facilitates timely diagnosis and treatment. This has greatly helped reduce costs and the strain on limited healthcare resources. However, the exchange of sensory patient data across public wireless communication media exposes the communication process to a myriad of security threats. To curb these security challenges, past research has deployed techniques such as identity-based and public key cryptosystems to develop schemes for this environment. Nevertheless, the complex mathematical computations in the majority of these schemes render them inefficient for resource-constrained sensors. In this work, we combine asymmetric cryptography and user biometrics to develop a robust authentication protocol for WBANs. The Burrows–Abadi–Needham (BAN) logic is then deployed to formally analyze the security posture of the developed scheme, with results indicating that it offers secrecy and reliability of the negotiated session keys. In addition, an informal security analysis shows that it is robust against typical WBAN attacks such as impersonation and privileged-insider attacks. From the performance perspective, our protocol incurs the lowest communication and computation costs.
RAFT: Robust Adversarial Fusion Transformer for multimodal sentiment analysis
Rui Wang, Duyun Xu, Lucia Cascone, Yaoyang Wang, Hui Chen, Jianbo Zheng, Xianxun Zhu
Array, Volume 27, Article 100445. Published 2025-07-14. DOI: 10.1016/j.array.2025.100445

Multimodal sentiment analysis (MSA) has emerged as a key technology for understanding human emotions by jointly processing text, audio, and visual cues. Despite significant progress, existing fusion models remain vulnerable to real-world challenges such as modality noise, missing channels, and weak inter-modal coupling. This paper addresses these limitations by introducing RAFT (Robust Adversarial Fusion Transformer), which integrates cross-modal and self-attention mechanisms with noise-imitation adversarial training to strengthen feature interactions and resilience under imperfect inputs. We first formalize the problem of noisy and incomplete data in MSA and demonstrate how adversarial noise simulation can bridge the gap between clean and corrupted modalities. RAFT is evaluated on two benchmark datasets, MOSI and MOSEI, where it achieves competitive binary classification accuracy (greater than 80%) and fine-grained sentiment performance (5-class accuracy of 57%), while reducing mean absolute error and improving Pearson correlation by up to 2% over state-of-the-art baselines. Ablation studies confirm that both adversarial training and context-aware modules contribute substantially to robustness gains. Looking ahead, we plan to refine noise-generation strategies, explore more expressive fusion architectures, and extend RAFT to handle long-form dialogues and culturally diverse expressions. Our results suggest that RAFT lays a solid foundation for reliable, real-world sentiment analysis in noisy environments.
{"title":"Dual CNN for photovoltaic electroluminescence images microcrack detection","authors":"Khouloud Samrouth , Souha Nazir , Nader Bakir , Nadine Khodor","doi":"10.1016/j.array.2025.100442","DOIUrl":"10.1016/j.array.2025.100442","url":null,"abstract":"<div><div>Accurate detection of microcracks in photovoltaic (PV) cells is crucial for ensuring the efficiency and longevity of solar panels. This study proposes a dual convolutional neural network (Dual-CNN) architecture to enhance microcrack detection in electroluminescence (EL) PV images. By integrating shallow feature extraction with deep semantic analysis, the proposed model effectively captures both fine-grained local textures and high-level structural patterns, addressing the limitations of conventional single-stream CNN models that primarily focus on coarse-grained features. Experimental evaluations on an EL image dataset demonstrate that the Dual-CNN approach significantly improves defect localization and classification with an accuracy of 85.33%, a recall of 71.71% and an F1 score of 73.9%, paving the way for more robust and automated PV inspection systems in the solar energy sector.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"27 ","pages":"Article 100442"},"PeriodicalIF":2.3,"publicationDate":"2025-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144655498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A lightweight remote sensing image detection model with feature aggregation diffusion network
Xiaohui Cheng, Xukun Wang, Yun Deng, Qiu Lu, Yanping Kang, Jian Tang, Yuanyuan Shi, Junyu Zhao
Array, Volume 27, Article 100459. Published 2025-07-11. DOI: 10.1016/j.array.2025.100459

With accelerating land-use changes driven by urbanization and resource extraction, accurate detection of landscape objects in remote sensing imagery has become pivotal for sustainable land management. However, existing deep learning models often face challenges in balancing detection accuracy and computational efficiency, especially for small objects in complex scenes. To address this, we propose LightFAD-YOLO, a lightweight model integrating feature aggregation diffusion for multi-scale context propagation, enhancing small object detection in complex scenes. The central convolutional detection head combines detail-enhanced convolution and group normalization, reducing computational costs by 23.4% while maintaining precision. A dilation-wise residual module further optimizes multi-scale feature extraction. Evaluated on benchmark datasets, LightFAD-YOLO achieves 1.7% higher mAP@0.5 and 6.4% improved mAP@0.5:0.95 over baseline models, with a 9.9% lower computational load. Operating at 297.2 FPS with only 2.3M parameters, it enables real-time deployment on edge devices for land-use monitoring and infrastructure detection, supporting sustainable land management.
FED-GEM-CN: A federated dual-CNN architecture with contrastive cross-attention for maritime radar intrusion detection
Md. Alamgir Hossain
Array, Volume 27, Article 100456. Published 2025-07-10. DOI: 10.1016/j.array.2025.100456

The escalating complexity of maritime operations and the integration of advanced radar systems have heightened the susceptibility of maritime infrastructures to sophisticated cyber intrusions. Ensuring resilient and privacy-preserving intrusion detection in such environments necessitates innovative solutions capable of learning from distributed, heterogeneous data sources without compromising sensitive information. This study introduces FED-GEM-CN, a novel federated learning framework designed explicitly for maritime radar intrusion detection. The proposed architecture integrates dual parallel convolutional neural network (CNN) pipelines to independently process network and radar modality features, which are subsequently fused via a multi-head cross-attention mechanism to capture intricate inter-modal dependencies. To enhance feature discriminability, a supervised contrastive learning paradigm is incorporated, while a gradient episodic memory (GEM) buffer strategically retains challenging instances to bolster model robustness against hard-to-detect intrusions. Operating under a federated learning scheme, FED-GEM-CN facilitates collaborative model optimization across distributed radar nodes, preserving data locality and mitigating privacy risks inherent in centralized approaches. Experimental evaluations conducted on a comprehensive real-world maritime radar dataset reveal that FED-GEM-CN achieves superior performance, attaining an overall accuracy exceeding 99% and macro F1-scores above 0.97 across federated rounds, with convergence typically observed within 15 communication iterations. These findings substantiate the efficacy of the proposed system in delivering robust, energy-efficient, and privacy-aware intrusion detection tailored to the constraints of maritime radar networks. The approach underscores a significant advancement toward deploying intelligent, distributed cybersecurity solutions within critical maritime infrastructures.
Artificial Intelligence for data modeling in triboelectric nanogenerators
Chenjia Li, Ali Matin Nazar
Array, Volume 27, Article 100451. Published 2025-07-09. DOI: 10.1016/j.array.2025.100451

This review presents a comprehensive study of the integration of Artificial Intelligence (AI) with Triboelectric Nanogenerators (TENGs), emphasizing their convergence in advancing real-time sensing, signal interpretation, and self-powered systems. Over 20 experimental implementations are analyzed, combining AI models such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Support Vector Machines (SVMs), and Long Short-Term Memory (LSTM) networks with TENGs across multiple operational modes, including contact-separation, lateral sliding, and freestanding configurations. Application cases discussed include AI-powered triboelectric smart socks achieving 96.67% activity recognition accuracy, soft robotic grippers with 98.1% object identification precision, and wearable pulse sensors for continuous blood pressure monitoring using personalized machine learning algorithms. Quantitative analyses of machine learning frameworks are presented, with CNNs and ANNs demonstrating up to 99.32% accuracy in TENG signal processing tasks. Deep learning techniques are shown to enhance noise filtering, feature extraction, and adaptive feedback, transforming TENGs into intelligent platforms for healthcare, robotics, IoT systems, and smart environments. The review also addresses key challenges such as data variability, environmental robustness, and algorithmic scalability, and outlines future directions in hybrid energy systems, adaptive algorithms, and cross-disciplinary collaboration for sustainable, intelligent sensing technologies.
Towards the design of a particle swarm optimization ontology for object classification
Nyaradzo Alice Tsedura, Ernest Bhero, Colin Chibaya
Array, Volume 27, Article 100449. Published 2025-07-09. DOI: 10.1016/j.array.2025.100449

This article proposes an ontology blueprint inspired by the key components of the particle swarm system to address the object classification problem. The identified key components (particle, swarm, search space, goal, environment, and fitness measures) were independently evaluated based on their sub-entities, relationships, data flow, and storage. These unit designs were then integrated into a comprehensive particle swarm system ontology. A technology assessment model, in the form of a questionnaire, was distributed to 15 software engineering experts to evaluate the ontology against 10 metrics, including completeness, correctness, usefulness, and scalability. Results showed that 88% of responses rated the designs as good, while 12% rated them average or poor. These findings validate the proposed ontology designs, with potential for further refinement based on expert feedback.
{"title":"Efficacy of Smart City Data Layers in virtual reality for emergency evacuation behaviours","authors":"Reinout Wiltenburg , William Hurst , Frida Ruiz Mendoza , Caspar Krampe , Bedir Tekinerdogan","doi":"10.1016/j.array.2025.100457","DOIUrl":"10.1016/j.array.2025.100457","url":null,"abstract":"<div><div>Virtual reality is increasingly finding a footing within a range of movement-based studies for different fire emergency settings. Compared to real-world experiments, the core advantage of a virtual reality approach is the safe experiment process and the possibility of creating bespoke and flexible solutions for both testing and training scenarios of smart city buildings. Additionally, virtual reality environments cater for the integration of various layers of information relating to evacuation protocols within digitally recreated public buildings; thus, enabling studies on understanding how the role of extra information can support decision-making when evacuating a building. One way of presenting this information is by employing the Smart City Data Layers which has yet to be investigated, thus, in this study, a Smart City Data Layer-enhanced virtual reality environment is presented. The environment is set up to provide extra information for the participants on how to escape the building provided and, with a control environment providing no additional information for the participants. The behavioural analysis showed that the Smart City Data Layer led to longer time and distance behaviour during the escape in two out of three escape scenarios. The result shows how crucial the design is, and the amount of information presented. For example, findings indicate that too much information leads to potential information overload. In addition, nausea and dizziness were found to be significant variables (p = 0.001, t = −3.611 and p = 0.006, t = −2.965) influencing the time and distance for escape when tested in this environment. Despite the contradictory results, 71 % of the 34 participants found the VR technology very helpful in visualising green outlines for doors during the escape. However, the results suggest that the inclusion of the Smart City Data Layer may lead to overconfidence, resulting in a longer evacuation time.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"27 ","pages":"Article 100457"},"PeriodicalIF":2.3,"publicationDate":"2025-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144632336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An improved deep learning approach for automated detection of multiclass eye diseases
Feudjio Ghislain, Saha Tchinda Beaudelaire, Romain Atangana, Tchiotsop Daniel
Array, Volume 27, Article 100452. Published 2025-07-05. DOI: 10.1016/j.array.2025.100452

Context: Early detection of ophthalmic diseases, such as drusen and glaucoma, can be facilitated by analyzing changes in the retinal microvascular structure. The implementation of algorithms based on convolutional neural networks (CNNs) has seen significant growth in the automation of disease identification. However, the complexity of these algorithms increases with the diversity of pathologies to be classified. In this study, we introduce a new lightweight CNN-based algorithm for the classification of multiple categories of eye diseases, using discrete wavelet transforms to enhance feature extraction.

Methods: The proposed approach integrates a simple CNN architecture optimized for multi-class and multi-label classification, with an emphasis on maintaining a compact model size. We improved the feature extraction phase by implementing multi-scale decomposition techniques, such as biorthogonal wavelet transforms, allowing us to capture both fine and coarse features. The developed model was evaluated using a dataset of retinal images categorized into four classes, including a composite class for less common pathologies.

Results: The feature extraction based on biorthogonal wavelets enabled our model to achieve perfect precision, recall, and F1-score for half of the targeted classes. The overall average accuracy of the model reached 0.9621.

Conclusion: The integration of biorthogonal wavelet transforms into our CNN model has proven effective, surpassing the performance of several similar algorithms reported in the literature. This advancement not only enhances the accuracy of real-time diagnoses but also supports the development of sophisticated tools for the detection of a wide range of retinal pathologies, thereby improving clinical decision-making.
{"title":"Enhancing Wireless Sensor Network performance: A Novel Adaptive Grid-Based Clustering Hierarchy protocol","authors":"Mohammad Ridwan , Teguh Wahyono , Irwan Sembiring , Rini Darmastuti","doi":"10.1016/j.array.2025.100440","DOIUrl":"10.1016/j.array.2025.100440","url":null,"abstract":"<div><div>Wireless Sensor Networks (WSNs) are essential for data collection in remote and energy-constrained environments such as forests and deserts. However, traditional clustering protocols like LEACH often face limitations including uneven energy consumption, inefficient Cluster Head (CH) selection, and high communication overhead, which collectively degrade network performance. This paper introduces AG-LEACH (Adaptive Grid-Based LEACH), a novel clustering protocol that incorporates dynamic grid partitioning to optimize cluster formation, CH selection, and data transmission. AG-LEACH employs adaptive grid sizing, adaptive grid merging, and adaptive cluster head selection, enabling dynamic responses to variations in node energy and network topology changes. Simulation results demonstrate that AG-LEACH outperforms conventional protocols by maintaining higher energy efficiency, prolonging network lifetime, and improving data throughput while minimizing packet loss. The protocol also reduces communication distance by intelligently routing data, leading to lower transmission energy and latency. These findings indicate that AG-LEACH is a scalable and adaptive solution for energy-efficient WSN deployments, with strong potential for real-world applications in large-scale and dynamic sensing environments.</div></div>","PeriodicalId":8417,"journal":{"name":"Array","volume":"27 ","pages":"Article 100440"},"PeriodicalIF":2.3,"publicationDate":"2025-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144564108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}