{"title":"Feature-fused residual network for time series classification","authors":"Yanxuan Wei , Mingsen Du , Teng Li , Xiangwei Zheng , Cun Ji","doi":"10.1016/j.jksuci.2024.102227","DOIUrl":"10.1016/j.jksuci.2024.102227","url":null,"abstract":"<div><div>In various fields such as healthcare and transportation, accurately classifying time series data can provide important support for decision-making. To further improve the accuracy of time series classification, we propose a Feature-fused Residual Network based on Multi-scale Signed Recurrence Plot (MSRP-FFRN). This method transforms one-dimensional time series into two-dimensional images, representing the temporal correlation of time series in a two-dimensional space and revealing hidden details within the data. To enhance these details further, we extract multi-scale features by setting receptive fields of different sizes and using adaptive network depths, which improves image quality. To evaluate the performance of this method, we conducted experiments on 43 UCR datasets and compared it with nine state-of-the-art baseline methods. The experimental results show that MSRP-FFRN ranks first on the critical difference diagram, achieving the highest accuracy on 18 datasets with an average accuracy of 89.9%, making it the best-performing method overall. Additionally, the effectiveness of this method is further validated through metrics such as Precision, Recall, and F1 score. 
Results from ablation experiments also highlight the efficacy of the improvements made by MSRP-FFRN.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 10","pages":"Article 102227"},"PeriodicalIF":5.2,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142657771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
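The MSRP-FFRN abstract above rests on turning a one-dimensional series into a recurrence-plot image. As a hedged illustration only — the paper's exact multi-scale signed formulation is not reproduced here — a minimal thresholded signed recurrence plot can be sketched as:

```python
import numpy as np

def signed_recurrence_plot(x, eps=0.5):
    """Toy signed recurrence plot: R[i, j] = sign(x[i] - x[j]) when the pair
    lies within distance eps, else 0. Illustrates the series-to-image idea;
    the published MSRP construction may differ."""
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - x[None, :]      # pairwise differences
    mask = np.abs(diff) <= eps          # recurrence threshold
    return np.sign(diff) * mask         # each pixel is -1, 0, or +1

series = np.sin(np.linspace(0, 4 * np.pi, 64))
img = signed_recurrence_plot(series, eps=0.3)
print(img.shape)                        # a (64, 64) image for a 2-D CNN
```

The resulting antisymmetric image is what a residual CNN such as the one described would consume in place of the raw series.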
{"title":"Low-light image enhancement: A comprehensive review on methods, datasets and evaluation metrics","authors":"Zhan Jingchun , Goh Eg Su , Mohd Shahrizal Sunar","doi":"10.1016/j.jksuci.2024.102234","DOIUrl":"10.1016/j.jksuci.2024.102234","url":null,"abstract":"<div><div>Enhancing low-light images in computer vision is a significant challenge that requires innovative methods to improve robustness. Low-light image enhancement (LLIE) enhances the quality of images affected by poor lighting conditions by implementing various loss functions such as reconstruction, perceptual, smoothness, adversarial, and exposure. This review analyses and compares different methods, ranging from traditional to cutting-edge deep learning methods, showcasing the significant advancements in the field. Although similar reviews of LLIE exist, this paper not only updates the knowledge but also examines recent deep learning methods from various perspectives and interpretations. The methodology used in this paper compares different methods from the literature and identifies potential research gaps. This paper highlights recent advancements in the field by classifying methods into three classes, reflecting the continuous enhancements in LLIE methods. These improved methods use different loss functions, showing higher efficacy through metrics such as Peak Signal-to-Noise Ratio, Structural Similarity Index Measure, and Naturalness Image Quality Evaluator. The research emphasizes the significance of advanced deep learning techniques and comprehensively compares different LLIE methods on various benchmark image datasets. 
This research is a foundation for scientists to illustrate potential future research directions.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 10","pages":"Article 102234"},"PeriodicalIF":5.2,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142657773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
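The review above scores LLIE methods with metrics such as Peak Signal-to-Noise Ratio. PSNR follows directly from the mean squared error; a minimal sketch of the standard definition, not tied to any particular method in the review:

```python
import numpy as np

def psnr(reference, enhanced, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(float) - enhanced.astype(float)) ** 2)
    if mse == 0:
        return float("inf")             # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32)).astype(float)
noisy = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)
print(f"{psnr(ref, noisy):.1f} dB")     # higher dB = closer to the reference
```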
{"title":"Binocular camera-based visual localization with optimized keypoint selection and multi-epipolar constraints","authors":"Guanyuan Feng, Yu Liu, Weili Shi, Yu Miao","doi":"10.1016/j.jksuci.2024.102228","DOIUrl":"10.1016/j.jksuci.2024.102228","url":null,"abstract":"<div><div>In recent years, visual localization has gained significant attention as a key technology for indoor navigation due to its outstanding accuracy and low deployment costs. However, it still encounters two primary challenges: the requirement for multiple database images to match the query image and the potential degradation of localization precision resulting from keypoint clustering and mismatches. In this research, a novel visual localization framework based on a binocular camera is proposed to estimate the absolute positions of the query camera. The framework integrates three core methods: the multi-epipolar constraints-based localization (MELoc) method, the optimal keypoint selection (OKS) method, and a robust measurement method. MELoc constructs multiple geometric constraints to enable absolute position estimation with only a single database image, while OKS and the robust measurement method further enhance localization accuracy by refining the precision of these geometric constraints. 
Experimental results demonstrate that the proposed system consistently outperforms existing visual localization systems across various scene scales, database sampling intervals, and lighting conditions.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 10","pages":"Article 102228"},"PeriodicalIF":5.2,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142657774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
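MELoc, as described above, builds on epipolar geometry between views. A minimal sketch of the single-pair epipolar constraint it multiplies up — the standard two-view relation x2ᵀ E x1 = 0, with an illustrative rig rather than the paper's parameters:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Synthetic binocular rig: identity rotation, pure horizontal baseline
# (numbers are illustrative, not taken from the paper).
R = np.eye(3)
t = np.array([0.2, 0.0, 0.0])
E = skew(t) @ R                        # essential matrix for X_right = R @ X_left + t

X_left = np.array([0.5, -0.3, 2.0])    # a 3-D point in the left camera frame
X_right = R @ X_left + t
x1 = X_left / X_left[2]                # normalized homogeneous image points
x2 = X_right / X_right[2]

residual = float(x2 @ E @ x1)          # epipolar constraint: ~0 for a true match
print(abs(residual) < 1e-9)            # True
```

Stacking many such residuals over matched keypoints is what turns the constraint into a position estimate; mismatched or clustered keypoints inflate these residuals, which is the failure mode OKS targets.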
{"title":"Real-time semantic segmentation for autonomous driving: A review of CNNs, Transformers, and Beyond","authors":"Mohammed A.M. Elhassan , Changjun Zhou , Ali Khan , Amina Benabid , Abuzar B.M. Adam , Atif Mehmood , Naftaly Wambugu","doi":"10.1016/j.jksuci.2024.102226","DOIUrl":"10.1016/j.jksuci.2024.102226","url":null,"abstract":"<div><div>Real-time semantic segmentation is a crucial component of autonomous driving systems, where accurate and efficient scene interpretation is essential to ensure both safety and operational reliability. This review provides an in-depth analysis of state-of-the-art approaches in real-time semantic segmentation, with a particular focus on Convolutional Neural Networks (CNNs), Transformers, and hybrid models. We systematically evaluate these methods and benchmark their performance in terms of frames per second (FPS), memory consumption, and CPU runtime. Our analysis encompasses a wide range of architectures, highlighting their novel features and the inherent trade-offs between accuracy and computational efficiency. Additionally, we identify emerging trends, and propose future directions to advance the field. This work aims to serve as a valuable resource for both researchers and practitioners in autonomous driving, providing a clear roadmap for future developments in real-time semantic segmentation. 
More resources and updates can be found at our GitHub repository: <span><span>https://github.com/mohamedac29/Real-time-Semantic-Segmentation-Survey</span></span></div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 10","pages":"Article 102226"},"PeriodicalIF":5.2,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142657830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TFDNet: A triple focus diffusion network for object detection in urban congestion with accurate multi-scale feature fusion and real-time capability","authors":"Caoyu Gu , Xiaodong Miao , Chaojie Zuo","doi":"10.1016/j.jksuci.2024.102223","DOIUrl":"10.1016/j.jksuci.2024.102223","url":null,"abstract":"<div><div>Vehicle detection in congested urban scenes is essential for traffic control and safety management. However, the dense arrangement and occlusion of multi-scale vehicles in such environments present considerable challenges for detection systems. To tackle these challenges, this paper introduces a novel object detection method, dubbed the triple focus diffusion network (TFDNet). Firstly, the gradient convolution is introduced to construct the C2f-EIRM module, replacing the original C2f module, thereby enhancing the network’s capacity to extract edge information. Secondly, by leveraging the concept of the Asymptotic Feature Pyramid Network on the foundation of the Path Aggregation Network, the triple focus diffusion module structure is proposed to improve the network’s ability to fuse multi-scale features. Finally, the SPPF-ELA module employs an Efficient Local Attention mechanism to integrate multi-scale information, thereby significantly reducing the impact of background noise on detection accuracy. Experiments on the VisDrone 2021 dataset reveal that the average detection accuracy of the TFDNet algorithm reached 38.4%, which represents a 6.5% improvement over the original algorithm; similarly, its mAP50:90 performance has increased by 3.7%. Furthermore, on the UAVDT dataset, the TFDNet achieved a 3.3% enhancement in performance compared to the original algorithm. 
TFDNet, with a processing speed of 55.4 FPS, satisfies the real-time requirements for vehicle detection.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102223"},"PeriodicalIF":5.2,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142553417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
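Throughput figures such as TFDNet's 55.4 FPS are typically obtained by timing repeated forward passes after a warmup. A generic measurement harness — the callable here is a stand-in, since the TFDNet model itself is not part of this listing:

```python
import time

def measure_fps(infer, n_frames=200, warmup=20):
    """Average frames per second of a callable over n_frames runs,
    after a warmup that excludes one-off initialization cost."""
    for _ in range(warmup):
        infer()
    start = time.perf_counter()
    for _ in range(n_frames):
        infer()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Stand-in for a detector forward pass (hypothetical workload, not TFDNet).
fake_infer = lambda: sum(i * i for i in range(10_000))
fps = measure_fps(fake_infer)
print(f"{fps:.1f} FPS")
```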
{"title":"Corrigendum to “Effective and scalable black-box fuzzing approach for modern web applications” [J. King Saud Univ. Comp. Info. Sci. 34(10) (2022) 10068–10078]","authors":"Aseel Alsaedi, Abeer Alhuzali, Omaimah Bamasag","doi":"10.1016/j.jksuci.2024.102216","DOIUrl":"10.1016/j.jksuci.2024.102216","url":null,"abstract":"","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102216"},"PeriodicalIF":5.2,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142721482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"General secure encryption algorithm for separable reversible data hiding in encrypted domain","authors":"Hongli Wan, Minqing Zhang, Yan Ke, Zongbao Jiang, Fuqiang Di","doi":"10.1016/j.jksuci.2024.102217","DOIUrl":"10.1016/j.jksuci.2024.102217","url":null,"abstract":"<div><div>The separable reversible data hiding in encrypted domain (RDH-ED) algorithm reserves embedding space for the information before or after encryption and ensures that extracting the information and restoring the image do not interfere with each other. The encryption method employed not only affects the embedding space of the information and separability, but is also crucial for ensuring security. However, the commonly used XOR, scrambling, or combination methods fall short in security, especially against known-plaintext attacks (KPA). Therefore, to improve the security and broad applicability of RDH-ED, this paper proposes a high-security RDH-ED encryption algorithm that can be used to reserve space before encryption (RSBE) and free space after encryption (FSAE). During encryption, the image undergoes block XOR, global intra-block bit-plane scrambling (GIBS) and inter-block scrambling sequentially. The GIBS key is created through chaotic mapping transformation. Subsequently, two RDH-ED algorithms based on this encryption are proposed. Experimental results indicate that the algorithm outlined in this paper maintains consistent key communication traffic post key conversion. Additionally, its computational complexity remains at a constant level, satisfying separability criteria, and is suitable for both RSBE and FSAE methods. 
Simultaneously, while retaining the security of a single encryption technique, we have expanded the key space to 2<sup>8Np</sup> × Np! × (8!)<sup>Np</sup>, enabling resilience against various existing attack methods. Notably, in KPA testing scenarios, the average decryption success rates are a mere 0.0067% and 0.0045%, highlighting its exceptional security. Overall, this virtually unbreakable system significantly enhances image security while preserving an appropriate embedding capacity.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102217"},"PeriodicalIF":5.2,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
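The abstract's premise is that plain XOR encryption fails against known-plaintext attacks. The weakness is easy to demonstrate: XOR-ing a known plaintext with its ciphertext recovers the keystream, which then decrypts anything else protected by the same keystream (toy byte strings below, not the paper's image pipeline):

```python
import os

def xor_encrypt(data: bytes, keystream: bytes) -> bytes:
    """Byte-wise XOR 'encryption' — reversible, and the weakness shown below."""
    return bytes(d ^ k for d, k in zip(data, keystream))

keystream = os.urandom(16)               # a fixed key reused across two messages
p1 = b"known plaintext!"                 # the attacker knows this pair
c1 = xor_encrypt(p1, keystream)
c2 = xor_encrypt(b"secret pixels OK", keystream)

# Known-plaintext attack: p1 XOR c1 yields the keystream, which decrypts c2.
recovered_ks = xor_encrypt(p1, c1)
print(xor_encrypt(c2, recovered_ks))     # b'secret pixels OK'
```

This reuse-of-keystream failure is exactly why the paper layers bit-plane and inter-block scrambling with chaotic-map keys on top of block XOR.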
{"title":"Quantum computing enhanced knowledge tracing: Personalized KT research for mitigating data sparsity","authors":"Chengke Bao , Qianxi Wu , Weidong Ji , Min Wang , Haoyu Wang","doi":"10.1016/j.jksuci.2024.102224","DOIUrl":"10.1016/j.jksuci.2024.102224","url":null,"abstract":"<div><div>With the development of artificial intelligence in education, knowledge tracing (KT) has become a current research hotspot and is the key to the success of personalized instruction. However, data sparsity remains a significant challenge in the KT domain. To address this challenge, this paper applies quantum computing (QC) technology to KT for the first time. It proposes two personalized KT models incorporating quantum mechanics (QM): quantum convolutional enhanced knowledge tracing (QCE-KT) and quantum variational enhanced knowledge tracing (QVE-KT). Through quantum superposition and entanglement properties, QCE-KT and QVE-KT effectively alleviate the data sparsity problem in the KT domain through quantum convolutional layers and variational quantum circuits, respectively, and significantly improve the quality of the representation and prediction accuracy of students’ knowledge states. Experiments on three datasets show that our models outperform ten benchmark models. On the most sparse dataset, QCE-KT and QVE-KT improve their performance by 16.44% and 14.78%, respectively, compared to DKT. 
Although QC is still in the developmental stage, this study reveals the great potential of QM in personalized KT, which provides new perspectives for solving personalized instruction problems and opens up new directions for applying QC in education.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102224"},"PeriodicalIF":5.2,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142553296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DA-Net: A classification-guided network for dental anomaly detection from dental and maxillofacial images","authors":"Jiaxing Li","doi":"10.1016/j.jksuci.2024.102229","DOIUrl":"10.1016/j.jksuci.2024.102229","url":null,"abstract":"<div><div>Dental abnormalities (DA) are frequent signs of disorders of the mouth that cause discomfort, infection, and loss of teeth. Early and reasonably priced treatment may be possible if defective teeth in the oral cavity are automatically detected. Several research works have endeavored to create a potent deep learning model capable of identifying DA from pictures. However, because of the following problems, aberrant teeth from the oral cavity are difficult to detect: 1) Normal teeth and crowded dentition frequently overlap; 2) The lesion area on the tooth surface is tiny. This paper proposes a professional dental anomaly detection network (DA-Net) to address such issues. First, a multi-scale dense connection module (MSDC) is designed to distinguish crowded teeth from normal teeth by learning multi-scale spatial information of dentition. Then, a pixel differential convolution (PDC) module is designed to perform pathological tooth recognition by extracting small lesion features. Finally, a multi-stage convolutional attention module (MSCA) is developed to integrate spatial information and channel information to obtain abnormal teeth in small areas. Experiments on benchmarks show that DA-Net performs well in dental anomaly detection and can further assist doctors in making treatment plans. Specifically, the DA-Net method performs best on multiple detection evaluation metrics: IoU, PRE, REC, and mAP. 
In terms of REC and mAP indicators, the proposed DA-Net method is 1.1% and 1.3% higher than the second-ranked YOLOv7 method.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 9","pages":"Article 102229"},"PeriodicalIF":5.2,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep indoor illumination estimation based on spherical gaussian representation with scene prior knowledge","authors":"Chao Xu , Cheng Han , Huamin Yang , Chao Zhang , Shiyu Lu","doi":"10.1016/j.jksuci.2024.102222","DOIUrl":"10.1016/j.jksuci.2024.102222","url":null,"abstract":"<div><div>High dynamic range (HDR) illumination estimation from a single low dynamic range image is a critical task in the fields of computer vision, graphics and augmented reality. However, directly learning the full HDR environment map or parametric lighting information from a single image is extremely difficult and inaccurate. As a result, we propose a two-stage network approach for illumination estimation that integrates spherical gaussian (SG) representation with scene prior knowledge. In the first stage, a convolutional neural network is utilized to generate material and geometric information about the scene, which serves as prior knowledge for lighting prediction. In the second stage, we model indoor environment illumination using 128 SG functions with fixed center direction and bandwidth, allowing only the amplitude to vary. Subsequently, a Transformer-based lighting parameter regressor is employed to capture the complex relationship between the input images with scene prior information and its SG illumination. Additionally, we introduce a hybrid loss function, which combines a masked loss for high-frequency illumination with a rendering loss for improving the visual quality. 
By training and evaluating the lighting model on the created SG illumination dataset, the proposed method achieves competitive results in both quantitative metrics and visual quality, outperforming state-of-the-art methods.</div></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":"36 10","pages":"Article 102222"},"PeriodicalIF":5.2,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142657924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
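The lighting model above represents illumination as a sum of spherical Gaussian lobes with fixed axes and bandwidth, with only the amplitudes predicted. Using the standard SG definition G(v) = a · exp(λ(μ·v − 1)), a toy evaluation — the axes, λ, and amplitudes below are illustrative, not the paper's 128-lobe configuration:

```python
import numpy as np

def sg_eval(v, mu, lam, a):
    """Spherical Gaussian lobe a * exp(lam * (dot(mu, v) - 1)) at unit
    direction v; mu is the unit lobe axis, lam the bandwidth."""
    return a * np.exp(lam * (v @ mu - 1.0))

# Hand-picked stand-in for network-predicted amplitudes on fixed lobe axes.
axes = np.array([[0.0, 0.0, 1.0],
                 [0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0]])
lam = 8.0
amplitudes = np.array([2.0, 0.5, 1.0])

def radiance(v):
    """Environment radiance in direction v as the sum of all SG lobes."""
    return sum(sg_eval(v, mu, lam, a) for mu, a in zip(axes, amplitudes))

print(round(radiance(np.array([0.0, 0.0, 1.0])), 3))  # ~2.0: z-axis lobe dominates
```

Because only the amplitudes vary, the regressor's output is a short vector rather than a full HDR map, which is what makes the second-stage prediction tractable.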