{"title":"Abnormal lower limb posture recognition based on spatial gait feature dynamic threshold detection","authors":"","doi":"10.1016/j.jksuci.2024.102161","DOIUrl":"10.1016/j.jksuci.2024.102161","url":null,"abstract":"<div><p>Lower limb rehabilitation training often involves the use of assistive standing devices. However, elderly individuals frequently experience reduced exercise effectiveness or suffer muscle injuries when using these devices. Recognizing abnormal lower limb postures can significantly enhance training efficiency and minimize the risk of injury. To address this, we propose a model based on dynamic threshold detection of spatial gait features to identify such abnormal postures. A human-assisted standing rehabilitation device platform was developed to build a lower limb gait depth dataset. RGB data is employed for keypoint detection, enabling a 3D lower limb posture recognition model that extracts gait, temporal, and spatial features along with keypoints. The predicted joint angles, stride length, and step frequency show errors of 4%, 8%, and 1.3%, respectively, with an average confidence of 0.95 for the 3D keypoints. We employed the WOA-BP neural network to develop a dynamic threshold algorithm based on gait features and propose a model for recognizing abnormal postures. Compared to other models, ours achieves 96% accuracy in recognizing abnormal postures, with a recall of 83% and an F1 score of 90%. ROC curve analysis shows that the WOA-BP algorithm lies farthest from the chance line, with the highest AUC value of 0.89, indicating its superior performance over the other models. Experimental results demonstrate that this model has a strong capability to recognize abnormal lower limb postures, encouraging patients to correct them and thereby reducing muscle injuries and improving exercise effectiveness.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":null,"pages":null},"PeriodicalIF":5.2,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002507/pdfft?md5=27cec39130c542af88b8b1f0132833cd&pid=1-s2.0-S1319157824002507-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142088746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A formal specification language and automatic modeling method of asset securitization contract","authors":"","doi":"10.1016/j.jksuci.2024.102163","DOIUrl":"10.1016/j.jksuci.2024.102163","url":null,"abstract":"<div><p>Asset securitization is an important financial derivative involving complicated asset transfer operations. Digitizing traditional asset securitization contracts therefore improves efficiency and facilitates reliability verification. Moreover, an accurate and verifiable requirement description is essential for collaborative development between financial professionals and software engineers. A domain-specific language for writing asset securitization contracts has previously been proposed; by simplifying the writing rules, it addresses the difficulty financial professionals face in writing smart contracts directly. However, because the existing design of the language focuses on simple scenarios, it is insufficient and too informal to describe detailed scenarios. Furthermore, many reliability issues remain in the generation and execution of smart contracts, such as verifying the correctness of the logical properties of a contract and ensuring consistency between the contract text and the contract code. To overcome these challenges, we extend, simplify, and refine the syntax subset of the domain-specific language and name it AS-SC (Asset Securitization – Smart Contract), which financial professionals can use to describe requirements accurately. In addition, because formal methods are math-based techniques that describe system properties and can generate programs in a more formal and reliable manner, we propose a semantically consistent code conversion method, named AS2EB, for converting AS-SC to Event-B, a widely used formal language. Software engineers can use the AS2EB method to verify requirements. The combination of AS-SC and AS2EB ensures the consistency and reliability of the requirements and reduces the cost of repeated communication and later testing. Taking a credit asset securitization contract as a case study, we validate the feasibility and rationality of AS-SC and AS2EB. In addition, experiments on three randomly selected real cases in different classic scenarios show the efficiency and reliability of the AS2EB method.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":null,"pages":null},"PeriodicalIF":5.2,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002520/pdfft?md5=9af49e4b57c4f2d8d674b3287497b478&pid=1-s2.0-S1319157824002520-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142088599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DAW-FA: Domain-aware adaptive weighting with fine-grain attention for unsupervised MRI harmonization","authors":"","doi":"10.1016/j.jksuci.2024.102157","DOIUrl":"10.1016/j.jksuci.2024.102157","url":null,"abstract":"<div><p>Magnetic resonance (MR) imaging often lacks standardized acquisition protocols across sites, leading to contrast variations that reduce image quality and hinder automated analysis. MR harmonization improves consistency by integrating data from multiple sources, ensuring reproducible analysis. Recent advances leverage image-to-image translation and disentangled representation learning to decompose anatomical and contrast representations, achieving consistent cross-site harmonization. However, these methods face two significant drawbacks: imbalanced contrast availability during training affects adaptation performance, and insufficient utilization of spatial variability in local anatomical structures limits model adaptability to different sites. To address these challenges, we propose Domain-aware Adaptive Weighting with Fine-Grain Attention (DAW-FA) for unsupervised MRI harmonization. DAW-FA incorporates an adaptive weighting mechanism and enhanced self-attention to mitigate MR contrast imbalance during training and account for spatial variability in local anatomical structures. This facilitates robust cross-site harmonization without requiring paired inter-site images. We evaluated DAW-FA on MR datasets with varying scanners and acquisition protocols. Experimental results show DAW-FA outperforms existing methods, with an average increase of 1.92 ± 0.56 in Peak Signal-to-Noise Ratio (PSNR) and 0.023 ± 0.011 in Structural Similarity Index Measure (SSIM). Additionally, we demonstrate DAW-FA’s impact on downstream tasks: Alzheimer’s disease classification and whole-brain segmentation, highlighting its potential clinical relevance.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":null,"pages":null},"PeriodicalIF":5.2,"publicationDate":"2024-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002465/pdfft?md5=3acf98b5530f688283d52f1b4e9b2c0d&pid=1-s2.0-S1319157824002465-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142041103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SARD: Fake news detection based on CLIP contrastive learning and multimodal semantic alignment","authors":"","doi":"10.1016/j.jksuci.2024.102160","DOIUrl":"10.1016/j.jksuci.2024.102160","url":null,"abstract":"<div><p>The automatic detection of multimodal fake news can effectively identify potential risks in cyberspace. Most existing multimodal fake news detection methods focus on fully exploiting textual and visual features in news content, thus neglecting news social context features that play an important role in improving fake news detection. To this end, we propose a new fake news detection method based on CLIP contrastive learning and multimodal semantic alignment (SARD). SARD leverages cutting-edge multimodal learning techniques, such as CLIP, and robust cross-modal contrastive learning methods to integrate features of news-oriented heterogeneous information networks (N-HIN) with multi-level textual and visual features into a unified framework for the first time. This framework not only achieves cross-modal alignment between deep textual and visual features but also considers cross-modal associations and semantic alignments across different modalities. Furthermore, SARD enhances fake news detection by aligning semantic features between news content and N-HIN features, an aspect largely overlooked by existing methods. We test and evaluate SARD on three real-world datasets. Experimental results demonstrate that SARD significantly outperforms twelve state-of-the-art competitors in fake news detection, with an average improvement of 2.89% in Mac.F1 score and 2.13% in accuracy over the leading baseline models across the three datasets.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":null,"pages":null},"PeriodicalIF":5.2,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002490/pdfft?md5=497eb195281148df13643994f201fe62&pid=1-s2.0-S1319157824002490-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142076508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Anomalous behavior detection based on optimized graph embedding representation in social networks","authors":"","doi":"10.1016/j.jksuci.2024.102158","DOIUrl":"10.1016/j.jksuci.2024.102158","url":null,"abstract":"<div><p>Anomalous behaviors in social networks can lead to privacy leaks and the spread of false information. In this paper, we propose an anomalous behavior detection method based on an optimized graph embedding representation. Specifically, user behavior logs are first extracted into a temporal knowledge graph of social network user behavior. A graph embedding method then transforms the network topology and temporal information in this knowledge graph into structural embedding vectors and temporal embedding vectors, and a hybrid attention mechanism merges the two types of vectors into a final entity embedding, which is used to predict and complete the temporal knowledge graph of user behavior. We then use a graph neural network that takes the temporal information of user behaviors as a time constraint and captures both behavioral and semantic information; it converts the two parts of information into vectors that are concatenated and linearly transformed to obtain a comprehensive representation vector of the whole subgraph, and a joint deep learning model evaluates abnormal behavior. Finally, experiments on the Yelp dataset validate our method, which achieves a 9.56% improvement in F1-score.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":null,"pages":null},"PeriodicalIF":5.2,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002477/pdfft?md5=05d482d90b47cc00a3f0c9a6ac74bdda&pid=1-s2.0-S1319157824002477-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142048668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Wear-Leveling-Aware Data Placement for LSM-Tree based key-value store on ZNS SSDs","authors":"","doi":"10.1016/j.jksuci.2024.102156","DOIUrl":"10.1016/j.jksuci.2024.102156","url":null,"abstract":"<div><p>The emerging Zoned Namespace (ZNS) SSD is a new type of Solid State Drive that manages data in zones, achieving higher performance by strictly obeying the sequential write mode within each zone and avoiding the redundant overhead of garbage collection. Unfortunately, flash memory suffers from limited program/erase cycles. Meanwhile, an inappropriate data placement strategy in the storage system can lead to imbalanced wear among zones, severely reducing the lifespan of ZNS SSDs. In this paper, we propose Wear-Leveling-Aware Data Placement (WADP) to solve this problem with negligible performance cost. First, WADP employs a wear-aware empty zone allocation algorithm that tracks zone resets and chooses the least-worn zone for each allocation. Second, to prevent long-term zone occupation by infrequently written (cold) data, we propose a wear-leveling cold zone monitoring mechanism to identify cold zones dynamically. Finally, WADP adopts a real-time I/O pressure-aware data migration mechanism that adaptively migrates cold data to achieve wear leveling among zones. We implement the proposed WADP in ZenFS and evaluate it with widely used workloads. Compared with state-of-the-art solutions, i.e., LIZA and FAR, the experimental results show that WADP significantly reduces the standard deviation of zone resets while maintaining decent performance.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":null,"pages":null},"PeriodicalIF":5.2,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002453/pdfft?md5=b3f5e8288e8205e799d78965f416b571&pid=1-s2.0-S1319157824002453-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142041102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structure recovery from single omnidirectional image with distortion-aware learning","authors":"","doi":"10.1016/j.jksuci.2024.102151","DOIUrl":"10.1016/j.jksuci.2024.102151","url":null,"abstract":"<div><p>Recovering structures from images with 180<span><math><msup><mrow></mrow><mrow><mo>∘</mo></mrow></msup></math></span> or 360<span><math><msup><mrow></mrow><mrow><mo>∘</mo></mrow></msup></math></span> FoV is pivotal in computer vision and computational photography, particularly for VR/AR/MR and autonomous robotics applications. Due to varying distortions and the complexity of indoor scenes, recovering flexible structures from a single image is challenging. We introduce OmniSRNet, a comprehensive deep learning framework that merges distortion-aware learning with bidirectional LSTM. Utilizing a curated dataset with optimized panorama and expanded fisheye images, our framework features a distortion-aware module (DAM) for extracting features and a horizontal and vertical step module (HVSM) of LSTM for contextual predictions. OmniSRNet excels in applications such as VR-based house viewing and MR-based video surveillance, achieving leading results on cuboid and non-cuboid datasets. The code and dataset can be accessed at <span><span>https://github.com/mmlph/OmniSRNet/</span></span>.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":null,"pages":null},"PeriodicalIF":5.2,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002404/pdfft?md5=7e463774b7098668fef54fdff2ad3e21&pid=1-s2.0-S1319157824002404-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142013028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance analysis of cloud resource allocation scheme with virtual machine inter-group asynchronous failure","authors":"","doi":"10.1016/j.jksuci.2024.102155","DOIUrl":"10.1016/j.jksuci.2024.102155","url":null,"abstract":"<div><p>The recent rapid expansion of cloud computing has led to the emergence of the Cloud Data Center (CDC). However, user requests’ waiting times can grow greatly on a single physical machine (PM) in a CDC. We propose a cloud resource allocation scheme with virtual machine (VM) inter-group asynchronous failure, which improves request throughput and reduces request waiting time. In particular, two PMs with different service rates, each mapping multiple VMs, are deployed to evenly distribute cloud users’ requests, and we assume that the two PMs fail and are repaired with different probabilities. A finite cache is also introduced to reduce the request blocking rate. We model the VMs and user requests and build a 3-dimensional Markov chain (3DMC) to evaluate the requests’ performance metrics. Numerical experiments are performed to obtain graphs of multiple performance metrics for the requests. Comparing our scheme with the traditional cloud resource allocation scheme with synchronous VM failure, we find that our scheme improves throughput, while each scheme has advantages and disadvantages in request blocking rate.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":null,"pages":null},"PeriodicalIF":5.2,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002441/pdfft?md5=d0b96a172006c37607e17d7e394616cf&pid=1-s2.0-S1319157824002441-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141993465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LDNet: High Accuracy Fish Counting Framework using Limited training samples with Density map generation Network","authors":"","doi":"10.1016/j.jksuci.2024.102143","DOIUrl":"10.1016/j.jksuci.2024.102143","url":null,"abstract":"<div><p>Fish counting is crucial in fish farming. Density map-based fish counting methods hold promise in high-density scenarios; however, they suffer from ineffective ground truth density map generation. High labeling complexity and disturbance to fish growth during data collection are also challenging to mitigate. To address these issues, this study introduces LDNet, a versatile network with an attention mechanism. An imbalanced Optimal Transport (OT)-based loss function is used to effectively supervise density map generation. Additionally, an Image Manipulation-Based Data Augmentation (IMBDA) strategy simulates training data from diverse scenarios at fixed viewpoints, yielding a model that is robust to environmental changes. Leveraging a limited number of training samples, our approach achieves an MAE of 8.27, an RMSE of 9.97, and an accuracy of 99.01% on our self-curated Fish Count-824 dataset. Impressively, our method also demonstrates superior counting performance on the vehicle counting datasets CARPK and PUCPR+ and on the Penaeus_1k penaeus larvae dataset when only 5%–10% of the training data is used. These outcomes demonstrate the wide applicability of our approach across various cases. It can thus contribute to aquaculture management and ecological preservation through accurate fish counting.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":null,"pages":null},"PeriodicalIF":5.2,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002325/pdfft?md5=ec92694818fa8a8041843f53d8c6b66e&pid=1-s2.0-S1319157824002325-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141979572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging syntax-aware models and triaffine interactions for nominal compound chain extraction","authors":"","doi":"10.1016/j.jksuci.2024.102153","DOIUrl":"10.1016/j.jksuci.2024.102153","url":null,"abstract":"<div><p>Recently, Nominal Compound Chain Extraction (NCCE) has been proposed to detect related mentions in a document to improve understanding of the document’s topic. NCCE involves longer span detection and more complicated rules for relation decisions, making it more difficult than previous chain extraction tasks such as coreference resolution. Current methods have made some progress on the NCCE task, but they suffer from insufficient use of syntax information and incomplete mining of mention relations, both of which are helpful for NCCE. To fill these gaps, we propose a syntax-guided model that uses triaffine interactions to improve performance on the NCCE task. Instead of relying solely on text information to detect compound mentions, we also utilize the noun-phrase (NP) boundary information in constituency trees to incorporate prior boundary knowledge. In addition, we use biaffine and triaffine operations to mine mention interactions in the local and global context of a document. To show the effectiveness of our method, we conduct a series of experiments on a human-annotated NCCE dataset. Experimental results show that our model significantly outperforms the baseline systems. Moreover, in-depth analyses reveal the effect of utilizing syntactic information and mention interactions in local and global contexts.</p></div>","PeriodicalId":48547,"journal":{"name":"Journal of King Saud University-Computer and Information Sciences","volume":null,"pages":null},"PeriodicalIF":5.2,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1319157824002428/pdfft?md5=68d28a739630245dadca6d14bfb1c2d3&pid=1-s2.0-S1319157824002428-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141984671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}