IET Software | Pub Date: 2025-09-29 | DOI: 10.1049/sfw2/5511556
Haihua Tu, Lingbo Han
{"title":"Word-Level Nonequivalence and Translation Strategies in English–Chinese Translation Based on Image Processing Technology","authors":"Haihua Tu, Lingbo Han","doi":"10.1049/sfw2/5511556","DOIUrl":"https://doi.org/10.1049/sfw2/5511556","url":null,"abstract":"<p>Translation is the process of accurately understanding an original work and re-expressing its meaning in another language. Translation equivalence, however, is a relative notion; complete equivalence does not exist, and in practice translators frequently face different kinds of nonequivalence. Nonequivalence at the word level means that no word matching the original text can be found in the target language, and these varying degrees of equivalence create great difficulties for translation. This paper first interprets the common phenomenon of word-level nonequivalence in English–Chinese translation and analyzes the differences between source-language concepts in translation. It then studies lexical nonequivalence in English–Chinese translation and describes cultural nonequivalence. Next, the paper examines the equivalence requirements in English–Chinese translation and plans solutions: strengthening the learning and understanding of Chinese and Western cultures, translating according to the cultural characteristics of different regions, and using transliteration to ensure the accuracy of English–Chinese translation and reduce word-level nonequivalence. Subsequently, the paper introduces image processing technology into translation, using it to strengthen translation strategies; it analyzes the main types of image processing technology and applies them to fully understand the translation process and express translations correctly. Finally, image processing technology is used to strengthen translation strategies and related research. According to experiments and surveys, using image processing technology to create new English–Chinese translation strategies could effectively improve satisfaction for 18% of translators.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5511556","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145224163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IET Software | Pub Date: 2025-09-27 | DOI: 10.1049/sfw2/8953863
María-Isabel Limaylla-Lunarejo, Nelly Condori-Fernandez, Miguel Rodríguez Luaces
{"title":"Systematic Mapping of AI-Based Approaches for Requirements Prioritization","authors":"María-Isabel Limaylla-Lunarejo, Nelly Condori-Fernandez, Miguel Rodríguez Luaces","doi":"10.1049/sfw2/8953863","DOIUrl":"https://doi.org/10.1049/sfw2/8953863","url":null,"abstract":"<p><b>Context and Motivation:</b> Requirements prioritization (RP) is a central concern of requirements engineering (RE). Traditional prioritization techniques, while effective, often involve manual effort and are time-consuming. In recent years, thanks to advances in AI-based techniques and algorithms, several promising alternatives have emerged to optimize this process.</p><p><b>Question:</b> The main goal of this work is to review the current state of requirements prioritization, focusing on AI-based techniques and a classification scheme that provides a comprehensive overview. Additionally, we examine the criteria utilized by these AI-based techniques, as well as the datasets and evaluation metrics employed. For this purpose, we conducted a systematic mapping study (SMS) of studies published between 2011 and 2023.</p><p><b>Results:</b> Our analysis reveals a diverse range of AI-based techniques in use, with fuzzy logic being the most commonly applied. Moreover, most studies continue to depend on stakeholder input as a key criterion, limiting the potential for full automation of the prioritization process. Finally, there appears to be no standardized evaluation metric or dataset across the reviewed papers, underscoring the need for standardized approaches across studies.</p><p><b>Contribution:</b> This work provides a systematic categorization of current AI-based techniques used for automating RP. Additionally, it updates and expands existing reviews, offering a valuable resource for practitioners and nonspecialists.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/8953863","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145146844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Web-Based Early Dementia Detection Using Deep Learning, Ensemble Machine Learning, and Model Explainability Through LIME and SHAP","authors":"Khandaker Mohammad Mohi Uddin, Abir Chowdhury, Md Mahbubur Rahman Druvo, Md. Shariful Islam, Md Ashraf Uddin","doi":"10.1049/sfw2/5455082","DOIUrl":"https://doi.org/10.1049/sfw2/5455082","url":null,"abstract":"<p>Dementia is a gradual and incapacitating illness that impairs cognitive abilities and causes memory loss, disorientation, and challenges with daily tasks. Treatment of the disease and better patient outcomes depend on early identification of dementia. In this paper, the study uses a publicly available dataset to develop a comprehensive ensemble of machine learning (ML) and deep learning (DL) models for classifying dementia stages. The procedure starts with data preprocessing, which includes handling missing values, normalization, and encoding, before SMOTE is used to balance the data. The <i>F</i>-value and <i>p</i>-value are used to select the best seven features, and the dataset is divided into training (70%) and testing (30%) portions. Four DL models, namely long short-term memory (LSTM), convolutional neural networks (CNNs), multilayer perceptron (MLP), and artificial neural networks (ANNs), are trained along with 12 ML models such as logistic regression (LR), random forest (RF), and support vector machine (SVM). Hyperparameter tuning was utilized to further enhance each model’s performance, and an ensemble voting technique was applied to aggregate predictions from the ML and DL algorithms, providing more reliable and accurate outcomes. To ensure model transparency, interpretability techniques such as Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) are applied to the ANN and LR models. The proposed ensemble’s ANN shows a promising accuracy of 97.32%, demonstrating its efficacy in the early diagnosis and categorization of dementia, which can support clinical decisions. Furthermore, the proposed work includes a web-based solution for diagnosing dementia in real time.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5455082","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145146843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
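The abstract above combines many classifiers through an ensemble voting step. As a minimal sketch of how hard (majority) voting aggregates per-sample predictions, the snippet below uses hypothetical stage labels from three models; the paper's actual models, weights, and dataset are not reproduced here:

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Hard-voting ensemble: for each sample, return the label predicted
    by the most models (ties broken by first-encountered label)."""
    n_samples = len(predictions_per_model[0])
    combined = []
    for i in range(n_samples):
        votes = [preds[i] for preds in predictions_per_model]
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Hypothetical per-sample stage labels from three models (e.g., LR, RF, ANN)
lr  = ["nondemented", "demented", "converted", "demented"]
rf  = ["demented",    "demented", "converted", "nondemented"]
ann = ["nondemented", "demented", "demented",  "demented"]

print(majority_vote([lr, rf, ann]))
# ['nondemented', 'demented', 'converted', 'demented']
```

In practice, scikit-learn's `VotingClassifier` performs this aggregation (and a soft, probability-averaging variant) directly over fitted estimators.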
IET Software | Pub Date: 2025-09-19 | DOI: 10.1049/sfw2/5227350
Hafiza Maria Maqsood, Joelma Choma, Eduardo Guerra, Andrea Bondavalli
{"title":"A Systematic Literature Review on Application of Agile Software Development Process Models for the Development of Safety-Critical Systems in Multiple Domains","authors":"Hafiza Maria Maqsood, Joelma Choma, Eduardo Guerra, Andrea Bondavalli","doi":"10.1049/sfw2/5227350","DOIUrl":"10.1049/sfw2/5227350","url":null,"abstract":"<p>This paper presents a literature review on using agile for safety-critical systems (SCSs). We have systematically selected and evaluated relevant literature to identify the major areas of concern in adopting agile for the development of SCSs. First, we list the most widely used agile process models and the reasons for their suitability for SCSs; second, we outline the phases of the software development life cycle (SDLC) where changes are required to make an agile process suitable for the development of SCSs; third, we elaborate on problems and other important aspects in the specific domains where agile is used for SCSs. This paper thus offers insight into the latest trends and problems regarding the use of agile process models to develop SCSs.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5227350","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145101897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IET Software | Pub Date: 2025-09-19 | DOI: 10.1049/sfw2/5567569
Faraz Masood, Ali Haider Shamsan, Arman Rasool Faridi
{"title":"BLOCKVISA: A Blockchain-Based System for Efficient and Secure Visa, Passport, and Immigration Verification","authors":"Faraz Masood, Ali Haider Shamsan, Arman Rasool Faridi","doi":"10.1049/sfw2/5567569","DOIUrl":"10.1049/sfw2/5567569","url":null,"abstract":"<p>In the fast-changing landscape of global mobility, the need for secure, efficient, and interoperable visa, passport, and immigration verification systems has never been greater. Traditional systems are inefficient, have security vulnerabilities, and exhibit poor interoperability. This study introduces BLOCKVISA, a novel blockchain-based solution to these verification inefficiencies. BLOCKVISA uses decentralized and immutable blockchain technology to make the system more secure, automate the verification process, and enable frictionless data sharing across jurisdictions. Core components of the system include smart contracts developed in Solidity, a user interface (UI) created with Next.js, and integration with MetaMask and Web3.js for safe interactions with the blockchain. Rigorous testing was done using Mocha, and more intensive benchmarking was done using Hyperledger Caliper against Ganache and Hyperledger Besu, as well as Ethereum test networks such as Rinkeby, Ropsten, Goerli, and Kovan. Experiments showed that BLOCKVISA can achieve high throughput and low latency in controlled settings, with almost perfect success rates, and gave insights into how it would perform when deployed on a public network. The article undertakes a comparative analysis of performance metrics, brings out the robust security features of the system, and discusses its scalability and feasibility for real-world implementation. By integrating advanced blockchain technology into the visa, passport, and immigration verification process, BLOCKVISA sets a new standard for global mobility solutions, promising enhanced efficiency, security, and interoperability.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5567569","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145101898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
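The Solidity contracts themselves are not quoted in the abstract above. As an illustration of the underlying idea only, an append-only, hash-linked log of verification events whose integrity any party can recheck, here is a minimal Python sketch; the record fields are hypothetical, not taken from BLOCKVISA:

```python
import hashlib
import json

def record_hash(record):
    """Deterministic SHA-256 over a canonical JSON encoding of the record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class VerificationChain:
    """Toy append-only chain of passport/visa verification events."""
    def __init__(self):
        self.blocks = []

    def append(self, event):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"event": event, "prev": prev}
        block["hash"] = record_hash({"event": event, "prev": prev})
        self.blocks.append(block)

    def verify(self):
        """Recompute every hash and link; True iff nothing was tampered with."""
        prev = "0" * 64
        for b in self.blocks:
            if b["prev"] != prev or b["hash"] != record_hash({"event": b["event"], "prev": prev}):
                return False
            prev = b["hash"]
        return True

chain = VerificationChain()
chain.append({"passport": "X1234567", "action": "visa_issued"})
chain.append({"passport": "X1234567", "action": "entry_verified"})
print(chain.verify())                           # True
chain.blocks[0]["event"]["action"] = "revoked"  # tampering breaks the chain
print(chain.verify())                           # False
```

A real deployment replaces this in-process list with contract storage and consensus, which is what gives the log its immutability across mutually distrusting jurisdictions.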
IET Software | Pub Date: 2025-09-10 | DOI: 10.1049/sfw2/8820546
Kiran Rao P., Suman Prakash P., Sreenivasulu K., Surbhi B. Khan, Fatima Asiri, Ahlam Almusharraf, Rubal Jeet
{"title":"AI-Driven Dynamic Resource Allocation for IoT Networks Using Graph-Convolutional Transformer and Hybrid Optimization","authors":"Kiran Rao P., Suman Prakash P., Sreenivasulu K., Surbhi B. Khan, Fatima Asiri, Ahlam Almusharraf, Rubal Jeet","doi":"10.1049/sfw2/8820546","DOIUrl":"10.1049/sfw2/8820546","url":null,"abstract":"<p>Effective resource allocation is a fundamental challenge for software systems in Internet of Things (IoT) networks, influencing their performance, energy consumption, and scalability in dynamic environments. This study introduces a new framework, DRANet–graph convolutional network (GCN)+, which integrates GCNs, transformer architectures, and reinforcement learning (RL) with adaptive metaheuristics to improve real-time decision making in IoT resource allocation. The framework employs GCNs to model spatial relationships among heterogeneous IoT devices, transformer-based architectures to capture temporal patterns in resource demands, and RL with fairness-aware reward functions to dynamically optimize allocation strategies. Unlike previous approaches, DRANet–GCN+ addresses computational overhead through efficient graph partitioning and parallel processing, making it suitable for resource-constrained environments. Comprehensive evaluation includes sensitivity analysis of key parameters and benchmarking against recent hybrid approaches, including GCN–RL and attention-enhanced multiagent RL (MARL) methods. Performance evaluation on real-world and large-scale synthetic datasets (up to 5000 nodes) demonstrates the framework’s capabilities under varied conditions, achieving 93.2% resource allocation efficiency, 50 ms average latency with 12 ms standard deviation, and 990 Mbps throughput while consuming 15% less energy than baseline approaches. 
These findings establish DRANet–GCN+ as a robust solution for intelligent resource management in heterogeneous IoT networks, with detailed quantification of computational overhead, scalability limitations, and fairness–energy–throughput trade-offs.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/8820546","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145022233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
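The GCN component described above aggregates features over the device graph. As a minimal sketch of one mean-aggregation graph-convolution layer on a toy three-node IoT graph, the features, adjacency, and weight matrix below are illustrative only, not DRANet–GCN+ parameters:

```python
def gcn_layer(features, adjacency, weight):
    """One mean-aggregation graph-convolution step: each node averages its
    own and its neighbors' feature vectors, then applies a linear map
    followed by ReLU."""
    n = len(features)
    out = []
    for i in range(n):
        neigh = [i] + [j for j in range(n) if adjacency[i][j]]
        agg = [sum(features[j][k] for j in neigh) / len(neigh)
               for k in range(len(features[0]))]
        mapped = [sum(agg[k] * weight[k][c] for k in range(len(agg)))
                  for c in range(len(weight[0]))]
        out.append([max(0.0, v) for v in mapped])
    return out

# Three hypothetical IoT nodes with 2-d load/demand features;
# edges connect nodes 0-1 and 1-2.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
w = [[1.0, 0.0], [0.0, 1.0]]  # identity map, to make the averaging visible
print(gcn_layer(feats, adj, w))  # node features after one layer
```

Stacking such layers lets each node's representation reflect progressively larger graph neighborhoods, which is how spatial relationships among devices enter the allocation policy.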
IET Software | Pub Date: 2025-09-02 | DOI: 10.1049/sfw2/6662968
Ameen Shaheen, Ahmad Alkhatib, Mahmoud Farfoura, Rand Albustanji
{"title":"Developing a User-Centric Quality Model for Gaming as a Service (GaaS): Enhancing User Satisfaction Through Key Quality Factors","authors":"Ameen Shaheen, Ahmad Alkhatib, Mahmoud Farfoura, Rand Albustanji","doi":"10.1049/sfw2/6662968","DOIUrl":"10.1049/sfw2/6662968","url":null,"abstract":"<p>This study presents a comprehensive and user-centric quality model for gaming as a service (GaaS), grounded in a multistage survey methodology involving pretest, postgame, and posttest evaluations. The research identifies and empirically validates key quality attributes that influence user satisfaction, including controllability, responsiveness, accessibility, cost transparency, security, and social features. Data from 62 cloud gamers, analyzed through ANOVA and regression techniques, reveal that users prioritize high-resolution graphics, diverse game libraries, intuitive controls (ICs), and seamless audio–visual performance. The findings highlight a strong alignment between user expectations and the proposed quality model. Practical recommendations are offered for GaaS providers, focusing on improved user onboarding, transparent system requirements, enhanced social features, and robust security protocols. The study also discusses emerging technologies such as AI-driven personalization and adaptive streaming, which hold promise for enhancing quality of experience (QoE) in dynamic network conditions. 
Future research should include larger and more diverse user samples, longitudinal analysis, and cross-cultural perspectives to further validate and refine the model.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/6662968","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144927300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
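The abstract above analyzes satisfaction data with ANOVA. As a minimal sketch of the one-way ANOVA F-statistic, the snippet below uses hypothetical 1-5 satisfaction scores for three quality factors; the study's actual 62-gamer dataset is not reproduced here:

```python
def one_way_anova_F(groups):
    """F = mean square between groups / mean square within groups,
    for k groups of scores."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / N
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, N - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical 1-5 satisfaction scores for three quality factors
controllability = [4, 5, 4, 5]
responsiveness  = [3, 4, 3, 4]
cost            = [2, 3, 2, 3]
print(round(one_way_anova_F([controllability, responsiveness, cost]), 2))  # 12.0
```

A large F relative to the F-distribution with (k − 1, N − k) degrees of freedom indicates that mean satisfaction differs across factors; `scipy.stats.f_oneway` computes the same statistic along with its p-value.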
{"title":"Elevating Cloud Security With Advanced Trust Evaluation and Optimization of Hybrid Fireberg Technique","authors":"Himani Saini, Gopal Singh, Amrinder Kaur, Sunil Saini, Niyaz Ahmad Wani, Vikram Chopra, Rashiq Rafiq Marie, Tehseen Mazhar, Mamoon M. Saeed","doi":"10.1049/sfw2/3296533","DOIUrl":"10.1049/sfw2/3296533","url":null,"abstract":"<p>The rapid expansion of the cloud service industry has raised the critical challenge of ensuring efficient job allocation and trust within a backdrop of heightened privacy concerns. Existing models often struggle to achieve an optimal balance between these factors, particularly in dynamic cloud environments. This research introduces a comprehensive approach that optimizes trust-based job allocation in cloud services while addressing privacy issues. Our proposed hybrid model integrates k-anonymity techniques for privacy preservation, coupled with a firefly-Levenberg (Fireberg) optimization to bolster trust generation. It also employs the time-aware modified best fit decreasing (T-MBFD) allocation policy to make resource allocation time-sensitive. This strategic allocation approach enhances cloud computing system performance and scalability. Simulations using a dataset of 95,000 records demonstrate that our model achieves an impressive 96% accuracy, surpassing existing literature by 5%–14%. The results highlight the model’s ability to provide robust privacy protection while ensuring efficient resource allocation. The proposed hybrid model promises cloud service users high-quality, secure, and efficient job allocations, thereby improving customer satisfaction and trust. 
This research makes significant contributions to fortifying the reliability and appeal of cloud services in an evolving digital landscape.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/3296533","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144881286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
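The firefly half of the Fireberg optimizer is not spelled out in the abstract above. As a minimal sketch of the standard firefly algorithm it builds on, the snippet below minimizes a toy sphere function; the population size, attractiveness, and step parameters are illustrative defaults, not the paper's settings:

```python
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=60, beta0=1.0, gamma=1.0, alpha=0.2, seed=1):
    """Firefly algorithm sketch: each firefly moves toward every brighter
    (lower-f) firefly with distance-decayed attractiveness, plus a small
    random walk; returns the best position found."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(pop[j]) < f(pop[i]):  # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = firefly_minimize(sphere)
print(sphere(best))  # small value near the optimum at the origin
```

In the paper's hybrid, a Levenberg-style refinement is layered on top of this global search; here only the population step is shown.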
IET Software | Pub Date: 2025-07-29 | DOI: 10.1049/sfw2/4518420
Salvador de Haro, Esteban Becerra, Pilar González-Férez, José M. García, Gregorio Bernabé
{"title":"A Real-Time Cardiomyopathy Detection Tool Using ML Ensemble Models","authors":"Salvador de Haro, Esteban Becerra, Pilar González-Férez, José M. García, Gregorio Bernabé","doi":"10.1049/sfw2/4518420","DOIUrl":"10.1049/sfw2/4518420","url":null,"abstract":"<p>Left ventricular noncompaction (LVNC) is a recently classified form of cardiomyopathy. Although various methods have been proposed for accurately quantifying trabeculae in the left ventricle (LV), consensus on the optimal approach remains elusive. Previous research introduced DL-LVTQ, a deep learning solution for trabecular quantification based on a UNet 2D convolutional neural network (CNN) architecture, together with a graphical user interface (GUI) to streamline its use in clinical workflows. Building on this foundation, this work presents the LVNC detector, an enhanced application designed to support cardiologists in the automated diagnosis of LVNC. The application integrates two segmentation models: DL-LVTQ and ViTUNet, the latter inspired by modern hybrid architectures combining CNNs and transformer-based designs. These models, implemented within an ensemble framework, leverage advances in deep learning to improve the accuracy and robustness of magnetic resonance imaging (MRI) segmentation. Key innovations include multithreading to optimize model loading times and ensemble methods to enhance segmentation consistency across MRI slices. Additionally, the platform-independent design ensures compatibility with Windows and Linux, eliminating complex setup requirements. The LVNC detector delivers an efficient and user-friendly solution for LVNC diagnosis. It enables real-time performance and allows cardiologists to select and compare segmentation models for improved diagnostic outcomes. This work demonstrates how state-of-the-art machine learning techniques can seamlessly integrate into clinical practice to reduce human error and expedite diagnostic processes.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/4518420","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144725499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IET Software | Pub Date: 2025-07-16 | DOI: 10.1049/sfw2/4145192
Xin Cheng, Feng Wang, Ali Akbar Siddique, Zain Anwar Ali
{"title":"Implementation of Neural Style Transformation Technique for Artistic Image Processing Using VGG19","authors":"Xin Cheng, Feng Wang, Ali Akbar Siddique, Zain Anwar Ali","doi":"10.1049/sfw2/4145192","DOIUrl":"10.1049/sfw2/4145192","url":null,"abstract":"<p>Image transformation is performed for basic image generation and color correction. In many applications, images are used for visual analysis or mainly for creating content. Similarly, stylized transformation is the process of turning images into art-based content. To perform this artistic rendition through image-stylized transformation, this article uses the VGG19 classifier. The procedure begins by preprocessing both the content image and the style reference image, which includes resizing them to a maximum dimension while keeping their initial aspect ratio and converting them into arrays. A utility function postprocesses the image by clipping and normalizing pixel values. Content loss is calculated by comparing the feature maps of the content image with those of the stylized image generated by the model. Gradients of the loss with respect to the generated image are computed and used to iteratively update the generated image. The process sequentially displays and processes intermediate images until it reaches 1000 iterations. In the end, the process produces a stylized image that renders the content in the style of its artwork counterpart.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/4145192","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144635252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
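The losses driving the iterative update described above can be sketched compactly. The toy feature maps below stand in for VGG19 activations, and the Gram-matrix style loss is the standard neural-style-transfer formulation, assumed here rather than quoted from the paper:

```python
def content_loss(content_feat, generated_feat):
    """Mean squared error between two flattened feature maps."""
    n = len(content_feat)
    return sum((c - g) ** 2 for c, g in zip(content_feat, generated_feat)) / n

def gram(feat_maps):
    """Gram matrix: pairwise inner products between channel feature maps,
    capturing texture statistics independent of spatial layout."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in feat_maps]
            for fi in feat_maps]

def style_loss(style_maps, generated_maps):
    """MSE between Gram matrices of style and generated features."""
    gs, gg = gram(style_maps), gram(generated_maps)
    c = len(gs)
    return sum((gs[i][j] - gg[i][j]) ** 2
               for i in range(c) for j in range(c)) / c ** 2

# Toy flattened feature maps (in practice: activations from VGG19 layers)
content = [1.0, 2.0, 3.0]
generated = [1.0, 2.0, 4.0]
print(content_loss(content, generated))  # 0.333...

style_maps = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]  # 2 channels, 3 pixels
gen_maps = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
print(style_loss(style_maps, gen_maps))  # 0.0 (identical statistics)
```

The total objective is a weighted sum of the two losses, and the gradient descent described in the abstract updates the generated image's pixels, not the network weights.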