{"title":"A Hybrid Deep Learning Approach for Multi-Output Short-Term Electricity Demand Forecasting","authors":"Yıldırım Özüpak, Shuhratjon Mansurov","doi":"10.1002/cpe.70356","DOIUrl":"https://doi.org/10.1002/cpe.70356","url":null,"abstract":"<div>\u0000 \u0000 <p>This study proposes hybrid deep learning architectures that integrate convolutional and recurrent layers for short-term electricity demand forecasting. A multivariate half-hourly dataset from Great Britain's National Grid Electricity System Operator (ESO), covering January 2009 to early 2024 (279,264 records), was used for model development. Features include national demand (ND), transmission system demand (TSD), embedded wind and solar generation, interconnector flows, and calendar indicators. Models were evaluated using normalized root mean squared error (nRMSE), normalized mean absolute error (nMAE), and symmetric mean absolute percentage error (SMAPE). Across averaged test metrics, the standalone LSTM achieved the lowest errors (Loss 8.8 × 10<sup>−4</sup>, MSE 0.0018, and MAE 0.0320), while the hybrid CNN + LSTM + DNN and CNN + GRU + DNN attained comparable accuracy and demonstrated greater robustness during peak-load and holiday intervals. Statistical testing indicated that CNN + GRU + DNN significantly outperformed GRU (<i>p</i> = 0.035), but no significant difference was observed when compared with LSTM. These results highlight that while LSTM provides the most accurate overall performance, hybrid architectures offer enhanced stability under volatile demand conditions, ensuring a balanced trade-off between predictive accuracy and operational reliability.</p>\u0000 </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 25-26","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145317566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning Techniques for Predictive Modeling of Diabetic Eye Disease in Type 2 Diabetes: A Systematic Review","authors":"Pawandeep Sharma, Amanpreet Kaur Sandhu","doi":"10.1111/coin.70134","DOIUrl":"https://doi.org/10.1111/coin.70134","url":null,"abstract":"<div>\u0000 \u0000 <p>Diabetic retinopathy (DR) is a common complication of type 2 diabetes. It occurs when high blood sugar levels damage the blood vessels in the retina, the light-sensitive tissue at the back of the eye. While diabetic retinopathy can occur in both type 1 and type 2 diabetes, it is indeed more commonly associated with type 2 diabetes due to its higher prevalence and longer duration in many cases. Type 2 diabetes often develops gradually over time, allowing for prolonged exposure to elevated blood sugar levels. This prolonged exposure increases the risk of developing diabetic retinopathy and other diabetes-related complications. The aim of this paper is to analyze the various deep learning models for effective prediction of diabetic retinopathy in patients suffering from Type 2 Diabetes. Furthermore, standard datasets consisting of 38,788 training and 55,504 test images for diabetic retinopathy and blindness are obtained. On the other hand, deep learning models such as ResNet101V2, DenseNet201, InceptionResNetV2, EfficientNetB7, and Xception CNNs are applied to the dataset and trained as well. Moreover, the performance of all the models is assessed on the basis of certain quality measures, such as accuracy, F1 score, recall, precision, RMSE values, and loss. On the other hand, results indicate the potential of deep learning models in accurately predicting diabetic retinopathy, thereby aiding in early diagnosis and intervention to prevent vision loss in patients with Type 2 Diabetes.</p>\u0000 </div>","PeriodicalId":55228,"journal":{"name":"Computational Intelligence","volume":"41 5","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145317685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A personalized recommendation framework through exploiting jump-enhanced random walk based multiple heterogeneous graph neural networks","authors":"Yufeng Wang, Fei Xie, Xun Huang, Jianhua Ma, Qun Jin","doi":"10.1007/s10489-025-06938-9","DOIUrl":"10.1007/s10489-025-06938-9","url":null,"abstract":"<div><p>Due to the powerful representation ability to learn the embedding of each node in heterogenous graph (HG), heterogenous graph neural network (HGNN) based personalized recommender can effectively alleviate the notorious issues of user-item interaction sparsity and cold-start in recommendation systems. However, the existing schemes always rely on meta-paths and/or random walks for generating embeddings of nodes in HG. However, the former requires prior domain knowledge to determine the optimal meta-paths, and the latter will bias to the high-degree nodes in HG. To overcome these issues, this paper proposes a novel personalized recommendation framework, MHRec, based on multiple heterogeneous sub-graphs generated by jump-enhanced random walk (JerW). Specifically, our work’s contributions are following. First, the whole HG is explicitly constructed, which not only naturally includes multiple type nodes, i.e., user, item, user attribute, item attribute, and their connections, but also explicitly adds the user-user and item-item edges based on their interactively historical data. Then, starting from each node as ego, JerW is used to construct multiple heterogeneous sub-graphs for the ego, which can balance the distribution of different types of nodes in the formed sub-graphs, and appropriately model the multiple relationships between the ego and its multiple-hop neighboring nodes. Second, on each heterogeneous sub-graph, hierarchical graph representation is designed to formulate the ego’s representation, which is explicitly composed of same-type and cross-type aggregation using GNN with multi-head attention mechanism. Thorough experiments on multiple real-world datasets demonstrate our proposed MHRec outperforms state-of-the-art HGNN based personalized recommendation schemes, in terms of multiple evaluation metrics.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 16","pages":""},"PeriodicalIF":3.5,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145316355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Erratum to ‘Deep Learning-Based Design of Binary Signalling for Optical Wireless Communication Systems With 2D Receiver’","authors":"","doi":"10.1049/ote2.70023","DOIUrl":"https://doi.org/10.1049/ote2.70023","url":null,"abstract":"<p>Yongwoon Hwang, Chung Ghiu Lee, and Soeun Kim. “Deep Learning Based Design of Binary Signalling for Optical Wireless Communication Systems With 2D Receiver.” <i>IET Optoelectronics</i>, 2025; 19:e70015. https://doi.org/10.1049/ote2.70015.</p><p>Figures 7, 9 and 10 in the originally published version were incorrect. The correct figures are given below:</p><p></p><p></p><p></p><p>In addition, the first sentence of Section 6.3, ‘Figure 10 presents the third LED signal pattern set’. was incorrect in the published version. This should have read: ‘Figure 8 presents the third LED signal pattern set’.</p><p>We apologise for this error.</p>","PeriodicalId":13408,"journal":{"name":"Iet Optoelectronics","volume":"19 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/ote2.70023","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145317658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automating software size measurement from python code using language models","authors":"Samet Tenekeci, Hüseyin Ünlü, Bedir Arda Gül, Damla Keleş, Murat Küük, Onur Demirörs","doi":"10.1007/s10515-025-00571-z","DOIUrl":"10.1007/s10515-025-00571-z","url":null,"abstract":"<div><p>Software size is a key input for project planning, effort estimation, and productivity analysis. While pre-trained language models have shown promise in deriving functional size from natural-language requirements, measuring size directly from source code remains under-explored. Yet, code-based size measurement is critical in modern workflows where requirement documents are often incomplete or unavailable, especially in Agile development environments. This exploratory study investigates the use of CodeBERT, a pre-trained bimodal transformer model, for measuring software size directly from Python source code according to two measurement methods: COSMIC Function Points and MicroM. We construct two curated datasets from the Python subset of the CodeSearchNet corpus, and manually annotate each function with its corresponding size. Our experimental results show that CodeBERT can successfully measure COSMIC data movements with up to 91.4% accuracy and generalize to the functional, architectural, and algorithmic event types defined in MicroM, reaching up to 81.5% accuracy. These findings highlight the potential of code-based language models for automated functional size measurement when requirement artifacts are absent or unreliable.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"33 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145316540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Personalized safety training for construction workers: A large language model-driven multi-agent framework integrated with knowledge graph reasoning","authors":"Qihua Chen, Xianfei Yin, Beifei Yuan, Qirong Chen","doi":"10.1016/j.compind.2025.104399","DOIUrl":"https://doi.org/10.1016/j.compind.2025.104399","url":null,"abstract":"Construction sites are inherently high-risk environments, making safety training for workers crucial to enhancing operational skills, reinforcing safety awareness, and reducing accident risks. Traditional centralized training often fails to engage workers due to monotonous nature and lack of relevance, leading to low efficiency. Moreover, critical resources such as operating instructions, safety guidelines, and accident reports are frequently mismanaged or underutilized. Therefore, this study proposes ConSTRAG, an innovative personalized construction safety training framework. By integrating large language model-empowered agents with knowledge graph reasoning, ConSTRAG generates tailored training materials, significantly improving the relevance and effectiveness of safety training. Validation tests conducted on a dataset of 11,020 questions achieved an average score of 81.25, exceeding the benchmark by 6.94. The generated personalized training materials were evaluated through an expert questionnaire survey, with an average score of 4.16 out of 5 across five dimensions. This research contributes to overcoming individual heterogeneity in construction safety training, enhances training efficiency and effectiveness, and holds potential for extension to other personnel training industries.","PeriodicalId":55219,"journal":{"name":"Computers in Industry","volume":"99 1","pages":""},"PeriodicalIF":10.0,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145315230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A systematic exploration of C-to-rust code translation based on large language models: prompt strategies and automated repair","authors":"Ruxin Zhang, Shanxin Zhang, Linbo Xie","doi":"10.1007/s10515-025-00570-0","DOIUrl":"10.1007/s10515-025-00570-0","url":null,"abstract":"<div><p>C is widely used in system programming due to its low-level flexibility. However, as demands for memory safety and code reliability grow, Rust has become a more favorable alternative owing to its modern design principles. Migrating existing C code to Rust has therefore emerged as a key approach for enhancing the security and maintainability of software systems. Nevertheless, automating such migrations remains challenging due to fundamental differences between the two languages in terms of language design philosophy, type systems, and levels of abstraction. Most current code transformation tools focus on mappings of basic data types and syntactic replacements, such as handling pointers or conversion of lock mechanisms. These approaches often fail to deeply model the semantic features and programming paradigms of the target language. To address this limitation, this paper proposes RustFlow, a C-to-Rust code translation framework based on large language models (LLMs), designed to generate idiomatic and semantically accurate Rust code. This framework employs a multi-stage progressive architecture, which decomposes the overall translation task into several sequential stages, namely translation, validation, and repair. During the translation phase, a collaborative prompting strategy is employed to guide the LLM in achieving cross-language semantic alignment, thereby improving the accuracy of the generated code. Subsequently, a validation mechanism is introduced to perform syntactic and semantic checks on the generated output, and a conversational iterative repair strategy is employed to further enhance the quality of the final result. Experimental results show that RustFlow outperforms most of the latest baseline approaches, achieving an average improvement of 50.67% in translation performance compared to the base LLM. This work offers a novel technical approach and practical support for efficient and reliable cross-language code migration.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"33 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145316543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel probabilistic linguistic group decision-making method driven by DEA cross-efficiency and trust relationship","authors":"Feifei Jin, Shuyan Guo, Jinpei Liu","doi":"10.1007/s10489-025-06696-8","DOIUrl":"10.1007/s10489-025-06696-8","url":null,"abstract":"<div><p>In this paper, a new group decision-making (GDM) method is proposed to improve the quality and efficiency of decision-making. This method considers the degree of preference of decision makers (DMs) for different linguistic terms and adopts the probabilistic linguistic preference relations (PLPRs) model. First, a multiplicative consistency adjustment procedure is proposed to obtain a PLPR with acceptable consistency. Then, the trust matrix among experts is used to determine the weight vector of experts and realize the effective integration of information. After obtaining the collective PLPR, a DEA cross-efficiency model is designed to seek the target decision-making units (DMUs), which are the most efficient in the production possibility set. In addition, an integrated GDM method is designed to rank all alternatives adequately. Finally, the numerical analysis is carried out using the real estate company evaluation as an example. Comparative analysis with other methods quantifies the results, which enables us to evaluate the presented GDM method objectively.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 16","pages":""},"PeriodicalIF":3.5,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145316473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards integrated dashboards for better management of human-centric issues in software development","authors":"Liam Todd, Kashumi Madampe, Hourieh Khalajzadeh, Mojtaba Shahin, John Grundy","doi":"10.1007/s10515-025-00565-x","DOIUrl":"10.1007/s10515-025-00565-x","url":null,"abstract":"<div><p>GitHub and Jira projects typically contain many issues and issue comments used to track project tasks and defects. An important class of issues that needs appropriate consideration is called “<i>human-centric issues</i>”. These issues relate to different human characteristics of end users that need to be identified, tracked and managed differently from traditional technical-related issues. Current management of these human-centric issues during defect management is limited. We introduce a novel dashboard – the (Human-centric Issue Visualiser – HCIV) that categorises and tags these HCIss. We built HCIV prototypes for the two platforms, GitHub and Jira. These tag issues and present them in various visual forms to software practitioners. Using the dashboard, human-centric issues can be prioritised and tracked, and machine learning-generated classifications can be overridden. To reflect these interactions, associated GitHub and Jira issue tags are updated while the user interacts with our dashboard. The user evaluations of our dashboard prototypes show their potential for human-centric issue management. A demo of the GitHub version of the tool being used can be viewed at https://youtu.be/v49aiRiDIPs, and the Jira version can be viewed at https://youtu.be/qQM72SErmqs.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"33 1","pages":""},"PeriodicalIF":3.1,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145316539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Seasonal Characterisation of Sonar Performance for Effective Underwater Surveillance in the Marmara Sea","authors":"Murat Murat, Ugur Kesen","doi":"10.1049/rsn2.70085","DOIUrl":"https://doi.org/10.1049/rsn2.70085","url":null,"abstract":"<p>This study analyses sonar performance for underwater object detection in four regions of the Marmara Sea, using oceanographic data from the Turkish Naval Forces and open source datasets. Simulations were conducted with LYBIN acoustic modelling software across four seasons (January, May, July and October), evaluating variable-depth sonar (VDS) and hull-mounted sonar (HMS) systems for coverage and detection performance. Results identified optimal sonar coverage zones, highlighting seasonal impacts on propagation, with temperature and salinity fluctuations directly influencing performance. Seasonal stratification in the Marmara Sea generates surface ducts and shadow zones that strongly constrain HMS performance, while VDS consistently mitigates these effects. Simulations demonstrate that VDS reduces shadowed areas by 25% across all seasons and regions, extending reliable detection ranges compared with HMS. The study provides a foundation for designing efficient underwater surveillance systems in the Marmara Sea, offering insights for optimising operational strategies. Future research should explore diverse marine conditions and sonar configurations to enhance detection capabilities.</p>","PeriodicalId":50377,"journal":{"name":"Iet Radar Sonar and Navigation","volume":"19 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/rsn2.70085","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145317759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}