"A preliminary conceptual structure for Computer-based Process Maturity Models, using a Cone-Based Conceptual Network and NIAM diagrams"
Francisco Ruiz-Lopez, Jean-Pierre Micaelli, Eric Bonjour, Javier Ortiz-Hernandez
Computer Standards & Interfaces, vol. 93, Article 103950. Published 26 November 2024. doi: 10.1016/j.csi.2024.103950

Abstract: Process maturity models support audit and assessment missions (AAMs) that focus on organizational routines. Under Traditional Process Maturity Models (TP2Ms), auditors rely on document-based surveys, whereas organizations produce data in their daily activities through Information Technologies (IT). How, then, can the gap between IT capabilities and AAMs be bridged? One answer is to design a Trace-Based System (TBS) that captures raw data from daily activity and converts it into transformed traces from which processes can be automatically reconstructed and their maturity evaluated. Despite its obvious practical value, this processing is not straightforward. A first phase must be carried out, which consists of modeling the conceptual structure of current TP2Ms and of possible future Computer-based Process Maturity Models (CP2Ms) based on TBSs. To achieve this goal, this paper proposes a Cone-Based Conceptual Network (CBCN) to give the big picture of TP2Ms' and CP2Ms' scopes, then models this CBCN using the Nijssen Information Analysis Method (NIAM) and verifies the semantic consistency of this preliminary conceptual structure. The result is an early first step in the development of computer-based (or trace-based) process maturity assessment tools. It gives auditors and IT specialists a big picture of the domain of interest and a map of the different knowledge areas they need to acquire and combine to perform AAMs.

{"title":"The use of large language models for program repair","authors":"Fida Zubair, Maryam Al-Hitmi, Cagatay Catal","doi":"10.1016/j.csi.2024.103951","DOIUrl":"10.1016/j.csi.2024.103951","url":null,"abstract":"<div><div>Large Language Models (LLMs) have emerged as a promising approach for automated program repair, offering code comprehension and generation capabilities that can address software bugs. Several program repair models based on LLMs have been developed recently. However, findings and insights from these efforts are scattered across various studies, lacking a systematic overview of LLMs' utilization in program repair. Therefore, this Systematic Literature Review (SLR) was conducted to investigate the current landscape of LLM utilization in program repair. This study defined seven research questions and thoroughly selected 41 relevant studies from scientific databases to explore these questions. The results showed the diverse capabilities of LLMs for program repair. The findings revealed that Encoder-Decoder architectures emerged as the most common LLM design for program repair tasks and that mostly open-access datasets were used. Several evaluation metrics were applied, primarily consisting of accuracy, exact match, and BLEU scores. Additionally, the review investigated several LLM fine-tuning methods, including fine-tuning on specialized datasets, curriculum learning, iterative approaches, and knowledge-intensified techniques. These findings pave the way for further research on utilizing the full potential of LLMs to revolutionize automated program repair.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"93 ","pages":"Article 103951"},"PeriodicalIF":4.1,"publicationDate":"2024-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142747598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Enhancing drone network resilience: Investigating strategies for k-connectivity restoration"
Mustafa Asci, Zuleyha Akusta Dagdeviren, Vahid Khalilpour Akram, Huseyin Ugur Yildiz, Orhan Dagdeviren, Bulent Tavli
Computer Standards & Interfaces, vol. 92, Article 103941. Published 17 November 2024. doi: 10.1016/j.csi.2024.103941

Abstract: Drones have recently become more popular due to technological improvements that have made them useful in many industries, including agriculture, emergency services, and military operations. Coordinating communication amongst drones is often required for the efficient performance of missions. With an emphasis on building robust k-connected networks and restoration procedures, this paper investigates the relevance of connectivity in drone swarms. Specifically, we tackle the k-connectivity restoration problem, which aims to create k-connected networks by moving the drones as little as possible. We propose four novel approaches: an integer programming model, an integer programming-based heuristic, a node converging heuristic, and a cluster moving heuristic. Through extensive measurements taken from various drone networking setups, we provide a comparative analysis of the proposed approaches. Our evaluations reveal that the drone movements produced by the integer programming-based heuristic are nearly the same as those of the original mathematical formulation, whereas the other heuristics are favorable in terms of execution time.

"Grammar-obeying program synthesis: A novel approach using large language models and many-objective genetic programming"
Ning Tao, Anthony Ventresque, Vivek Nallur, Takfarinas Saber
Computer Standards & Interfaces, vol. 92, Article 103938. Published 14 November 2024. doi: 10.1016/j.csi.2024.103938

Abstract: Program synthesis is an important challenge that has attracted significant research interest, especially in recent years with advances in Large Language Models (LLMs). Although LLMs have demonstrated success in program synthesis, there remains a lack of trust in the generated code due to documented risks (e.g., code with known and risky vulnerabilities). It is therefore important to restrict the search space and avoid bad programs. In this work, pre-defined restricted Backus–Naur Form (BNF) grammars, considered 'safe', are used, and the focus is on identifying the most effective technique for grammar-obeying program synthesis, where the generated code must be correct and conform to the predefined grammar. It is shown that while LLMs perform well in generating correct programs, they often fail to produce code that adheres to the grammar. To address this, a novel Similarity-Based Many-Objective Grammar Guided Genetic Programming (SBMaOG3P) approach is proposed, leveraging the programs generated by LLMs in two ways: (i) as seeds following a grammar mapping process and (ii) as targets for similarity-measure objectives. Experiments on a well-known and widely used program synthesis dataset indicate that the proposed approach improves the rate of grammar-obeying program synthesis compared to various LLMs and to the state-of-the-art Grammar-Guided Genetic Programming (G3P). Additionally, the proposed approach significantly improved the best fitness value per run for 21 out of 28 problems compared to G3P.

"Evaluating large language models for software testing"
Yihao Li, Pan Liu, Haiyang Wang, Jie Chu, W. Eric Wong
Computer Standards & Interfaces, vol. 93, Article 103942. Published 13 November 2024. doi: 10.1016/j.csi.2024.103942

Abstract: Large language models (LLMs) have demonstrated significant prowess in code analysis and natural language processing, making them highly valuable for software testing. This paper conducts a comprehensive evaluation of LLMs applied to software testing, with a particular emphasis on test case generation, error tracing, and bug localization across twelve open-source projects. The advantages and limitations of using LLMs for these tasks are delineated, along with recommendations. Furthermore, we examine the phenomenon of hallucination in LLMs, its impact on software testing processes, and solutions to mitigate its effects. The findings contribute to a deeper understanding of integrating LLMs into software testing, providing insights that pave the way for enhanced effectiveness in the field.

"LAMB: An open-source software framework to create artificial intelligence assistants deployed and integrated into learning management systems"
Marc Alier, Juanan Pereira, Francisco José García-Peñalvo, Maria Jose Casañ, Jose Cabré
Computer Standards & Interfaces, vol. 92, Article 103940. Published 30 October 2024. doi: 10.1016/j.csi.2024.103940

Abstract: This paper presents LAMB (Learning Assistant Manager and Builder), an innovative open-source software framework designed to create AI-powered learning assistants tailored for integration into learning management systems (LMSs). LAMB addresses critical gaps in existing educational AI solutions by providing a framework designed specifically for the requirements of the education sector. It introduces novel features, including a modular architecture for seamless integration of AI assistants into existing LMS platforms and an intuitive interface that lets educators create custom AI assistants without coding skills. Unlike existing AI tools in education, LAMB provides a comprehensive framework that addresses privacy concerns, ensures alignment with institutional policies, and promotes the use of authoritative sources. LAMB leverages large language models and associated generative artificial intelligence technologies to create intelligent learning assistants that enhance educational experiences by providing personalized learning support grounded in clear directions and authoritative sources of information. Key features of LAMB include its modular architecture, which supports prompt engineering, retrieval-augmented generation, and the creation of extensive knowledge bases from diverse educational content, including video sources. The development and deployment of LAMB were iteratively refined using a minimum viable product approach, exemplified by the learning assistant "Macroeconomics Study Coach," which effectively integrated lecture transcriptions and other course materials to support student inquiries. Initial validations in various educational settings demonstrate the potential of learning assistants created with LAMB to enhance teaching methodologies, increase student engagement, and provide personalized learning experiences. The system's usability, scalability, security, and interoperability with existing LMS platforms make it a robust solution for integrating artificial intelligence into educational environments. LAMB's open-source nature encourages collaboration and innovation among educators, researchers, and developers, fostering a community dedicated to advancing the role of artificial intelligence in education. This paper outlines the system architecture, implementation details, use cases, and the significant benefits and challenges encountered, offering valuable insights for future developments in artificial intelligence assistants for any sector.

{"title":"A lightweight finger multimodal recognition model based on detail optimization and perceptual compensation embedding","authors":"Zishuo Guo, Hui Ma, Ao Li","doi":"10.1016/j.csi.2024.103937","DOIUrl":"10.1016/j.csi.2024.103937","url":null,"abstract":"<div><div>Multimodal biometric recognition technology has attracted the attention of many scholars due to its higher security and stability than single-modal recognition, but its additional parameter quantity and computational cost have brought challenges to the lightweight deployment of the model. In order to meet the needs of a wider range of application scenarios, this paper proposes a lightweight model DPNet using fingerprint and finger vein images for multimodal recognition, which adopts a double-branch lightweight feature extraction structure combining detail optimization and perception compensation. Among them, the detail extraction optimization branch uses multi-scale dimensionality reduction filtering to obtain low-redundant detail information, and combines the depth extension operation to enhance the generalization ability of detail features. The perception compensation branch expands and compensates the model's perceptual field of view through lightweight spatial location query and global information attention. In addition, this paper designs a perceptual feature embedding method to embed perceptual compensation information in the way of importance adjustment to improve the consistency of embedded features. The ABFM fusion module is proposed to carry out multi-level lightweight and deep interactive fusion of the extracted finger modal features from the global to the spatial region, so as to improve the degree and utilization rate of feature fusion. In this paper, the model recognition performance and lightweight advantages are verified on three multimodal datasets. Experimental results show that the proposed model achieves the most advanced lightweight effect and recognition performance in the experimental comparison of all datasets.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103937"},"PeriodicalIF":4.1,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142573008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing a behavioural cybersecurity strategy: A five-step approach for organisations","authors":"Tommy van Steen","doi":"10.1016/j.csi.2024.103939","DOIUrl":"10.1016/j.csi.2024.103939","url":null,"abstract":"<div><div>With cybercriminals’ increased attention for human error as attack vector, organisations need to develop strategies to address behavioural risks if they want to keep their organisation secure. The traditional focus on awareness campaigns does not seem suitable for this goal and other avenues of applying the behavioural sciences to this field need to be explored. This paper outlines a five-step approach to developing a behavioural cybersecurity strategy to address this issue. The five steps consist of first deciding whether a solely technical solution is feasible before turning to nudging and affordances, cybersecurity training, and behavioural change campaigns for specific behaviours. The final step is to develop and implement a feedback loop that is used to assess the effectiveness of the strategy and inform organisations about next steps that can be taken. Beyond outlining the five-step approach, a research agenda is discussed aimed at strengthening each of the five steps and helping organisations in becoming more cybersecure by implementing a behavioural cybersecurity strategy.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"92 ","pages":"Article 103939"},"PeriodicalIF":4.1,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142593392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A traceable and revocable decentralized attribute-based encryption scheme with fully hidden access policy for cloud-based smart healthcare"
Yue Dai, Lulu Xue, Bo Yang, Tao Wang, Kejia Zhang
Computer Standards & Interfaces, vol. 92, Article 103936. Published 19 October 2024. doi: 10.1016/j.csi.2024.103936

Abstract: Smart healthcare is an emerging technology for enabling interaction between patients and medical personnel, medical institutions, and medical devices using advanced Internet of Things (IoT) technologies. It has attracted significant attention from researchers because of the convenience of storing and sharing electronic medical records (EMRs) in the cloud. Given that a patient's EMR contains sensitive personal information, it must be encrypted before being uploaded to the cloud. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) has been proposed as a solution for data confidentiality and fine-grained access control, allowing sensitive personal data to be shared without granting each user explicit, individual authorization. However, most CP-ABE schemes rely on a centralized mechanism, which may lead to performance bottlenecks and single points of failure, and they are also exposed to the risks of key abuse and privacy breaches in smart healthcare applications. To this end, this paper proposes a traceable and revocable decentralized attribute-based encryption scheme with a fully hidden access policy (TR-HP-DABE). First, to overcome user privacy leakage and single points of failure, a fully hidden access policy is established across multiple attribute authorities. Second, to prevent key abuse, TR-HP-DABE achieves tracing and revocation of malicious users by using Key Encryption Key (KEK) trees and updating part of the ciphertext. Furthermore, online/offline encryption and verifiable outsourced decryption are applied to improve efficiency in practical smart healthcare. Our analysis proves the security and traceability of TR-HP-DABE, and the performance evaluation shows that it is more efficient than several existing, typical schemes.

"MARISMA: A modern and context-aware framework for assessing and managing information cybersecurity risks"
Luis E. Sánchez, Antonio Santos-Olmo, David G. Rosado, Carlos Blanco, Manuel A. Serrano, Haralambos Mouratidis, Eduardo Fernández-Medina
Computer Standards & Interfaces, vol. 92, Article 103935. Published 10 October 2024. doi: 10.1016/j.csi.2024.103935

Abstract: In a globalised world dependent on information technology, ensuring adequate protection of an organisation's information assets has become a decisive factor in the longevity of its operations. This is especially important when those organisations are critical infrastructures that provide essential services to nations and their citizens. To protect these assets, however, we must first be able to understand the risks to which they are subject and how to manage them properly. Understanding and managing those risks requires acknowledging that organisations have changed: they now rely increasingly on information assets, which in many cases are shared with other organisations. Such reliance and interconnectivity mean that risks are dynamic and constantly changing, and that their mitigation depends not only on the organisation's own controls but also on the controls put in place by the organisations with which it shares those assets. Taking these requirements as essential, we have reviewed the state of the art and concluded that current risk analysis and management systems are unable to meet all the needs inherent in this dynamic and evolving risk environment. This gap calls for novel approaches that build on the foundations of risk management but are adapted to the new challenges.

This article fills that gap by introducing MARISMA, a novel security risk analysis and management framework. MARISMA is oriented towards dynamic and adaptive risk management and considers external factors such as associative risks between organisations. It also contributes to the state of the art through newly developed mechanisms for knowledge reuse and dynamic learning. An important advantage of MARISMA is that the connections between its elements make it possible to reduce the subjectivity inherent in classical risk analysis systems, generating suggestions that translate perceived security risks into real security risks. The framework comprises a reusable meta-pattern of elements and their interdependencies, a supporting method that guides the entire process, and a cloud-based tool that automates data management and risk methods. MARISMA has been applied in many companies from different countries and sectors (government, maritime, energy, and pharmaceutical). In this paper, we demonstrate its applicability through a real-world case study involving a company in the technology sector.
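
The framework's central notion that risk values are dynamic and depend both on an organisation's own controls and on the posture of partners sharing its assets can be illustrated with a deliberately simplified scoring toy. The scales, weights, and propagation rule below are invented for illustration and are not MARISMA's actual method.

```python
from dataclasses import dataclass

@dataclass
class AssetRisk:
    """A toy risk record: likelihood and impact on 1-5 scales."""
    name: str
    likelihood: float        # 1 (rare) .. 5 (almost certain)
    impact: float            # 1 (negligible) .. 5 (critical)
    control_strength: float  # 0 (no controls) .. 1 (fully mitigated)

def own_risk(asset: AssetRisk) -> float:
    """Residual risk after the organisation's own controls."""
    return asset.likelihood * asset.impact * (1.0 - asset.control_strength)

def associative_risk(asset: AssetRisk, partner_controls: list[float]) -> float:
    """Inflate residual risk when partners sharing the asset are weakly protected."""
    if not partner_controls:
        return own_risk(asset)
    weakest = min(partner_controls)           # a chain is as strong as its weakest link
    return own_risk(asset) * (2.0 - weakest)  # up to 2x if a partner has no controls

shared_service = AssetRisk("shared records service", likelihood=3, impact=5,
                           control_strength=0.7)
print(round(own_risk(shared_service), 2))                      # 4.5
print(round(associative_risk(shared_service, [0.9, 0.4]), 2))  # 7.2
```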