{"title":"Time series correlated key–value data collection with local differential privacy","authors":"Yuling Luo, Yali Wan, Xue Ouyang, Junxiu Liu, Qiang Fu, Sheng Qin, Ziqi Yuan, Tinghua Hu","doi":"10.1016/j.cose.2025.104610","DOIUrl":"10.1016/j.cose.2025.104610","url":null,"abstract":"<div><div>The data generated by users in various scenarios, such as video-sharing applications or smart home energy systems, requires robust privacy protection due to its sensitive nature. This includes estimating user behaviour over time, such as the proportion of users watching video, the average watching ratio, or household energy consumption and average electricity usage. After privacy protection is applied, the processed data is used to analyse user behaviour and optimize systems. However, this specific requirement for high accuracy in frequency and mean estimation after privacy protection is not effectively addressed by existing methods. To fill this gap, the Time Correlated Key–Value with Local Differential Privacy (TSCKV) is proposed in this paper. A tighter privacy budget composition bound is obtained by a perturbation scheme that exploits key–value (<span><math><mrow><mi>k</mi><mo>−</mo><mi>v</mi></mrow></math></span>) pair correlations while sacrificing some of the value data. By setting a threshold, values that change below it can be set to zero directly, saving the privacy budget. Estimators and correctors for the <span><math><mrow><mi>k</mi><mo>−</mo><mi>v</mi></mrow></math></span> pairs are proposed by this work. Using the real Kuairec dataset, experiments show that the overall statistical utility of TSCKV, including frequency and mean estimation, is higher than that of the time series data mechanism alone and the <span><math><mrow><mi>k</mi><mo>−</mo><mi>v</mi></mrow></math></span> pair mechanism with simple privacy budget allocation. Additionally, TSCKV achieves more accurate early frequency estimation compared to the static <span><math><mrow><mi>k</mi><mo>−</mo><mi>v</mi></mrow></math></span> pair correlated perturbation mechanism.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"157 ","pages":"Article 104610"},"PeriodicalIF":5.4,"publicationDate":"2025-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144809542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MSNFuzz: Multi-criteria state-sensitive network protocol fuzzing","authors":"Yuqi Zhai , Rui Ma , Zheng Zhang , Siqi Zhao , Yuche Yang","doi":"10.1016/j.cose.2025.104621","DOIUrl":"10.1016/j.cose.2025.104621","url":null,"abstract":"<div><div>Existing protocol fuzzing techniques suffer a lot from lacking state guidance on seed evaluation during seed selection and energy allocation. That reduces fuzzing efficiency and effectiveness. We thus conduct a research focusing on seed evaluation in grey-box protocol fuzzing and propose a multi-criteria state-sensitive network protocol fuzzing method named MSNFuzz. To improve seed evaluation, we firstly re-think and re-evaluate seed potential in protocol fuzzing and improve the evaluation by introducing fine-grained state-sensitive criteria. Based on the multi-criteria evaluation, a probability-based greedy algorithm is adopted to prioritize selecting promising seeds to better explore the state space of the protocol. Moreover, we also assign different mutation energies for seeds based on the occurrence frequency of its corresponding state to be selected. That allows for flexible adjustment of mutation energy. We further evaluate the performance of MSNFuzz by comparing with AFLNET, AFLNWE, StateAFL and NSFuzz, on 13 typical protocol programs from ProFuzzBench. The experimental results show that MSNFuzz discovers 17.7%, 57.7% and 30.0% more paths, 52.4%, 123.6% and 71.0% more crashes than AFLNET, AFLNWE, and StateAFL on average, and discovers 0.18% more paths and 1.8% less crashes than NSFuzz, which is the state-of-the-art but relatively heavy solution. Besides, MSNFuzz discovers 22.1% more states and 16.5% state transitions than AFLNET on average. That highlights MSNFuzz could improve the efficiency and effectiveness of fuzzing.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"158 ","pages":"Article 104621"},"PeriodicalIF":5.4,"publicationDate":"2025-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144893163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-domain deception for enhanced security in automotive networks","authors":"Priva Chassem Kamdem , Alain Zemkoho , Laurent Njilla , M. Nkenlifack , Charles Kamhoua","doi":"10.1016/j.cose.2025.104600","DOIUrl":"10.1016/j.cose.2025.104600","url":null,"abstract":"<div><div>As the automotive industry increasingly integrates digital technologies, the threat of cyberattacks has emerged as a critical concern. In this work, we propose two distinct cyber deception strategies: reactive deception, which leverages multi-domain architectures to mitigate remote attacks, and proactive deception, focused on the strategic allocation of honeypots. The reactive approach addresses coordination and synchronization challenges in interconnected automotive systems by implementing an interdependent deception framework, thereby enhancing protection against multi-faceted cyber threats. In contrast, the proactive strategy employs a multi-objective optimization framework to allocate honeypots effectively, achieving Pareto Nash equilibrium solutions that balance competing defense objectives. We quantitatively compare our multi-domain reactive approach with traditional single-domain strategies, demonstrating significant defensive advantages in complex, cross-domain attack scenarios. Experimental results reveal that the multi-domain strategy improves defense effectiveness by approximately 19% compared to conventional methods.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"157 ","pages":"Article 104600"},"PeriodicalIF":5.4,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144830067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guiding cybersecurity compliance: An ontology for the NIS 2 directive","authors":"Gianpietro Castiglione , Daniele Francesco Santamaria , Giampaolo Bella , Laura Brisindi , Gaetano Puccia","doi":"10.1016/j.cose.2025.104617","DOIUrl":"10.1016/j.cose.2025.104617","url":null,"abstract":"<div><div>Security compliance constitutes a significant source of concern for many corporate decision-makers due to its complexity and cost. These may be due, first and foremost, to the style of juridical language, which is often challenging to translate into concrete operational procedures. To facilitate such a translation and ultimately optimise the compliance effort, this article presents “NIS2Onto”, an <em>Web Ontology Language</em> (OWL) ontology designed to translate the <em>Network and Information Security Directive</em> version 2 (NIS 2) into an ontological format aimed to favour unambiguous understanding and security operations of cybersecurity professionals, legal experts, and all organisational stakeholders. Through the semantic representation of the NIS 2 entities, relationships, and security measures, NIS2Onto enables automated compliance verification, streamlined risk assessments, and effective policy implementation. Our evaluation employs both metrical and qualitative analysis through a real case study to witness the robustness and practical applicability of NIS2Onto. The ontology not only supports the accurate interpretation of complex legal texts but also aids in systematically enforcing cybersecurity measures. Furthermore, the extensibility of NIS2Onto allows for integration with other regulatory frameworks, thereby fostering a comprehensive and unified approach to cybersecurity governance.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"157 ","pages":"Article 104617"},"PeriodicalIF":5.4,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144830064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A survey of cyber threat attribution: Challenges, techniques, and future directions","authors":"Nilantha Prasad , Abebe Diro , Matthew Warren , Mahesh Fernando","doi":"10.1016/j.cose.2025.104606","DOIUrl":"10.1016/j.cose.2025.104606","url":null,"abstract":"<div><div>The escalating sophistication of cyberattacks, exemplified by supply chain compromises, AI-driven obfuscation, and politically motivated campaigns, makes accurate attribution a critical yet elusive challenge for national security and economic stability. The inability to reliably trace attacks to their source undermines deterrence, distorts policy responses, and erodes trust in digital ecosystems. Traditional methods struggle with the sheer volume of digital evidence, rapidly evolving adversary tactics, and the inherent complexities of cross-border operations. Moreover, existing literature often provides fragmented analyses, focuses narrowly on cyber threat intelligence sharing or specific threat types, or predates significant advancements in AI/ML tailored for attribution. This survey offers a comprehensive, interdisciplinary review of cyber threat attribution, bridging these critical gaps by systematically analyzing its multifaceted dimensions: technical, legal, geopolitical, social, and economic. Employing a rigorous, PRISMA-ScR compliant methodology that included structured screening and quality assessment across six major databases, we critically appraise current techniques and identify a paradigm shift toward data-driven, intelligent approaches. A key contribution is our novel taxonomy, which structures attribution research by attribution confidence & granularity (the Level of attribution), analytical domains (the “How” and “Where” of evidence processing) and adversarial motivation & profile (the “Why” and “Who”), providing a crucial framework for systematic cross-study comparisons in a complex field. Our findings underscore the transformative potential of emerging AI/ML techniques, particularly graph neural networks, in automating analysis, identifying subtle patterns, and extracting crucial insights from vast datasets, thereby revolutionizing attribution accuracy. This research provides actionable insights for practitioners and policymakers, offering a comprehensive roadmap to advance cyber defense and foster a more resilient and secure global digital ecosystem.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"157 ","pages":"Article 104606"},"PeriodicalIF":5.4,"publicationDate":"2025-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144830066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Methodological reliability evaluation of trust and reputation management systems","authors":"Marek Janiszewski , Krzysztof Szczypiorski","doi":"10.1016/j.cose.2025.104620","DOIUrl":"10.1016/j.cose.2025.104620","url":null,"abstract":"<div><div>Trust and Reputation Management (TRM) systems are used in various environments, and their main goal is to ensure efficiency despite malicious or unreliable agents striving to maximize their usefulness or to disrupt the operations of other agents. However, TRM systems can be targeted by specific attacks, which can reduce the efficiency of the environment. The impact of such attacks on a specific system cannot be easily anticipated and evaluated. The article presents models of the environment and the TRM system operating in the environment. On that basis, measures of the reliability of TRM systems were defined to enable a comprehensive and quantitative evaluation of the resistance of such systems to attacks. The presented methodology is then used to evaluate an example TRM system (RefTRM), through the created and briefly described tool TRM-RET (Trust and Reputation Management – Reliability Evaluation Testbed). The results indicate that the system's specific properties can be indicated on the basis of the tests and metrics proposed; for example, the RefTRM system is quite vulnerable to an attack tailored to the parameters used by this system.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"158 ","pages":"Article 104620"},"PeriodicalIF":5.4,"publicationDate":"2025-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144866728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient and commercial proof of storage scheme supporting dynamic data updates","authors":"Zhenwu Xu , Xingshu Chen , Liangguo Chen , Xiao Lan , Hao Ren , Changxiang Shen","doi":"10.1016/j.cose.2025.104609","DOIUrl":"10.1016/j.cose.2025.104609","url":null,"abstract":"<div><div>With the advancement of distributed computing technology, cloud services have achieved significant breakthroughs in both computing and storage. To fully leverage the achievements in these two areas, a multitude of intelligent endpoints (clients) are being connected to the cloud. Although this arrangement minimizes the expenses associated with constructing and maintaining cloud infrastructure, the integrity of remote data is at considerable risk in this scenario. Current data integrity verification schemes can be categorized into two types: one that does not take storage duration into account (i.e., pay-as-you-go model) and another that does. Unfortunately, these current schemes have gained notoriety for their complex computing requirements. Furthermore, existing research has not made significant progress in optimizing the efficiency of data integrity audit, particularly when it comes to audit large-batches of data. In light of these challenges, we propose an efficient and commercial proof of storage scheme supporting dynamic data updates (ECPOS-SDDU). As per our knowledge, our proposal is the first that not only aligns with the pay-as-you-go model but also enables low-computation and low-storage terminals(client)/third-party auditors(TPA) to perform large-batches audit. The ECPOS-SDDU not only ensures the lightweight client and TPA can conduct efficient audits on data integrity but also maintains the privacy of the data owner (i.e., the client data) amidst third-party audit processes. Besides this, we have designed the large-batches auditing based on the knowledge of vector inner products and polynomials. Whether the verifier is a client or a TPA, they can configure parameters suitable for their needs to audit more data blocks with an appropriate number of communications. Equally important, we have designed an efficient data structure to support the dynamic operation of data, which further highlights the superiority of the solution and enhances its comprehensiveness. Through both theoretical and experimental analysis, we provide evidence of the protocol’s security, practicality and superiority, in this discourse.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"157 ","pages":"Article 104609"},"PeriodicalIF":5.4,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144826495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A personalized and semantic-aware approach for trajectory protection","authors":"Yong-Yi Chen, Yu-Ling Hsueh","doi":"10.1016/j.cose.2025.104608","DOIUrl":"10.1016/j.cose.2025.104608","url":null,"abstract":"<div><div>State-of-the-art privacy protection research often aims to reduce the computational costs required of entire trajectories by typically omitting less significant location information. Considering locations where users frequently stay for a longer duration or frequently visit as stay points, techniques such as location generalization, location deception, location perturbation, <span><math><mi>k</mi></math></span>-anonymity, cryptography, and the involvement of a trusted third party (TTP for short) are employed to achieve privacy protection at these stay points. Semantic-aware trajectory privacy methods typically either categorize semantic values or use user role differences in locations to establish LBS queries with similar or different semantic types of point of interest (POI for short) to protect users’ semantic privacy. However, techniques such as generalization, deception, and perturbation often yield less accurate results. The <span><math><mi>k</mi></math></span>-anonymity technique requires handling numerous service requests, cryptography entails significant computational costs, and TTP might become a target for attacks leading to severe privacy breaches. Identifying stay points or user role differences can only be done after the trajectory has been completely established. Classifying semantic values cannot effectively achieve the semantic privacy users require. To address these shortcomings and establish spatial–temporal correlations between trajectories and semantic values, we propose a novel personalized semantic-aware obfuscation scheme (PSAS for short) combined with differential privacy. PSAS utilizes Markov chains to establish spatial–temporal correlations and to predict user movement points to reduce query frequency. This study introduces a novel graph structure to represent semantic relationships, and calculates semantic importance using term frequency-inverse document frequency (TF-IDF for short). By adopting differential privacy, trajectories are added with noise based on different location privacy budgets to protect users’ privacy of locations, POIs, and trajectories. Experimental results demonstrate that PSAS effectively and comprehensively protects trajectory data and semantic privacy without sacrificing quality of service (QoS for short).</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"157 ","pages":"Article 104608"},"PeriodicalIF":5.4,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144779795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A login page transparency and visual similarity-based zero-day phishing defense protocol","authors":"Gaurav Varshney , Akanksha Raj , Divya Sangwan , Sharif Abuadbba , Rina Mishra , Yansong Gao","doi":"10.1016/j.cose.2025.104598","DOIUrl":"10.1016/j.cose.2025.104598","url":null,"abstract":"<div><div>Phishing is a prevalent cyberattack that uses look-alike websites to deceive users into revealing sensitive information. Numerous efforts have been made by the Internet community and security organizations to detect, prevent, or train users to avoid falling victim to phishing attacks. Most of this research over the years has been highly diverse and application-oriented, often serving as standalone solutions for HTTP clients, servers, or third parties. However, limited work has been done to develop a comprehensive or proactive protocol-oriented solution to effectively counter phishing attacks. Inspired by the concept of certificate transparency, which allows certificates issued by Certificate Authorities (CAs) to be publicly verified by clients, thereby enhancing transparency, we propose a concept called Page Transparency (PT) for the web. The proposed PT requires login pages that capture users’ sensitive information to be publicly logged via PLS and made available to web clients for verification. The pages are verified to be logged using cryptographic proofs. Since all pages are logged on a PLS and visually compared with existing pages through a comprehensive visual page-matching algorithm, it becomes impossible for an attacker to register a deceptive look-alike page on the PLS and receive the cryptographic proof required for client verification. All implementations occur on the client side, facilitated by the introduction of a new HTTP PT header, eliminating the need for platform-specific changes or the installation of third-party solutions for phishing prevention.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"158 ","pages":"Article 104598"},"PeriodicalIF":5.4,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144866729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing security requirements specification with SECRET-SCORE: A template-driven and ontology-based approach","authors":"Hiba Hnaini , Raúl Mazo , Paola Vallejo , Andrés López , Joël Champeau","doi":"10.1016/j.cose.2025.104605","DOIUrl":"10.1016/j.cose.2025.104605","url":null,"abstract":"<div><div>In our rapidly changing world, where technology is integral to every aspect of our lives, ensuring our systems’ security is paramount. As industries become increasingly interconnected, the risk of security vulnerabilities and targeted attacks increases. Establishing robust security requirements is crucial to safeguard sensitive information and protect against malicious threats. To simplify and improve the quality of these requirements, researchers have proposed templates or boilerplates that can guide requirements engineers when defining requirements. However, this approach can help define the requirement structure without suggesting what security requirements to define to have a well-secured system. This paper proposes a guided strategy that combines (i) SECRET (SECurity REquirements specification Template) for a guided specification of each requirement and (ii) SCORE (Security Criteria Ontology for security Requirements Engineering) to suggest additional security requirements. We implemented the SECRET-SCORE approach using an autocomplete service that connects the SECRET template to the SCORE ontology. We then used the service to create a new language for security requirements in the VariaMos online tool. Finally, to test the usability of the implemented language, we conducted a usability test that reported high results in usability and user satisfaction with the developed tool.</div></div>","PeriodicalId":51004,"journal":{"name":"Computers & Security","volume":"157 ","pages":"Article 104605"},"PeriodicalIF":5.4,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144763768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}