{"title":"A Survey-Based Quantitative Analysis of Stress Factors and Their Impacts Among Cybersecurity Professionals","authors":"Sunil Arora, John D. Hastings","doi":"arxiv-2409.12047","DOIUrl":"https://doi.org/arxiv-2409.12047","url":null,"abstract":"This study investigates the prevalence and underlying causes of work-related stress and burnout among cybersecurity professionals using a quantitative survey approach guided by the Job Demands-Resources model. Analysis of responses from 50 cybersecurity practitioners reveals an alarming reality: 44% report experiencing severe work-related stress and burnout, while an additional 28% are uncertain about their condition. The demanding nature of cybersecurity roles, unrealistic expectations, and unsupportive organizational cultures emerge as primary factors fueling this crisis. Notably, 66% of respondents perceive cybersecurity jobs as more stressful than other IT positions, with 84% facing additional challenges due to the pandemic and recent high-profile breaches. The study finds that most cybersecurity experts are reluctant to report their struggles to management, perpetuating a cycle of silence and neglect. To address this critical issue, the paper recommends that organizations foster supportive work environments, implement mindfulness programs, and address systemic challenges. By prioritizing the mental health of cybersecurity professionals, organizations can cultivate a more resilient and effective workforce to protect against an ever-evolving threat landscape.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"232 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combating Phone Scams with LLM-based Detection: Where Do We Stand?","authors":"Zitong Shen, Kangzhong Wang, Youqian Zhang, Grace Ngai, Eugene Y. Fu","doi":"arxiv-2409.11643","DOIUrl":"https://doi.org/arxiv-2409.11643","url":null,"abstract":"Phone scams pose a significant threat to individuals and communities, causing substantial financial losses and emotional distress. Despite ongoing efforts to combat these scams, scammers continue to adapt and refine their tactics, making it imperative to explore innovative countermeasures. This research explores the potential of large language models (LLMs) to provide detection of fraudulent phone calls. By analyzing the conversational dynamics between scammers and victims, LLM-based detectors can identify potential scams as they occur, offering immediate protection to users. While such approaches demonstrate promising results, we also acknowledge the challenges of biased datasets, relatively low recall, and hallucinations that must be addressed for further advancement in this field.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artemis: Efficient Commit-and-Prove SNARKs for zkML","authors":"Hidde Lycklama, Alexander Viand, Nikolay Avramov, Nicolas Küchler, Anwar Hithnawi","doi":"arxiv-2409.12055","DOIUrl":"https://doi.org/arxiv-2409.12055","url":null,"abstract":"The widespread adoption of machine learning (ML) in various critical applications, from healthcare to autonomous systems, has raised significant concerns about privacy, accountability, and trustworthiness. To address these concerns, recent research has focused on developing zero-knowledge machine learning (zkML) techniques that enable the verification of various aspects of ML models without revealing sensitive information. Recent advances in zkML have substantially improved efficiency; however, these efforts have primarily optimized the process of proving ML computations correct, often overlooking the substantial overhead associated with verifying the necessary commitments to the model and data. To address this gap, this paper introduces two new Commit-and-Prove SNARK (CP-SNARK) constructions (Apollo and Artemis) that effectively address the emerging challenge of commitment verification in zkML pipelines. Apollo operates on KZG commitments and requires white-box use of the underlying proof system, whereas Artemis is compatible with any homomorphic polynomial commitment and only makes black-box use of the proof system. As a result, Artemis is compatible with state-of-the-art proof systems without trusted setup. We present the first implementation of these CP-SNARKs, evaluate their performance on a diverse set of ML models, and show substantial improvements over existing methods, achieving significant reductions in prover costs and maintaining efficiency even for large-scale models. For example, for the VGG model, we reduce the overhead associated with commitment checks from 11.5x to 1.2x. Our results suggest that these contributions can move zkML towards practical deployment, particularly in scenarios involving large and complex ML models.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Empowering Visual Artists with Tokenized Digital Assets with NFTs","authors":"Ruiqiang Li, Brian Yecies, Qin Wang, Shiping Chen, Jun Shen","doi":"arxiv-2409.11790","DOIUrl":"https://doi.org/arxiv-2409.11790","url":null,"abstract":"Non-Fungible Tokens (NFTs) have had a transformative impact on the visual arts industry; this study examines the nexus between empowering art practices and leveraging blockchain technology. First, we establish the context for this study by introducing some basic but critical technological aspects and affordances of the blockchain domain. Second, we revisit the creative practices involved in producing traditional artwork, covering various types, production processes, trading, and monetization methods. Third, we introduce and define the key fundamentals of the blockchain ecosystem, including its structure, consensus algorithms, smart contracts, and digital wallets. Fourth, we narrow the focus to NFTs, detailing their history, mechanics, lifecycle, and standards, as well as their application in the art world. In particular, we outline the key processes for minting and trading NFTs in various marketplaces and discuss the relevant market dynamics and pricing. We also consider major security concerns, such as wash trading, to underscore some of the central cybersecurity issues facing this domain. Finally, we conclude by considering future research directions, emphasizing improvements in user experience, security, and privacy. Through this innovative research overview, which includes input from creative industry and cybersecurity domain expertise, we offer some new insights into how NFTs can empower visual artists and reshape the wider copyright industries.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hard-Label Cryptanalytic Extraction of Neural Network Models","authors":"Yi Chen, Xiaoyang Dong, Jian Guo, Yantian Shen, Anyu Wang, Xiaoyun Wang","doi":"arxiv-2409.11646","DOIUrl":"https://doi.org/arxiv-2409.11646","url":null,"abstract":"The machine learning problem of extracting neural network parameters has been studied for nearly three decades. Functionally equivalent extraction is a crucial goal for research on this problem. When the adversary has access to the raw output of neural networks, various attacks, including those presented at CRYPTO 2020 and EUROCRYPT 2024, have successfully achieved this goal. However, this goal is not achieved when neural networks operate under a hard-label setting where the raw output is inaccessible. In this paper, we propose the first attack that theoretically achieves functionally equivalent extraction under the hard-label setting, which applies to ReLU neural networks. The effectiveness of our attack is validated through practical experiments on a wide range of ReLU neural networks, including neural networks trained on two real benchmarking datasets (MNIST, CIFAR10) widely used in computer vision. For a neural network consisting of $10^5$ parameters, our attack only requires several hours on a single core.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"72 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What to Consider When Considering Differential Privacy for Policy","authors":"Priyanka Nanayakkara, Jessica Hullman","doi":"arxiv-2409.11680","DOIUrl":"https://doi.org/arxiv-2409.11680","url":null,"abstract":"Differential privacy (DP) is a mathematical definition of privacy that can be widely applied when publishing data. DP has been recognized as a potential means of adhering to various privacy-related legal requirements. However, it can be difficult to reason about whether DP may be appropriate for a given context due to tensions that arise when it is brought from theory into practice. To aid policymaking around privacy concerns, we identify three categories of challenges to understanding DP along with associated questions that policymakers can ask about the potential deployment context to anticipate its impacts.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GReDP: A More Robust Approach for Differential Privacy Training with Gradient-Preserving Noise Reduction","authors":"Haodi Wang, Tangyu Jiang, Yu Guo, Xiaohua Jia, Chengjun Cai","doi":"arxiv-2409.11663","DOIUrl":"https://doi.org/arxiv-2409.11663","url":null,"abstract":"Deep learning models have been extensively adopted in various domains due to their ability to represent hierarchical features, which highly rely on the training set and procedures. Thus, protecting the training process and deep learning algorithms is paramount in privacy preservation. Although Differential Privacy (DP) as a powerful privacy-preserving primitive has achieved satisfying results in deep learning training, the existing schemes still fall short in preserving model utility, i.e., they either invoke a high noise scale or inevitably harm the original gradients. To address the above issues, in this paper, we present a more robust approach for DP training called GReDP. Specifically, we compute the model gradients in the frequency domain and adopt a new approach to reduce the noise level. Unlike previous work, our GReDP only requires half of the noise scale compared to DPSGD [1] while keeping all the gradient information intact. We present a detailed analysis of our method both theoretically and empirically. The experimental results show that our GReDP works consistently better than the baselines on all models and training settings.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relax DARTS: Relaxing the Constraints of Differentiable Architecture Search for Eye Movement Recognition","authors":"Hongyu Zhu, Xin Jin, Hongchao Liao, Yan Xiang, Mounim A. El-Yacoubi, Huafeng Qin","doi":"arxiv-2409.11652","DOIUrl":"https://doi.org/arxiv-2409.11652","url":null,"abstract":"Eye movement biometrics is a secure and innovative identification method. Deep learning methods have shown good performance, but their network architecture relies on manual design and combined priori knowledge. To address these issues, we introduce neural architecture search (NAS) algorithms to the field of eye movement recognition and present Relax DARTS, an improvement of Differentiable Architecture Search (DARTS) that realizes more efficient network search and training. The key idea is to circumvent the issue of weight sharing by independently training the architecture parameters $\\alpha$ to achieve a more precise target architecture. Moreover, the introduction of module input weights $\\beta$ allows cells the flexibility to select inputs, alleviating the overfitting phenomenon and improving model performance. Results on four public databases demonstrate that Relax DARTS achieves state-of-the-art recognition performance. Notably, Relax DARTS exhibits adaptability to other multi-feature temporal classification tasks.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"212 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PAD-FT: A Lightweight Defense for Backdoor Attacks via Data Purification and Fine-Tuning","authors":"Yukai Xu, Yujie Gu, Kouichi Sakurai","doi":"arxiv-2409.12072","DOIUrl":"https://doi.org/arxiv-2409.12072","url":null,"abstract":"Backdoor attacks pose a significant threat to deep neural networks, particularly as recent advancements have led to increasingly subtle implantation, making the defense more challenging. Existing defense mechanisms typically rely on an additional clean dataset as a standard reference and involve retraining an auxiliary model or fine-tuning the entire victim model. However, these approaches are often computationally expensive and not always feasible in practical applications. In this paper, we propose a novel and lightweight defense mechanism, termed PAD-FT, that does not require an additional clean dataset and fine-tunes only a very small part of the model to disinfect the victim model. To achieve this, our approach first introduces a simple data purification process to identify and select the most-likely clean data from the poisoned training dataset. The self-purified clean dataset is then used for activation clipping and fine-tuning only the last classification layer of the victim model. By integrating data purification, activation clipping, and classifier fine-tuning, our mechanism PAD-FT demonstrates superior effectiveness across multiple backdoor attack methods and datasets, as confirmed through extensive experimental evaluation.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Log2graphs: An Unsupervised Framework for Log Anomaly Detection with Efficient Feature Extraction","authors":"Caihong Wang, Du Xu, Zonghang Li","doi":"arxiv-2409.11890","DOIUrl":"https://doi.org/arxiv-2409.11890","url":null,"abstract":"In the era of rapid Internet development, log data has become indispensable for recording the operations of computer devices and software. These data provide valuable insights into system behavior and necessitate thorough analysis. Recent advances in text analysis have enabled deep learning to achieve significant breakthroughs in log anomaly detection. However, the high cost of manual annotation and the dynamic nature of usage scenarios present major challenges to effective log analysis. This study proposes a novel log feature extraction model called DualGCN-LogAE, designed to adapt to various scenarios. It leverages the expressive power of large models for log content analysis and the capability of graph structures to encapsulate correlations between logs. It retains key log information while integrating the causal relationships between logs to achieve effective feature extraction. Additionally, we introduce Log2graphs, an unsupervised log anomaly detection method based on the feature extractor. By employing graph clustering algorithms for log anomaly detection, Log2graphs enables the identification of abnormal logs without the need for labeled data. We comprehensively evaluate the feature extraction capability of DualGCN-LogAE and the anomaly detection performance of Log2graphs using public log datasets across five different scenarios. Our evaluation metrics include detection accuracy and graph clustering quality scores. Experimental results demonstrate that the log features extracted by DualGCN-LogAE outperform those obtained by other methods on classic classifiers. Moreover, Log2graphs surpasses existing unsupervised log detection methods, providing a robust tool for advancing log anomaly detection research.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"88 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}