Proceedings of the 3rd Workshop on Cyber-Security Arms Race: Latest Articles

Your Smart Contracts Are Not Secure: Investigating Arbitrageurs and Oracle Manipulators in Ethereum
Pub Date: 2021-11-15 · DOI: 10.1145/3474374.3486916
Kevin Tjiam, Rui Wang, H. Chen, K. Liang
Abstract: Smart contracts on Ethereum enable billions of dollars to be transacted in a decentralized, transparent and trustless environment. However, adversaries lie in wait in the Dark Forest, ready to exploit any and all smart contract vulnerabilities in order to extract profits from unsuspecting victims in this new financial system. As the blockchain space moves at a breakneck pace, exploits of smart contract vulnerabilities evolve rapidly, and existing research quickly becomes obsolete. It is imperative that smart contract developers stay up to date on the most damaging current vulnerabilities and countermeasures, to ensure the security of users' funds and to collectively secure the future of Ethereum as a financial settlement layer. This work focuses on two smart contract vulnerabilities: transaction-ordering dependency and oracle manipulation. Combined, these two vulnerabilities have been exploited to extract hundreds of millions of dollars from smart contracts in the past year (2020-2021). For each of them, this paper presents: (1) a literature survey of recent (as of 2021) formal and informal sources; (2) a reproducible experiment as code demonstrating the vulnerability and, where applicable, countermeasures to mitigate it; and (3) analysis and discussion of the proposed countermeasures. To conclude, strengths, weaknesses and trade-offs of these countermeasures are summarised, suggesting directions for future research.
Citations: 7
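A note on the oracle-manipulation vulnerability the abstract names: contracts that read a spot price directly from an automated market maker (AMM) pool are exposed because a single large swap moves that price within one transaction. The sketch below is not code from the paper; it is a minimal, self-contained simulation of a constant-product pool with made-up pool sizes and token names, showing the price skew and hinting at why time-weighted average prices (TWAPs) are the usual countermeasure.

```python
# Minimal, self-contained sketch (not from the paper): why a spot-price
# oracle read from a constant-product AMM is manipulable in one transaction.
# Pool sizes and token names are hypothetical illustration values.

def swap(reserve_in: float, reserve_out: float, amount_in: float):
    """Constant-product swap x * y = k (fees ignored for clarity).
    Returns (amount_out, new_reserve_in, new_reserve_out)."""
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in
    new_reserve_out = k / new_reserve_in
    return reserve_out - new_reserve_out, new_reserve_in, new_reserve_out

def spot_price(reserve_token: float, reserve_usd: float) -> float:
    """Spot price a naive oracle would report: USD per token."""
    return reserve_usd / reserve_token

# Hypothetical pool: 1,000 TOKEN vs. 1,000,000 USD -> fair price 1,000 USD.
token_reserve, usd_reserve = 1_000.0, 1_000_000.0
print(f"price before manipulation: {spot_price(token_reserve, usd_reserve):,.0f} USD")

# Attacker dumps 500 (e.g. flash-loaned) TOKEN into the pool in the same
# transaction that later consults the oracle.
_, token_reserve, usd_reserve = swap(token_reserve, usd_reserve, 500.0)
print(f"price after one large swap: {spot_price(token_reserve, usd_reserve):,.0f} USD")

# A lending contract reading this spot price as collateral value would now
# undervalue TOKEN by roughly 2.25x; a time-weighted average price over many
# blocks cannot be moved this cheaply within a single transaction.
```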
Regulation TL;DR: Adversarial Text Summarization of Federal Register Articles
Pub Date: 2021-11-15 · DOI: 10.1145/3474374.3486917
Filipo Sharevski, Peter Jachim, Emma Pieroni
Abstract: Short on time and with a reduced attention span, people disengage from reading long texts with a "too long, didn't read" justification. While a useful heuristic for managing reading resources, we believe that "tl;dr" is prone to adversarial manipulation. In a seemingly noble effort to produce bite-sized segments of information that fit social media posts, an adversary could reduce a long text to a short but polarizing summary. In this paper we demonstrate adversarial text summarization that reduces long Federal Register texts to summaries with obvious liberal or conservative leanings. Contextualizing summaries to a political agenda is hardly new, but a barrage of polarizing "tl;dr" social media posts could derail the public debate about important public policy matters with an unprecedented lack of effort. We show and elaborate on example "tl;dr" posts to showcase a new and relatively unexplored avenue for information operations on social media.
Citations: 0
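To make the "adversarial tl;dr" idea concrete: a summarizer only has to bias which sentences survive in order to tilt the summary's leaning. The toy sketch below is not the authors' pipeline (which targets real Federal Register articles with actual summarization models); it uses a hypothetical keyword lexicon and a made-up sample text to show how an extractive summary can be steered toward a liberal or conservative framing.

```python
# Toy illustration (not the authors' method): an extractive "tl;dr" whose
# sentence ranking is steered by a partisan keyword lexicon. Lexicons and the
# sample text are hypothetical stand-ins for a Federal Register article.
import re

LEANING_LEXICON = {
    "liberal":      {"protections", "equity", "public health", "workers"},
    "conservative": {"burden", "overreach", "costs", "small businesses"},
}

def biased_tldr(text: str, leaning: str, k: int = 2) -> str:
    """Pick the k sentences that mention the most lexicon terms for `leaning`."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    lexicon = LEANING_LEXICON[leaning]
    scored = sorted(
        sentences,
        key=lambda s: sum(term in s.lower() for term in lexicon),
        reverse=True,
    )
    return " ".join(scored[:k])

article = (
    "The proposed rule updates reporting requirements for chemical facilities. "
    "Supporters argue it strengthens protections for workers and public health. "
    "Industry groups warn of compliance costs and regulatory overreach that "
    "could burden small businesses. "
    "The comment period closes in sixty days."
)

print("liberal tl;dr:     ", biased_tldr(article, "liberal"))
print("conservative tl;dr:", biased_tldr(article, "conservative"))
```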
The More, the Better: A Study on Collaborative Machine Learning for DGA Detection
Pub Date: 2021-09-24 · DOI: 10.1145/3474374.3486915
Arthur Drichel, Benedikt Holmes, Justus von Brandt, U. Meyer
Abstract: Domain generation algorithms (DGAs) prevent the connection between a botnet and its master from being blocked by generating a large number of domain names. Promising single-data-source approaches have been proposed for separating benign from DGA-generated domains. Collaborative machine learning (ML) can be used to enhance a classifier's detection rate, reduce its false positive rate (FPR), and improve its generalization capability to different networks. In this paper, we complement the research area of DGA detection by conducting a comprehensive collaborative learning study comprising a total of 13,440 evaluation runs. In two real-world scenarios we evaluate eleven different variations of collaborative learning using three different state-of-the-art classifiers. We show that collaborative ML can reduce the FPR by up to 51.7%. However, while collaborative ML is beneficial for DGA detection, not all approaches and classifier types profit equally. We round off our comprehensive study with a thorough discussion of the privacy threats implied by the different collaborative ML approaches.
Citations: 2
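For readers unfamiliar with the collaborative setting: the simplest variant pools labelled domains from several parties before training a single model. The sketch below is a minimal illustration under that assumption, not the paper's experimental setup; the domains, DGA "families", and party split are synthetic, whereas the study itself evaluates eleven collaboration variants with state-of-the-art classifiers on real-world data.

```python
# Minimal sketch (not the paper's setup): compare a DGA classifier trained on
# one party's data with one trained on data pooled from three parties.
# All domains and DGA "families" below are synthetic illustrations.
import random
import string

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

random.seed(0)
WORDS = ["mail", "shop", "cloud", "news", "bank", "photo", "travel", "game"]

def benign_domains(n):
    # Pseudo-benign domains: two dictionary words glued together.
    return ["".join(random.sample(WORDS, 2)) + ".com" for _ in range(n)]

def dga_domains(n, family):
    # Three toy DGA "families" with different character statistics.
    alphabets = [string.ascii_lowercase, "0123456789abcdef", "bcdfgaeiou"]
    return ["".join(random.choices(alphabets[family], k=16)) + ".net"
            for _ in range(n)]

def labelled_set(n, families):
    X = benign_domains(n) + [d for f in families for d in dga_domains(n, f)]
    y = [0] * n + [1] * n * len(families)
    return X, y

def train(X, y):
    # Character n-gram features + logistic regression as a simple classifier.
    clf = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000),
    )
    return clf.fit(X, y)

# Each party only observes one DGA family; the test set contains all three.
party_sets = [labelled_set(300, [f]) for f in range(3)]
X_test, y_test = labelled_set(300, [0, 1, 2])

single = train(*party_sets[0])
pooled_X = [d for X, _ in party_sets for d in X]
pooled_y = [l for _, y in party_sets for l in y]
pooled = train(pooled_X, pooled_y)

print("single-party accuracy:", round(single.score(X_test, y_test), 3))
print("pooled-data accuracy: ", round(pooled.score(X_test, y_test), 3))
```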
Multi-Stage Attack Detection via Kill Chain State Machines
Pub Date: 2021-03-26 · DOI: 10.1145/3474374.3486918
Florian Wilkens, Felix Ortmann, Steffen Haas, Matthias Vallentin, Mathias Fischer
Abstract: Today, human security analysts need to sift through large volumes of alerts during investigations. This alert fatigue results in failure to detect complex attacks, such as advanced persistent threats (APTs), because they manifest over long time frames and attackers tread carefully to evade detection mechanisms. In this paper, we contribute a new method to synthesize scenario graphs from state machines. We use the network direction to derive potential attack stages from single alerts and meta-alerts and model the resulting attack scenarios in a kill chain state machine (KCSM). Our algorithm yields a graphical summary of the attack, called an APT scenario graph, in which nodes represent the involved hosts and edges represent infection activity. We evaluate the feasibility of our approach by injecting an APT campaign into a network traffic data set containing both benign and malicious activity. Our approach generates a set of APT scenario graphs that contain the injected campaign while reducing the overall alert set by up to three orders of magnitude. This reduction makes it feasible for human analysts to effectively triage potential incidents.
Citations: 13
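The kill-chain idea can be illustrated with a toy state machine: an alert only counts when it fits the next expected stage for its victim host, and accepted transitions become the edges of a scenario graph. The sketch below is a simplified illustration, not the paper's KCSM algorithm or its APT scenario-graph construction; the stage names, hosts, and alerts are hypothetical.

```python
# Toy sketch (not the paper's KCSM): per-host kill-chain progression that
# drops out-of-order alerts and records accepted transitions as graph edges.
from collections import defaultdict

# Ordered kill-chain stages a host can progress through (hypothetical names).
STAGES = ["reconnaissance", "initial_compromise", "lateral_movement", "exfiltration"]

class KillChainStateMachine:
    def __init__(self):
        # Per victim host: index of the highest stage reached so far (-1 = none).
        self.progress = defaultdict(lambda: -1)
        # Scenario-graph edges: (attacking host, victim host, stage).
        self.edges = []

    def consume(self, alert):
        """Accept the alert if its stage is at most one step beyond the victim's
        current progress (repeats included); later-stage alerts lacking the
        supporting earlier stages are dropped as noise. Accepted alerts become
        scenario-graph edges."""
        src, dst, stage = alert["src"], alert["dst"], alert["stage"]
        idx = STAGES.index(stage)
        if idx <= self.progress[dst] + 1:
            self.progress[dst] = max(self.progress[dst], idx)
            self.edges.append((src, dst, stage))

    def scenario_graph(self):
        """Hosts involved in at least one accepted transition, plus the edges."""
        hosts = {h for e in self.edges for h in e[:2]}
        return hosts, self.edges

kcsm = KillChainStateMachine()
alerts = [
    {"src": "attacker", "dst": "web01", "stage": "reconnaissance"},
    {"src": "attacker", "dst": "web01", "stage": "initial_compromise"},
    {"src": "db02", "dst": "web01", "stage": "exfiltration"},   # out of order -> dropped
    {"src": "web01", "dst": "db02", "stage": "reconnaissance"},
    {"src": "web01", "dst": "db02", "stage": "initial_compromise"},
    {"src": "web01", "dst": "db02", "stage": "lateral_movement"},
]
for a in alerts:
    kcsm.consume(a)

hosts, edges = kcsm.scenario_graph()
print("hosts:", sorted(hosts))
for src, dst, stage in edges:
    print(f"{src} -> {dst}: {stage}")
```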