Opportunistic Adversaries: On Imminent Threats to Learning-Based Business Automation

Michiaki Tatsubori, Shohei Hido
2012 Annual SRII Global Conference, published 2012-07-24. DOI: 10.1109/SRII.2012.24 (https://doi.org/10.1109/SRII.2012.24)

Abstract

False positives and negatives are inevitable in real-world classification problems. In general, machine-learning-based business process automation remains viable despite the reduced accuracy caused by such false decisions: the business model replaces human decision processes with automated ones, and the relatively large savings in human-factor costs cover both the cost of introducing automation and the losses from its rare mistakes. However, under certain conditions, attackers can outsmart a classifier at a reasonable cost and thus destroy the business model that the learner system depends on. Attackers may eventually detect the misclassification cases they can benefit from and craft similar inputs that the unaware learner system will misclassify. We call adversaries of this type "opportunistic adversaries". This paper specifies the environmental patterns that can expose vulnerabilities to opportunistic adversaries and presents some likely business scenarios for these threats. We then propose a countermeasure algorithm that detects such attacks through change detection in the post-classification data distributions. Experimental results show that our algorithm achieves higher detection accuracy than approaches based on outlier detection or change-point detection.
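The abstract's countermeasure monitors the data distribution *after* classification for drift. The paper's actual algorithm is not reproduced here; as a minimal illustrative sketch of the general idea, one can compare a reference window of classifier confidence scores against a recent window using a two-sample Kolmogorov–Smirnov statistic and raise an alarm when the gap exceeds a threshold. All names (`ks_statistic`, `drift_alarm`), the threshold value, and the simulated score distributions below are hypothetical assumptions, not the authors' method.

```python
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in a + b:
        cdf_a = sum(1 for v in a if v <= x) / len(a)
        cdf_b = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

def drift_alarm(reference_scores, recent_scores, threshold=0.2):
    """Alarm when the recent post-classification score distribution
    drifts away from the reference distribution."""
    return ks_statistic(reference_scores, recent_scores) > threshold

# Simulated example: normal traffic vs. a window where an attacker
# floods the system with inputs crafted to sit near the decision boundary
# (scores clustering around 0.5 instead of the usual low-score regime).
random.seed(0)
reference = [random.betavariate(2, 5) for _ in range(500)]
benign    = [random.betavariate(2, 5) for _ in range(200)]
attacked  = [random.betavariate(8, 8) for _ in range(200)]

print(drift_alarm(reference, benign))    # expected: False (no alarm)
print(drift_alarm(reference, attacked))  # expected: True  (alarm)
```

The key design point matching the abstract is that detection operates on the distribution of outputs the classifier produces in deployment, so the defender needs no labeled examples of the attack itself, only a clean reference window.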