Ensemble of SVM Classifiers for Spam Filtering

Ángela Blanco, M. Martín-Merino
{"title":"Ensemble of SVM Classifiers for Spam Filtering","authors":"Ángela Blanco, M. Martín-Merino","doi":"10.4018/978-1-59904-849-9.CH086","DOIUrl":null,"url":null,"abstract":"Unsolicited commercial email also known as Spam is becoming a serious problem for Internet users and providers (Fawcett, 2003). Several researchers have applied machine learning techniques in order to improve the detection of spam messages. Naive Bayes models are the most popular (Androutsopoulos, 2000) but other authors have applied Support Vector Machines (SVM) (Drucker, 1999), boosting and decision trees (Carreras, 2001) with remarkable results. SVM has revealed particularly attractive in this application because it is robust against noise and is able to handle a large number of features (Vapnik, 1998). Errors in anti-spam email filtering are strongly asymmetric. Thus, false positive errors or valid messages that are blocked, are prohibitively expensive. Several authors have proposed new versions of the original SVM algorithm that help to reduce the false positive errors (Kolz, 2001, Valentini, 2004 & Kittler, 1998). In particular, it has been suggested that combining non-optimal classifiers can help to reduce particularly the variance of the predictor (Valentini, 2004 & Kittler, 1998) and consequently the misclassification errors. In order to achieve this goal, different versions of the classifier are usually built by sampling the patterns or the features (Breiman, 1996). However, in our application it is expected that the aggregation of strong classifiers will help to reduce more the false positive errors (Provost, 2001 & Hershop, 2005). In this paper, we address the problem of reducing the false positive errors by combining classifiers based on multiple dissimilarities. To this aim, a diversity of classifiers is built considering dissimilarities that reflect different features of the data. The dissimilarities are first embedded into an Euclidean space where a SVM is adjusted for each measure. Next, the classifiers are aggregated using a voting strategy (Kittler, 1998). The method proposed has been applied to the Spam UCI machine learning database (Hastie, 2001) with remarkable results.","PeriodicalId":320314,"journal":{"name":"Encyclopedia of Artificial Intelligence","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Encyclopedia of Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4018/978-1-59904-849-9.CH086","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Unsolicited commercial email, also known as spam, is becoming a serious problem for Internet users and providers (Fawcett, 2003). Several researchers have applied machine learning techniques to improve the detection of spam messages. Naive Bayes models are the most popular (Androutsopoulos, 2000), but other authors have applied Support Vector Machines (SVM) (Drucker, 1999) and boosted decision trees (Carreras, 2001) with remarkable results. The SVM has proven particularly attractive in this application because it is robust against noise and able to handle a large number of features (Vapnik, 1998). Errors in anti-spam email filtering are strongly asymmetric: false positive errors, that is, valid messages that are blocked, are prohibitively expensive. Several authors have proposed new versions of the original SVM algorithm that help to reduce false positive errors (Kolcz, 2001; Valentini, 2004; Kittler, 1998). In particular, it has been suggested that combining non-optimal classifiers helps to reduce the variance of the predictor (Valentini, 2004; Kittler, 1998) and, consequently, the misclassification errors. To achieve this, different versions of the classifier are usually built by sampling the patterns or the features (Breiman, 1996). In our application, however, the aggregation of strong classifiers is expected to reduce the false positive errors further (Provost, 2001; Hershkop, 2005). In this paper, we address the problem of reducing false positive errors by combining classifiers based on multiple dissimilarities. To this end, a diverse set of classifiers is built from dissimilarities that reflect different features of the data. The dissimilarities are first embedded into a Euclidean space, where an SVM is fitted for each measure. Next, the classifiers are aggregated using a voting strategy (Kittler, 1998). The proposed method has been applied to the Spam UCI machine learning database (Hastie, 2001) with remarkable results.
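The sketch below is a minimal illustration of the ensemble idea described in the abstract: several dissimilarity measures are each embedded into a Euclidean space, one SVM is fitted per embedding, and the resulting classifiers are combined by majority voting. It assumes scikit-learn; the particular dissimilarity measures (Euclidean, cosine, correlation), the MDS embedding dimension, and the SVM parameters are illustrative assumptions, not the authors' exact settings.

```python
# Sketch of a dissimilarity-based SVM ensemble with majority voting.
# Assumes binary labels coded as 0 (legitimate email) and 1 (spam).
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.manifold import MDS
from sklearn.svm import SVC

def dissimilarity_voting_ensemble(X_train, y_train, X_test,
                                  metrics=("euclidean", "cosine", "correlation"),
                                  n_components=20, random_state=0):
    n_train = len(X_train)
    X_all = np.vstack([X_train, X_test])
    votes = []
    for metric in metrics:
        # Dissimilarity matrix for this measure over all patterns.
        D = pairwise_distances(X_all, metric=metric)
        # Embed the dissimilarities into a Euclidean space (metric MDS here).
        embedding = MDS(n_components=n_components,
                        dissimilarity="precomputed",
                        random_state=random_state).fit_transform(D)
        Z_train, Z_test = embedding[:n_train], embedding[n_train:]
        # Fit one SVM per dissimilarity measure.
        clf = SVC(kernel="rbf", C=1.0, gamma="scale")
        clf.fit(Z_train, y_train)
        votes.append(clf.predict(Z_test))
    # Aggregate the individual classifiers by majority vote.
    votes = np.asarray(votes)
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

With the UCI Spam data, X_train and X_test would hold the word- and character-frequency features; raising the voting threshold above 0.5 is one simple way to trade some recall for fewer false positives, in line with the asymmetric cost of errors discussed above.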