Efficient and Private Scoring of Decision Trees, based on Pre-Computation Technique with Support Vector Machines and Logistic Regression Model

Ewit
{"title":"基于支持向量机和逻辑回归模型预计算技术的决策树高效私有评分","authors":"Ewit","doi":"10.30534/ijccn/2018/16722018","DOIUrl":null,"url":null,"abstract":"Numerous information driven customized administrations require that private information of clients is scored against a prepared machine learning model. In this paper we propose a novel convention for security protecting order of choice trees, a famous machine learning model in these situations. Our answers is made out of building squares, to be specific a safe correlation convention, a convention for negligently choosing inputs, and a convention for increase. By joining a portion of the building hinders for our choice tree order convention, we additionally enhance beforehand proposed answers for characterization of help vector machines and calculated relapse models. Our conventions are data hypothetically secure and, dissimilar to already proposed arrangements, don't require secluded exponentiations. We demonstrate that our conventions for protection saving arrangement prompt more proficient outcomes from the perspective of computational and correspondence complexities. We introduce exactness and runtime comes about for 7 characterization benchmark datasets from the UCI archive. modular addition and multiplications. Our classification data shows that we have scored a new data with accuracy and the performance evaluation is highly efficient. Our results for privacy-preserving machine learning classification are highly secured. Support Vector Machines and Hypervisor based Classifiers Hyper plane Based Classifiers and Support Vector Machines hyper plane-based classifiers are parametric, discriminative classifiers. For a setting with t features3 and k classes, the model comprises of k vectors w = (w1,...,wk) with wi ∈ Rt and the classification result is gotten by deciding, for Alice's element vector x ∈ Rt, the file k ∗ = argmaxi ∈ [k]hwi,xi, where h•,•i is the inward item.","PeriodicalId":313852,"journal":{"name":"International Journal of Computing, Communications and Networking","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Efficient and Private Scoring of Decision Trees, based on Pre-Computation Technique with Support Vector Machines and Logistic Regression Model\",\"authors\":\"Ewit\",\"doi\":\"10.30534/ijccn/2018/16722018\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Numerous information driven customized administrations require that private information of clients is scored against a prepared machine learning model. In this paper we propose a novel convention for security protecting order of choice trees, a famous machine learning model in these situations. Our answers is made out of building squares, to be specific a safe correlation convention, a convention for negligently choosing inputs, and a convention for increase. By joining a portion of the building hinders for our choice tree order convention, we additionally enhance beforehand proposed answers for characterization of help vector machines and calculated relapse models. Our conventions are data hypothetically secure and, dissimilar to already proposed arrangements, don't require secluded exponentiations. We demonstrate that our conventions for protection saving arrangement prompt more proficient outcomes from the perspective of computational and correspondence complexities. 
We introduce exactness and runtime comes about for 7 characterization benchmark datasets from the UCI archive. modular addition and multiplications. Our classification data shows that we have scored a new data with accuracy and the performance evaluation is highly efficient. Our results for privacy-preserving machine learning classification are highly secured. Support Vector Machines and Hypervisor based Classifiers Hyper plane Based Classifiers and Support Vector Machines hyper plane-based classifiers are parametric, discriminative classifiers. For a setting with t features3 and k classes, the model comprises of k vectors w = (w1,...,wk) with wi ∈ Rt and the classification result is gotten by deciding, for Alice's element vector x ∈ Rt, the file k ∗ = argmaxi ∈ [k]hwi,xi, where h•,•i is the inward item.\",\"PeriodicalId\":313852,\"journal\":{\"name\":\"International Journal of Computing, Communications and Networking\",\"volume\":\"14 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Computing, Communications and Networking\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.30534/ijccn/2018/16722018\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computing, Communications and Networking","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.30534/ijccn/2018/16722018","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Many data-driven personalized services require that clients' private data be scored against a trained machine learning model. In this paper we propose a novel protocol for privacy-preserving classification with decision trees, a popular machine learning model in these settings. Our solution is composed of building blocks, namely a secure comparison protocol, a protocol for obliviously selecting inputs, and a protocol for multiplication. By combining some of the building blocks of our decision tree classification protocol, we also improve previously proposed solutions for classification with support vector machines and logistic regression models. Our protocols are information-theoretically secure and, unlike previously proposed solutions, do not require modular exponentiations; they rely only on modular additions and multiplications. We show that our protocols for privacy-preserving classification lead to more efficient results in terms of computational and communication complexity. We present accuracy and runtime results for 7 classification benchmark datasets from the UCI repository; our evaluation shows that new data can be scored accurately and efficiently while the classification remains protected.

Hyperplane-Based Classifiers and Support Vector Machines: hyperplane-based classifiers are parametric, discriminative classifiers. For a setting with t features and k classes, the model consists of k vectors w = (w_1, ..., w_k) with w_i ∈ R^t, and the classification result is obtained by determining, for Alice's feature vector x ∈ R^t, the index k* = argmax_{i ∈ [k]} ⟨w_i, x⟩, where ⟨·,·⟩ denotes the inner product.
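To make the two scoring rules concrete, the following minimal plaintext sketch in Python is offered; it is not from the paper, and the model parameters, function names, and tree encoding are illustrative assumptions. It shows the computations the protocols are designed to evaluate privately: hyperplane-based classification as an argmax over inner products ⟨w_i, x⟩, and decision-tree classification as a chain of threshold comparisons whose outcomes select a path to a leaf.

```python
import numpy as np

# Plaintext sketch of the two scoring rules discussed above, with
# hypothetical toy parameters. The paper's contribution is evaluating
# these rules under secure computation, which is not attempted here.

def hyperplane_classify(W, x):
    """Hyperplane-based classification: return k* = argmax_i <w_i, x>."""
    scores = W @ x          # k inner products <w_i, x>
    return int(np.argmax(scores))

def decision_tree_classify(x, node):
    """Evaluate a binary decision tree encoded as nested dicts.

    Each internal node compares one feature against a threshold (the
    role of the secure comparison building block); the comparison bit
    then selects which child to follow (the role of oblivious selection).
    """
    while "leaf" not in node:
        go_right = x[node["feature"]] >= node["threshold"]
        node = node["right"] if go_right else node["left"]
    return node["leaf"]

if __name__ == "__main__":
    # Toy hyperplane model with k = 3 classes and t = 4 features.
    W = np.array([[ 0.2, -1.0,  0.5,  0.0],
                  [ 1.1,  0.3, -0.4,  0.2],
                  [-0.5,  0.8,  0.9, -0.1]])
    x = np.array([1.0, 0.5, -0.2, 0.3])
    print("hyperplane class:", hyperplane_classify(W, x))

    # Toy depth-2 decision tree over the same feature vector.
    tree = {"feature": 0, "threshold": 0.5,
            "left":  {"leaf": 0},
            "right": {"feature": 2, "threshold": 0.0,
                      "left": {"leaf": 1}, "right": {"leaf": 2}}}
    print("decision tree class:", decision_tree_classify(x, tree))
```

In the privacy-preserving setting described in the abstract, the plaintext comparison and branch choice above would be replaced by the secure comparison and oblivious input-selection protocols operating on protected data, so that neither party learns the feature values, the thresholds, or the path taken through the tree.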