Eternal Sunshine of the Mechanical Mind: The Irreconcilability of Machine Learning and the Right to be Forgotten

Meem Arafat Manab
{"title":"机械心灵的永恒阳光》:机器学习与被遗忘权的不可调和性","authors":"Meem Arafat Manab","doi":"arxiv-2403.05592","DOIUrl":null,"url":null,"abstract":"As we keep rapidly advancing toward an era where artificial intelligence is a\nconstant and normative experience for most of us, we must also be aware of what\nthis vision and this progress entail. By first approximating neural connections\nand activities in computer circuits and then creating more and more\nsophisticated versions of this crude approximation, we are now facing an age to\ncome where modern deep learning-based artificial intelligence systems can\nrightly be called thinking machines, and they are sometimes even lauded for\ntheir emergent behavior and black-box approaches. But as we create more\npowerful electronic brains, with billions of neural connections and parameters,\ncan we guarantee that these mammoths built of artificial neurons will be able\nto forget the data that we store in them? If they are at some level like a\nbrain, can the right to be forgotten still be protected while dealing with\nthese AIs? The essential gap between machine learning and the RTBF is explored\nin this article, with a premonition of far-reaching conclusions if the gap is\nnot bridged or reconciled any time soon. The core argument is that deep\nlearning models, due to their structure and size, cannot be expected to forget\nor delete a data as it would be expected from a tabular database, and they\nshould be treated more like a mechanical brain, albeit still in development.","PeriodicalId":501533,"journal":{"name":"arXiv - CS - General Literature","volume":"51 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Eternal Sunshine of the Mechanical Mind: The Irreconcilability of Machine Learning and the Right to be Forgotten\",\"authors\":\"Meem Arafat Manab\",\"doi\":\"arxiv-2403.05592\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As we keep rapidly advancing toward an era where artificial intelligence is a\\nconstant and normative experience for most of us, we must also be aware of what\\nthis vision and this progress entail. By first approximating neural connections\\nand activities in computer circuits and then creating more and more\\nsophisticated versions of this crude approximation, we are now facing an age to\\ncome where modern deep learning-based artificial intelligence systems can\\nrightly be called thinking machines, and they are sometimes even lauded for\\ntheir emergent behavior and black-box approaches. But as we create more\\npowerful electronic brains, with billions of neural connections and parameters,\\ncan we guarantee that these mammoths built of artificial neurons will be able\\nto forget the data that we store in them? If they are at some level like a\\nbrain, can the right to be forgotten still be protected while dealing with\\nthese AIs? The essential gap between machine learning and the RTBF is explored\\nin this article, with a premonition of far-reaching conclusions if the gap is\\nnot bridged or reconciled any time soon. 
The core argument is that deep\\nlearning models, due to their structure and size, cannot be expected to forget\\nor delete a data as it would be expected from a tabular database, and they\\nshould be treated more like a mechanical brain, albeit still in development.\",\"PeriodicalId\":501533,\"journal\":{\"name\":\"arXiv - CS - General Literature\",\"volume\":\"51 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - General Literature\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2403.05592\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - General Literature","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2403.05592","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

As we keep rapidly advancing toward an era where artificial intelligence is a constant and normative experience for most of us, we must also be aware of what this vision and this progress entail. By first approximating neural connections and activities in computer circuits and then creating more and more sophisticated versions of this crude approximation, we are now facing an age to come where modern deep learning-based artificial intelligence systems can rightly be called thinking machines, and they are sometimes even lauded for their emergent behavior and black-box approaches. But as we create more powerful electronic brains, with billions of neural connections and parameters, can we guarantee that these mammoths built of artificial neurons will be able to forget the data that we store in them? If they are at some level like a brain, can the right to be forgotten still be protected while dealing with these AIs? The essential gap between machine learning and the RTBF is explored in this article, with a premonition of far-reaching conclusions if the gap is not bridged or reconciled any time soon. The core argument is that deep learning models, due to their structure and size, cannot be expected to forget or delete a piece of data as would be expected of a tabular database, and they should be treated more like a mechanical brain, albeit one still in development.
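
To make the abstract's central contrast concrete, here is a minimal, hypothetical sketch, not taken from the paper: the table schema, features, and data are illustrative assumptions. Deleting a record from a tabular database is a single exact operation, whereas a trained model blends every training example into its parameters, so the only exact way to "forget" one record is to retrain the model without it.

```python
# A minimal, hypothetical sketch (not from the paper): contrasting record
# deletion in a tabular database with "forgetting" in a trained model.
# All table names, features, and data here are illustrative assumptions.

import sqlite3
import numpy as np

# --- Tabular database: deletion is a single, exact operation ---------------
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, age REAL, clicked INTEGER)")
records = [(1, 23.0, 0), (2, 35.0, 1), (3, 41.0, 1), (4, 29.0, 0)]
conn.executemany("INSERT INTO users VALUES (?, ?, ?)", records)
conn.execute("DELETE FROM users WHERE id = 2")  # the record is simply gone
conn.commit()

# --- Trained model: every record's influence is diffused into the weights --
def train_logistic(X, y, steps=5000, lr=0.5):
    """Plain gradient-descent logistic regression; the weights blend all examples."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted click probability
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Scale the age feature so plain gradient descent stays numerically stable.
X = np.array([[23.0], [35.0], [41.0], [29.0]]) / 100.0
y = np.array([0.0, 1.0, 1.0, 0.0])

w_full, b_full = train_logistic(X, y)

# There is no DELETE statement for a parameter vector: the only *exact* way to
# forget the example with id = 2 is to retrain from scratch without it.
keep = np.array([True, False, True, True])
w_retrained, b_retrained = train_logistic(X[keep], y[keep])

print("weights trained with the record :", w_full, b_full)
print("weights after full retraining   :", w_retrained, b_retrained)
```

Even in this four-row toy example, the removed record has left a trace in every parameter, and erasing it meant rebuilding the model; at the scale of billions of parameters discussed in the article, that remedy stops being practical.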