Eternal Sunshine of the Mechanical Mind: The Irreconcilability of Machine Learning and the Right to be Forgotten

Meem Arafat Manab

arXiv - CS - General Literature · 2024-03-06 · https://doi.org/arxiv-2403.05592
Citations: 0
Abstract
As we keep rapidly advancing toward an era where artificial intelligence is a
constant and normative experience for most of us, we must also be aware of what
this vision and this progress entail. By first approximating neural connections
and activities in computer circuits and then creating more and more
sophisticated versions of this crude approximation, we are now facing an age to
come where modern deep learning-based artificial intelligence systems can
rightly be called thinking machines, and they are sometimes even lauded for
their emergent behavior and black-box approaches. But as we create more
powerful electronic brains, with billions of neural connections and parameters,
can we guarantee that these mammoths built of artificial neurons will be able
to forget the data that we store in them? If they are at some level like a
brain, can the right to be forgotten still be protected while dealing with
these AIs? The essential gap between machine learning and the RTBF is explored
in this article, with a premonition of far-reaching conclusions if the gap is
not bridged or reconciled any time soon. The core argument is that deep
learning models, owing to their structure and size, cannot be expected to
forget or delete a piece of data the way a tabular database can; they should
instead be treated as mechanical brains, albeit ones still in development.
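The contrast drawn here between a tabular database and a trained model can be made concrete with a minimal, hypothetical sketch (not from the paper; `fit_mean` is an invented stand-in for training). Deleting a record from a table removes it outright, while a parameter fitted on data retains the deleted point's influence unless the model is retrained:

```python
# Illustrative sketch: erasure in a tabular store vs. erasure after learning.

# 1. Tabular database: deleting a record removes it completely.
records = {"alice": 34, "bob": 29}
del records["alice"]            # the datum is gone
assert "alice" not in records

# 2. Learned "model": a single parameter trained on the data (here, the mean,
# standing in for the billions of parameters of a deep network).
def fit_mean(values):
    # trivially "trained" parameter derived from every data point
    return sum(values) / len(values)

data = [1.0, 2.0, 3.0, 100.0]   # 100.0 is the datum to be "forgotten"
theta = fit_mean(data)          # parameter learned from all the data, 26.5

data.remove(100.0)              # delete the raw datum from storage
# theta is unchanged: the fitted parameter still encodes the deleted point,
# and only retraining from scratch on the remaining data would forget it.
assert theta == 26.5
assert fit_mean(data) == 2.0
```

Even in this toy setting, honoring an erasure request requires retraining rather than deletion; for a deep network with billions of parameters, the abstract argues, that expectation becomes untenable.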