{"title":"EntUn:通过熵来缓解遗忘-保留困境","authors":"Dahuin Jung","doi":"10.1016/j.icte.2025.06.007","DOIUrl":null,"url":null,"abstract":"<div><div>Advancements in natural language processing and computer vision have raised concerns about models inadvertently exposing private data and confidently misclassifying inputs. Machine unlearning has emerged as a solution, enabling the removal of specific data influences to meet privacy standards. This work focuses on unlearning in Instance-Removal (IR) and Class-Removal (CR) scenarios: IR targets the removal of individual data points, while CR eliminates all data related to a specific class. We propose <strong>EntUn</strong>, which maximizes entropy for the forget-set to reduce confidence in data to be forgotten and minimizes it for the retain-set to preserve discriminative power. An entropy-based intra-class mixup further stabilizes this process, using higher-entropy samples to guide controlled information removal. Experiments on CIFAR10, CIFAR100, and TinyImageNet show that <strong>EntUn</strong> outperforms state-of-the-art baselines, improving forgetting and enhancing privacy protection as confirmed by membership inference attack tests. This demonstrates entropy maximization as a robust strategy for effective unlearning.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 643-647"},"PeriodicalIF":4.2000,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"EntUn: Mitigating the forget-retain dilemma in unlearning via entropy\",\"authors\":\"Dahuin Jung\",\"doi\":\"10.1016/j.icte.2025.06.007\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Advancements in natural language processing and computer vision have raised concerns about models inadvertently exposing private data and confidently misclassifying inputs. Machine unlearning has emerged as a solution, enabling the removal of specific data influences to meet privacy standards. This work focuses on unlearning in Instance-Removal (IR) and Class-Removal (CR) scenarios: IR targets the removal of individual data points, while CR eliminates all data related to a specific class. We propose <strong>EntUn</strong>, which maximizes entropy for the forget-set to reduce confidence in data to be forgotten and minimizes it for the retain-set to preserve discriminative power. An entropy-based intra-class mixup further stabilizes this process, using higher-entropy samples to guide controlled information removal. Experiments on CIFAR10, CIFAR100, and TinyImageNet show that <strong>EntUn</strong> outperforms state-of-the-art baselines, improving forgetting and enhancing privacy protection as confirmed by membership inference attack tests. 
This demonstrates entropy maximization as a robust strategy for effective unlearning.</div></div>\",\"PeriodicalId\":48526,\"journal\":{\"name\":\"ICT Express\",\"volume\":\"11 4\",\"pages\":\"Pages 643-647\"},\"PeriodicalIF\":4.2000,\"publicationDate\":\"2025-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ICT Express\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2405959525000827\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ICT Express","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2405959525000827","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
EntUn: Mitigating the forget-retain dilemma in unlearning via entropy
Advancements in natural language processing and computer vision have raised concerns about models inadvertently exposing private data and confidently misclassifying inputs. Machine unlearning has emerged as a solution, enabling the removal of specific data influences to meet privacy standards. This work focuses on unlearning in Instance-Removal (IR) and Class-Removal (CR) scenarios: IR targets the removal of individual data points, while CR eliminates all data related to a specific class. We propose EntUn, which maximizes entropy for the forget-set to reduce confidence in data to be forgotten and minimizes it for the retain-set to preserve discriminative power. An entropy-based intra-class mixup further stabilizes this process, using higher-entropy samples to guide controlled information removal. Experiments on CIFAR10, CIFAR100, and TinyImageNet show that EntUn outperforms state-of-the-art baselines, improving forgetting and enhancing privacy protection as confirmed by membership inference attack tests. This demonstrates entropy maximization as a robust strategy for effective unlearning.
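The abstract gives only the high-level mechanics of EntUn: entropy is maximized on the forget-set, minimized on the retain-set, and an entropy-based intra-class mixup stabilizes the process. Since the paper's exact formulation is not reproduced here, the following is a minimal PyTorch sketch under those stated assumptions; the function names, the weighting term `lam_retain`, and the Beta-sampled mixing coefficient are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the softmax predictive distribution."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1).mean()


def entun_style_loss(model, forget_x, retain_x, retain_y, lam_retain=1.0):
    """Assumed form of an EntUn-style objective (not the paper's exact loss).

    Entropy is maximized on forget samples (pushing predictions toward
    the uniform distribution, i.e. low confidence) and minimized on
    retain samples, with cross-entropy supervision added to preserve
    discriminative power on the retain-set.
    """
    forget_ent = entropy(model(forget_x))            # to be maximized
    retain_logits = model(retain_x)
    retain_ent = entropy(retain_logits)              # to be minimized
    ce = F.cross_entropy(retain_logits, retain_y)    # keep retain accuracy
    return -forget_ent + lam_retain * (retain_ent + ce)


def intra_class_mixup(x, logits, alpha=0.5):
    """Hypothetical entropy-guided intra-class mixup.

    Assumes `x` holds same-class samples; each sample is blended toward
    the highest-entropy sample in the batch, so that higher-entropy
    examples guide the removal of information, as the abstract describes.
    """
    with torch.no_grad():
        per_sample_ent = -(F.softmax(logits, dim=-1)
                           * F.log_softmax(logits, dim=-1)).sum(dim=-1)
        guide = x[per_sample_ent.argmax()]           # highest-entropy sample
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x + (1.0 - lam) * guide
```

In a fine-tuning loop, the mixed forget samples would presumably feed the entropy-maximization term; how the mixup output enters the loss and how batches are constructed per class are details the abstract leaves open.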
Journal Introduction:
The ICT Express journal, published by the Korean Institute of Communications and Information Sciences (KICS), is an international, peer-reviewed research publication covering all aspects of information and communication technology. The journal aims to publish research that advances the theoretical and practical understanding of ICT convergence, platform technologies, communication networks, and device technologies. Advances in the ICT sector enable portable devices to remain always connected while supporting high data rates, a development that underlies the recent popularity of smartphones and their considerable impact on economic and social development.