{"title":"Dataset Distillation from First Principles: Integrating Core Information Extraction and Purposeful Learning","authors":"Vyacheslav Kungurtsev, Yuanfang Peng, Jianyang Gu, Saeed Vahidian, Anthony Quinn, Fadwa Idlahcen, Yiran Chen","doi":"arxiv-2409.01410","DOIUrl":null,"url":null,"abstract":"Dataset distillation (DD) is an increasingly important technique that focuses\non constructing a synthetic dataset capable of capturing the core information\nin training data to achieve comparable performance in models trained on the\nlatter. While DD has a wide range of applications, the theory supporting it is\nless well evolved. New methods of DD are compared on a common set of\nbenchmarks, rather than oriented towards any particular learning task. In this\nwork, we present a formal model of DD, arguing that a precise characterization\nof the underlying optimization problem must specify the inference task\nassociated with the application of interest. Without this task-specific focus,\nthe DD problem is under-specified, and the selection of a DD algorithm for a\nparticular task is merely heuristic. Our formalization reveals novel\napplications of DD across different modeling environments. We analyze existing\nDD methods through this broader lens, highlighting their strengths and\nlimitations in terms of accuracy and faithfulness to optimal DD operation.\nFinally, we present numerical results for two case studies important in\ncontemporary settings. Firstly, we address a critical challenge in medical data\nanalysis: merging the knowledge from different datasets composed of\nintersecting, but not identical, sets of features, in order to construct a\nlarger dataset in what is usually a small sample setting. Secondly, we consider\nout-of-distribution error across boundary conditions for physics-informed\nneural networks (PINNs), showing the potential for DD to provide more\nphysically faithful data. By establishing this general formulation of DD, we\naim to establish a new research paradigm by which DD can be understood and from\nwhich new DD techniques can arise.","PeriodicalId":501215,"journal":{"name":"arXiv - STAT - Computation","volume":"6 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - STAT - Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.01410","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Dataset distillation (DD) is an increasingly important technique for constructing a synthetic dataset that captures the core information in a training set, so that models trained on the synthetic data achieve performance comparable to models trained on the original. While DD has a wide range of applications, the theory supporting it is less well developed. New DD methods are compared on a common set of benchmarks rather than oriented towards any particular learning task. In this work, we present a formal model of DD, arguing that a precise characterization of the underlying optimization problem must specify the inference task associated with the application of interest. Without this task-specific focus, the DD problem is under-specified, and the selection of a DD algorithm for a particular task is merely heuristic. Our formalization reveals novel applications of DD across different modeling environments. We analyze existing DD methods through this broader lens, highlighting their strengths and limitations in terms of accuracy and faithfulness to optimal DD operation. Finally, we present numerical results for two case studies important in contemporary settings. First, we address a critical challenge in medical data analysis: merging knowledge from datasets composed of intersecting, but not identical, sets of features in order to construct a larger dataset in what is usually a small-sample setting. Second, we consider out-of-distribution error across boundary conditions for physics-informed neural networks (PINNs), showing the potential for DD to provide more physically faithful data. By establishing this general formulation of DD, we aim to set out a research paradigm through which DD can be understood and from which new DD techniques can arise.
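
For readers unfamiliar with the DD setup the abstract describes, the sketch below illustrates one common instantiation: learning a small synthetic set by matching the gradients it induces in a model against those induced by the real data. This is a minimal, generic illustration of the DD concept (here via gradient matching in PyTorch), not the task-specific formalization proposed in the paper; all function names, sizes, and hyperparameters are illustrative assumptions.

```python
# Minimal gradient-matching sketch of dataset distillation (illustrative only).
import torch
import torch.nn as nn

def distill(real_x, real_y, n_syn=10, steps=200, lr=0.1):
    """Learn a small synthetic set whose training gradients mimic the real data's."""
    d = real_x.shape[1]
    n_classes = int(real_y.max()) + 1
    # Synthetic inputs are free parameters; labels are fixed and class-balanced.
    syn_x = torch.randn(n_syn, d, requires_grad=True)
    syn_y = torch.arange(n_syn) % n_classes
    opt = torch.optim.SGD([syn_x], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        model = nn.Linear(d, n_classes)  # fresh randomly initialized model each step
        g_real = torch.autograd.grad(loss_fn(model(real_x), real_y),
                                     model.parameters())
        g_syn = torch.autograd.grad(loss_fn(model(syn_x), syn_y),
                                    model.parameters(), create_graph=True)
        # Penalize the mismatch between synthetic-data and real-data gradients.
        match = sum(((a - b) ** 2).sum() for a, b in zip(g_syn, g_real))
        opt.zero_grad()
        match.backward()
        opt.step()
    return syn_x.detach(), syn_y

# Toy usage: 200 real points in 5 dimensions, 2 classes, distilled to 10 points.
real_x = torch.randn(200, 5)
real_y = (real_x[:, 0] > 0).long()
syn_x, syn_y = distill(real_x, real_y)
```

The paper's argument is that which objective one should match (gradients, distributions, trajectories, or something else) is only determined once the downstream inference task is specified; the sketch above fixes one such choice purely for concreteness.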