{"title":"机器学习中的参数化复杂性","authors":"Robert Ganian","doi":"10.1016/j.cosrev.2025.100836","DOIUrl":null,"url":null,"abstract":"<div><div>Classifying the complexity of problems into those which can be seen as “tractable” and those which are “intractable” has been a core topic of theoretical computer science already since its inception. For the latter class, the parameterized complexity paradigm pioneered by Downey and Fellows provides a powerful set of tools to identify the exact boundaries of tractability for each specific problem under consideration. And yet, in many subfields of machine learning, there has historically been a distinct lack of research targeting the parameterized complexity of fundamental problems.</div><div>In this survey, we take aim at some of the recent developments at the interface between machine learning and parameterized complexity which successfully bridge the gap between these two areas of research. The survey focuses primarily on three subfields of machine learning where significant progress towards this direction has been made in recent years: Bayesian Networks, Data Completion and Neural Network Training. The survey also provides pointers to some related developments in other subfields of machine learning, such as Decision Tree Learning and Sample Complexity.</div></div>","PeriodicalId":48633,"journal":{"name":"Computer Science Review","volume":"59 ","pages":"Article 100836"},"PeriodicalIF":12.7000,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Parameterized Complexity in Machine Learning\",\"authors\":\"Robert Ganian\",\"doi\":\"10.1016/j.cosrev.2025.100836\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Classifying the complexity of problems into those which can be seen as “tractable” and those which are “intractable” has been a core topic of theoretical computer science already since its inception. 
For the latter class, the parameterized complexity paradigm pioneered by Downey and Fellows provides a powerful set of tools to identify the exact boundaries of tractability for each specific problem under consideration. And yet, in many subfields of machine learning, there has historically been a distinct lack of research targeting the parameterized complexity of fundamental problems.</div><div>In this survey, we take aim at some of the recent developments at the interface between machine learning and parameterized complexity which successfully bridge the gap between these two areas of research. The survey focuses primarily on three subfields of machine learning where significant progress towards this direction has been made in recent years: Bayesian Networks, Data Completion and Neural Network Training. The survey also provides pointers to some related developments in other subfields of machine learning, such as Decision Tree Learning and Sample Complexity.</div></div>\",\"PeriodicalId\":48633,\"journal\":{\"name\":\"Computer Science Review\",\"volume\":\"59 \",\"pages\":\"Article 100836\"},\"PeriodicalIF\":12.7000,\"publicationDate\":\"2025-10-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Science Review\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1574013725001121\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Science 
Review","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1574013725001121","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Classifying the complexity of problems into those which can be seen as “tractable” and those which are “intractable” has been a core topic of theoretical computer science since its inception. For the latter class, the parameterized complexity paradigm pioneered by Downey and Fellows provides a powerful set of tools to identify the exact boundaries of tractability for each specific problem under consideration. And yet, in many subfields of machine learning, there has historically been a distinct lack of research targeting the parameterized complexity of fundamental problems.
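To give a flavor of what the paradigm offers (this example is not taken from the survey itself): a problem is fixed-parameter tractable (FPT) if it can be solved in time f(k) · poly(n) for some parameter k, so even NP-hard problems become practical when k is small. The textbook illustration is the O(2^k · m) bounded search tree for Vertex Cover parameterized by solution size k, sketched below for simple graphs:

```python
def vertex_cover(edges, k):
    """Return True iff the graph (given as a list of edge tuples,
    assumed simple) has a vertex cover of size at most k."""
    if not edges:
        return True   # no edges left: the empty set already covers everything
    if k == 0:
        return False  # edges remain but the budget is exhausted
    u, v = edges[0]
    # Any cover must contain u or v, so branch on both choices.
    # The recursion tree has depth at most k, hence at most 2^k leaves.
    return (vertex_cover([e for e in edges if u not in e], k - 1)
            or vertex_cover([e for e in edges if v not in e], k - 1))
```

For the path 1–2–3–4, `vertex_cover([(1, 2), (2, 3), (3, 4)], 1)` returns `False` (the disjoint edges (1, 2) and (3, 4) force a cover of size at least 2), while with budget `k = 2` it returns `True` via the cover {2, 3}. The point of the paradigm is that such f(k) · poly(n) algorithms, and matching hardness results when they cannot exist, pin down exactly which parameters make a problem tractable.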
In this survey, we take aim at some of the recent developments at the interface between machine learning and parameterized complexity which successfully bridge the gap between these two areas of research. The survey focuses primarily on three subfields of machine learning where significant progress in this direction has been made in recent years: Bayesian Networks, Data Completion and Neural Network Training. The survey also provides pointers to some related developments in other subfields of machine learning, such as Decision Tree Learning and Sample Complexity.
Journal description:
Computer Science Review, a publication dedicated to research surveys and expository overviews of open problems in computer science, targets a broad audience within the field seeking comprehensive insights into the latest developments. The journal welcomes articles from various fields as long as their content impacts the advancement of computer science. In particular, articles that review the application of well-known Computer Science methods to other areas are in scope only if these articles advance the fundamental understanding of those methods.