{"title":"协变量和概念漂移的树适应机制","authors":"Felipe Leno da Silva, Raphael Cóbe, R. Vicente","doi":"10.52591/2021072414","DOIUrl":null,"url":null,"abstract":"Although Machine Learning algorithms are solving tasks of ever-increasing complexity, gathering data and building training sets remains an error prone, costly, and difficult problem. However, reusing knowledge from related previouslysolved tasks enables reducing the amount of data required to learn a new task. We here propose a method for reusing a tree-based model learned in a source task with abundant data in a target task with scarce data. We perform an empirical evaluation showing that our method is useful, especially in scenarios where the labels are unavailable in the target task.","PeriodicalId":196347,"journal":{"name":"LatinX in AI at International Conference on Machine Learning 2021","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Tree-Adaptation Mechanism for Covariate and Concept Drift\",\"authors\":\"Felipe Leno da Silva, Raphael Cóbe, R. Vicente\",\"doi\":\"10.52591/2021072414\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Although Machine Learning algorithms are solving tasks of ever-increasing complexity, gathering data and building training sets remains an error prone, costly, and difficult problem. However, reusing knowledge from related previouslysolved tasks enables reducing the amount of data required to learn a new task. We here propose a method for reusing a tree-based model learned in a source task with abundant data in a target task with scarce data. 
We perform an empirical evaluation showing that our method is useful, especially in scenarios where the labels are unavailable in the target task.\",\"PeriodicalId\":196347,\"journal\":{\"name\":\"LatinX in AI at International Conference on Machine Learning 2021\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"LatinX in AI at International Conference on Machine Learning 2021\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.52591/2021072414\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"LatinX in AI at International Conference on Machine Learning 2021","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.52591/2021072414","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Tree-Adaptation Mechanism for Covariate and Concept Drift
Although Machine Learning algorithms are solving tasks of ever-increasing complexity, gathering data and building training sets remain an error-prone, costly, and difficult problem. However, reusing knowledge from related, previously solved tasks can reduce the amount of data required to learn a new task. We propose a method for reusing a tree-based model learned in a source task with abundant data in a target task with scarce data. We perform an empirical evaluation showing that our method is useful, especially in scenarios where labels are unavailable in the target task.
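To make the transfer setting concrete, here is a minimal sketch of one generic way to adapt a tree under concept drift: keep the structure (splits) of a tree fitted on abundant source data, and re-estimate only the leaf predictions from the scarce labelled target data. This is an illustrative toy, not the paper's actual algorithm; the depth-1 "stump", the median split, and the fallback-to-source-label rule are all assumptions made for brevity.

```python
from collections import Counter

def majority(labels, default=None):
    """Most common label, or a fallback when no labels are available."""
    return Counter(labels).most_common(1)[0][0] if labels else default

def fit_stump(X, y):
    """Fit a depth-1 tree on source data: split the single feature at
    its median and predict the majority label in each leaf."""
    thr = sorted(X)[len(X) // 2]
    left = [yi for xi, yi in zip(X, y) if xi < thr]
    right = [yi for xi, yi in zip(X, y) if xi >= thr]
    return {"thr": thr, "left": majority(left), "right": majority(right)}

def adapt_leaves(tree, X_t, y_t):
    """Keep the source split, but relabel each leaf using the scarce
    target labels; a leaf that receives no target data keeps its
    source-task label."""
    left = [yi for xi, yi in zip(X_t, y_t) if xi < tree["thr"]]
    right = [yi for xi, yi in zip(X_t, y_t) if xi >= tree["thr"]]
    return {"thr": tree["thr"],
            "left": majority(left, tree["left"]),
            "right": majority(right, tree["right"])}

def predict(tree, x):
    return tree["left"] if x < tree["thr"] else tree["right"]

# Abundant source data: label 0 below the threshold, 1 above it.
src_X = [1, 2, 3, 4, 6, 7, 8, 9]
src_y = [0, 0, 0, 0, 1, 1, 1, 1]
tree = fit_stump(src_X, src_y)

# Scarce target data where the concept has flipped.
tgt_X = [2, 8]
tgt_y = [1, 0]
adapted = adapt_leaves(tree, tgt_X, tgt_y)
print(predict(adapted, 3))  # 1: the left leaf was relabelled from target data
```

The design choice being illustrated is that the split structure encodes reusable knowledge about the input space, while the leaves encode the (possibly drifted) concept, so only the latter is re-estimated when target labels are scarce.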