Improving the Relevance of Artificial Instances for Curriculum-Based Course Timetabling through Feasibility Prediction
Thomas Feutrier, Nadarajen Veerapen, Marie-Éléonore Kessaci
Proceedings of the Companion Conference on Genetic and Evolutionary Computation (GECCO Companion), 2023. DOI: 10.1145/3583133.3590690
Abstract
Solvers for Curriculum-Based Course Timetabling were until recently difficult to configure and evaluate because of the limited number of benchmark instances. Recent work has proposed new real-world instances, as well as thousands of generated ones that can be used to train configurators and for machine learning applications. The smaller pool of real-world instances can then be reserved as a test set. To assess whether the generated instances behave sufficiently like the real ones, we consider a basic indicator: feasibility. We find that 38% of the artificial instances are infeasible, versus 6% of the real-world ones, and show that a feasibility prediction model trained on artificial instances performs extremely poorly on real-world ones. The objective of this paper is therefore to predict which generated instances behave like real-world instances, in order to improve the quality of the training set. As a first step, we propose a selection procedure for the artificial training set that yields a feasibility prediction model performing as well as if it had been trained on real-world instances. We then propose a pipeline to build a selection model that picks artificial instances matching the infeasibility behavior of the real-world ones.
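The train-on-artificial, test-on-real protocol described above can be illustrated with a short sketch. The following Python snippet is a hypothetical reconstruction, not the authors' implementation: the instance descriptors, the RandomForestClassifier, and the balanced-accuracy metric are all assumptions chosen for illustration.

```python
# Hypothetical sketch of the feasibility-prediction setup described in the
# abstract. Feature extraction and the classifier choice are assumptions,
# not the paper's exact method: each timetabling instance is represented by
# a few summary statistics, a binary classifier is trained on artificial
# instances, and evaluation is done on real-world ones.
from dataclasses import dataclass
from typing import List

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score


@dataclass
class Instance:
    n_courses: int
    n_rooms: int
    n_curricula: int
    n_periods: int
    feasible: bool  # ground-truth label (e.g., obtained by running a solver)


def features(inst: Instance) -> List[float]:
    # Assumed instance descriptors; the paper's actual feature set may differ.
    return [
        inst.n_courses,
        inst.n_rooms,
        inst.n_curricula,
        inst.n_periods,
        inst.n_courses / max(inst.n_rooms * inst.n_periods, 1),  # crude load ratio
    ]


def evaluate(train: List[Instance], test: List[Instance]) -> float:
    """Train a feasibility classifier on `train` and score it on `test`."""
    X_train = np.array([features(i) for i in train])
    y_train = np.array([i.feasible for i in train])
    X_test = np.array([features(i) for i in test])
    y_test = np.array([i.feasible for i in test])

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)

    # Balanced accuracy guards against the class imbalance noted above
    # (38% infeasible artificial instances vs. 6% real-world ones).
    return balanced_accuracy_score(y_test, clf.predict(X_test))


# evaluate(artificial_instances, real_instances) would reproduce the
# train-on-artificial / test-on-real protocol from the abstract.
```

Under this sketch, the paper's selection step would amount to filtering `artificial_instances` before calling `evaluate`, keeping only those whose feasibility behavior matches the real-world distribution.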