Title: Multi-objective deep learning: Taxonomy and survey of the state of the art
Authors: Sebastian Peitz, Sèdjro Salomon Hotegni
DOI: 10.1016/j.mlwa.2025.100700
Journal: Machine Learning with Applications, Vol. 21, Article 100700
Publication date: 2025-07-18 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2666827025000830
Platform: Semantic Scholar
Citations: 0
Abstract
Simultaneously considering multiple objectives in machine learning has been a popular approach for several decades, with various benefits for multi-task learning, the consideration of secondary goals such as sparsity, or multicriteria hyperparameter tuning. However, as multi-objective optimization is significantly more costly than single-objective optimization, the recent focus on deep learning architectures poses considerable additional challenges due to the very large number of parameters, strong nonlinearities, and stochasticity. On the other hand, considering multiple criteria in deep learning presents many benefits, such as the aforementioned multi-task learning, trading off performance against adversarial robustness, or a more interpretable way of interactively adapting to changing preferences. This survey covers recent advancements in the area of multi-objective deep learning. We introduce a taxonomy of existing methods, based on the type of training algorithm as well as the decision maker's needs, before listing recent advancements and successful applications. All three main learning paradigms (supervised learning, unsupervised learning, and reinforcement learning) are covered, and we also address the recently very popular area of generative modeling. With a focus on the advantages and disadvantages of the existing training algorithms, this survey is formulated from an optimization perspective rather than being organized by learning paradigm or application area.
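As a minimal illustration of the kind of problem the survey addresses (this sketch is not taken from the paper itself), multi-objective training is often reduced to a single-objective problem via weighted-sum scalarization: each choice of weights yields one point on the Pareto front of trade-offs between the objectives. The toy objectives `f1`/`f2` and all names below are hypothetical, chosen only so the trade-off is easy to see.

```python
import numpy as np

# Two toy conflicting objectives over a shared parameter vector x:
# f1 is minimized at (1, 0), f2 at (0, 1) -- no x minimizes both at once.
A = np.array([1.0, 0.0])
B = np.array([0.0, 1.0])

def grad_f1(x):
    # Gradient of f1(x) = ||x - A||^2
    return 2.0 * (x - A)

def grad_f2(x):
    # Gradient of f2(x) = ||x - B||^2
    return 2.0 * (x - B)

def weighted_sum_descent(w1, w2, steps=500, lr=0.1):
    """Gradient descent on the scalarized loss w1*f1 + w2*f2.

    For convex quadratics the minimizer is the weighted average
    w1*A + w2*B (with w1 + w2 = 1), a Pareto-optimal point.
    """
    x = np.zeros(2)
    for _ in range(steps):
        x -= lr * (w1 * grad_f1(x) + w2 * grad_f2(x))
    return x

# Different weight choices trace out different Pareto-optimal trade-offs.
x_balanced = weighted_sum_descent(0.5, 0.5)  # converges near (0.5, 0.5)
x_biased = weighted_sum_descent(0.9, 0.1)    # converges near (0.9, 0.1)
```

In deep learning the same scalarization idea applies to, e.g., a task loss plus an adversarial-robustness loss, but, as the abstract notes, the large parameter count, nonlinearity, and stochasticity make exploring the Pareto front far more expensive than in this convex toy case.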