Multi-objective deep learning: Taxonomy and survey of the state of the art

Sebastian Peitz, Sèdjro Salomon Hotegni
Journal: Machine learning with applications, Volume 21, Article 100700
DOI: 10.1016/j.mlwa.2025.100700
Published: 2025-07-18

Abstract

Simultaneously considering multiple objectives in machine learning has been a popular approach for several decades, with benefits for multi-task learning, the consideration of secondary goals such as sparsity, and multicriteria hyperparameter tuning. However, as multi-objective optimization is significantly more costly than single-objective optimization, the recent focus on deep learning architectures poses considerable additional challenges due to the very large number of parameters, strong nonlinearities, and stochasticity. On the other hand, considering multiple criteria in deep learning offers many benefits, such as the aforementioned multi-task learning, trading off performance against adversarial robustness, or a more interpretable way of interactively adapting to changing preferences. This survey covers recent advancements in multi-objective deep learning. We introduce a taxonomy of existing methods, based on the type of training algorithm as well as the decision maker's needs, before listing recent advancements and successful applications. All three main learning paradigms (supervised learning, unsupervised learning, and reinforcement learning) are covered, and we also address the recently very popular area of generative modeling. With a focus on the advantages and disadvantages of the existing training algorithms, this survey is formulated from an optimization perspective rather than being organized according to learning paradigms or application areas.
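To make the optimization perspective concrete, the following is a minimal, self-contained sketch of weighted-sum scalarization, one of the classical approaches to multi-objective optimization that surveys in this area typically cover. The objectives `f1`, `f2` and all names here are illustrative toy choices, not taken from the paper; in deep learning, `x` would be a high-dimensional parameter vector and the gradients would come from backpropagation.

```python
# Toy multi-objective problem: two conflicting quadratic objectives.
# f1 is minimized at x = 1, f2 at x = -1, so no single x minimizes both.

def f1(x):
    return (x - 1.0) ** 2

def f2(x):
    return (x + 1.0) ** 2

def grad_scalarized(x, w):
    # Gradient of the scalarized objective w * f1(x) + (1 - w) * f2(x).
    return w * 2.0 * (x - 1.0) + (1.0 - w) * 2.0 * (x + 1.0)

def minimize(w, x=0.0, lr=0.1, steps=200):
    # Plain gradient descent on the scalarized objective.
    for _ in range(steps):
        x -= lr * grad_scalarized(x, w)
    return x

# Sweeping the weight w in [0, 1] traces out an approximation of the
# Pareto front; each weight yields a different Pareto-optimal trade-off.
pareto_points = [minimize(w) for w in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

For these convex objectives the scalarized minimizer is `x = 2w - 1`, so the sweep recovers evenly spaced Pareto-optimal points between the two individual minima. For non-convex deep learning losses, weighted sums can miss parts of the Pareto front, which is one motivation for the gradient-based multi-objective methods a survey like this one categorizes.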