Masked autoencoder of multi-scale convolution strategy combined with knowledge distillation for facial beauty prediction.

IF 3.9 | CAS Zone 2, multidisciplinary journal | Q1 MULTIDISCIPLINARY SCIENCES
Junying Gan, Junling Xiong
{"title":"多尺度卷积与知识精馏相结合的掩膜自编码器人脸美值预测。","authors":"Junying Gan, Junling Xiong","doi":"10.1038/s41598-025-86831-0","DOIUrl":null,"url":null,"abstract":"<p><p>Facial beauty prediction (FBP) is a leading area of research in artificial intelligence. Currently, there is a small amount of labeled data and a large amount of unlabeled data in the FBP database. The features extracted by the model based on supervised training are limited, resulting in low prediction accuracy. Masked autoencoder (MAE) is a self-supervised learning method that outperforms supervised learning methods without relying on large-scale databases. The MAE can improve the feature extraction ability of the model effectively. The multi-scale convolution strategy can expand the receptive field and combine the attention mechanism of the MAE to capture the dependency between distant pixels and acquire shallow and deep image features. Knowledge distillation can take the abundant knowledge from the teacher net to the student net, reduce the number of parameters, and compress the model. In this paper, the MAE of the multi-scale convolution strategy is combined with knowledge distillation for FBP. First, the MAE model with a multi-scale convolution strategy is constructed and used in the teacher net for pretraining. Second, the MAE model is constructed for the student net. Finally, the teacher net performs knowledge distillation, and the student net receives the loss function transmitted from the teacher net for optimization. The experimental results show that the proposed method outperforms other methods on the FBP task, improves FBP accuracy, and can be widely applied in tasks such as image classification.</p>","PeriodicalId":21811,"journal":{"name":"Scientific Reports","volume":"15 1","pages":"2784"},"PeriodicalIF":3.9000,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754610/pdf/","citationCount":"0","resultStr":"{\"title\":\"Masked autoencoder of multi-scale convolution strategy combined with knowledge distillation for facial beauty prediction.\",\"authors\":\"Junying Gan, Junling Xiong\",\"doi\":\"10.1038/s41598-025-86831-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Facial beauty prediction (FBP) is a leading area of research in artificial intelligence. Currently, there is a small amount of labeled data and a large amount of unlabeled data in the FBP database. The features extracted by the model based on supervised training are limited, resulting in low prediction accuracy. Masked autoencoder (MAE) is a self-supervised learning method that outperforms supervised learning methods without relying on large-scale databases. The MAE can improve the feature extraction ability of the model effectively. The multi-scale convolution strategy can expand the receptive field and combine the attention mechanism of the MAE to capture the dependency between distant pixels and acquire shallow and deep image features. Knowledge distillation can take the abundant knowledge from the teacher net to the student net, reduce the number of parameters, and compress the model. In this paper, the MAE of the multi-scale convolution strategy is combined with knowledge distillation for FBP. First, the MAE model with a multi-scale convolution strategy is constructed and used in the teacher net for pretraining. Second, the MAE model is constructed for the student net. 
Finally, the teacher net performs knowledge distillation, and the student net receives the loss function transmitted from the teacher net for optimization. The experimental results show that the proposed method outperforms other methods on the FBP task, improves FBP accuracy, and can be widely applied in tasks such as image classification.</p>\",\"PeriodicalId\":21811,\"journal\":{\"name\":\"Scientific Reports\",\"volume\":\"15 1\",\"pages\":\"2784\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2025-01-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754610/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Scientific Reports\",\"FirstCategoryId\":\"103\",\"ListUrlMain\":\"https://doi.org/10.1038/s41598-025-86831-0\",\"RegionNum\":2,\"RegionCategory\":\"综合性期刊\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MULTIDISCIPLINARY SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Scientific Reports","FirstCategoryId":"103","ListUrlMain":"https://doi.org/10.1038/s41598-025-86831-0","RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Masked autoencoder of multi-scale convolution strategy combined with knowledge distillation for facial beauty prediction.

Facial beauty prediction (FBP) is a leading area of research in artificial intelligence. Currently, FBP databases contain a small amount of labeled data and a large amount of unlabeled data, so the features extracted by models trained with supervision alone are limited, resulting in low prediction accuracy. The masked autoencoder (MAE) is a self-supervised learning method that outperforms supervised methods without relying on large-scale labeled databases, and it can effectively improve a model's feature-extraction ability. A multi-scale convolution strategy expands the receptive field and, combined with the MAE's attention mechanism, captures dependencies between distant pixels and acquires both shallow and deep image features. Knowledge distillation transfers the abundant knowledge of a teacher net to a student net, reducing the number of parameters and compressing the model. In this paper, an MAE with a multi-scale convolution strategy is combined with knowledge distillation for FBP. First, the MAE model with the multi-scale convolution strategy is constructed and pretrained as the teacher net. Second, an MAE model is constructed as the student net. Finally, the teacher net performs knowledge distillation, and the student net is optimized with the loss transmitted from the teacher net. Experimental results show that the proposed method outperforms other methods on the FBP task, improves FBP accuracy, and can be widely applied to tasks such as image classification.
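To make the three components of the pipeline concrete (multi-scale convolutional tokenization, MAE-style masking, and teacher-to-student distillation), here is a minimal PyTorch sketch. The paper's code is not reproduced on this page, so all names (`MultiScaleEmbed`, `random_mask`, `distill_loss`), the kernel sizes, the 0.75 mask ratio, and the feature-matching form of the distillation loss are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEmbed(nn.Module):
    """Tokenize an image with parallel strided convolutions of several
    kernel sizes, so each token mixes small- and large-receptive-field
    (shallow and deep) features before the transformer encoder."""
    def __init__(self, in_ch=3, dim=192, patch=16, kernels=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, dim // len(kernels), kernel_size=k,
                      stride=patch, padding=k // 2)
            for k in kernels
        ])

    def forward(self, x):                          # x: (B, 3, 224, 224)
        feats = [b(x) for b in self.branches]      # each: (B, 64, 14, 14)
        tokens = torch.cat(feats, dim=1)           # (B, 192, 14, 14)
        return tokens.flatten(2).transpose(1, 2)   # (B, 196, 192)

def random_mask(tokens, mask_ratio=0.75):
    """MAE-style random masking: keep a small visible subset of tokens;
    during pretraining a decoder would reconstruct the masked rest."""
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    ids = torch.rand(B, N, device=tokens.device).argsort(dim=1)[:, :n_keep]
    kept = torch.gather(tokens, 1, ids.unsqueeze(-1).expand(-1, -1, D))
    return kept, ids

def distill_loss(student_score, student_feat, teacher_feat, target,
                 alpha=0.5):
    """Assumed distillation objective: a weighted sum of the student's
    own regression error on the beauty score and an L2 match to the
    frozen teacher's features (one common form of feature distillation)."""
    task = F.mse_loss(student_score, target)
    kd = F.mse_loss(student_feat, teacher_feat.detach())
    return alpha * task + (1 - alpha) * kd

if __name__ == "__main__":
    embed = MultiScaleEmbed()
    imgs = torch.randn(2, 3, 224, 224)
    tokens = embed(imgs)               # (2, 196, 192)
    visible, _ = random_mask(tokens)   # (2, 49, 192) tokens seen by encoder
    print(tokens.shape, visible.shape)
```

In this sketch the three convolution branches all produce the same 14×14 token grid, so concatenating them along the channel axis yields one token sequence that carries all scales at once; the `detach()` on the teacher features keeps gradients from flowing into the (already pretrained) teacher, which is what lets the smaller student be optimized alone.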

Source journal: Scientific Reports
CiteScore: 7.50
Self-citation rate: 4.30%
Annual publications: 19,567
Review time: 3.9 months
Journal description: We publish original research from all areas of the natural sciences, psychology, medicine and engineering. You can learn more about what we publish by browsing our specific scientific subject areas below or explore Scientific Reports by browsing all articles and collections. Scientific Reports has a 2-year impact factor of 4.380 (2021) and is the 6th most-cited journal in the world, with more than 540,000 citations in 2020 (Clarivate Analytics, 2021).
•Engineering: covers all aspects of engineering, technology, and applied science. It plays a crucial role in the development of technologies to address some of the world's biggest challenges, helping to save lives and improve the way we live.
•Physical sciences: the academic disciplines that aim to uncover the underlying laws of nature, often written in the language of mathematics. It is a collective term for areas of study including astronomy, chemistry, materials science and physics.
•Earth and environmental sciences: cover all aspects of Earth and planetary science and broadly encompass solid Earth processes, surface and atmospheric dynamics, Earth system history, climate and climate change, marine and freshwater systems, and ecology. This area also considers the interactions between humans and these systems.
•Biological sciences: encompass all the divisions of natural sciences examining various aspects of vital processes. The concept includes anatomy, physiology, cell biology, biochemistry and biophysics, and covers all organisms from microorganisms and animals to plants.
•Health sciences: study health, disease and healthcare. This field aims to develop knowledge, interventions and technology for use in healthcare to improve the treatment of patients.