A Lightweight Pre-Crash Occupant Injury Prediction Model Distills Knowledge From Its Post-Crash Counterpart.

IF 1.7 · CAS Tier 4 (Medicine) · JCR Q4 (BIOPHYSICS)
Qingfan Wang, Ruiyang Li, Shi Shang, Qing Zhou, Bingbing Nie
Journal: Journal of Biomechanical Engineering-Transactions of the Asme
DOI: 10.1115/1.4063033
Publication date: 2024-03-01 (Journal Article)
Citations: 0

Abstract

A Lightweight Pre-Crash Occupant Injury Prediction Model Distills Knowledge From Its Post-Crash Counterpart.

Accurate occupant injury prediction in near-collision scenarios is vital for guiding intelligent vehicles toward the collision condition with minimal injury risk. Existing studies have focused on boosting prediction performance by introducing deep-learning models but encountered heavy computational burdens due to the models' inherent complexity. To better balance these two traditionally contradictory factors, this study proposed a training method for pre-crash injury prediction models, namely knowledge distillation (KD)-based training, inspired by knowledge distillation, an emerging model-compression technique. Technically, we first trained a high-accuracy injury prediction model as an experienced "teacher", using informative post-crash sequence inputs (i.e., vehicle crash pulses) and a relatively complex network architecture. A lightweight pre-crash injury prediction model (the "student") then learned both from the ground truth at the output layer (conventional prediction loss) and from its teacher at intermediate layers (distillation loss). In this step-by-step teaching framework, the pre-crash model significantly improved the prediction accuracy of the occupant's head abbreviated injury scale (AIS), from 77.2% to 83.2%, without sacrificing computational efficiency. Multiple validation experiments confirmed the effectiveness of the proposed KD-based training framework. This study is expected to provide a reference for balancing the prediction accuracy and computational efficiency of pre-crash injury prediction models, promoting further safety improvements in next-generation intelligent vehicles.
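The teacher-student objective described in the abstract (ground-truth prediction loss at the output layer plus a distillation loss matching the teacher's intermediate-layer features) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the batch size, feature dimension, six-class AIS labeling, the use of MSE for feature matching, and the weighting factor `alpha` are all assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def prediction_loss(student_logits, labels):
    """Conventional cross-entropy against ground-truth AIS class labels."""
    p = softmax(student_logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def distillation_loss(student_feats, teacher_feats):
    """MSE between intermediate-layer features of student and teacher
    (the feature-matching form is an assumption here)."""
    return np.mean((student_feats - teacher_feats) ** 2)

def kd_training_loss(student_logits, labels, student_feats, teacher_feats,
                     alpha=0.5):
    """Combined KD objective: learn from the ground truth at the output
    layer and from the teacher at intermediate layers. `alpha` is a
    hypothetical weighting hyperparameter."""
    return (prediction_loss(student_logits, labels)
            + alpha * distillation_loss(student_feats, teacher_feats))

# Toy batch: 4 samples, 6 AIS classes (0-5), 8-dim intermediate features.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 6))      # student output logits
labels = np.array([0, 2, 1, 3])       # ground-truth AIS classes
s_feats = rng.normal(size=(4, 8))     # student intermediate features
t_feats = rng.normal(size=(4, 8))     # frozen teacher intermediate features
loss = kd_training_loss(logits, labels, s_feats, t_feats, alpha=0.5)
print(float(loss))
```

In training, the teacher (fed post-crash crash pulses) would be frozen while gradients of this combined loss update only the student, which sees only pre-crash inputs.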

Source journal metrics:
CiteScore: 3.40
Self-citation rate: 5.90%
Annual articles: 169
Review turnaround: 4-8 weeks
Journal scope: Artificial Organs and Prostheses; Bioinstrumentation and Measurements; Bioheat Transfer; Biomaterials; Biomechanics; Bioprocess Engineering; Cellular Mechanics; Design and Control of Biological Systems; Physiological Systems.