{"title":"The effect of the head number for multi-head self-attention in remaining useful life prediction of rolling bearing and interpretability","authors":"Qiwu Zhao, Xiaoli Zhang, Fangzhen Wang, Panfeng Fan, Erick Mbeka","doi":"10.1016/j.neucom.2024.128946","DOIUrl":null,"url":null,"abstract":"<div><div>As one of the machine learning (ML) models, the multi-head self-attention mechanism (MSM) is competent in encoding high-level feature representations, providing computing superiorities, and systematically processing sequences bypassing the recurrent neural networks (RNN) models. However, the model performance and computational results are affected by head number, and the lack of impact interpretability has become a primary obstacle due to the complex internal working mechanisms. Therefore, the effects of the head number of the MSM on the accuracy of the result, the robustness of the model, and computation efficiency are investigated in the remaining useful life (RUL) prediction of rolling bearings. The results show that the accuracy of prediction results will be reduced caused by large or few head numbers. In addition, the more heads are selected, the more robust and higher the predictive efficiency of the model is achieved. The above effects are explained relying on the visualization of the attention weight distribution and functional networks, which are constructed and solved by the equivalent fully connected layer and graph theory analysis, respectively. The model's attention coefficient distribution during training and prediction shows that the representative information will be captured inadequately if fewer heads are selected, which causes MSM to neglect to assign large attention coefficients to degraded information. On the contrary, representational degradation information and redundant information will be acquired by models with too many heads. MSM will be disturbed by this redundant information in the attention weight distribution, resulting in incorrect allocation of attention. Both of these cases will reduce the accuracy of the prediction results. In addition, the selection rules of the head number are established based on the feature complexity that is measured by the sample entropy (SamEn). The local range for head selection is also found based on the relationship between head number and feature complexity; The effects of the head number of the MSM on the robustness of the model and computation efficiency are explained by the changes in the three parameters (average of the clustering coefficients, global efficiency, and of the average shortest path length) of the graph, which is constructed after solving the function network. The research provides a reference for rolling bearing prediction with high computational accuracy, calculation efficiency, and strong robustness using MSM.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"616 ","pages":"Article 128946"},"PeriodicalIF":5.5000,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S092523122401717X","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
As one of the machine learning (ML) models, the multi-head self-attention mechanism (MSM) excels at encoding high-level feature representations, offers computational advantages, and processes sequences without the recurrence of recurrent neural network (RNN) models. However, model performance and computational results depend on the number of heads, and the lack of interpretability of this effect, owing to the mechanism's complex internal workings, has become a primary obstacle. Therefore, the effects of the head number of the MSM on prediction accuracy, model robustness, and computational efficiency are investigated in the remaining useful life (RUL) prediction of rolling bearings. The results show that prediction accuracy is reduced when the head number is either too large or too small, whereas selecting more heads yields a more robust model and higher predictive efficiency. These effects are explained through visualization of the attention weight distribution and through functional networks, which are constructed from the equivalent fully connected layer and analyzed with graph theory, respectively. The model's attention coefficient distribution during training and prediction shows that with too few heads, representative information is captured inadequately, so the MSM fails to assign large attention coefficients to degradation information. Conversely, a model with too many heads acquires both representative degradation information and redundant information; the redundant information disturbs the attention weight distribution and leads to incorrect allocation of attention. Both cases reduce prediction accuracy. In addition, selection rules for the head number are established based on feature complexity as measured by sample entropy (SampEn), and a local range for head selection is identified from the relationship between head number and feature complexity. The effects of the head number on model robustness and computational efficiency are explained by changes in three parameters of the graph obtained by solving the functional network: the average clustering coefficient, the global efficiency, and the average shortest path length. This research provides a reference for rolling bearing RUL prediction with high accuracy, high computational efficiency, and strong robustness using the MSM.
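To make the role of the head number concrete, below is a minimal sketch of a multi-head self-attention layer in PyTorch. This is an illustration of the generic mechanism, not the authors' implementation; the names (`embed_dim`, `num_heads`) and the single-layer structure are assumptions for exposition. Note that each head operates on a slice of width `embed_dim / num_heads`, which is why the head count directly shapes what each head can represent.

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Minimal multi-head self-attention; the head count splits the embedding."""
    def __init__(self, embed_dim: int, num_heads: int):
        super().__init__()
        assert embed_dim % num_heads == 0, "embed_dim must divide evenly by num_heads"
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads  # fewer heads -> wider heads
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.out = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, time, head_dim)
        def split(z):
            return z.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        weights = attn.softmax(dim=-1)   # per-head attention weight distribution
        ctx = weights @ v                # (b, heads, t, head_dim)
        ctx = ctx.transpose(1, 2).reshape(b, t, d)
        return self.out(ctx)

# Example: a batch of feature sequences of length 64 with 128-dim embeddings.
x = torch.randn(8, 64, 128)
for h in (2, 4, 8):                      # sweep the head number
    y = MultiHeadSelfAttention(128, h)(x)
    print(h, y.shape)                    # output shape is invariant to head count
```

The sweep shows that changing the head number leaves the output shape unchanged; what changes is how the attention weights partition across heads, which is exactly the quantity the paper visualizes.

The paper's head-selection rule keys on feature complexity measured by sample entropy. Below is a minimal sketch of a standard SampEn computation; the defaults (embedding dimension m = 2, tolerance r = 0.2 times the standard deviation) are common choices in the signal-analysis literature, not values taken from the paper.

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r: float | None = None) -> float:
    """SampEn(m, r) = -ln(A / B), where B counts template pairs of length m
    within tolerance r (Chebyshev distance) and A counts pairs of length m + 1,
    self-matches excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)              # common default tolerance

    def count_matches(length: int) -> int:
        n = len(x) - m                    # same template count for both lengths
        templates = np.array([x[i:i + length] for i in range(n)])
        count = 0
        for i in range(n - 1):
            # Chebyshev distance from template i to all later templates
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# Higher SampEn means a more complex (less self-similar) feature series.
rng = np.random.default_rng(0)
print(sample_entropy(np.sin(np.linspace(0, 8 * np.pi, 500))))  # low: regular
print(sample_entropy(rng.standard_normal(500)))                # high: irregular
```

The robustness and efficiency explanation rests on three graph-level statistics of the functional network. How such statistics are read off a graph can be sketched with networkx; the random graph below is only a placeholder, since the paper's functional network is derived from the equivalent fully connected layer, which is not reproduced here.

```python
import networkx as nx

# Placeholder graph standing in for the functional network.
g = nx.erdos_renyi_graph(n=30, p=0.2, seed=42)

avg_clustering = nx.average_clustering(g)   # local interconnectedness
global_eff = nx.global_efficiency(g)        # mean inverse shortest-path length
# average_shortest_path_length requires a connected graph
if nx.is_connected(g):
    avg_path = nx.average_shortest_path_length(g)
else:
    # fall back to the largest connected component
    comp = g.subgraph(max(nx.connected_components(g), key=len))
    avg_path = nx.average_shortest_path_length(comp)

print(f"average clustering coefficient: {avg_clustering:.3f}")
print(f"global efficiency:              {global_eff:.3f}")
print(f"average shortest path length:   {avg_path:.3f}")
```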
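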
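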
Journal Introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.