BET-BiLSTM Model: A Robust Solution for Automated Requirements Classification
Jalil Abbas, Cheng Zhang, Bin Luo
Journal of Software: Evolution and Process, 37(3), published 2025-03-05
DOI: 10.1002/smr.70012 (https://onlinelibrary.wiley.com/doi/10.1002/smr.70012)
Citations: 0
Abstract
Transformer methods have revolutionized software requirements classification by applying advanced natural language processing to understand and categorize requirements accurately. While traditional methods such as Doc2Vec and TF-IDF are useful, they often fail to capture the deep contextual relationships and subtle meanings inherent in textual data. Individual transformer models have distinct strengths and weaknesses that affect which aspects of the data they capture well. Consequently, relying on a single model can yield suboptimal feature representations and limit overall classification performance. To address this challenge, our study introduces the BET-BiLSTM (balanced ensemble transformers using Bi-LSTM) model. It combines the strengths of five transformer-based models (BERT, RoBERTa, XLNet, GPT-2, and T5) through a weighted-averaging ensemble, producing a richer and more resilient feature set. By employing data balancing techniques, we ensure a well-distributed representation of classes, addressing the issue of class imbalance. The BET-BiLSTM model achieves a classification accuracy of 96%. Moreover, its practical applicability is validated through successful application to three publicly available unlabeled datasets and one additional labeled dataset: the model improved the completeness and reliability of these datasets by accurately predicting labels for previously unclassified requirements. This makes our approach a powerful tool for large-scale requirements analysis and classification, outperforming traditional single-model methods and demonstrating real-world effectiveness.
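The core fusion step the abstract describes — combining per-model feature vectors via a weighted average before the Bi-LSTM classifier — can be sketched as follows. This is a minimal illustration only: the model names are from the paper, but the embedding dimension, the example values, and the weight values are hypothetical assumptions, since the abstract does not publish them.

```python
import numpy as np

# Hypothetical per-model embeddings for one requirement sentence.
# Real embeddings would come from BERT, RoBERTa, XLNet, GPT-2, and T5
# encoders (projected to a common dimension); values here are illustrative.
embeddings = {
    "bert":    np.full(4, 1.0),
    "roberta": np.full(4, 2.0),
    "xlnet":   np.full(4, 3.0),
    "gpt2":    np.full(4, 4.0),
    "t5":      np.full(4, 5.0),
}

# Assumed ensemble weights (the paper does not report its values);
# for a weighted average they must sum to 1.
weights = {"bert": 0.3, "roberta": 0.25, "xlnet": 0.2, "gpt2": 0.15, "t5": 0.1}

def weighted_average_ensemble(embs, w):
    """Fuse model-specific feature vectors into one ensemble feature vector."""
    return sum(w[name] * vec for name, vec in embs.items())

fused = weighted_average_ensemble(embeddings, weights)
print(fused)  # each component is the weighted mean of the five model values
```

In the full pipeline, a sequence of such fused vectors (one per token or sentence) would then be fed to a Bi-LSTM layer for the final classification.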