PRCBERT: Prompt Learning for Requirement Classification using BERT-based Pretrained Language Models
Xianchang Luo, Yinxing Xue, Zhenchang Xing, Jiamou Sun
Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE 2022), published 2022-10-10
DOI: 10.1145/3551349.3560417 (https://doi.org/10.1145/3551349.3560417)
Software requirement classification is a longstanding and important problem in requirements engineering. Previous studies have applied various machine learning techniques to this problem, including Support Vector Machines (SVM) and decision trees. With the recent popularity of NLP techniques, the state-of-the-art approach NoRBERT utilizes the pre-trained language model BERT and achieves satisfactory performance. However, the PROMISE dataset used by existing approaches consists of only a few hundred requirements, which are outdated with respect to today's technology and market trends. Moreover, the NLP techniques applied in these approaches may be obsolete. In this paper, we propose PRCBERT, an approach of prompt learning for requirement classification using BERT-based pre-trained language models, which applies flexible prompt templates to achieve accurate requirement classification. Experiments conducted on two existing small-scale requirement datasets (PROMISE and NFR-Review) and our collected large-scale requirement dataset NFR-SO show that PRCBERT exhibits moderately better classification performance than NoRBERT and MLM-BERT (BERT with the standard prompt template). On the de-labeled NFR-Review and NFR-SO datasets, Trans_PRCBERT (the version of PRCBERT fine-tuned on PROMISE) achieves satisfactory zero-shot performance, with F1-scores of 53.27% and 72.96% respectively, when a self-learning strategy is enabled.
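The MLM-style prompt classification mentioned in the abstract can be illustrated with a minimal sketch. The code below is not the authors' PRCBERT implementation: the prompt template wording, the label-word mapping, and the sample requirement are illustrative assumptions. It only shows the general idea of wrapping a requirement in a cloze-style template and letting a BERT masked-language-model head score candidate label words at the [MASK] position.

# Minimal sketch of MLM-style prompt classification with BERT.
# NOTE: this is NOT the authors' PRCBERT code. The template wording, the
# label-word mapping, and the sample requirement are illustrative assumptions.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical single-token label words mapping a [MASK] prediction to classes.
label_words = {
    "functional": "function",
    "security": "security",
    "performance": "performance",
}

def classify(requirement: str) -> str:
    # Wrap the requirement in a cloze-style prompt template; the model's score
    # for each label word at the [MASK] position decides the predicted class.
    prompt = f"{requirement} This is a {tokenizer.mask_token} requirement."
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=256)
    mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits          # shape: [1, seq_len, vocab_size]
    mask_logits = logits[0, mask_index[0]]       # vocabulary scores at the [MASK] slot
    scores = {
        cls: mask_logits[tokenizer.convert_tokens_to_ids(word)].item()
        for cls, word in label_words.items()
    }
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print(classify("The system shall encrypt all stored user passwords."))

A flexible prompt template, as described in the paper, would go beyond this fixed wording (for example, by learning or varying the template and label words), but the scoring mechanism at the [MASK] position is the same.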