{"title":"自动汇总堆栈溢出职位","authors":"Bonan Kou, Muhao Chen, Tianyi Zhang","doi":"10.1109/ICSE48619.2023.00158","DOIUrl":null,"url":null,"abstract":"Software developers often resort to Stack Overflow (SO) to fill their programming needs. Given the abundance of relevant posts, navigating them and comparing different solutions is tedious and time-consuming. Recent work has proposed to automatically summarize SO posts to concise text to facilitate the navigation of SO posts. However, these techniques rely only on information retrieval methods or heuristics for text summarization, which is insufficient to handle the ambiguity and sophistication of natural language. This paper presents a deep learning based framework called Assortfor SO post summarization. Assortincludes two complementary learning methods, $\\mathbf{Assort}_{S}$ and $\\mathbf{Assort}_{IS}$, to address the lack of labeled training data for SO post summarization. $\\mathbf{Assort}_{S}$ is designed to directly train a novel ensemble learning model with BERT embeddings and domain-specific features to account for the unique characteristics of SO posts. By contrast, $\\mathbf{Assort}_{IS}$ is designed to reuse pre-trained models while addressing the domain shift challenge when no training data is present (i.e., zero-shot learning). Both $\\mathbf{Assort}_{S}$ and $\\mathbf{Assort}_{IS}$ outperform six existing techniques by at least 13% and 7% respectively in terms of the F1 score. Furthermore, a human study shows that participants significantly preferred summaries generated by $\\mathbf{Assort}_{S}$ and $\\mathbf{Assort}_{IS}$ over the best baseline, while the preference difference between $\\mathbf{Assort}_{S}$ and $\\mathbf{Assort}_{IS}$ was small.","PeriodicalId":376379,"journal":{"name":"2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Automated Summarization of Stack Overflow Posts\",\"authors\":\"Bonan Kou, Muhao Chen, Tianyi Zhang\",\"doi\":\"10.1109/ICSE48619.2023.00158\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Software developers often resort to Stack Overflow (SO) to fill their programming needs. Given the abundance of relevant posts, navigating them and comparing different solutions is tedious and time-consuming. Recent work has proposed to automatically summarize SO posts to concise text to facilitate the navigation of SO posts. However, these techniques rely only on information retrieval methods or heuristics for text summarization, which is insufficient to handle the ambiguity and sophistication of natural language. This paper presents a deep learning based framework called Assortfor SO post summarization. Assortincludes two complementary learning methods, $\\\\mathbf{Assort}_{S}$ and $\\\\mathbf{Assort}_{IS}$, to address the lack of labeled training data for SO post summarization. $\\\\mathbf{Assort}_{S}$ is designed to directly train a novel ensemble learning model with BERT embeddings and domain-specific features to account for the unique characteristics of SO posts. By contrast, $\\\\mathbf{Assort}_{IS}$ is designed to reuse pre-trained models while addressing the domain shift challenge when no training data is present (i.e., zero-shot learning). 
Both $\\\\mathbf{Assort}_{S}$ and $\\\\mathbf{Assort}_{IS}$ outperform six existing techniques by at least 13% and 7% respectively in terms of the F1 score. Furthermore, a human study shows that participants significantly preferred summaries generated by $\\\\mathbf{Assort}_{S}$ and $\\\\mathbf{Assort}_{IS}$ over the best baseline, while the preference difference between $\\\\mathbf{Assort}_{S}$ and $\\\\mathbf{Assort}_{IS}$ was small.\",\"PeriodicalId\":376379,\"journal\":{\"name\":\"2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE)\",\"volume\":\"54 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSE48619.2023.00158\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSE48619.2023.00158","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Software developers often resort to Stack Overflow (SO) to fulfill their programming needs. Given the abundance of relevant posts, navigating them and comparing different solutions is tedious and time-consuming. Recent work has proposed to automatically summarize SO posts into concise text to facilitate navigation. However, these techniques rely only on information retrieval methods or heuristics for text summarization, which is insufficient to handle the ambiguity and sophistication of natural language. This paper presents a deep-learning-based framework called Assort for SO post summarization. Assort includes two complementary learning methods, $\mathbf{Assort}_{S}$ and $\mathbf{Assort}_{IS}$, to address the lack of labeled training data for SO post summarization. $\mathbf{Assort}_{S}$ is designed to directly train a novel ensemble learning model with BERT embeddings and domain-specific features to account for the unique characteristics of SO posts. By contrast, $\mathbf{Assort}_{IS}$ is designed to reuse pre-trained models while addressing the domain shift challenge when no training data is present (i.e., zero-shot learning). Both $\mathbf{Assort}_{S}$ and $\mathbf{Assort}_{IS}$ outperform six existing techniques by at least 13% and 7%, respectively, in terms of F1 score. Furthermore, a human study shows that participants significantly preferred summaries generated by $\mathbf{Assort}_{S}$ and $\mathbf{Assort}_{IS}$ over the best baseline, while the preference difference between $\mathbf{Assort}_{S}$ and $\mathbf{Assort}_{IS}$ was small.
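To make the supervised idea behind $\mathbf{Assort}_{S}$ concrete, the sketch below frames extractive SO-post summarization as per-sentence classification: each sentence is represented by a BERT embedding concatenated with a few hand-crafted, SO-specific features, and a simple classifier scores it as summary-worthy or not. This is only a minimal illustration under assumed choices; the specific features, the `bert-base-uncased` checkpoint, the logistic-regression scorer, and the 0.5 threshold are placeholders for exposition, not the paper's actual ensemble model.

```python
# Minimal sketch: extractive summarization as sentence classification.
# BERT embedding + hypothetical SO-specific features -> per-sentence score.
import re
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str) -> np.ndarray:
    """Mean-pooled BERT token embeddings as a fixed-size sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt",
                       truncation=True, max_length=128)
    with torch.no_grad():
        hidden = bert(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

def domain_features(sentence: str, index: int, total: int) -> np.ndarray:
    """Hypothetical SO-specific features: inline code, position, length."""
    return np.array([
        1.0 if re.search(r"`[^`]+`", sentence) else 0.0,  # mentions inline code
        index / max(total - 1, 1),                        # relative position in post
        min(len(sentence.split()) / 40.0, 1.0),           # normalized sentence length
    ])

def featurize(sentences: list[str]) -> np.ndarray:
    """Concatenate the BERT embedding and domain features per sentence."""
    n = len(sentences)
    return np.stack([
        np.concatenate([embed(s), domain_features(s, i, n)])
        for i, s in enumerate(sentences)
    ])

# Training would use sentences labeled as summary-worthy or not:
#   clf = LogisticRegression(max_iter=1000).fit(featurize(train_sents), labels)
# A summary of a new post then keeps its highest-scoring sentences:
#   scores = clf.predict_proba(featurize(post_sents))[:, 1]
#   summary = [s for s, p in zip(post_sents, scores) if p > 0.5]
```

The zero-shot variant, $\mathbf{Assort}_{IS}$, would instead feed the post to an off-the-shelf pre-trained summarizer and cope with the domain gap (e.g., code fragments and SO-specific phrasing) without any labeled SO data; the abstract gives no implementation details, so no sketch is attempted for it here.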