Automated Summarization of Stack Overflow Posts

Bonan Kou, Muhao Chen, Tianyi Zhang
DOI: 10.1109/ICSE48619.2023.00158
Venue: 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE)
Published: 2023-05-01
Citations: 7

Abstract

Software developers often resort to Stack Overflow (SO) to fill their programming needs. Given the abundance of relevant posts, navigating them and comparing different solutions is tedious and time-consuming. Recent work has proposed to automatically summarize SO posts into concise text to facilitate the navigation of SO posts. However, these techniques rely only on information retrieval methods or heuristics for text summarization, which is insufficient to handle the ambiguity and sophistication of natural language. This paper presents a deep-learning-based framework called Assort for SO post summarization. Assort includes two complementary learning methods, $\mathbf{Assort}_{S}$ and $\mathbf{Assort}_{IS}$, to address the lack of labeled training data for SO post summarization. $\mathbf{Assort}_{S}$ is designed to directly train a novel ensemble learning model with BERT embeddings and domain-specific features to account for the unique characteristics of SO posts. By contrast, $\mathbf{Assort}_{IS}$ is designed to reuse pre-trained models while addressing the domain shift challenge when no training data is present (i.e., zero-shot learning). Both $\mathbf{Assort}_{S}$ and $\mathbf{Assort}_{IS}$ outperform six existing techniques by at least 13% and 7% respectively in terms of the F1 score. Furthermore, a human study shows that participants significantly preferred summaries generated by $\mathbf{Assort}_{S}$ and $\mathbf{Assort}_{IS}$ over the best baseline, while the preference difference between $\mathbf{Assort}_{S}$ and $\mathbf{Assort}_{IS}$ was small.
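The abstract frames summarization extractively: a classifier scores each sentence of a post and the top-scoring sentences form the summary. As a minimal illustration of that framing only (not the authors' implementation — $\mathbf{Assort}_{S}$ uses BERT embeddings and a trained ensemble model, whereas the hand-picked features and weights below are purely hypothetical stand-ins), one can sketch the pipeline like this:

```python
# Toy extractive summarizer for a Stack Overflow answer, sketched in the
# spirit of sentence classification. All features and weights below are
# illustrative assumptions, NOT the Assort method from the paper.
import re

def sentence_features(sentence: str, position: int, total: int) -> dict:
    """Hypothetical domain-specific features for an SO sentence."""
    return {
        "has_code": int(bool(re.search(r"`[^`]+`", sentence))),     # inline code span
        "is_early": int(position < total * 0.3),                    # answers often lead with the fix
        "length_ok": int(5 <= len(sentence.split()) <= 40),         # not a fragment, not a wall
        "imperative": int(sentence.split()[0].lower() in {"use", "try", "add", "call"}),
    }

def score(features: dict) -> float:
    # Hand-set weights stand in for a trained classifier.
    weights = {"has_code": 1.5, "is_early": 1.0, "length_ok": 0.5, "imperative": 1.0}
    return sum(weights[k] * v for k, v in features.items())

def summarize(post: str, k: int = 2) -> list:
    """Return the k highest-scoring sentences, in their original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", post) if s.strip()]
    scored = [(score(sentence_features(s, i, len(sentences))), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:k]
    return [s for _, _, s in sorted(top, key=lambda t: t[1])]

post = ("Use `collections.Counter` to count items. "
        "It is part of the standard library. "
        "I once had a similar problem years ago. "
        "Call `Counter(data).most_common(3)` for the top three.")
print(summarize(post, k=2))
# → ['Use `collections.Counter` to count items.',
#    'Call `Counter(data).most_common(3)` for the top three.']
```

The sketch keeps the two steps the abstract implies — per-sentence scoring, then selection — while the real system replaces the hand-crafted features with BERT embeddings and learns the weights from labeled data.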