Generating High Quality Titles in StackOverflow via Data Denoising Method

Shikai Guo, Bowen Ping, Zixuan Song, Hui Li, Rong Chen
{"title":"Generating High Quality Titles in StackOverflow via Data Denoising Method","authors":"Shikai Guo, Bowen Ping, Zixuan Song, Hui Li, Rong Chen","doi":"10.1109/PAAP56126.2022.10010656","DOIUrl":null,"url":null,"abstract":"StackOverflow is one of the most popular question-and-answer platforms on the internet and whether posts on StackOverflow will be answered largely depends on their titles’ quality. Based on recurrent neural networks (RNN) or transformers, previous studies have attempted to use real posts from StackOverflow to generate better titles. However, the challenge of noise in existing data has been ignored, leading models can’t generate higher quality titles. To address this issue, we propose the K-clusters confidence learning for code titles (KCL-CT) model, which contains code clustering and confident learning (CL) denoising components. Specifically, the code clustering component is used to capture the word order and semantic information in code and classify code into different functional categories. The CL denoising component receives the output from the code clustering component and employs a heuristic method based on a confidence threshold to prune raw datasets. 
We conducted experiments based on Java, Python, JavaScript, SQL and C# datasets, the results of which indicated that in terms of the BLEU and ROUGE scores, the proposed KCL-CT model can outperform previous state-of-the-art models by 2.0%–11.1% and 2.5%–14.0%, respectively.","PeriodicalId":336339,"journal":{"name":"2022 IEEE 13th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 13th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PAAP56126.2022.10010656","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

StackOverflow is one of the most popular question-and-answer platforms on the internet, and whether a post on StackOverflow is answered depends largely on the quality of its title. Previous studies, based on recurrent neural networks (RNNs) or Transformers, have attempted to use real posts from StackOverflow to generate better titles. However, the challenge of noise in the existing data has been ignored, preventing these models from generating higher-quality titles. To address this issue, we propose the K-clusters confidence learning for code titles (KCL-CT) model, which contains code clustering and confident learning (CL) denoising components. Specifically, the code clustering component captures the word order and semantic information in code and classifies code snippets into functional categories. The CL denoising component takes the output of the code clustering component and applies a heuristic method based on a confidence threshold to prune the raw datasets. We conducted experiments on Java, Python, JavaScript, SQL, and C# datasets; the results indicate that, in terms of BLEU and ROUGE scores, the proposed KCL-CT model outperforms previous state-of-the-art models by 2.0%–11.1% and 2.5%–14.0%, respectively.
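The confidence-threshold pruning described above can be illustrated with a minimal sketch in the spirit of confident learning (this is not the authors' implementation; the function name and inputs are hypothetical): each class's threshold is its average self-confidence, and an example is kept only if the model's predicted probability for its assigned label meets that threshold.

```python
# Minimal sketch of CL-style confidence-threshold pruning (an assumption,
# not the KCL-CT implementation). `probs` holds predicted class
# probabilities; `labels` holds the possibly noisy cluster/label ids.
import numpy as np

def prune_by_confidence(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Return indices of examples kept after CL-style pruning.

    probs  -- (n_samples, n_classes) predicted class probabilities
    labels -- (n_samples,) noisy integer labels (e.g. code-cluster ids)
    """
    n_classes = probs.shape[1]
    # Per-class threshold: mean predicted probability of class j over the
    # examples currently labeled j (the class's average self-confidence).
    thresholds = np.array([
        probs[labels == j, j].mean() if np.any(labels == j) else 0.0
        for j in range(n_classes)
    ])
    # Each example's confidence in its own assigned label.
    self_conf = probs[np.arange(len(labels)), labels]
    # Keep an example only if its self-confidence meets its class threshold.
    return np.flatnonzero(self_conf >= thresholds[labels])

if __name__ == "__main__":
    probs = np.array([[0.9, 0.1],
                      [0.2, 0.8],
                      [0.4, 0.6],
                      [0.7, 0.3]])
    labels = np.array([0, 1, 0, 0])  # example 2 looks mislabeled
    print(prune_by_confidence(probs, labels).tolist())  # → [0, 1, 3]
```

Example 2 claims label 0 but the model assigns it only 0.4 probability, below class 0's average self-confidence of about 0.67, so it is pruned from the training set while the other three examples survive.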