{"title":"Mask-based Text Scoring for Product Title Summarization","authors":"Xinyi Guan, Shun Long, Weiheng Zhu, Silei Cao, Fangting Liao","doi":"10.1109/ICSAI57119.2022.10005399","DOIUrl":null,"url":null,"abstract":"In e-commerce, long product titles with rich information help attract users, but they are usually truncated for display on small-screen mobile devices, which results in neglection of important information and in turn low click-through rate. This paper presents a novel product title summarization method via the use of a mask-based text information scoring network. Via quantified evaluation of expressiveness, the most telling points are identified from the original title for a concise version which best retains its content. Our experiments show that, even without external information, our proposed method MPTS outperforms established benchmark models by 1.48% (ROUGE-1), 5.11% (ROUGE-2) and 1.37% (ROUGE-L) respectively.","PeriodicalId":339547,"journal":{"name":"2022 8th International Conference on Systems and Informatics (ICSAI)","volume":"465 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 8th International Conference on Systems and Informatics (ICSAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSAI57119.2022.10005399","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
In e-commerce, long product titles rich in information help attract users, but they are usually truncated for display on small-screen mobile devices, which causes important information to be lost and in turn lowers the click-through rate. This paper presents a novel product title summarization method based on a mask-based text information scoring network. By quantitatively evaluating the expressiveness of each part of the original title, the most informative segments are identified and assembled into a concise version that best preserves the title's content. Our experiments show that, even without external information, our proposed method MPTS outperforms established benchmark models by 1.48% (ROUGE-1), 5.11% (ROUGE-2) and 1.37% (ROUGE-L), respectively.
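The sketch below illustrates the general idea of mask-based scoring for extractive title compression: mask out one token at a time, measure how much the title's meaning degrades, and keep the tokens whose removal hurts the most. This is not the authors' MPTS network; it approximates the scoring step with an off-the-shelf sentence encoder, and the model name, token budget, and example title are illustrative assumptions.

```python
# Minimal sketch of mask-based token scoring for product title summarization.
# Assumption: a generic sentence encoder stands in for the paper's trained
# scoring network; the actual MPTS architecture is not reproduced here.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder, not MPTS


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def mask_based_scores(title: str) -> list[tuple[str, float]]:
    """Score each token by the semantic similarity drop when it is masked out."""
    tokens = title.split()
    full_emb = model.encode(title)
    scores = []
    for i in range(len(tokens)):
        masked = " ".join(tokens[:i] + tokens[i + 1:])  # title with token i removed
        masked_emb = model.encode(masked)
        # Larger similarity drop => the token carries more information.
        scores.append((tokens[i], 1.0 - cosine(full_emb, masked_emb)))
    return scores


def summarize(title: str, budget: int = 6) -> str:
    """Keep the `budget` highest-scoring tokens, preserving their original order."""
    scored = mask_based_scores(title)
    keep = sorted(range(len(scored)), key=lambda i: scored[i][1], reverse=True)[:budget]
    return " ".join(scored[i][0] for i in sorted(keep))


if __name__ == "__main__":
    long_title = ("Wireless Bluetooth Earbuds Noise Cancelling Headphones "
                  "with Charging Case Waterproof for Sports Running")
    print(summarize(long_title))  # e.g. a 6-token condensed title
```

In this toy version the "mask" is a simple token deletion and the length budget is fixed; the paper instead learns a scoring network and selects segments so the condensed title best retains the original content.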