{"title":"Seq2Seq dynamic planning network for progressive text generation","authors":"Di Wu, Peng Cheng, Yuying Zheng","doi":"10.1016/j.csl.2024.101687","DOIUrl":null,"url":null,"abstract":"<div><p>Long text generation is a hot topic in natural language processing. To address the problem of insufficient semantic representation and incoherent text generation in existing long text models, the Seq2Seq dynamic planning network progressive text generation model (DPPG-BART) is proposed. In the data pre-processing stage, the lexical division sorting algorithm is used. To obtain hierarchical sequences of keywords with clear information content, word weight values are calculated and ranked by TF-IDF of word embedding. To enhance the input representation, the dynamic planning progressive generation network is constructed. Positional features and word embedding vector features are integrated at the input side of the model. At the same time, to enrich the semantic information and expand the content of the text, the relevant concept words are generated by the concept expansion module. The scoring network and feedback mechanism are used to adjust the concept expansion module. Experimental results show that the DPPG-BART model is optimized over GPT2-S, GPT2-L, BART and ProGen-2 model approaches in terms of metric values of MSJ, B-BLEU and FBD on long text datasets from two different domains, CNN and Writing Prompts.</p></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0885230824000706/pdfft?md5=9c314286f96f095183826029b974049f&pid=1-s2.0-S0885230824000706-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Speech and Language","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885230824000706","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Long text generation is an active topic in natural language processing. To address the insufficient semantic representation and incoherent output of existing long-text generation models, the Seq2Seq dynamic planning network progressive text generation model (DPPG-BART) is proposed. In the data pre-processing stage, a lexical division sorting algorithm is applied: word weights are computed and ranked by the TF-IDF of word embeddings to obtain hierarchical keyword sequences with clear information content. To strengthen the input representation, a dynamic planning progressive generation network is constructed that integrates positional features and word embedding vector features at the input side of the model. At the same time, to enrich semantic information and expand the content of the text, a concept expansion module generates related concept words; a scoring network and a feedback mechanism are used to adjust this module. Experimental results show that DPPG-BART outperforms the GPT2-S, GPT2-L, BART, and ProGen-2 approaches on the MSJ, B-BLEU, and FBD metrics on long-text datasets from two different domains, CNN and Writing Prompts.
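The abstract's pre-processing step ranks words by TF-IDF weight to extract keyword sequences. As a minimal sketch of that idea only, the snippet below scores and ranks keywords per document with scikit-learn's TfidfVectorizer; the function name, the top_k parameter, and the sample documents are illustrative assumptions, not the authors' implementation.

```python
# Illustrative TF-IDF keyword ranking, not the paper's actual code.
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_keywords(documents, top_k=10):
    """Score every word in each document by TF-IDF and return the
    top_k (word, weight) pairs per document, highest weight first."""
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(documents)  # sparse (n_docs, vocab_size)
    vocab = vectorizer.get_feature_names_out()
    ranked = []
    for row in tfidf:  # one sparse row per document
        weights = row.toarray().ravel()
        order = weights.argsort()[::-1][:top_k]  # indices of largest weights
        ranked.append([(vocab[i], float(weights[i]))
                       for i in order if weights[i] > 0])
    return ranked

docs = [
    "long text generation with sequence to sequence planning networks",
    "concept expansion enriches semantic information in generated text",
]
for keywords in rank_keywords(docs, top_k=5):
    print(keywords)
```

Ranking by TF-IDF rather than raw frequency favors words that are distinctive to a document, which matches the abstract's goal of keyword sequences with "clear information content"; the paper's hierarchical ordering of these keywords would build on top of such a weighting step.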
About the Journal
Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language.
The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.