MTPL-G2T: Graph-to-Text Generation Task Based on Mixed Template Prompt Learning

Jianhe Cen, Kun Zhang, Jingyuan Li, Shiqi Sun, Yuanzhuo Wang
{"title":"MTPL-G2T:基于混合模板提示学习的图到文本生成任务","authors":"Jianhe Cen, Kun Zhang, Jingyuan Li, Shiqi Sun, Yuanzhuo Wang","doi":"10.1109/WI-IAT55865.2022.00089","DOIUrl":null,"url":null,"abstract":"The Graph-to-Text(G2T) generation tasks are mainly done by pre-training and fine-tuning currently, but the drawback of fine-tuning is that it changes all parameters of the pre-trained model. In this paper, we aim to accomplish the text generation task through prompt learning so that no or a small number of model parameters can be changed. Also, we analyze the impact of three different prompt templates on the generation results. The results show that when the pre-trained language model is large (e.g., T5), prompt learning is competitive with finetuning, but the number of parameters that need to be modified for prompt learning is much smaller than for fine-tuning; meanwhile, compared with text templates and soft templates, using mixed prompt templates can make the model converge faster.","PeriodicalId":345445,"journal":{"name":"2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)","volume":"194 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MTPL-G2T: Graph-to-Text Generation Task Based on Mixed Template Prompt Learning\",\"authors\":\"Jianhe Cen, Kun Zhang, Jingyuan Li, Shiqi Sun, Yuanzhuo Wang\",\"doi\":\"10.1109/WI-IAT55865.2022.00089\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The Graph-to-Text(G2T) generation tasks are mainly done by pre-training and fine-tuning currently, but the drawback of fine-tuning is that it changes all parameters of the pre-trained model. In this paper, we aim to accomplish the text generation task through prompt learning so that no or a small number of model parameters can be changed. Also, we analyze the impact of three different prompt templates on the generation results. 
The results show that when the pre-trained language model is large (e.g., T5), prompt learning is competitive with finetuning, but the number of parameters that need to be modified for prompt learning is much smaller than for fine-tuning; meanwhile, compared with text templates and soft templates, using mixed prompt templates can make the model converge faster.\",\"PeriodicalId\":345445,\"journal\":{\"name\":\"2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)\",\"volume\":\"194 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/WI-IAT55865.2022.00089\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WI-IAT55865.2022.00089","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Graph-to-Text (G2T) generation is currently handled mainly by pre-training followed by fine-tuning, but fine-tuning has the drawback of updating all parameters of the pre-trained model. In this paper, we instead tackle the generation task with prompt learning, so that few or no model parameters need to be changed, and we analyze how three different prompt templates affect the generated text. The results show that when the pre-trained language model is large (e.g., T5), prompt learning is competitive with fine-tuning while requiring far fewer parameters to be modified; moreover, compared with text templates and soft templates, mixed prompt templates make the model converge faster.
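To make the idea of a mixed (text + soft) prompt template concrete, the sketch below shows one common way to implement it with a frozen T5 model from the HuggingFace transformers library: the linearized graph is wrapped in a fixed text template, a small set of trainable soft-prompt vectors is prepended in embedding space, and only those vectors are optimized. This is an illustrative sketch under stated assumptions, not the authors' MTPL-G2T code; the "graph to text:" wording, the prompt length, the learning rate, and the toy WebNLG-style triple are all hypothetical choices.

```python
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tokenizer = T5TokenizerFast.from_pretrained("t5-small")

# Freeze the pre-trained model: only the soft prompt below receives gradients.
for p in model.parameters():
    p.requires_grad = False

n_soft = 20                                   # number of soft-prompt vectors (assumed)
soft_prompt = nn.Parameter(torch.randn(n_soft, model.config.d_model) * 0.02)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

def linearize(triples):
    # Text-template half of the mixed template: verbalize the graph as a string.
    return "graph to text: " + " ; ".join(f"{s} | {r} | {o}" for s, r, o in triples)

def train_step(triples, reference):
    enc = tokenizer(linearize(triples), return_tensors="pt")
    labels = tokenizer(reference, return_tensors="pt").input_ids

    # Embed the text template, then prepend the trainable soft-prompt vectors.
    tok_embeds = model.get_input_embeddings()(enc.input_ids)              # (1, L, d)
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), tok_embeds], 1)  # (1, n_soft+L, d)
    attn = torch.cat([torch.ones(1, n_soft, dtype=enc.attention_mask.dtype),
                      enc.attention_mask], dim=1)

    loss = model(inputs_embeds=inputs_embeds, attention_mask=attn, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Toy usage on a single WebNLG-style triple (hypothetical example).
print(train_step([("Alan_Bean", "occupation", "Test_pilot")],
                 "Alan Bean worked as a test pilot."))
```

In this setup the only trainable tensor is the soft prompt (n_soft × d_model values), which is consistent with the abstract's point that prompt learning modifies orders of magnitude fewer parameters than full fine-tuning, while the fixed text template supplies task wording that the soft vectors alone would otherwise have to learn.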