{"title":"在ICASSP 2023 SPGC共享任务:利用有限资源的预训练和蒸馏方法生成标题","authors":"Tianxiao Xu, Zihao Zheng, Xinshuo Hu, Zetian Sun, Yu Zhao, Baotian Hu","doi":"10.1109/icassp49357.2023.10097026","DOIUrl":null,"url":null,"abstract":"In this paper, we present our proposed method for the shared task of the ICASSP 2023 Signal Processing Grand Challenge (SPGC). We participate in Topic Title Generation (TTG), Track 3 of General Meeting Understanding and Generation (MUG) [1] in SPGC. The primary objective of this task is to generate a title that effectively summarizes the given topic segment. With the constraints of limited model size and external dataset availability, we propose a method as Pre-training - Distillation / Fine-tuning (PDF), which can efficiently leverage the knowledge from large model and corpus. Our method achieves first place during preliminary and final contests in ICASSP2023 MUG Challenge Track 3.","PeriodicalId":113072,"journal":{"name":"ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"HITSZ TMG at ICASSP 2023 SPGC Shared Task: Leveraging Pre-Training and Distillation Method for Title Generation with Limited Resource\",\"authors\":\"Tianxiao Xu, Zihao Zheng, Xinshuo Hu, Zetian Sun, Yu Zhao, Baotian Hu\",\"doi\":\"10.1109/icassp49357.2023.10097026\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we present our proposed method for the shared task of the ICASSP 2023 Signal Processing Grand Challenge (SPGC). We participate in Topic Title Generation (TTG), Track 3 of General Meeting Understanding and Generation (MUG) [1] in SPGC. The primary objective of this task is to generate a title that effectively summarizes the given topic segment. With the constraints of limited model size and external dataset availability, we propose a method as Pre-training - Distillation / Fine-tuning (PDF), which can efficiently leverage the knowledge from large model and corpus. 
Our method achieves first place during preliminary and final contests in ICASSP2023 MUG Challenge Track 3.\",\"PeriodicalId\":113072,\"journal\":{\"name\":\"ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)\",\"volume\":\"17 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/icassp49357.2023.10097026\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icassp49357.2023.10097026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
In this paper, we present our method for the shared task of the ICASSP 2023 Signal Processing Grand Challenge (SPGC). We participate in Topic Title Generation (TTG), Track 3 of the General Meeting Understanding and Generation (MUG) challenge [1] in SPGC. The objective of this task is to generate a title that effectively summarizes a given topic segment. Under the challenge's constraints on model size and external dataset availability, we propose a Pre-training - Distillation / Fine-tuning (PDF) method, which efficiently leverages knowledge from large models and corpora. Our method achieved first place in both the preliminary and final rounds of ICASSP 2023 MUG Challenge Track 3.
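The abstract does not detail the PDF pipeline itself, but the distillation step it names typically means training a small student model to match a larger teacher's output distribution in addition to the reference titles. Below is a minimal, hypothetical sketch of that standard logit-based knowledge distillation loss, not the authors' actual implementation; all function and variable names (distillation_loss, temperature, alpha) are illustrative assumptions, and it presumes the teacher and student share a vocabulary.

```python
# Hypothetical sketch of logit-based knowledge distillation for a
# seq2seq title-generation student; NOT the paper's PDF pipeline.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5, ignore_index=-100):
    """Blend a soft-target KL term (teacher -> student) with hard-label CE.

    student_logits, teacher_logits: (batch, seq_len, vocab)
    labels: (batch, seq_len) reference title token ids
    """
    # Soft targets: match the student's distribution to the teacher's,
    # with both distributions softened by the temperature; the T^2 factor
    # keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy against the reference titles.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=ignore_index,
    )
    return alpha * kd + (1.0 - alpha) * ce

if __name__ == "__main__":
    # Toy usage with random tensors (batch=2, seq_len=8, vocab=100).
    s = torch.randn(2, 8, 100)   # student logits
    t = torch.randn(2, 8, 100)   # teacher logits (would come from a frozen model)
    y = torch.randint(0, 100, (2, 8))
    print(distillation_loss(s, t, y).item())
```

In an actual training loop, the teacher's logits would be computed under torch.no_grad() so only the student is updated; alpha then trades off how much the student follows the teacher versus the ground-truth titles.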