Applying the T5 language model and duration units normalization to address temporal common sense understanding on the MCTACO dataset
Zakaria Kaddari, Youssef Mellah, Jamal Berrich, T. Bouchentouf, M. Belkasmi
2020 International Conference on Intelligent Systems and Computer Vision (ISCV), June 2020
DOI: 10.1109/ISCV49265.2020.9204142
Citations: 4
Abstract
In this paper, we present our work on the MCTACO dataset, which is concerned with temporal common sense understanding in natural language processing. We begin by describing our approach, called T5NCSU (T5 Normalization Common Sense Understanding), which relies on preprocessing techniques such as duration units normalization together with the recently released T5 text-to-text pre-trained language model. We then present and discuss our results. Using this approach, we obtained state-of-the-art performance on the MCTACO dataset leaderboard.
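The abstract does not detail how duration units normalization works, but the general idea of such a preprocessing step can be sketched as rewriting every duration expression into a single base unit, so that the model compares magnitudes directly rather than reasoning across mixed units. The function below is a minimal illustrative sketch under that assumption; the regex, the conversion table, and the choice of seconds as the base unit are all hypothetical and not taken from the paper.

```python
import re

# Illustrative conversion factors from common duration units to seconds.
# The paper does not specify its normalization scheme; this sketch assumes
# all durations are rewritten into one base unit (seconds). Month and year
# are approximated as 30 and 365 days respectively.
UNIT_TO_SECONDS = {
    "second": 1,
    "minute": 60,
    "hour": 3600,
    "day": 86400,
    "week": 604800,
    "month": 2592000,
    "year": 31536000,
}

# Matches spans like "2 hours", "90 minutes", "1.5 days".
DURATION_RE = re.compile(
    r"(\d+(?:\.\d+)?)\s*(second|minute|hour|day|week|month|year)s?\b",
    re.IGNORECASE,
)

def normalize_durations(text: str) -> str:
    """Rewrite every '<number> <unit>' span as an equivalent count of seconds."""
    def repl(match: re.Match) -> str:
        value = float(match.group(1))
        unit = match.group(2).lower()
        seconds = int(value * UNIT_TO_SECONDS[unit])
        return f"{seconds} seconds"
    return DURATION_RE.sub(repl, text)

print(normalize_durations("He slept for 2 hours"))  # → "He slept for 7200 seconds"
```

After such a rewrite, every candidate answer in an MCTACO-style duration question carries the same unit, which plausibly makes plausibility judgments easier for a text-to-text model like T5.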