Scaling up Multimodal Pre-Training for Sign Language Understanding
Wengang Zhou, Weichao Zhao, Hezhen Hu, Zecheng Li, Houqiang Li
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025. DOI: 10.1109/TPAMI.2025.3599313
Abstract
Sign language pre-training (SLP) has significantly improved the performance of diverse sign language understanding (SLU) tasks. However, many existing methods employ pre-training techniques that are tailored to a specific task and trained on small-scale data, resulting in limited model generalization. Others focus solely on visual cues, neglecting the semantic textual cues embedded in sign translation texts. These limitations inherently diminish the representational capacity of pre-trained models. To this end, we present a multimodal SLP framework that leverages rich visual contextual information and vision-language semantic consistency over massively available data to enhance the representation of sign language videos. Specifically, we first curate a large-scale text-labeled sign pose dataset (~1.5M), namely SL-1.5M, from various sources to alleviate the scarcity of pre-training data. Subsequently, we propose a pre-training framework that integrates sign-text contrastive learning with masked pose modeling as its pretext tasks. In this way, our framework is empowered to effectively capture contextual cues within sign pose sequences and to learn visual representations by aligning them with semantically rich textual features in a latent space. Moreover, in order to grasp the comprehensive meaning of sign language videos, we concurrently model manual and non-manual information to preserve the holistic integrity of the visual content. To validate the generalization and superiority of our proposed pre-training framework, we conduct extensive experiments on diverse SLU tasks without intricate task-specific design, achieving new state-of-the-art performance on multiple benchmarks.
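To make the two pretext objectives concrete, the sketch below pairs a CLIP-style sign-text contrastive loss with masked pose reconstruction over keypoint sequences. This is a minimal illustration under stated assumptions, not the authors' implementation: the module names, network sizes, mask ratio, joint count, and temperature are all placeholders.

```python
# Minimal sketch of the two pretext objectives described in the abstract:
# sign-text contrastive learning plus masked pose modeling.
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SignTextPretrainSketch(nn.Module):
    """Combines masked pose modeling with sign-text contrastive alignment."""

    def __init__(self, num_joints=79, dim=256, depth=4, heads=8, vocab=30522):
        super().__init__()
        self.pose_proj = nn.Linear(num_joints * 2, dim)       # (x, y) per joint
        enc_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.pose_encoder = nn.TransformerEncoder(enc_layer, depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.recon_head = nn.Linear(dim, num_joints * 2)      # regress masked frames
        self.text_embed = nn.Embedding(vocab, dim)            # stand-in text encoder
        txt_layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.text_encoder = nn.TransformerEncoder(txt_layer, 2)
        self.tau = 0.07                                       # contrastive temperature

    def forward(self, poses, token_ids, mask_ratio=0.4):
        # poses: (B, T, num_joints * 2) keypoints covering manual and non-manual cues
        # token_ids: (B, L) tokenized sign translation text
        B, T, _ = poses.shape

        # Masked pose modeling: hide random frames, reconstruct their keypoints.
        mask = torch.rand(B, T, device=poses.device) < mask_ratio
        x = self.pose_proj(poses)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        feats = self.pose_encoder(x)                          # (B, T, dim)
        mpm_loss = F.mse_loss(self.recon_head(feats)[mask], poses[mask])

        # Sign-text contrastive learning: align pooled pose and text embeddings.
        video_emb = F.normalize(feats.mean(dim=1), dim=-1)    # (B, dim)
        text_emb = F.normalize(
            self.text_encoder(self.text_embed(token_ids)).mean(dim=1), dim=-1)
        logits = video_emb @ text_emb.t() / self.tau
        targets = torch.arange(B, device=poses.device)
        nce_loss = (F.cross_entropy(logits, targets)
                    + F.cross_entropy(logits.t(), targets)) / 2

        return nce_loss + mpm_loss
```

In this reading, the masked pose branch forces the encoder to capture contextual cues within the pose sequence, while the contrastive branch ties the pooled video embedding to its translation text in a shared latent space, mirroring the vision-language semantic consistency the abstract describes.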