{"title":"走向同步手语生产:未来情境感知方法","authors":"Biao Fu;Tong Sun;Xiaodong Shi;Yidong Chen","doi":"10.1109/LSP.2025.3610359","DOIUrl":null,"url":null,"abstract":"Sign Language Production (SLP) has achieved promising progress in offline settings, where full input text is available before generation. However, such methods are unsuitable for real-time applications requiring low latency. In this work, we introduce Simultaneous Sign Language Production (SimulSLP), a new task that generates sign pose sequences incrementally from streaming text input. We first formalize the SimulSLP task and adapt the Average Token Delay metric to quantify latency. Then, we benchmark this task using three strong baselines from offline SLP—an end-to-end system and two cascaded pipelines with neural and dictionary-based Gloss-to-Pose modules—under a wait-<inline-formula><tex-math>$k$</tex-math></inline-formula> policy. However, all baselines suffer from a mismatch between full-sequence training and partial-input inference. To mitigate this, we propose a Future-Context-Aware Inference (FCAI) strategy. FCAI enhances partial input representations by predicting a small number of future tokens using a large language model. Before decoding, speculative features from the predicted tokens are discarded to ensure alignment with the observed input. 
Experiments on PHOENIX2014 T show that FCAI significantly improves the quality-latency trade-off, especially in low-latency settings, offering a promising step toward SimulSLP.","PeriodicalId":13154,"journal":{"name":"IEEE Signal Processing Letters","volume":"32 ","pages":"3764-3768"},"PeriodicalIF":3.9000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Towards Simultaneous Sign Language Production: A Future-Context-Aware Approach\",\"authors\":\"Biao Fu;Tong Sun;Xiaodong Shi;Yidong Chen\",\"doi\":\"10.1109/LSP.2025.3610359\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sign Language Production (SLP) has achieved promising progress in offline settings, where full input text is available before generation. However, such methods are unsuitable for real-time applications requiring low latency. In this work, we introduce Simultaneous Sign Language Production (SimulSLP), a new task that generates sign pose sequences incrementally from streaming text input. We first formalize the SimulSLP task and adapt the Average Token Delay metric to quantify latency. Then, we benchmark this task using three strong baselines from offline SLP—an end-to-end system and two cascaded pipelines with neural and dictionary-based Gloss-to-Pose modules—under a wait-<inline-formula><tex-math>$k$</tex-math></inline-formula> policy. However, all baselines suffer from a mismatch between full-sequence training and partial-input inference. To mitigate this, we propose a Future-Context-Aware Inference (FCAI) strategy. FCAI enhances partial input representations by predicting a small number of future tokens using a large language model. Before decoding, speculative features from the predicted tokens are discarded to ensure alignment with the observed input. 
Experiments on PHOENIX2014 T show that FCAI significantly improves the quality-latency trade-off, especially in low-latency settings, offering a promising step toward SimulSLP.\",\"PeriodicalId\":13154,\"journal\":{\"name\":\"IEEE Signal Processing Letters\",\"volume\":\"32 \",\"pages\":\"3764-3768\"},\"PeriodicalIF\":3.9000,\"publicationDate\":\"2025-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Signal Processing Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11164872/\",\"RegionNum\":2,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Signal Processing Letters","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/11164872/","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Towards Simultaneous Sign Language Production: A Future-Context-Aware Approach
Sign Language Production (SLP) has achieved promising progress in offline settings, where full input text is available before generation. However, such methods are unsuitable for real-time applications requiring low latency. In this work, we introduce Simultaneous Sign Language Production (SimulSLP), a new task that generates sign pose sequences incrementally from streaming text input. We first formalize the SimulSLP task and adapt the Average Token Delay metric to quantify latency. Then, we benchmark this task using three strong baselines from offline SLP—an end-to-end system and two cascaded pipelines with neural and dictionary-based Gloss-to-Pose modules—under a wait-$k$ policy. However, all baselines suffer from a mismatch between full-sequence training and partial-input inference. To mitigate this, we propose a Future-Context-Aware Inference (FCAI) strategy. FCAI enhances partial input representations by predicting a small number of future tokens using a large language model. Before decoding, speculative features from the predicted tokens are discarded to ensure alignment with the observed input. Experiments on PHOENIX-2014T show that FCAI significantly improves the quality-latency trade-off, especially in low-latency settings, offering a promising step toward SimulSLP.
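The wait-$k$ policy mentioned in the abstract comes from simultaneous machine translation: the model first reads $k$ source tokens, then alternates one output step with one read until the source stream ends. The sketch below illustrates the resulting read/write schedule and a toy per-token delay average; it is an assumption-laden illustration of the generic policy, not the paper's implementation, and `average_delay` is a simplified lag proxy rather than the exact Average Token Delay definition the authors adapt.

```python
def wait_k_schedule(num_src: int, num_tgt: int, k: int) -> list[int]:
    """For each target step i (0-indexed), return how many source tokens
    must have been read before emitting target token i under wait-k:
    read k tokens first, then one extra read per write, capped at the
    full source length once the stream is exhausted."""
    return [min(k + i, num_src) for i in range(num_tgt)]


def average_delay(schedule: list[int]) -> float:
    """Toy latency proxy: average, over target positions, of how many
    source tokens were read beyond the current target position."""
    return sum(r - (i + 1) for i, r in enumerate(schedule)) / len(schedule)


# Example: 6 source tokens, 5 target steps, k = 2.
reads = wait_k_schedule(6, 5, 2)   # [2, 3, 4, 5, 6]
lag = average_delay(reads)         # 1.0 — each token waits one step
```

Smaller $k$ lowers latency but gives the model less context per output step, which is exactly the quality-latency trade-off the paper's FCAI strategy targets by predicting a few future tokens instead of waiting for them.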
Journal introduction:
The IEEE Signal Processing Letters is a monthly, archival publication designed to provide rapid dissemination of original, cutting-edge ideas and timely, significant contributions in signal, image, speech, language, and audio processing. Papers published in the Letters can be presented, within one year of their appearance, at signal processing conferences such as ICASSP, GlobalSIP, and ICIP, as well as at several workshops organized by the Signal Processing Society.