Reasoning-Guided LLM Translation Optimization: A Framework Using Multidimensional Postediting Feedback

IF 3.7 · CAS Zone 2 (Computer Science) · JCR Q1 (Computer Science, Artificial Intelligence)
Yan Huang, Xiaogang Zang, Chenyang Ji, Zhuo Chen
International Journal of Intelligent Systems, vol. 2025, no. 1. DOI: 10.1155/int/9971702. Published: 2025-10-15 (Journal Article). PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/9971702
Citations: 0

Abstract

While Large Language Models (LLMs) demonstrate strong translation capabilities, optimizing their output towards human-level refinement necessitates reasoning-guided approaches that move beyond simple generation. This paper introduces Multidimensional Feedback and Postedit Thought (MFPE), a novel framework specifically designed for reasoning-guided LLM translation optimization. MFPE operationalizes this guidance by leveraging multidimensional postediting feedback, which acts as explicit reasoning signals to the LLM. This feedback mechanism simulates the human postediting process, where errors are systematically identified and corrected. Generated by a dedicated optimization model trained on a synthetic dataset (built using GLM-4) and inspired by multidimensional quality metrics (MQM), this feedback provides fine-grained error details, including spans, categories, and quantities, from initial LLM translations. We conduct experiments across four language pairs: Chinese-English, German-English, English-Chinese, and English-German. The results show that fine-tuning with structured, reasoning-like feedback significantly enhances translation quality and outperforms standard bilingual fine-tuning approaches. Our findings highlight the effectiveness of simulating postediting reasoning through structured feedback, offering a promising direction for harnessing and improving the inferential capabilities of LLMs for complex tasks like high-quality machine translation.
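The abstract describes feedback records carrying error spans, categories, and quantities in the spirit of MQM. As a purely illustrative sketch (the class and field names below are hypothetical, not taken from the paper), such a structured postediting feedback record might look like this:

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class ErrorAnnotation:
    """One MQM-style error found in a draft translation (hypothetical schema)."""
    span: tuple[int, int]  # character offsets into the draft translation
    category: str          # MQM-style label, e.g. "accuracy/mistranslation"
    severity: str          # e.g. "minor" or "major"


@dataclass
class PosteditFeedback:
    """Multidimensional feedback on one initial LLM translation."""
    source: str
    draft: str
    errors: list[ErrorAnnotation] = field(default_factory=list)

    def error_counts(self) -> Counter:
        # Aggregate error quantities per category (the "quantities" dimension).
        return Counter(e.category for e in self.errors)


# Toy example: a Chinese-English draft with two annotated errors.
fb = PosteditFeedback(
    source="他明天去北京。",
    draft="He goes to Beijing yesterday.",
    errors=[
        ErrorAnnotation(span=(19, 28), category="accuracy/mistranslation", severity="major"),
        ErrorAnnotation(span=(3, 8), category="fluency/grammar", severity="minor"),
    ],
)
print(fb.error_counts())  # Counter({'accuracy/mistranslation': 1, 'fluency/grammar': 1})
```

A record like this could then be serialized into the fine-tuning prompt so the model sees explicit error evidence rather than only a reference translation; the exact serialization the authors use is described in the paper, not here.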


Source journal
International Journal of Intelligent Systems (Engineering & Technology: Computer Science, Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 14.30%
Articles published: 304
Review time: 9 months
About the journal: The International Journal of Intelligent Systems serves as a forum for individuals interested in tapping into the vast theories of intelligent systems construction. In its peer-reviewed format, the journal presents contributions from today's experts in the field. Because new developments are introduced daily, there is much to be learned: examination, analysis, creation, information retrieval, human-computer interaction, and more. The International Journal of Intelligent Systems uses charts and illustrations to demonstrate these ground-breaking topics, and encourages readers to share their thoughts and experiences.