Random forest of thoughts: Reasoning path fusion for LLM inference in computational social science

IF 15.5 | CAS Tier 1, Computer Science | JCR Q1, Computer Science, Artificial Intelligence
Xiaohua Wu, Xiaohui Tao, Wenjie Wu, Jianwei Zhang, Yuefeng Li, Lin Li
{"title":"Random forest of thoughts: Reasoning path fusion for LLM inference in computational social science","authors":"Xiaohua Wu ,&nbsp;Xiaohui Tao ,&nbsp;Wenjie Wu ,&nbsp;Jianwei Zhang ,&nbsp;Yuefeng Li ,&nbsp;Lin Li","doi":"10.1016/j.inffus.2025.103791","DOIUrl":null,"url":null,"abstract":"<div><div>Large language models (LLMs) have demonstrated significant promise for reasoning problems. They are among the leading techniques for context inference, particularly in scenarios with strong sequential dependencies, where earlier inputs dynamically influence subsequent responses. However, existing reasoning paradigms such as X-of-thoughts (XoT) typically rely on unidirectional, left-to-right inference with limited inference paths. This renders them ineffective in handling inherent skip logic and multi-path reasoning, especially for contexts such as a multi-turn social survey. To address this, we propose Random Forest of Thoughts (RFoT), a novel prompting framework grounded in the principles of reasoning path fusion for skip logic. It uses Iterative Chain-of-Thought (ICoT) prompting to generate a diverse set of reasoning thoughts. These thoughts are then assessed using a cooperative contribution evaluator to estimate their contribution. By randomly sampling and fusing the top-<span><math><mi>k</mi></math></span> reasoning thoughts, RFoT simulates uncertain skip logic and constructs a rich forest of plausible thoughts. This enables it to achieve robust multi-path reasoning, where each question sequence formed by the skip logic is treated as an independent reasoning path. RFoT is validated on two classic social problems featuring strong skip logic, using three open-source LLMs and five datasets that have been categorized as structured social surveys and public social media data. Experimental results demonstrate that RFoT significantly enhances inference performance on problems that require complex, non-linear reasoning across both survey and social media data. The transparency and trustworthiness of the results stem from the interpretable fusion of diverse reasoning paths and the principled integration of cooperative evaluation mechanisms.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"127 ","pages":"Article 103791"},"PeriodicalIF":15.5000,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S156625352500853X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Large language models (LLMs) have demonstrated significant promise for reasoning problems. They are among the leading techniques for context inference, particularly in scenarios with strong sequential dependencies, where earlier inputs dynamically influence subsequent responses. However, existing reasoning paradigms such as X-of-thoughts (XoT) typically rely on unidirectional, left-to-right inference with limited inference paths. This renders them ineffective in handling inherent skip logic and multi-path reasoning, especially for contexts such as a multi-turn social survey. To address this, we propose Random Forest of Thoughts (RFoT), a novel prompting framework grounded in the principles of reasoning path fusion for skip logic. It uses Iterative Chain-of-Thought (ICoT) prompting to generate a diverse set of reasoning thoughts. These thoughts are then assessed using a cooperative contribution evaluator to estimate their contribution. By randomly sampling and fusing the top-k reasoning thoughts, RFoT simulates uncertain skip logic and constructs a rich forest of plausible thoughts. This enables it to achieve robust multi-path reasoning, where each question sequence formed by the skip logic is treated as an independent reasoning path. RFoT is validated on two classic social problems featuring strong skip logic, using three open-source LLMs and five datasets that have been categorized as structured social surveys and public social media data. Experimental results demonstrate that RFoT significantly enhances inference performance on problems that require complex, non-linear reasoning across both survey and social media data. The transparency and trustworthiness of the results stem from the interpretable fusion of diverse reasoning paths and the principled integration of cooperative evaluation mechanisms.
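The abstract describes the RFoT pipeline at a high level: generate diverse reasoning thoughts with Iterative Chain-of-Thought (ICoT) prompting, score each thought with a cooperative contribution evaluator, then randomly sample and fuse the top-k thoughts into multiple reasoning paths that mimic survey skip logic. The paper does not publish code, so the following Python sketch is only a hypothetical illustration of that pipeline: the generic `llm(prompt) -> str` callable, the function names, the prompt wording, and the 0-10 scoring scheme are all assumptions for illustration, not the authors' implementation.

```python
import random
from typing import Callable, List


def generate_thoughts(llm: Callable[[str], str], question: str,
                      context: List[str], n_thoughts: int = 8) -> List[str]:
    """Iteratively prompt the LLM (ICoT-style) for diverse reasoning thoughts.

    Each call re-feeds the accumulated context so earlier turns condition later ones,
    mirroring the sequential dependencies of a multi-turn survey.
    """
    thoughts = []
    for i in range(n_thoughts):
        prompt = (
            "Context so far:\n" + "\n".join(context) +
            f"\n\nQuestion: {question}\n"
            f"Give one distinct reasoning step (variant {i + 1}):"
        )
        thoughts.append(llm(prompt))
    return thoughts


def contribution_score(llm: Callable[[str], str], question: str, thought: str) -> float:
    """Stand-in for the cooperative contribution evaluator: ask the LLM to rate
    how much a thought contributes to answering the question (0-10)."""
    reply = llm(f"Rate 0-10 how much this step helps answer '{question}':\n{thought}\nScore:")
    try:
        return float(reply.strip().split()[0])
    except (ValueError, IndexError):
        return 0.0  # unparseable rating counts as no contribution


def rfot_answer(llm: Callable[[str], str], question: str, context: List[str],
                k: int = 3, n_paths: int = 5, seed: int = 0) -> str:
    """Randomly sample and fuse the top-k thoughts into several candidate
    reasoning paths (simulating uncertain skip logic), then ask the LLM to
    fuse the paths into one final answer."""
    rng = random.Random(seed)
    thoughts = generate_thoughts(llm, question, context)
    ranked = sorted(thoughts, key=lambda t: contribution_score(llm, question, t), reverse=True)
    top_k = ranked[:k]
    paths = [" -> ".join(rng.sample(top_k, len(top_k))) for _ in range(n_paths)]
    fuse_prompt = (
        f"Question: {question}\nCandidate reasoning paths:\n" +
        "\n".join(f"- {p}" for p in paths) +
        "\nFuse these paths and give a single final answer:"
    )
    return llm(fuse_prompt)


# Hypothetical usage with any completion function `my_llm`:
# answer = rfot_answer(my_llm, "Will the respondent answer Q7?",
#                      ["Q1: Age 34", "Q3: Urban resident"])
```

In this reading, each random ordering of the top-k thoughts plays the role of one question sequence produced by skip logic, and the final fusion prompt stands in for the paper's reasoning path fusion step.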
Source journal

Information Fusion (Engineering & Technology: Computer Science, Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles published: 161
Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.