Xinyuan Lu, Liangming Pan, Yubo Ma, Preslav Nakov, Min-Yen Kan
TART: An Open-Source Tool-Augmented Framework for Explainable Table-based Reasoning
Current Large Language Models (LLMs) exhibit limited ability to understand table structures and to apply precise numerical reasoning, both of which are crucial for tasks such as table question answering (TQA) and table-based fact verification (TFV). To address these challenges, we introduce our Tool-Augmented Reasoning framework for Tables (TART), which integrates LLMs with specialized tools. TART contains three key components: a table formatter to ensure accurate data representation, a tool maker to develop specific computational tools, and an explanation generator to maintain explainability. We also present the TOOLTAB dataset, a new benchmark designed specifically for training LLMs in table-tool integration. Our experiments indicate that TART achieves substantial improvements over existing methods (e.g., Chain-of-Thought) by improving both the precision of data processing and the clarity of the reasoning process. Notably, TART paired with CodeLlama achieves 90.0% of the accuracy of the closed-source LLM GPT-3.5-turbo, highlighting its robustness in diverse real-world scenarios. All the code and data are available at https://github.com/XinyuanLu00/TART.
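To make the three-component design concrete, here is a minimal illustrative sketch of a tool-augmented table-reasoning pipeline in the spirit of the abstract: a table formatter normalizes raw rows, a tool maker supplies a small computational tool, and an explanation generator verbalizes the result. All function names here are hypothetical and chosen for illustration; the actual TART implementation lives in the linked repository.

```python
# Hypothetical sketch of TART's three components; not the authors' code.

def format_table(rows):
    """Table formatter: turn raw rows (header first) into dict records."""
    header, *body = rows
    return [dict(zip(header, r)) for r in body]

def make_tool(operation):
    """Tool maker: return a small computational tool for the question."""
    tools = {
        "sum": lambda records, col: sum(float(r[col]) for r in records),
        "max": lambda records, col: max(float(r[col]) for r in records),
        "count": lambda records, col: len(records),
    }
    return tools[operation]

def explain(operation, col, result):
    """Explanation generator: verbalize the tool call for the user."""
    return f"Applied `{operation}` over column '{col}' -> {result}"

# Usage: answer "what is the total revenue?" over a toy table.
table = [["year", "revenue"], ["2022", "10.5"], ["2023", "12.0"]]
records = format_table(table)
tool = make_tool("sum")
answer = tool(records, "revenue")
print(explain("sum", "revenue", answer))
```

The design point mirrored here is that numeric work is delegated to an executable tool rather than done in free-form LLM text, which is what gives the framework its precision and its auditable explanations.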