HiddenTables and PyQTax: A Cooperative Game and Dataset For TableQA to Ensure Scale and Data Privacy Across a Myriad of Taxonomies

William Watson, Nicole Cho, T. Balch, Manuela Veloso
{"title":"HiddenTables and PyQTax: A Cooperative Game and Dataset For TableQA to Ensure Scale and Data Privacy Across a Myriad of Taxonomies","authors":"William Watson, Nicole Cho, T. Balch, Manuela Veloso","doi":"10.18653/v1/2023.emnlp-main.442","DOIUrl":null,"url":null,"abstract":"A myriad of different Large Language Models (LLMs) face a common challenge in contextually analyzing table question-answering tasks. These challenges are engendered from (1) finite context windows for large tables, (2) multi-faceted discrepancies amongst tokenization patterns against cell boundaries, and (3) various limitations stemming from data confidentiality in the process of using external models such as gpt-3.5-turbo. We propose a cooperative game dubbed\"HiddenTables\"as a potential resolution to this challenge. In essence,\"HiddenTables\"is played between the code-generating LLM\"Solver\"and the\"Oracle\"which evaluates the ability of the LLM agents to solve Table QA tasks. This game is based on natural language schemas and importantly, ensures the security of the underlying data. We provide evidential experiments on a diverse set of tables that demonstrate an LLM's collective inability to generalize and perform on complex queries, handle compositional dependencies, and align natural language to programmatic commands when concrete table schemas are provided. Unlike encoder-based models, we have pushed the boundaries of\"HiddenTables\"to not be limited by the number of rows - therefore we exhibit improved efficiency in prompt and completion tokens. Our infrastructure has spawned a new dataset\"PyQTax\"that spans across 116,671 question-table-answer triplets and provides additional fine-grained breakdowns&labels for varying question taxonomies. 
Therefore, in tandem with our academic contributions regarding LLMs' deficiency in TableQA tasks,\"HiddenTables\"is a tactile manifestation of how LLMs can interact with massive datasets while ensuring data security and minimizing generation costs.","PeriodicalId":505350,"journal":{"name":"Conference on Empirical Methods in Natural Language Processing","volume":"2 10","pages":"7144-7159"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference on Empirical Methods in Natural Language Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.18653/v1/2023.emnlp-main.442","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

A myriad of different Large Language Models (LLMs) face a common challenge in contextually analyzing table question-answering tasks. These challenges are engendered from (1) finite context windows for large tables, (2) multi-faceted discrepancies amongst tokenization patterns against cell boundaries, and (3) various limitations stemming from data confidentiality in the process of using external models such as gpt-3.5-turbo. We propose a cooperative game dubbed "HiddenTables" as a potential resolution to this challenge. In essence, "HiddenTables" is played between the code-generating LLM "Solver" and the "Oracle" which evaluates the ability of the LLM agents to solve Table QA tasks. This game is based on natural language schemas and importantly, ensures the security of the underlying data. We provide evidential experiments on a diverse set of tables that demonstrate an LLM's collective inability to generalize and perform on complex queries, handle compositional dependencies, and align natural language to programmatic commands when concrete table schemas are provided. Unlike encoder-based models, we have pushed the boundaries of "HiddenTables" to not be limited by the number of rows; therefore we exhibit improved efficiency in prompt and completion tokens. Our infrastructure has spawned a new dataset "PyQTax" that spans 116,671 question-table-answer triplets and provides additional fine-grained breakdowns and labels for varying question taxonomies. Therefore, in tandem with our academic contributions regarding LLMs' deficiency in TableQA tasks, "HiddenTables" is a tactile manifestation of how LLMs can interact with massive datasets while ensuring data security and minimizing generation costs.
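The Solver/Oracle protocol described in the abstract can be sketched in miniature: the Oracle holds the table privately and exposes only its schema; the Solver (standing in for the code-generating LLM) maps a question to a program referencing columns by name; the Oracle executes that program locally and returns only the answer, so no cell values ever leave its side. This is an illustrative sketch, not the paper's implementation; the class and function names (`Oracle`, `solver`) and the rule-based "Solver" are hypothetical simplifications.

```python
class Oracle:
    """Holds the table privately; exposes only the schema to the Solver."""

    def __init__(self, rows):
        self._rows = rows  # hidden from the Solver at all times

    def schema(self):
        # Only column names are revealed, never cell contents.
        return sorted(self._rows[0].keys()) if self._rows else []

    def execute(self, program):
        # Run the Solver's program against the hidden table locally,
        # returning only the final answer bound to `answer`.
        env = {"rows": self._rows}
        exec(program, env)
        return env.get("answer")


def solver(schema, question):
    # Stand-in for the code-generating LLM: emits a program that refers
    # to columns by name, without ever seeing the underlying data.
    if question == "How many rows?":
        return "answer = len(rows)"
    col = next(c for c in schema if c in question)
    return f"answer = sum(r[{col!r}] for r in rows)"


oracle = Oracle([{"city": "Oslo", "population": 700000},
                 {"city": "Bergen", "population": 290000}])
program = solver(oracle.schema(), "Total population across cities?")
print(oracle.execute(program))  # 990000
```

The key design point mirrored here is the information asymmetry: the Solver's prompt scales with the schema, not the row count, which is why the game is not bounded by table size and why the raw data stays confidential.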