Context, Language Modeling, and Multimodal Data in Finance

Sanjiv Ranjan Das, Connor Goggins, John He, G. Karypis, Krishnamurthy Sandeep, Mitali Mahajan, N. Prabhala, Dylan Slack, R. V. Dusen, Shenghua Yue, Sheng Zha, Shuai Zheng
The Journal of Financial Data Science, June 2021. DOI: 10.3905/JFDS.2021.1.063
Citations: 2

Abstract

The authors enhance pretrained language models with Securities and Exchange Commission (SEC) filings data to create better language representations for features used in a predictive model. Specifically, they train RoBERTa-class models with additional financial regulatory text, which they denote as a class of RoBERTa-Fin models. Using different datasets, the authors assess whether there is material improvement over models that use only text-derived numerical features (e.g., sentiment, readability, polarity), the traditional approach adopted in academia and practice. The RoBERTa-Fin models also outperform generic Bidirectional Encoder Representations from Transformers (BERT)-class models that are not trained with financial text. The improvement in classification accuracy is material, suggesting that full text and context are important in classifying financial documents and that the use of mixed data (i.e., enhancing numerical tabular data with text) is feasible and fruitful in machine learning models in finance.
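The RoBERTa-Fin recipe described above amounts to continued (domain-adaptive) pretraining of a RoBERTa checkpoint on financial regulatory text. A minimal sketch with the Hugging Face transformers and datasets libraries follows; the corpus file (`sec_filings.txt`), the `roberta-base` starting checkpoint, and all hyperparameters are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch: continued masked-language-model pretraining of RoBERTa on
# financial text. Corpus path and hyperparameters are assumptions for
# illustration only.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

# Hypothetical corpus: one SEC filing passage per line of a plain-text file.
corpus = load_dataset("text", data_files={"train": "sec_filings.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

corpus = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic token masking, the standard RoBERTa pretraining objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="roberta-fin",
        per_device_train_batch_size=8,
        num_train_epochs=1,
    ),
    train_dataset=corpus,
    data_collator=collator,
)
trainer.train()
```

The resulting checkpoint can then be fine-tuned for document classification in the usual way, which is where a comparison against models built on text-derived numerical features (sentiment, readability, polarity) would be run.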
Topics: Quantitative methods, big data/machine learning, legal/regulatory/public policy, information providers/credit ratings

Key Findings

▪ Machine learning based on multimodal data provides meaningful improvement over models based on numerical data alone.
▪ Context-rich models perform better than context-free models.
▪ Pretrained language models that mix common text and financial text do better than those pretrained on financial text alone.
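The mixed-data finding can likewise be illustrated with a simple fusion model: encode the filing text with a RoBERTa-class encoder and concatenate the resulting document embedding with the numerical tabular features before classification. This is a minimal sketch of the general idea; the fusion head and its dimensions are assumptions, not the architecture reported in the paper.

```python
# Minimal sketch: late fusion of a RoBERTa text embedding with numerical
# tabular features. The head architecture is a hypothetical illustration.
import torch
import torch.nn as nn
from transformers import RobertaModel

class MultimodalClassifier(nn.Module):
    def __init__(self, num_tabular_features: int, num_classes: int = 2):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        hidden = self.encoder.config.hidden_size  # 768 for roberta-base
        # Classify on the concatenation [text embedding ; tabular features].
        self.head = nn.Sequential(
            nn.Linear(hidden + num_tabular_features, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, num_classes),
        )

    def forward(self, input_ids, attention_mask, tabular):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        text_repr = out.last_hidden_state[:, 0]  # <s>-token embedding as document vector
        return self.head(torch.cat([text_repr, tabular], dim=-1))

# Example forward pass with dummy inputs: batch of 2 documents, 16 tokens each,
# 5 tabular features per document.
model = MultimodalClassifier(num_tabular_features=5)
ids = torch.randint(0, model.encoder.config.vocab_size, (2, 16))
mask = torch.ones_like(ids)
logits = model(ids, mask, torch.randn(2, 5))  # shape: (2, 2)
```

Ablating the `tabular` input (or the text input) in such a model is one way to quantify the improvement of multimodal data over numerical data alone.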