LatinX in AI at North American Chapter of the Association for Computational Linguistics Conference 2022: Latest Publications

Incorporating Natural Language Processing models in Mexico City's 311 Locatel
Authors: Alejandro Molina-Villegas, Edwin Aldana-Bibadilla, O. Siordia, Jorge Pérez
DOI: 10.52591/lxai202207101 (https://doi.org/10.52591/lxai202207101) | Published: 2022-07-10
Abstract: Natural Language Processing-based technologies are transforming various sectors by enabling new ways of providing services through Artificial Intelligence (AI). In this paper, we describe the methodology for, and the challenges encountered during, the creation of a Deep Learning-based model for classifying citizen service requests. Our system distinguishes among 48 categories of public services with 97% accuracy and was integrated into Mexico City's 311, significantly increasing the government's ability to provide better services.
Citations: 0
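
The paper does not include code here; as a rough illustration of the kind of pipeline the abstract describes, the sketch below sets up a 48-way request classifier with Hugging Face Transformers. The checkpoint, label indexing, and example request are assumptions for illustration, not the system deployed in Locatel.

```python
# Hypothetical sketch of a 48-category citizen-request classifier.
# Model name and categories are illustrative assumptions, not the
# system actually deployed in Mexico City's 311 Locatel.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "dccuchile/bert-base-spanish-wwm-cased"  # a public Spanish BERT

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=48  # one logit per public-service category
)

def classify_request(text: str) -> int:
    """Return the index of the most likely service category."""
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))

# Example (the model would first need fine-tuning on labeled requests):
print(classify_request("Hay una fuga de agua en mi calle."))
```
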
Automatic multi-modal processing of language and vision to assist people with visual impairments
DOI: 10.52591/lxai202207104 (https://doi.org/10.52591/lxai202207104) | Published: 2022-07-10
Abstract: In recent years, the study of the intersection between the vision and language modalities, specifically in visual question answering (VQA) models, has gained significant appeal due to its great potential in assistive applications for people with visual disabilities. Despite this, to date, many existing VQA models are not applicable to this goal, for at least three reasons. To begin with, they are designed to respond to a single question; that is, they cannot give feedback on incomplete or incremental questions. Secondly, they only consider a single image, one that is neither blurred, nor poorly focused, nor poorly framed. Both problems are directly related to loss of visual capacity: people with visual disabilities may have trouble interacting with a visual user interface to ask questions and to take adequate photographs. Finally, these users frequently need to read text captured in the images, and most current VQA systems fall short in this task. This work presents a PhD proposal with four lines of research to be carried out until December 2025. It investigates techniques that increase the robustness of VQA models. In particular, we propose integrating dialogue history, analyzing more than one input image, and incorporating text recognition capabilities into the models. All of these contributions are motivated by the goal of assisting people with vision problems in their day-to-day tasks.
Citations: 0
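
As a toy illustration of one proposed research line (integrating dialogue history into VQA), the sketch below naively prepends earlier turns to an incremental question before querying a publicly available off-the-shelf VQA model. The checkpoint and image path are placeholders, not the proposal's system.

```python
# Toy illustration of conditioning a VQA model on dialogue history,
# one of the research lines sketched in the proposal. The checkpoint
# is a public general-purpose baseline, not the author's model.
from transformers import pipeline

vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

def ask(image_path: str, history: list[str], question: str) -> str:
    # Naive history integration: prepend earlier turns to the question.
    contextual_question = " ".join(history + [question])
    answers = vqa(image=image_path, question=contextual_question, top_k=1)
    return answers[0]["answer"]

# Example: a follow-up question that only makes sense given history.
# "kitchen.jpg" is a placeholder path to a local image file.
history = ["What is on the table?"]
print(ask("kitchen.jpg", history, "What color is it?"))
```
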
Distributed Text Representations Using Transformers for Noisy Written Language
Authors: A. Rodriguez, Pablo Rivas, G. Bejarano
DOI: 10.52591/lxai202207102 (https://doi.org/10.52591/lxai202207102) | Published: 2022-07-10
Abstract: This work proposes a methodology for deriving latent representations of highly noisy text. Traditionally, Natural Language Processing systems rely on words as the core components of a text. In contrast, we propose a character-based approach that is robust to our target texts' high syntactical noise. We propose pre-training a Transformer model (BERT) on different general-purpose language tasks and using the pre-trained model to obtain a representation of an input text, with weights transferred from one task in the pipeline to the next. Instead of tokenizing the text on a word or sub-word basis, we treat the text's characters as tokens. The ultimate goal is that the representations produced prove useful for downstream tasks on the data, such as detecting criminal activity on marketplace platforms.
Citations: 0
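
A minimal sketch of the character-as-token idea at the heart of this abstract follows; the vocabulary and class interface are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of character-level tokenization, the core idea of the
# paper: each character, not each word or sub-word, becomes a token, so
# heavy misspellings and obfuscation never produce out-of-vocabulary
# tokens. The alphabet and special-token ids here are illustrative.
class CharTokenizer:
    def __init__(self, alphabet: str):
        self.pad_id, self.unk_id = 0, 1
        self.char2id = {c: i + 2 for i, c in enumerate(alphabet)}

    def encode(self, text: str) -> list[int]:
        return [self.char2id.get(c, self.unk_id) for c in text]

tok = CharTokenizer("abcdefghijklmnopqrstuvwxyz0123456789 $@")
# Noisy spellings map to nearby, fully in-vocabulary token sequences:
print(tok.encode("ch3ap w@tches"))
```
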
Study of Question Answering on Legal Software Document using BERT based models
Authors: Ernesto Quevedo Caballero, Mushfika Rahman, T. Cerný, Pablo Rivas, G. Bejarano
DOI: 10.52591/lxai202207103 (https://doi.org/10.52591/lxai202207103) | Published: 2022-07-10
Abstract: Transformer-based architectures have achieved remarkable success in several Natural Language Processing tasks, such as Question Answering. Our research examines the performance of different transformer-based language models on Question Answering over datasets specialized to the software-development legal domain, and compares it with their performance on the general-purpose Question Answering task. We experimented with the PolicyQA dataset, which is composed of documents about users' data-handling policies and thus falls within the software legal domain. Using BERT, ALBERT, RoBERTa, DistilBERT, and LEGAL-BERT as base encoders, we compared their performance on the Question Answering benchmark SQuAD v2.0 and on PolicyQA. Our results indicate that the performance of these models as contextual-embedding encoders is significantly lower on PolicyQA than on SQuAD v2.0. Furthermore, we show that, surprisingly, general-domain BERT-based models such as ALBERT and BERT obtain better performance than a more domain-specifically trained model such as LEGAL-BERT.
Citations: 2
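
For concreteness, the sketch below runs the kind of extractive QA setup the paper evaluates: a reader fine-tuned on SQuAD2.0 applied to a policy-like passage. The checkpoint and passage are assumptions for illustration, not the authors' exact models or data.

```python
# Illustration of the extractive QA setup studied in the paper: a
# BERT-style reader fine-tuned on SQuAD2.0, applied to a short
# data-handling-policy passage. The checkpoint is a public model,
# not necessarily one of those evaluated by the authors.
from transformers import pipeline

qa = pipeline("question-answering",
              model="deepset/roberta-base-squad2")

context = ("We collect your email address to create your account. "
           "We do not share your email address with third parties.")
result = qa(question="What information does the service collect?",
            context=context)
print(result["answer"], result["score"])
```
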
Improving Language Model Fine-tuning with Information Gain Filtration
Authors: Javier Turek, Richard Antonello, Nicole M. Beckage, Alexander G. Huth
DOI: 10.52591/lxai202207105 (https://doi.org/10.52591/lxai202207105) | Published: 2022-07-10
Abstract: Language model fine-tuning is essential for modern natural language processing. The effectiveness of fine-tuning is limited by the inclusion of training examples that negatively affect performance. Here we present Information Gain Filtration, a general fine-tuning method for improving the overall final performance of a fine-tuned model. We define the Information Gain of an example as the improvement on a validation metric after training on that example. A secondary learner is then trained to approximate this quantity. During fine-tuning, this learner filters informative examples from uninformative ones. We show that our method is robust and yields consistent improvements across datasets, fine-tuning tasks, and language model architectures.
Citations: 0
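
The abstract's definition can be made concrete on a toy model: the sketch below measures an example's Information Gain as the drop in validation loss after one gradient step on that example, then restores the weights. This illustrates the definition only; it is not the authors' implementation, which trains a secondary learner to approximate the quantity cheaply.

```python
# Toy sketch of the Information Gain definition from the abstract: the
# gain of a training example is the drop in validation loss after one
# update on that example. A secondary learner (not shown) would be
# trained to predict this quantity so filtering stays cheap.
import copy
import torch

model = torch.nn.Linear(8, 2)
loss_fn = torch.nn.CrossEntropyLoss()
val_x, val_y = torch.randn(32, 8), torch.randint(0, 2, (32,))

def val_loss() -> float:
    with torch.no_grad():
        return loss_fn(model(val_x), val_y).item()

def information_gain(x: torch.Tensor, y: torch.Tensor, lr=0.1) -> float:
    before = val_loss()
    saved = copy.deepcopy(model.state_dict())      # snapshot weights
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()                                     # train on one example
    gain = before - val_loss()                     # positive = helpful
    model.load_state_dict(saved)                   # undo the update
    return gain

x, y = torch.randn(1, 8), torch.randint(0, 2, (1,))
print(information_gain(x, y))
```
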