Phylogeny-Inspired Adaptation of Multilingual Models to New Languages
Fahim Faisal, Antonios Anastasopoulos
AACL Bioflux, 2022-05-19, pp. 434-452. DOI: 10.48550/arXiv.2205.09634
Abstract: Large pretrained multilingual models, trained on dozens of languages, have delivered promising results on a variety of language tasks thanks to their cross-lingual learning capabilities. Further adapting these models to specific languages, especially ones unseen during pre-training, is an important step toward expanding the coverage of language technologies. In this study, we show how language phylogenetic information can be used to improve cross-lingual transfer by leveraging closely related languages in a structured, linguistically informed manner. We perform adapter-based training on languages from diverse language families (Germanic, Uralic, Tupian, Uto-Aztecan) and evaluate on both syntactic and semantic tasks, obtaining more than 20% relative performance improvements over strong, commonly used baselines, especially for languages unseen during pre-training.
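The core idea — choosing which language adapters to stack by walking the phylogenetic tree — can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the toy tree, node names, and the `adapter_stack` function are all invented for the example.

```python
# Toy phylogeny: each node maps to its parent (None marks the family root).
PHYLOGENY = {
    "germanic": None,
    "north-germanic": "germanic",
    "west-germanic": "germanic",
    "icelandic": "north-germanic",
    "faroese": "north-germanic",
    "german": "west-germanic",
}

def adapter_stack(language: str, tree: dict) -> list:
    """Return the adapter stack for `language`, ordered from the family
    root down to the language itself, so that closely related languages
    share their ancestors' adapters."""
    stack = []
    node = language
    while node is not None:
        stack.append(node)
        node = tree[node]
    return list(reversed(stack))
```

Under this scheme, a language unseen during pre-training (say Faroese) still benefits from the family- and branch-level adapters it shares with a well-resourced relative such as Icelandic.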
A Unified Model for Reverse Dictionary and Definition Modelling
Pinzhen Chen, Zheng Zhao
AACL Bioflux, 2022-05-09, pp. 8-13. DOI: 10.48550/arXiv.2205.04602
Abstract: We build a dual-way neural dictionary to retrieve words given definitions, and produce definitions for queried words. The model learns the two tasks simultaneously and handles unknown words via embeddings. It casts a word or a definition into the same representation space through a shared layer, then generates the other form in a multi-task fashion. Our method achieves promising automatic scores on previous benchmarks without extra resources. Human annotators prefer the model's outputs in both reference-less and reference-based evaluation, indicating its practicality. Analysis suggests that multiple objectives benefit learning.
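The reverse-dictionary direction — casting a definition into the same vector space as words and retrieving the nearest one — can be illustrated with a minimal sketch. The tiny hand-made embeddings below stand in for the paper's learned shared representation layer; everything here is illustrative.

```python
import math

# Hand-made 3-d "embeddings"; a real model would learn these jointly.
EMBED = {
    "cat":    [0.9, 0.1, 0.0],
    "dog":    [0.1, 0.9, 0.0],
    "sprint": [0.0, 0.1, 0.9],
    "small":  [0.8, 0.2, 0.1],
    "feline": [1.0, 0.0, 0.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def embed_definition(definition: str) -> list:
    """Cast a definition into the word space by averaging its word vectors."""
    vecs = [EMBED[w] for w in definition.split() if w in EMBED]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def reverse_lookup(definition: str) -> str:
    """Return the vocabulary word closest to the definition's embedding."""
    query = embed_definition(definition)
    return max(EMBED, key=lambda w: cosine(EMBED[w], query))
```

The definition-modelling direction would run the same shared space in reverse, decoding a definition from a word's vector.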
A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning
Hugo Elias Berg, S. Hall, Yash Bhalgat, Wonsuk Yang, Hannah Rose Kirk, Aleksandar Shtedritski, Max Bain
AACL Bioflux, 2022-03-22, pp. 806-822. DOI: 10.48550/arXiv.2203.11933
Abstract: Vision-language models can encode societal biases and stereotypes, but measuring and mitigating these multimodal harms is challenging because measurement methods lack robustness and debiasing tends to degrade features. To address these challenges, we investigate bias measures and apply ranking metrics to image-text representations. We then investigate debiasing methods and show that prepending learned embeddings to text queries, jointly trained with adversarial debiasing and a contrastive loss, reduces various bias measures with minimal degradation of the image-text representation.
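Mechanically, the "prompt array" is a small set of learned embeddings prepended to every text query before encoding; only these vectors are trained under the adversarial and contrastive objectives while the backbone stays frozen. A minimal sketch, with made-up dimensions and zero-initialized placeholders in place of the learned prompts:

```python
PROMPT_LEN, EMBED_DIM = 4, 8

# The learnable debiasing prompts (placeholders here; the paper optimizes
# them with an adversarial + contrastive objective while the model is frozen).
prompt_array = [[0.0] * EMBED_DIM for _ in range(PROMPT_LEN)]

def prepend_prompts(token_embeddings: list) -> list:
    """Return the prompt vectors followed by the original token embeddings."""
    return prompt_array + token_embeddings

query = [[1.0] * EMBED_DIM for _ in range(5)]  # 5 token embeddings
extended = prepend_prompts(query)              # 4 prompts + 5 tokens
```

Because the backbone is untouched, the image-text representation degrades far less than it would under full fine-tuning.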
Seamlessly Integrating Factual Information and Social Content with Persuasive Dialogue
Maximillian Chen, Weiyan Shi, Feifan Yan, Ryan Hou, Jingwen Zhang, Saurav Sahay, Zhou Yu
AACL Bioflux, 2022-03-15, pp. 399-413. DOI: 10.48550/arXiv.2203.07657
Abstract: Complex conversation settings such as persuasion involve communicating changes in attitude or behavior, so users' perspectives need to be addressed even when they are not directly related to the topic. In this work, we contribute a novel modular dialogue system framework that seamlessly integrates factual information and social content into persuasive dialogue. The framework generalizes to any dialogue task that mixes social and task content. In a study comparing user evaluations of our framework against a baseline end-to-end generation model, our model was rated more favorably on all dimensions, including competence and friendliness, than the baseline, which does not explicitly handle social content or factual questions.
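The control flow of a modular framework like this one can be sketched as a classifier that routes each user turn to a factual-QA, social, or persuasive module. The rules and canned responses below are invented stand-ins for the paper's learned components; the point is only the explicit routing, which an end-to-end generator lacks.

```python
def classify_turn(turn: str) -> str:
    """Crude stand-in for a learned turn classifier."""
    lowered = turn.lower()
    if "?" in turn and any(w in lowered for w in ("how", "what", "where")):
        return "factual"
    if any(w in lowered for w in ("feel", "thanks", "sorry")):
        return "social"
    return "persuasive"

# One handler per module; real systems would call a QA model, a social
# chit-chat model, and a persuasive strategy planner respectively.
HANDLERS = {
    "factual": lambda turn: "Here is what I know: ...",
    "social": lambda turn: "I appreciate you sharing that.",
    "persuasive": lambda turn: "Even a small donation makes a difference.",
}

def respond(turn: str) -> str:
    return HANDLERS[classify_turn(turn)](turn)
```

Keeping the modules separate is what lets the system answer a factual question mid-persuasion without losing the social thread.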
A Simple and Effective Usage of Word Clusters for CBOW Model
Yukun Feng, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, M. Okumura
AACL Bioflux, 2020-01-01, pp. 80-86. DOI: 10.5715/jnlp.29.785
Abstract: We propose a simple and effective method for incorporating word clusters into the Continuous Bag-of-Words (CBOW) model. Specifically, we replace infrequent input and output words in the CBOW model with their clusters. The resulting cluster-incorporated CBOW model produces embeddings for frequent words plus a small number of cluster embeddings, which are fine-tuned in downstream tasks. We empirically show that our replacement method works well on several downstream tasks. Our analysis suggests that the method may also be useful for other, similar models that produce word embeddings.
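The replacement step itself is a simple corpus preprocessing pass: words below a frequency threshold are swapped for their cluster label before CBOW training, so the model learns one reliable embedding per cluster instead of many unreliable rare-word embeddings. A minimal sketch, with a hand-made word-to-cluster mapping in place of the paper's induced clusters:

```python
from collections import Counter

# Hand-made clusters for illustration; in practice these would be induced
# (e.g. from distributional statistics) over the full vocabulary.
CLUSTERS = {"aardvark": "CL_ANIMAL", "okapi": "CL_ANIMAL", "quark": "CL_PHYSICS"}

def replace_rare(tokens: list, min_count: int = 2) -> list:
    """Replace tokens occurring fewer than `min_count` times with their
    cluster label; frequent tokens pass through unchanged."""
    counts = Counter(tokens)
    return [CLUSTERS.get(t, t) if counts[t] < min_count else t
            for t in tokens]

corpus = ["the", "aardvark", "saw", "the", "okapi", "saw", "saw"]
processed = replace_rare(corpus)
```

CBOW is then trained on the processed stream as usual; the two rare animal words above share the single `CL_ANIMAL` embedding.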
You May Like This Hotel Because ...: Identifying Evidence for Explainable Recommendations
Shin Kanouchi, Masato Neishi, Yuta Hayashibe, Hiroki Ouchi, Naoaki Okazaki
AACL Bioflux, 2020-01-01, pp. 890-899. DOI: 10.5715/JNLP.28.264
Abstract: Explainable recommendation is a good way to improve user satisfaction. However, explainable recommendation in dialogue is challenging because it must handle natural language as both input and output. To tackle this challenge, this paper proposes a novel and practical task: explaining the evidence for recommending hotels in response to vague requests expressed freely in natural language. We decompose the process into two subtasks over hotel reviews: Evidence Identification and Evidence Explanation. The former predicts whether a sentence contains evidence that a given request is satisfied; the latter generates a recommendation sentence given a request and an evidence sentence. To address these subtasks, we build an Evidence-based Explanation dataset, the largest dataset for explaining evidence in hotel recommendations for vague requests. Experimental results demonstrate that a BERT model can find evidence sentences for various vague requests and that an LSTM-based model can generate recommendation sentences.
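The two-subtask decomposition can be sketched as a pipeline: Evidence Identification filters review sentences that support the request, and Evidence Explanation turns each surviving sentence into a recommendation. The paper uses BERT and an LSTM-based generator for these steps; the keyword-overlap filter and template below are crude stand-ins for illustration only.

```python
def identify_evidence(request: str, review_sentences: list) -> list:
    """Subtask 1 (stand-in): keep sentences sharing at least one word
    with the request."""
    request_words = set(request.lower().split())
    return [s for s in review_sentences
            if request_words & set(s.lower().split())]

def explain(request: str, evidence: str) -> str:
    """Subtask 2 (stand-in): generate a recommendation sentence from a
    request and an evidence sentence."""
    return f"You may like this hotel because reviewers said: {evidence}"

request = "quiet room"
reviews = ["the room was very quiet at night", "breakfast was average"]
evidence = identify_evidence(request, reviews)
```

Chaining the two steps yields, for each piece of evidence, a natural-language explanation of why the hotel matches the vague request.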