Language Resources and Evaluation: Latest Publications

A corpus of Persian literary text
IF 2.7, CAS Tier 3, Computer Science
Language Resources and Evaluation. Pub Date: 2023-11-23. DOI: 10.1007/s10579-023-09689-6
Shahab Raji, Malihe Alikhani, Gerard de Melo, Matthew Stone
Abstract: Persian poetry has profoundly affected all periods of Persian literature and the literature of other countries as well. It is a fundamental vehicle for expressing Persian culture and political opinion. This paper presents a corpus of Persian literary text, mainly focusing on poetry, covering the ninth to the twenty-first century, annotated for century and style, with additional partial annotation of rhetorical figures. Our resource is the largest and most diverse corpus of Persian literary text available, with a particularly broad temporal scope. This allows us to conduct several computational experiments to analyze poetic styles, authors, and time periods, as well as context shifts over time, relying both on supervised models and on Persian poetry-specific heuristics. The corpus, tools, and experiments described in this paper can be used not only for digital humanities studies of Persian literature but also for processing Persian texts in general, as well as in broader cross-linguistic applications.
Citations: 0
A corpus of English learners with Arabic and Hebrew backgrounds
IF 2.7, CAS Tier 3, Computer Science
Language Resources and Evaluation. Pub Date: 2023-11-20. DOI: 10.1007/s10579-023-09692-x
Omaima Abboud, Batia Laufer, Noam Ordan, Uliana Sentsova, Shuly Wintner
Abstract: Learner corpora (datasets that reflect the language of non-native speakers) are instrumental for research on language learning and development, as well as for practical applications, mainly in teaching and education. Such corpora now exist for a plethora of native-foreign language pairs, but until recently none of them reflected native Hebrew speakers, and very few reflected native Arabic speakers. We introduce a recently released corpus of English essays authored by learners in Israel. The corpus consists of two sub-corpora, one of Arabic native speakers and the other consisting mainly of Hebrew native speakers. We report on the composition and curation of the datasets; specifically, we processed the data so that both sub-corpora are now uniformly represented, facilitating seamless research and computational processing. We provide statistical information on the corpora and outline a few research projects that have already used them. This is the first learner corpus from Israel to cover the two major native languages of students who study English under the same educational syllabus. All the resources related to the corpus are freely available.
Citations: 0
The Reading Everyday Emotion Database (REED): a set of audio-visual recordings of emotions in music and language
IF 2.7, CAS Tier 3, Computer Science
Language Resources and Evaluation. Pub Date: 2023-11-20. DOI: 10.1007/s10579-023-09698-5
Jia Hoong Ong, Florence Yik Nam Leung, Fang Liu
Abstract: Most audio-visual (AV) emotion databases consist of clips that do not reflect real-life emotion processing (e.g., professional actors in bright, studio-like environments), contain only spoken clips, and none have sung clips that express complex emotions. Here, we introduce a new AV database, the Reading Everyday Emotion Database (REED), which directly addresses those gaps. We recorded the faces of everyday adults with a diverse range of acting experience expressing 13 emotions: neutral, the six basic emotions (angry, disgusted, fearful, happy, sad, surprised), and six complex emotions (embarrassed, hopeful, jealous, proud, sarcastic, stressed), in two auditory domains (spoken and sung), using everyday recording devices (e.g., laptops and mobile phones). The recordings were validated by an independent group of raters. We found that intensity ratings of the recordings were positively associated with recognition accuracy, and that the basic emotions, as well as the neutral and sarcastic emotions, were recognised more accurately than the other complex emotions. Emotion recognition accuracy also differed by utterance. Exploratory analysis revealed that recordings of those with drama experience were better recognised than those without. Overall, this database will benefit those who need AV clips with natural variations in both emotion expression and recording environment.
Citations: 0
A multilingual, multimodal dataset of aggression and bias: the ComMA dataset
IF 2.7, CAS Tier 3, Computer Science
Language Resources and Evaluation. Pub Date: 2023-11-16. DOI: 10.1007/s10579-023-09696-7
Ritesh Kumar, Shyam Ratan, Siddharth Singh, Enakshi Nandi, Laishram Niranjana Devi, Akash Bhagat, Yogesh Dawer, Bornini Lahiri, Akanksha Bansal
Abstract: In this paper, we discuss the development of a multilingual dataset annotated with a hierarchical, fine-grained tagset marking different types of aggression and the "context" in which they occur. The context, here, is defined by the conversational thread in which a specific comment occurs and also by the "type" of discursive role that the comment performs with respect to the previous comment(s). The dataset was developed as part of the ComMA Project and consists of a total of 57,363 annotated comments, 1142 annotated memes, and around 70 h of annotated audio (extracted from videos) in four languages: Meitei, Bangla, Hindi, and Indian English. The data was collected from various social media platforms such as YouTube, Facebook, Twitter, and Telegram. As is usual on social media websites, a large number of these comments are multilingual, and many are code-mixed with English. This paper gives a detailed description of the tagset developed during the course of the project and elaborates on the process of developing and using a multi-label, fine-grained tagset for marking comments with aggression and bias of various kinds, including gender bias, religious intolerance (called communal bias in the tagset), class/caste bias, and ethnic/racial bias. We define and discuss the tags used for marking the different discursive roles performed by the comments, such as attack and defend. We also present a statistical analysis of the dataset as well as the results of our baseline experiments on developing an automatic aggression identification system using the dataset. Based on the results of the baseline experiments, we argue that our dataset provides diverse and "hard" sets of instances, making it a good dataset for training and testing new techniques for aggressive and abusive language classification.
Citations: 0
Automatic genre identification: a survey
IF 2.7, CAS Tier 3, Computer Science
Language Resources and Evaluation. Pub Date: 2023-11-16. DOI: 10.1007/s10579-023-09695-8
Taja Kuzman, Nikola Ljubešić
Abstract: Automatic genre identification (AGI) is a text classification task focused on genres, i.e., text categories defined by the author's purpose, the common function of the text, and the text's conventional form. Obtaining genre information has been shown to be beneficial for a wide range of disciplines, including linguistics, corpus linguistics, computational linguistics, natural language processing, information retrieval, and information security. Consequently, over the past 20 years, numerous researchers have collected genre datasets with the aim of developing an efficient genre classifier. However, their approaches to the definition of genre schemata, data collection, and manual annotation vary substantially, resulting in significantly different datasets. As most AGI experiments are dataset-dependent, a sufficient understanding of the differences between the available genre datasets is of great importance for researchers venturing into this area. In this paper, we present a detailed overview of different approaches to each step of the AGI task, from the definition of the genre concept and the genre schema, to dataset collection and annotation methods, and, finally, to machine learning strategies. Special focus is dedicated to the most relevant genre schemata and datasets, and details on the availability of all the datasets are provided. In addition, the paper presents recent advances in machine learning approaches to automatic genre identification and concludes by proposing directions towards developing a stable multilingual genre classifier.
Citations: 1
Brazilian Portuguese corpora for teaching and translation: the CoMET project
IF 2.7, CAS Tier 3, Computer Science
Language Resources and Evaluation. Pub Date: 2023-11-16. DOI: 10.1007/s10579-023-09690-z
Stella E. O. Tagnin
Abstract: This paper starts with an overview of corpora available for Brazilian Portuguese and then focuses mainly on the CoMET Project developed at the University of São Paulo. CoMET consists of three corpora: a comparable Portuguese-English technical corpus (CorTec), a Portuguese-English parallel (translation) corpus (CorTrad), and a multilingual learner corpus (CoMAprend), all available for online queries with specific tools. CorTec offers over fifty corpora in a variety of domains, from health sciences to the Olympic Games. CorTrad is divided into three parts: popular science, technical-scientific, and literary. Each of CoMET's corpora is presented in detail, and examples are provided.
Citations: 0
Correction: The DELAD initiative for sharing language resources on speech disorders
CAS Tier 3, Computer Science
Language Resources and Evaluation. Pub Date: 2023-11-06. DOI: 10.1007/s10579-023-09701-z
Alice Lee, Nicola Bessell, Henk van den Heuvel, Katarzyna Klessa, Satu Saalasti
Citations: 0
LoNLI: An Extensible Framework for Testing Diverse Logical Reasoning Capabilities for NLI
CAS Tier 3, Computer Science
Language Resources and Evaluation. Pub Date: 2023-11-04. DOI: 10.1007/s10579-023-09691-y
Ishan Tarunesh, Somak Aditya, Monojit Choudhury
Abstract: Natural Language Inference (NLI) is considered a representative task for testing natural language understanding (NLU). In this work, we propose an extensible framework to collectively yet categorically test the diverse logical reasoning capabilities required for NLI (and, by extension, NLU). Motivated by behavioral testing, we create a semi-synthetic large test bench (363 templates, 363k examples) and an associated framework that offers the following utilities: (1) individually test and analyze reasoning capabilities along 17 reasoning dimensions (including pragmatic reasoning); (2) design experiments to study cross-capability information content (leave one out or bring one in); and (3) control for artifacts and biases, thanks to the synthetic nature of the data. We extend a publicly available framework for automated test case instantiation from free-form natural language templates (CheckList) and a well-defined taxonomy of capabilities to cover a wide range of increasingly harder test cases while varying the complexity of the natural language. Through our analysis of state-of-the-art NLI systems, we observe that our benchmark is indeed hard (and non-trivial even with training on additional resources). Some capabilities stand out as harder than others. Further, fine-grained analysis and fine-tuning experiments reveal more insights about these capabilities and the models, supporting and extending previous observations and demonstrating the utility of the proposed test bench.
Citations: 4
Building the VisSE Corpus of Spanish SignWriting
CAS Tier 3, Computer Science
Language Resources and Evaluation. Pub Date: 2023-10-26. DOI: 10.1007/s10579-023-09694-9
Antonio F. G. Sevilla, Alberto Díaz Esteban, José María Lahoz-Bengoechea
Citations: 0
Beyond plain toxic: building datasets for detection of flammable topics and inappropriate statements
CAS Tier 3, Computer Science
Language Resources and Evaluation. Pub Date: 2023-10-21. DOI: 10.1007/s10579-023-09682-z
Nikolay Babakov, Varvara Logacheva, Alexander Panchenko
Abstract: Toxicity on the Internet is an acknowledged problem. It includes a wide range of actions, from the use of obscene words to offenses and hate speech toward particular users or groups of people. However, there also exist other types of inappropriate messages which are usually not viewed as toxic, as they contain no swear words or explicit offenses. Such messages can contain covert toxicity or generalizations, incite harmful actions (crime, suicide, drug use), and provoke "heated" discussions. These messages are often related to particular sensitive topics, e.g. politics, sexual minorities, or social injustice. Such topics tend to yield toxic emotional reactions more often than others, e.g. cars or computing. At the same time, not all messages within "flammable" topics are inappropriate. This work focuses on automatically detecting inappropriate language in natural texts, which is crucial for monitoring user-generated content and developing dialogue systems and AI assistants. While many works focus on toxicity detection, we highlight the fact that texts can be harmful without being toxic or containing obscene language. Blind censorship based on keywords is a common approach to address these issues, but it limits a system's functionality. This work proposes a safe and effective solution to serve broad user needs and to develop the necessary resources and tools. Machinery for inappropriateness detection could thus be useful (i) for making communication on the Internet safer, more productive, and more inclusive by flagging truly inappropriate content rather than banning messages blindly by topic; (ii) for detecting inappropriate messages generated by automatic systems, e.g. neural chatbots, due to biases in training data; and (iii) for debiasing training data for language models (e.g. BERT and GPT-2). Towards this end, we present two text collections labeled according to a binary notion of inappropriateness (124,597 samples) and a multinomial notion of sensitive topic (33,904 samples). Assuming that the notion of inappropriateness is shared among people of the same culture, we base our approach on a human intuitive understanding of what is not acceptable and harmful. To devise an objective view of inappropriateness, we define it in a data-driven way through crowdsourcing: we run a large-scale annotation study asking workers whether a given chatbot-generated utterance could harm the reputation of the company that created the chatbot. High inter-annotator agreement suggests that the notion of inappropriateness exists and can be uniformly understood by different people. To define the notion of a sensitive topic in an objective way, we use guidelines suggested by specialists in the legal and PR departments of a large company. We use the collected datasets to train inappropriateness and sensitive topic classifiers employing both classic and Transformer-based models.
Citations: 0