Computational Linguistics & Natural Language Processing eJournal — Latest Publications

Digital Storytelling: Computer Based Learning Activity to Enhance Young Learner Vocabulary
Computational Linguistics & Natural Language Processing eJournal Pub Date : 2020-11-24 DOI: 10.2139/ssrn.3736914
Endang Sulistianingsih, Nur Aflahatun
Abstract: Vocabulary is central to English language teaching but is often neglected in learning activities, and a limited vocabulary makes English difficult for EFL learners. Digital storytelling is an engaging learning model that can be used to enhance EFL learners' vocabulary. This study describes the effectiveness of digital storytelling for enhancing young learners' vocabulary, using a one-group pretest-posttest design: vocabulary was assessed once before and once after the intervention. Participants were twenty-nine students at a state elementary school in Central Java, Indonesia. Vocabulary-mastery scores were analyzed quantitatively with a t-test. The findings show that digital storytelling was effective in enhancing the learners' vocabulary and left them joyful, relaxed, well motivated, and enthusiastic while learning English. Digital storytelling is thus a powerful computer-based learning activity for EFL learners to build vocabulary.
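The pretest-posttest comparison described above rests on a paired t-test. A minimal sketch of that computation (the score lists below are hypothetical, not the study's data):

```python
import math

def paired_t_test(pre, post):
    """Paired (dependent) t-test on pretest/posttest scores.
    Returns the t statistic and degrees of freedom."""
    assert len(pre) == len(post), "each learner needs both scores"
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance of the score differences.
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    t = mean / math.sqrt(var / n)
    return t, n - 1

t_stat, df = paired_t_test([50, 55, 60], [65, 70, 72])
```

The resulting t statistic is then compared against the t distribution with `df` degrees of freedom to judge whether the posttest gain is significant.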
Citations: 0
Corporate ESG News and The Stock Market
Computational Linguistics & Natural Language Processing eJournal Pub Date : 2020-10-30 DOI: 10.2139/ssrn.3723799
Walid Taleb, Théo Le Guenedal, Frédéric Lepetit, Vincent Mortier, Takaya Sekine, Lauren Stagnol
Abstract: The popularity of ESG investing has grown continually over the past five years, and ESG data are increasingly integrated into investment processes. However, the information contained in ESG-related corporate news has not been fully exploited by institutional and long-only investors. The objective of this paper is to identify the benefits of ESG news for active and factor-based investors; indeed, one of the issues with ESG is the low frequency of score updates. For active management, we analyze ESG-sorted portfolios in investment universes filtered by ESG news volume. Metrics of ESG-related news are sourced from Truvalue Labs, a provider of AI-powered ESG insights and analytics. We find that focusing the universe on corporates with ESG news was efficient in the early 2010s on both the lower- and higher-ranked sides of the ESG universe, and that more recently it has contributed positively to more dynamic approaches to ESG investing. Finally, increasing the sensitivity to the highly visible SDGs significantly improves the return of ESG long-short portfolios.
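The portfolio construction the paper describes — filter the universe by ESG news volume, sort the survivors by ESG score, and go long the top half against the bottom half — can be sketched as follows (the stock records, scores, and threshold are hypothetical, not the authors' data):

```python
def esg_long_short_return(universe, min_news=10):
    """Equal-weighted long-short return: long the top half by ESG
    score, short the bottom half, within a news-volume filter."""
    # Keep only names with enough ESG news coverage.
    active = [s for s in universe if s["news_volume"] >= min_news]
    ranked = sorted(active, key=lambda s: s["esg_score"])
    half = len(ranked) // 2
    short_leg = ranked[:half]    # lowest ESG scores
    long_leg = ranked[-half:]    # highest ESG scores
    long_ret = sum(s["ret"] for s in long_leg) / len(long_leg)
    short_ret = sum(s["ret"] for s in short_leg) / len(short_leg)
    return long_ret - short_ret

stocks = [
    {"name": "A", "esg_score": 80, "news_volume": 25, "ret": 0.04},
    {"name": "B", "esg_score": 30, "news_volume": 40, "ret": -0.01},
    {"name": "C", "esg_score": 55, "news_volume": 5,  "ret": 0.10},
    {"name": "D", "esg_score": 70, "news_volume": 12, "ret": 0.02},
    {"name": "E", "esg_score": 20, "news_volume": 18, "ret": 0.00},
]
```

Here stock C is dropped for thin news coverage before the ESG sort, which is the paper's central idea: ESG scores are only trusted to differentiate names where news flow keeps them fresh.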
Citations: 8
Neural Discourse Modelling of Conversations
Computational Linguistics & Natural Language Processing eJournal Pub Date : 2020-07-29 DOI: 10.2139/ssrn.3663042
John M. Pierre
Abstract: Deep neural networks have shown recent promise in many language-related tasks, such as modelling conversations. We extend RNN-based sequence-to-sequence models to capture long-range discourse across many turns of conversation. We perform a sensitivity analysis on how much additional context affects performance, and provide quantitative and qualitative evidence that these models can capture discourse relationships across multiple utterances. Our results show that adding an RNN layer for modelling discourse improves the quality of output utterances, and that providing more of the previous conversation as input also improves performance. By searching the generated outputs for specific discourse markers, we show how neural discourse models can exhibit increased coherence and cohesion in conversations.
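The evaluation step — searching generated outputs for discourse markers — amounts to a normalized marker count over the model's utterances. A minimal sketch (the marker list here is illustrative; the paper does not publish its exact set):

```python
import re
from collections import Counter

# Illustrative marker list; swap in whichever markers are of interest.
DISCOURSE_MARKERS = {"however", "because", "so", "but", "then", "anyway"}

def marker_frequencies(utterances):
    """Occurrences of each discourse marker per 100 tokens."""
    counts = Counter()
    total_tokens = 0
    for utt in utterances:
        tokens = re.findall(r"[a-z']+", utt.lower())
        total_tokens += len(tokens)
        for tok in tokens:
            if tok in DISCOURSE_MARKERS:
                counts[tok] += 1
    return {m: 100.0 * c / total_tokens for m, c in counts.items()}

freqs = marker_frequencies([
    "But I thought you left.",
    "So then we waited, because the train was late.",
])
```

Comparing these per-100-token rates between model variants (with and without the extra discourse RNN layer) gives a cheap proxy for discourse coherence.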
Citations: 0
FinBERT—A Deep Learning Approach to Extracting Textual Information
Computational Linguistics & Natural Language Processing eJournal Pub Date : 2020-07-28 DOI: 10.2139/ssrn.3910214
Allen Huang, Hui Wang, Yi Yang
Abstract: In this paper, we develop FinBERT, a state-of-the-art deep learning algorithm that incorporates the contextual relations between words in the finance domain. First, using a researcher-labeled sample of analyst reports, we document that FinBERT significantly outperforms the Loughran and McDonald (LM) dictionary, naïve Bayes, and Word2Vec in sentiment classification, primarily because of its ability to uncover sentiment in sentences that other algorithms mislabel as neutral. Next, we show that other approaches underestimate the textual informativeness of earnings conference calls by at least 32% compared with FinBERT. Our results also indicate that FinBERT's greater accuracy is especially relevant when empirical tests may suffer from low power, such as with small samples. Last, textual sentiments summarized by FinBERT better predict future earnings than the LM dictionary, especially after 2011, consistent with firms' strategic disclosures reducing the information content of LM-measured textual sentiment. Our results have implications for academic researchers, investment professionals, and financial market regulators who want to extract insights from financial texts.
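The LM-dictionary baseline that FinBERT is benchmarked against can be sketched as a simple tone ratio over positive/negative word lists. The word sets below are toy stand-ins (the real LM dictionary contains thousands of finance-specific terms), but the sketch shows why a bag-of-words baseline labels so many sentences neutral:

```python
# Toy stand-ins for the Loughran-McDonald word lists.
LM_POSITIVE = {"gain", "improve", "strong", "exceed"}
LM_NEGATIVE = {"loss", "decline", "weak", "impairment"}

def lm_tone(text):
    """Dictionary tone: (pos - neg) / (pos + neg), 0.0 if no hits.
    Ignores context entirely, so any sentence whose sentiment is
    carried by words outside the lists scores 0 ('neutral') --
    the gap a contextual model like FinBERT is built to close."""
    words = [w.strip(".,") for w in text.lower().split()]
    pos = sum(w in LM_POSITIVE for w in words)
    neg = sum(w in LM_NEGATIVE for w in words)
    return (pos - neg) / (pos + neg) if pos + neg else 0.0
```

For example, `lm_tone("Guidance unchanged for the quarter")` returns 0.0 even though an analyst might read that sentence as clearly signed in context.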
Citations: 10
Implementation on Text Classification Using Bag of Words Model
Computational Linguistics & Natural Language Processing eJournal Pub Date : 2019-05-17 DOI: 10.2139/ssrn.3507923
Nisha V M, D. Kumar R
Abstract: Bag of words provides one way to represent text for a standard text-classification task. The method rests on the Bag-of-Words (BOW) idea and measures content drawn from sources such as Wikipedia, Kaggle, and Gmail. The proposed method builds a vector space model, which is then fed into a Support Vector Machine classifier to organize and group document records from publicly available social-media datasets. The results compare the raw data with the cleaned data as visualized in a word cloud.
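The vector space model the abstract describes maps each document to a vector of term counts over a shared vocabulary; those vectors are what a downstream SVM consumes. A minimal sketch of the BOW step (the toy corpus is illustrative):

```python
def build_vocabulary(documents):
    """Map each word seen in the corpus to a column index."""
    vocab = {}
    for doc in documents:
        for word in doc.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def bow_vector(doc, vocab):
    """Term-count vector for one document over a fixed vocabulary."""
    vec = [0] * len(vocab)
    for word in doc.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1
    return vec

docs = ["the cat sat", "the dog sat on the mat"]
vocab = build_vocabulary(docs)
vectors = [bow_vector(d, vocab) for d in docs]
```

Note that word order is discarded — "dog bites man" and "man bites dog" get identical vectors — which is the defining simplification of the BOW representation.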
Citations: 2
Language Style Similarity and Friendship Networks
Computational Linguistics & Natural Language Processing eJournal Pub Date : 2019-02-27 DOI: 10.2139/ssrn.3131715
Balázs Kovács, Adam M. Kleinbaum
Abstract: This paper demonstrates that linguistic similarity predicts network tie formation and that friends exhibit linguistic convergence over time. Study 1 analyzes the linguistic styles and the emerging friendship network in a complete cohort of 285 students. Study 2 analyzes a large-scale dataset of online reviews. Across both studies, we collected data in two waves to examine changes in both friendship networks and linguistic styles. Using the LIWC linguistic framework, we analyze the text of students' essays and of 1.7 million reviews by 159,651 Yelp reviewers. We find that similarity in linguistic style corresponds to a higher likelihood of friendship formation and persistence, and that friendship ties, in turn, correspond with convergence in linguistic style. We discuss the implications of the co-evolution of linguistic styles and social networks, which contributes to the formation of relational echo chambers.
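LIWC-style analyses typically represent each writer by relative frequencies over word categories and compare writers with a similarity measure. A minimal sketch under those assumptions (the function-word list is a tiny stand-in for LIWC's categories, and cosine similarity is one common choice, not necessarily the paper's exact metric):

```python
import math

# Tiny stand-in for LIWC-style function-word categories.
FUNCTION_WORDS = ["i", "you", "the", "and", "but", "not"]

def style_profile(text):
    """Relative frequency of each function word in the text."""
    tokens = text.lower().split()
    return [tokens.count(w) / len(tokens) for w in FUNCTION_WORDS]

def cosine(a, b):
    """Cosine similarity between two style profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

sim = cosine(style_profile("i liked the food but not the service"),
             style_profile("i loved the view and the food"))
```

Function words are preferred over content words here because they track *how* someone writes rather than *what* they write about, which is what "linguistic style" means in this literature.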
Citations: 3
A Multi-Layer Arabic Text Steganographic Method Based on Letter Shaping
Computational Linguistics & Natural Language Processing eJournal Pub Date : 2019-01-25 DOI: 10.5121/ijnsa.2019.11103
A.F. Al Azzawi
Abstract: Text documents are widely used; however, text steganography is more difficult than steganography in other media because text carries little redundant information. This paper presents a text steganography method suited to Arabic Unicode texts that does not use a normal sequential insertion process, thereby overcoming the security weaknesses of current approaches, which are sensitive to steganalysis. The Arabic Unicode text is kept in its main unshaped letters, and the proposed method uses a text file as the cover text, hiding one bit in each letter by reshaping the letter according to its position (beginning, middle, or end of the word, or standalone). The hiding process works through multiple embedding layers, where each layer contains all words with the same tag detected by a POS tagger, and the embedding layers are selected randomly using the stego key to improve security. Experimental results show that the proposed method satisfies the hiding-capacity requirements and improves security, and that its imperceptibility is better than that of currently developed approaches.
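The core embed/extract idea — each cover letter has interchangeable renderings, and the choice of rendering encodes a bit — can be sketched in a simplified form. The form table below is hypothetical (Latin small capitals instead of Arabic contextual shapes), and the sketch omits the paper's POS-tagged multi-layer and stego-key selection entirely:

```python
# Hypothetical two-form table; the real method selects among Arabic
# contextual shapes (initial / medial / final / isolated).
FORM0 = {"a": "a", "b": "b", "c": "c"}
FORM1 = {"a": "\u1d00", "b": "\u0299", "c": "\u1d04"}  # small capitals

def embed(cover, bits):
    """Re-render each cover letter in form 0 or 1 to hide one bit."""
    assert len(bits) <= len(cover), "cover too short for payload"
    out = [FORM1[ch] if bit == "1" else FORM0[ch]
           for ch, bit in zip(cover, bits)]
    out.extend(cover[len(bits):])  # untouched tail of the cover
    return "".join(out)

def extract(stego, nbits):
    """Read back the hidden bits from the letter forms used."""
    form1_glyphs = set(FORM1.values())
    return "".join("1" if ch in form1_glyphs else "0"
                   for ch in stego[:nbits])
```

Because the stego text has the same letters as the cover — only their rendering changes — capacity is one bit per letter, matching the abstract's claim.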
Citations: 6
Politeness Strategies of Russian School Students: Quantitative Approach to Qualitative Data
Computational Linguistics & Natural Language Processing eJournal Pub Date : 2018-12-05 DOI: 10.2139/ssrn.3296303
M. Grabovskaya, E. Gridneva, A. Vlakhov
Abstract: This study deals with the politeness strategies of speakers of Russian, focusing on the verbal expression of politeness. After running a field survey in schools in mid-2018, we analyze specific verbal markers of politeness quantitatively. Four such markers were selected for this study: greeting, leave-taking, expressing gratitude, and apology. Quantitative analysis shows a clear frequency pattern in the use of these markers, indicating a relatively high degree of sociolinguistic variation. Possible causes of this effect are discussed, including cultural diversity and the multilingual setting of the modern Russian school communicative domain.
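Turning qualitative survey responses into the frequency pattern the abstract mentions means mapping surface forms to the four marker categories and normalizing the counts. A minimal sketch (the marker-to-category table uses hypothetical English stand-ins for the Russian forms studied):

```python
# Hypothetical English stand-ins for the Russian politeness markers.
MARKERS = {
    "hello": "greeting", "hi": "greeting",
    "bye": "leave-taking", "goodbye": "leave-taking",
    "thanks": "gratitude", "thank": "gratitude",
    "sorry": "apology",
}

def marker_distribution(responses):
    """Share of each politeness category among all marker uses."""
    totals = {"greeting": 0, "leave-taking": 0,
              "gratitude": 0, "apology": 0}
    for resp in responses:
        for tok in resp.lower().replace(",", " ").split():
            if tok in MARKERS:
                totals[MARKERS[tok]] += 1
    n = sum(totals.values())
    return {cat: c / n for cat, c in totals.items()} if n else totals

dist = marker_distribution([
    "hello, thanks for the notes",
    "sorry I am late, goodbye",
])
```

Comparing these category shares across schools or age groups is what surfaces the sociolinguistic variation the study reports.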
Citations: 0
Sentence-Level Dialects Identification in the Greater China Region
Computational Linguistics & Natural Language Processing eJournal Pub Date : 2016-12-30 DOI: 10.5121/IJNLC.2016.5602
Fan Xu, Mingwen Wang, Maoxi Li
Abstract: Identifying different varieties of the same language is more challenging than identifying unrelated languages. In this paper, we propose an approach to discriminating varieties or dialects of Mandarin Chinese across Mainland China, Hong Kong, Taiwan, Macao, Malaysia, and Singapore, i.e., the Greater China Region (GCR). When applied to dialect identification in the GCR, the commonly used character-level or word-level unigram features are not very effective, because of problems specific to GCR dialects such as ambiguity and the context-dependent character of words. To overcome these challenges, we use not only general features such as character-level n-grams but also many new word-level features, including PMI-based and word-alignment-based features. Evaluation results on both news and open-domain datasets from Wikipedia show the effectiveness of the proposed approach.
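The character-level n-gram baseline the paper builds on can be sketched as per-dialect n-gram profiles matched by overlap. This is a generic profile classifier under stated assumptions, not the authors' full feature set (which adds PMI and word-alignment features on top):

```python
from collections import Counter

def char_ngrams(text, n=2):
    """Character n-gram counts for one text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def train_profiles(labeled_texts, n=2):
    """Aggregate one n-gram profile per dialect label."""
    profiles = {}
    for label, text in labeled_texts:
        profiles.setdefault(label, Counter()).update(char_ngrams(text, n))
    return profiles

def classify(text, profiles, n=2):
    """Label whose profile shares the most n-gram mass with the text."""
    grams = char_ngrams(text, n)
    def overlap(profile):
        return sum(min(c, profile[g]) for g, c in grams.items())
    return max(profiles, key=lambda lab: overlap(profiles[lab]))
```

Character n-grams are a natural fit for Chinese varieties because they sidestep word segmentation; the paper's point is that, for closely related varieties, they need richer word-level features alongside them.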
Citations: 14