Latest Articles in AI Open

Data Augmentation Approaches in Natural Language Processing: A Survey
AI Open Pub Date: 2021-10-05 DOI: 10.1016/j.aiopen.2022.03.001
Bohan Li, Yutai Hou, Wanxiang Che
AI Open, Pages 71-90.
Citations: 110
StackVAE-G: An efficient and interpretable model for time series anomaly detection
AI Open Pub Date: 2021-05-18 DOI: 10.1016/j.aiopen.2022.07.001
Wenkai Li, Wenbo Hu, Ting Chen, Ning Chen, Cheng Feng
AI Open, Pages 101-110.
Citations: 3
Heterogeneous graph knowledge enhanced stock market prediction
AI Open Pub Date: 2021-01-01 DOI: 10.1016/j.aiopen.2021.09.001
Kai Xiong, Xiao Ding, Li Du, Ting Liu, Bing Qin
AI Open, Volume 2, Pages 168-174.
Abstract: We focus on stock market prediction based on financial text, which contains information that can influence market movements. Previous works mainly utilize a single semantic unit of financial text, such as words, events, or sentences, to predict market trends. However, the interaction of different-grained information within financial text can supplement contextual knowledge and help select predictive information, thereby improving prediction performance. To exploit this, we propose constructing a heterogeneous graph whose nodes carry different-grained information extracted from financial text, and we present a novel heterogeneous neural network to aggregate the multi-grained information. Experimental results demonstrate that the proposed approach outperforms the baselines.
Citations: 6
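To make the multi-grained aggregation described in the abstract concrete, here is a minimal PyTorch sketch of our own (not the authors' code): word-, sentence-, and event-level node embeddings are projected per granularity and fused by attention into a single vector. All names and dimensions are hypothetical.

```python
# Hypothetical sketch of multi-grained aggregation over a heterogeneous graph;
# the node features and attention scheme are illustrative, not the paper's model.
import torch
import torch.nn as nn

class MultiGrainedAggregator(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # One projection per node granularity (word / sentence / event).
        self.proj = nn.ModuleDict(
            {g: nn.Linear(dim, dim) for g in ("word", "sentence", "event")}
        )
        self.attn = nn.Linear(dim, 1)  # scores each node for the final readout

    def forward(self, nodes: dict[str, torch.Tensor]) -> torch.Tensor:
        # nodes[g]: (num_nodes_g, dim) embeddings for granularity g
        h = torch.cat([torch.relu(self.proj[g](x)) for g, x in nodes.items()], dim=0)
        w = torch.softmax(self.attn(h), dim=0)  # attention over all nodes
        return (w * h).sum(dim=0)               # fused market-signal vector

agg = MultiGrainedAggregator(dim=64)
doc_vec = agg({"word": torch.randn(30, 64),
               "sentence": torch.randn(5, 64),
               "event": torch.randn(3, 64)})
print(doc_vec.shape)  # torch.Size([64])
```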
CPM: A large-scale generative Chinese Pre-trained language model
AI Open Pub Date: 2021-01-01 DOI: 10.1016/j.aiopen.2021.07.001
Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Maosong Sun
AI Open, Volume 2, Pages 93-99.
Abstract: Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3, with 175 billion parameters and 570 GB of training data, drew considerable attention for its capacity for few-shot (even zero-shot) learning. However, applying GPT-3 to Chinese NLP tasks remains challenging, as its training corpus is primarily English and its parameters are not publicly available. In this technical report, we release the Chinese Pre-trained Language Model (CPM), generatively pre-trained on large-scale Chinese data. To the best of our knowledge, CPM, with 2.6 billion parameters and 100 GB of Chinese training data, is the largest Chinese pre-trained language model, and it can facilitate downstream Chinese NLP tasks such as conversation, essay generation, cloze test, and language understanding. Extensive experiments demonstrate that CPM achieves strong performance on many NLP tasks in few-shot (even zero-shot) settings. The code and parameters are available at https://github.com/TsinghuaAI/CPM.
Citations: 86
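For readers who want to try the model, a short generation example follows. It assumes the community checkpoint id TsinghuaAI/CPM-Generate on the Hugging Face hub; the official weights are distributed through the GitHub repository cited above.

```python
# Sampling from CPM via Hugging Face Transformers. The hub id
# "TsinghuaAI/CPM-Generate" is an assumption; CPM's tokenizer may additionally
# require the jieba and sentencepiece packages.
from transformers import AutoModelForCausalLM, AutoTokenizer, TextGenerationPipeline

tokenizer = AutoTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
model = AutoModelForCausalLM.from_pretrained("TsinghuaAI/CPM-Generate")

generator = TextGenerationPipeline(model, tokenizer)
print(generator("清华大学", max_length=50, do_sample=True, top_p=0.9))
```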
Discrete and continuous representations and processing in deep learning: Looking forward
AI Open Pub Date: 2021-01-01 DOI: 10.1016/j.aiopen.2021.07.002
Ruben Cartuyvels, Graham Spinks, Marie-Francine Moens
AI Open, Volume 2, Pages 143-159.
Abstract: Discrete and continuous representations of content (e.g., of language or images) have interesting properties to be explored for machine understanding of and reasoning with that content. This position paper puts forward our opinion on the role of discrete and continuous representations and their processing in deep learning. Current neural network models compute continuous-valued data, compressing information into dense, distributed embeddings. By stark contrast, humans communicate with language using discrete symbols. Such symbols represent a compressed version of the world that derives its meaning from shared contextual information. Additionally, human reasoning involves symbol manipulation at a cognitive level, which facilitates abstract reasoning, the composition of knowledge and understanding, generalization, and efficient learning. Motivated by these insights, we argue that combining discrete and continuous representations and their processing will be essential for building systems that exhibit a general form of intelligence. We suggest and discuss several avenues for improving current neural networks with discrete elements, so as to combine the advantages of both types of representations.
Citations: 11
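One well-known mechanism in this spirit, shown below purely as our own illustration rather than a proposal from the paper, is vector quantization: continuous vectors are snapped to the nearest entry of a learned codebook, producing discrete symbol ids while keeping a continuous embedding for downstream processing.

```python
# Illustrative VQ-style discretization (not from the paper): map continuous
# vectors to the nearest codebook entry, yielding discrete symbols plus a
# continuous embedding, with a straight-through gradient estimator.
import torch

def quantize(z: torch.Tensor, codebook: torch.Tensor):
    # z: (batch, dim); codebook: (num_codes, dim)
    dists = torch.cdist(z, codebook)   # pairwise L2 distances, (batch, num_codes)
    codes = dists.argmin(dim=1)        # discrete symbol ids
    z_q = codebook[codes]              # continuous embedding of each symbol
    # Straight-through estimator: values come from the codebook,
    # gradients flow back to z.
    z_q = z + (z_q - z).detach()
    return codes, z_q

codebook = torch.randn(512, 64, requires_grad=True)
codes, z_q = quantize(torch.randn(8, 64), codebook)
print(codes.shape, z_q.shape)  # torch.Size([8]) torch.Size([8, 64])
```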
A comprehensive survey of entity alignment for knowledge graphs
AI Open Pub Date: 2021-01-01 DOI: 10.1016/j.aiopen.2021.02.002
Kaisheng Zeng, Chengjiang Li, Lei Hou, Juanzi Li, Ling Feng
AI Open, Volume 2, Pages 1-13.
Abstract: Knowledge Graphs (KGs), as structured human knowledge, manage data in a way that is easy to store and is recognizable and understandable by machines, providing a rich knowledge base for various artificial intelligence applications. However, today's multi-source KGs are heterogeneous yet complementary, so it is necessary to fuse heterogeneous knowledge from different data sources or languages into a unified, consistent KG. Entity alignment aims to find entities in different knowledge graphs that refer to the same real-world object; it is the most fundamental and essential technology in knowledge fusion. This paper surveys the latest knowledge graph representation learning and entity alignment methods and summarizes their core techniques and features from different perspectives. Our investigation also offers a comprehensive outlook on several promising research directions for future work, and we provide an effective and efficient entity alignment toolkit to help researchers quickly bootstrap their own entity alignment models.
Citations: 57
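As a toy illustration of the embedding-based family such surveys cover (not the authors' toolkit), the sketch below matches entities of two KGs by nearest-neighbour search over cosine similarity, assuming both embedding sets already live in a shared space, e.g., after training with seed alignments.

```python
# Minimal embedding-based entity alignment sketch (illustrative only):
# entities from two KGs, embedded in a shared space, are matched by
# nearest-neighbour search over cosine similarity.
import torch

def align(emb_kg1: torch.Tensor, emb_kg2: torch.Tensor, top_k: int = 1):
    # emb_kg1: (n1, dim), emb_kg2: (n2, dim)
    a = torch.nn.functional.normalize(emb_kg1, dim=1)
    b = torch.nn.functional.normalize(emb_kg2, dim=1)
    sim = a @ b.T                          # (n1, n2) cosine similarities
    scores, candidates = sim.topk(top_k, dim=1)
    return scores, candidates              # for each KG1 entity, its KG2 matches

scores, candidates = align(torch.randn(100, 64), torch.randn(120, 64))
print(candidates.shape)  # torch.Size([100, 1])
```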
A review of deep learning in question answering over knowledge bases
AI Open Pub Date: 2021-01-01 DOI: 10.1016/j.aiopen.2021.12.001
Chen Zhang, Yuxuan Lai, Yansong Feng, Dongyan Zhao
AI Open, Volume 2, Pages 205-215.
Abstract: Question answering over knowledge bases (KBQA) is a challenging task in natural language processing: it requires machines to answer natural language questions based on large-scale knowledge bases. Recent years have witnessed the remarkable success of neural network models on many natural language processing tasks, including KBQA. In this paper, we first review recent advances of deep learning methods for answering simple questions in two streams, the information extraction style and the semantic parsing style. We then introduce how to extend these neural architectures to answer more complex questions with iteration and decomposition techniques, and we summarize current research challenges.
Citations: 5
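A toy sketch of the "information extraction" style for simple questions, with keyword matchers standing in for the neural entity linker and relation classifier a real system would use (our illustration, not from the paper):

```python
# Toy information-extraction-style KBQA for simple questions: detect the topic
# entity, predict the relation the question asks about, then read the answer
# off the knowledge base. The KB, entity list, and relation cues are invented.
KB = {("paris", "capital_of"): "France", ("france", "capital"): "Paris"}
ENTITIES = ["paris", "france"]
RELATION_CUES = {"capital of": "capital_of", "capital": "capital"}

def answer(question: str):
    q = question.lower()
    entity = next((e for e in ENTITIES if e in q), None)
    relation = next((r for cue, r in RELATION_CUES.items() if cue in q), None)
    return KB.get((entity, relation))

print(answer("Which country is Paris the capital of?"))  # France
```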
Towards a universal continuous knowledge base
AI Open Pub Date: 2021-01-01 DOI: 10.1016/j.aiopen.2021.11.001
Gang Chen, Maosong Sun, Yang Liu
AI Open, Volume 2, Pages 197-204.
Abstract: In artificial intelligence (AI), knowledge is the information an intelligent system requires to accomplish tasks. While traditional knowledge bases use discrete, symbolic representations, detecting knowledge encoded in the continuous representations learned from data has recently received increasing attention. In this work, we propose a method for building a continuous knowledge base (CKB) that can store knowledge imported from multiple, diverse neural networks. The key idea of our approach is to define an interface for each neural network and cast knowledge transfer as a function simulation problem. Experiments on text classification show promising results: the CKB imports knowledge from a single model and then exports it to a new model, achieving performance comparable to the original model. More interestingly, we import knowledge from multiple models into the knowledge base, export the fused knowledge back to a single model, and achieve higher accuracy than the original model. With the CKB, it is also easy to achieve knowledge distillation and transfer learning. Our work opens the door to building a universal continuous knowledge base that collects, stores, and organizes all the continuous knowledge encoded in the various neural networks trained for different AI tasks.
Citations: 3
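The "function simulation" idea can be made concrete with a distillation-style loop. The sketch below is our minimal reading of it, not the authors' implementation: a fresh model is trained so that its outputs simulate a teacher network's outputs on shared probe inputs.

```python
# Minimal sketch of knowledge transfer as function simulation (illustrative):
# a student is fitted to reproduce a teacher's input-output function, i.e. the
# teacher's "knowledge" is imported into another parameterization.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):
    x = torch.randn(64, 16)          # probe inputs fed through the interface
    with torch.no_grad():
        target = teacher(x)          # the function to be simulated
    loss = nn.functional.mse_loss(student(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```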
CokeBERT: Contextual knowledge selection and embedding towards enhanced pre-trained language models
AI Open Pub Date: 2021-01-01 DOI: 10.1016/j.aiopen.2021.06.004
Yusheng Su, Xu Han, Zhengyan Zhang, Yankai Lin, Peng Li, Zhiyuan Liu, Jie Zhou, Maosong Sun
AI Open, Volume 2, Pages 127-134.
Abstract: Several recent efforts have been devoted to enhancing pre-trained language models (PLMs) with extra heterogeneous knowledge from knowledge graphs (KGs), achieving consistent improvements on various knowledge-driven NLP tasks. However, most of these knowledge-enhanced PLMs embed static sub-graphs of KGs ("knowledge context"), ignoring that the knowledge a PLM requires may change dynamically with the specific text ("textual context"). In this paper, we propose a novel framework, Coke, that dynamically selects contextual knowledge and embeds the knowledge context according to the textual context, thereby avoiding the effect of redundant and ambiguous knowledge in KGs that does not match the input text. Our experimental results show that Coke outperforms various baselines on typical knowledge-driven NLP tasks, indicating the effectiveness of dynamic knowledge context for language understanding. Beyond the performance improvements, the dynamically selected knowledge in Coke describes the semantics of text-related knowledge in a more interpretable form than conventional PLMs. Our implementation and datasets are publicly available.
Citations: 15
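To illustrate dynamic knowledge selection in the abstract's sense (our illustration, not Coke's actual architecture), the sketch below scores candidate KG-fact embeddings against a sentence embedding and keeps only the top-k facts as the knowledge context.

```python
# Illustrative selection of knowledge context by textual context: candidate
# KG-fact embeddings are scored against the text embedding, and only the
# best-matching facts are kept for injection into the PLM.
import torch

def select_knowledge(text_emb: torch.Tensor, fact_embs: torch.Tensor, k: int = 3):
    # text_emb: (dim,), fact_embs: (num_facts, dim)
    scores = fact_embs @ text_emb               # relevance of each fact to the text
    weights = torch.softmax(scores, dim=0)
    top = weights.topk(k)
    return top.indices, fact_embs[top.indices]  # the dynamic knowledge context

idx, ctx = select_knowledge(torch.randn(64), torch.randn(20, 64))
print(idx)  # indices of the 3 most text-relevant facts
```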
Lawformer: A pre-trained language model for Chinese legal long documents
AI Open Pub Date: 2021-01-01 DOI: 10.1016/j.aiopen.2021.06.003
Chaojun Xiao, Xueyu Hu, Zhiyuan Liu, Cunchao Tu, Maosong Sun
AI Open, Volume 2, Pages 79-84.
Abstract: Legal artificial intelligence (LegalAI) aims to benefit legal systems with artificial intelligence technology, especially natural language processing (NLP). Recently, inspired by the success of pre-trained language models (PLMs) in the generic domain, many LegalAI researchers have devoted their efforts to applying PLMs to legal tasks. However, using PLMs for legal tasks is still challenging, as legal documents usually consist of thousands of tokens, far longer than mainstream PLMs can process. In this paper, we release a Longformer-based pre-trained language model, named Lawformer, for understanding long Chinese legal documents. We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering. The experimental results demonstrate that our model achieves promising improvements on tasks with long documents as inputs. The code and parameters are available at https://github.com/thunlp/LegalPLMs.
Citations: 88
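To encode a long legal document with the released model, a short example follows; it assumes the checkpoint is mirrored on the Hugging Face hub as thunlp/Lawformer (the authoritative release is the GitHub repository above).

```python
# Encoding a Chinese legal document with Lawformer. The hub id
# "thunlp/Lawformer" is an assumption about where the checkpoint is mirrored.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("thunlp/Lawformer")
model = AutoModel.from_pretrained("thunlp/Lawformer")

inputs = tokenizer("任某提起诉讼,请求判令解除婚姻关系。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```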