Latest Articles in AI Open

The road from MLE to EM to VAE: A brief tutorial
AI Open Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2021.10.001
Ming Ding
Abstract: Variational Auto-Encoders (VAEs) have emerged as one of the most popular genres of generative models, which are learned to characterize the data distribution. The classic Expectation Maximization (EM) algorithm aims to learn models with hidden variables. Essentially, both iteratively optimize the evidence lower bound (ELBO) to maximize the likelihood of the observed data. This short tutorial connects them in a single line of reasoning and offers a way to thoroughly understand EM and VAE with minimal prerequisites. It is especially helpful for beginners and for readers with experience in machine learning applications but no statistics background.
(AI Open, vol. 3, pp. 29-34)
Citations: 6
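To make the ELBO relationship described in the abstract concrete, here is a minimal sketch (not taken from the tutorial itself) that verifies, on a toy discrete latent-variable model, that the ELBO lower-bounds the log-evidence for any variational distribution q and is tight when q equals the true posterior:

```python
import math

# Toy discrete latent-variable model: z ∈ {0, 1}, observation x fixed.
# The joint p(x, z) is given as two numbers; evidence p(x) = Σ_z p(x, z).
p_joint = [0.3, 0.1]          # p(x, z=0), p(x, z=1)
log_evidence = math.log(sum(p_joint))

def elbo(q):
    """ELBO = E_q[log p(x, z) - log q(z)] for a discrete q over z."""
    return sum(qz * (math.log(pj) - math.log(qz))
               for qz, pj in zip(q, p_joint) if qz > 0)

# Any q gives a lower bound on log p(x)...
assert elbo([0.5, 0.5]) <= log_evidence
# ...and the bound is tight when q equals the true posterior p(z|x).
posterior = [pj / sum(p_joint) for pj in p_joint]
assert abs(elbo(posterior) - log_evidence) < 1e-12
```

The gap between log p(x) and the ELBO is exactly KL(q(z) ‖ p(z|x)), which is why the E-step of EM (setting q to the posterior) closes the bound.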
Optimized separable convolution: Yet another efficient convolution operator
AI Open Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.10.002
Tao Wei , Yonghong Tian , Yaowei Wang , Yun Liang , Chang Wen Chen
Abstract: The convolution operation is the most critical component in the recent surge of deep learning research. A conventional 2D convolution needs O(C^2 K^2) parameters, where C is the channel size and K is the kernel size. This parameter count has become very costly, as model sizes have grown tremendously to meet the needs of demanding applications. Among the various implementations of convolution, separable convolution has proven more efficient at reducing model size: depthwise separable convolution reduces the complexity to O(C·(C + K^2)), while spatial separable convolution reduces it to O(C^2 K). However, these are ad hoc designs that cannot guarantee optimal separation in general. In this research, we propose a novel and principled operator, optimized separable convolution, which optimally designs the internal number of groups and kernel sizes for general separable convolutions and achieves a complexity of O(C^{3/2} K). When the restriction on the number of separated convolutions is lifted, an even lower complexity of O(C·log(C K^2)) can be achieved. Experimental results demonstrate that the proposed optimized separable convolution achieves improved accuracy-#Params trade-offs over conventional, depthwise separable, and spatial separable convolutions.
(AI Open, vol. 3, pp. 162-171)
Citations: 0
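The complexity formulas quoted in the abstract can be checked with simple parameter arithmetic. The sketch below is illustrative only (the proposed optimized operator itself requires solving for group counts and kernel sizes, which is not shown); it compares parameter counts of the three baseline factorizations for C = 256 channels and K = 3:

```python
def conv2d_params(C, K):
    """Standard 2D conv, C input and C output channels: O(C^2 K^2)."""
    return C * C * K * K

def depthwise_separable_params(C, K):
    """Depthwise K×K conv plus 1×1 pointwise conv: O(C·(K^2 + C))."""
    return C * K * K + C * C

def spatial_separable_params(C, K):
    """A K×1 conv followed by a 1×K conv: O(C^2 K)."""
    return 2 * C * C * K

C, K = 256, 3
print(conv2d_params(C, K))               # 589824
print(depthwise_separable_params(C, K))  # 67840
print(spatial_separable_params(C, K))    # 393216
```

At this scale the depthwise factorization is already ~8.7× smaller than standard convolution, which is the gap the paper's optimal group/kernel design widens further.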
A survey of transformers
AI Open Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.10.001
Tianyang Lin, Yuxin Wang, Xiangyang Liu, Xipeng Qiu
Abstract: Transformers have achieved great success in many artificial intelligence fields, such as natural language processing, computer vision, and audio processing, and have naturally attracted great interest from academic and industry researchers. A great variety of Transformer variants (a.k.a. X-formers) have been proposed; however, a systematic and comprehensive literature review of these variants is still missing. In this survey, we provide a comprehensive review of various X-formers. We first briefly introduce the vanilla Transformer and then propose a new taxonomy of X-formers. Next, we introduce the various X-formers from three perspectives: architectural modification, pre-training, and applications. Finally, we outline some potential directions for future research.
(AI Open, vol. 3, pp. 111-132)
Citations: 431
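For readers new to the area, the core operation of the vanilla Transformer that most surveyed X-formers modify is scaled dot-product attention. A minimal NumPy sketch (shapes are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q·Kᵀ / √d_k) · V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
assert out.shape == (4, 8)
```

The quadratic cost of the `Q @ K.T` step in sequence length is precisely what many efficiency-oriented X-formers in the survey aim to reduce.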
Debiased recommendation with neural stratification
AI Open Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.11.005
Quanyu Dai , Zhenhua Dong , Xu Chen
Abstract: Debiased recommender models have recently attracted increasing attention from the academic and industry communities. Existing models are mostly based on the inverse propensity score (IPS) technique. However, in the recommendation domain, IPS can be hard to estimate given the sparse and noisy nature of the observed user-item exposure data. To alleviate this problem, in this paper we assume that user preference is dominated by a small number of latent factors, and propose to cluster the users so as to compute more accurate IPS via increased exposure densities. This method is similar in spirit to stratification models in applied statistics. However, unlike previous heuristic stratification strategies, we learn the clustering criterion by representing the users with low-rank embeddings, which are further shared with the user representations in the recommender model. Finally, we find that our model has strong connections with the two previous types of debiased recommender models. We conduct extensive experiments on real-world datasets to demonstrate the effectiveness of the proposed method.
(AI Open, vol. 3, pp. 213-217)
Citations: 0
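As background on the IPS technique the paper builds on, here is a hypothetical sketch of stratified propensity estimation. All names and the random data are illustrative, and the strata are assigned at random here, whereas the paper learns them via low-rank user embeddings shared with the recommender:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_strata = 100, 20, 4
exposure = rng.random((n_users, n_items)) < 0.1   # sparse observed exposure
strata = rng.integers(0, n_strata, size=n_users)  # user cluster assignment

# Per-stratum propensity: exposure frequency of each item within the
# stratum, where exposure is denser than at the individual-user level.
propensity = np.zeros((n_strata, n_items))
for s in range(n_strata):
    members = exposure[strata == s]
    if len(members):
        propensity[s] = members.mean(axis=0).clip(min=1e-3)

loss = rng.random((n_users, n_items))    # per-interaction model loss (dummy)
weights = 1.0 / propensity[strata]       # IPS weight for each user-item pair
# IPS-weighted average loss over the observed exposures only.
ips_loss = (loss * weights * exposure).sum() / exposure.sum()
```

Reweighting each observed loss by the inverse of its exposure propensity corrects for the fact that exposed items are not a random sample of all items; stratification trades per-user granularity for lower-variance propensity estimates.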
BCA: Bilinear Convolutional Neural Networks and Attention Networks for legal question answering
AI Open Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.11.002
Haiguang Zhang, Tongyue Zhang, Faxin Cao, Zhizheng Wang, Yuanyu Zhang, Yuanyuan Sun, Mark Anthony Vicente
Abstract: The National Judicial Examination of China is an essential examination for selecting legal practitioners. In recent years, people have tried to use machine learning algorithms to answer examination questions. With the proposal of JEC-QA (Zhong et al., 2020), the judicial examination has become a specific legal task. The examination data contains two types of questions, Knowledge-Driven questions and Case-Analysis questions. Both require complex reasoning and text comprehension, making it challenging for computers to answer judicial examination questions. In this paper we propose Bilinear Convolutional Neural Networks and Attention Networks (BCA), an improved version of the model proposed by our team for the Challenge of AI in Law 2021 judicial examination task. It has two essential modules: a Knowledge-Driven Module (KDM) for local feature extraction and a Case-Analysis Module (CAM) for clarifying the semantic difference between the question stem and the options. We also add a post-processing module to correct the results in the final stage. The experimental results show that our system achieves state-of-the-art performance in the offline test of the judicial examination task.
(AI Open, vol. 3, pp. 172-181)
Citations: 1
Hierarchical label with imbalance and attributed network structure fusion for network embedding
AI Open Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.07.002
Shu Zhao , Jialin Chen , Jie Chen , Yanping Zhang , Jie Tang
Abstract: Network embedding (NE) aims to learn low-dimensional vectors for nodes while preserving the network's essential properties (e.g., attributes and structure). Many methods have been proposed to learn node representations, with encouraging results. Recent research has shown that hierarchical labels have potential value for discovering latent hierarchical structures and learning more effective classification information. Nevertheless, most existing network embedding methods either focus on networks without hierarchical labels, or learn the hierarchical label structure separately from the network structure. Learning node embeddings with hierarchical labels faces two challenges: (1) fusing hierarchical labels and the network is still an arduous task; (2) the data volume imbalance under different hierarchical labels is more pronounced than with flat labels. This paper proposes a Hierarchical Label and Attributed Network Structure Fusion model (HANS), which fuses hierarchical labels and nodes through attributes and an attention-based fusion module. In particular, HANS designs a directed hierarchy structure encoder that models label dependencies in three directions (parent-child, child-parent, and sibling) to strengthen the co-occurrence information between labels of different frequencies and to reduce the impact of label imbalance. Experiments on real-world datasets demonstrate that the proposed method performs significantly better than state-of-the-art algorithms.
(AI Open, vol. 3, pp. 91-100)
Citations: 1
Self-directed machine learning
AI Open Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.06.001
Wenwu Zhu , Xin Wang , Pengtao Xie
Abstract: Conventional machine learning (ML) relies heavily on manual design by machine learning experts to decide learning tasks, data, models, optimization algorithms, and evaluation metrics, which is labor-intensive and time-consuming and cannot learn autonomously like humans. In education science, self-directed learning, where human learners select learning tasks and materials on their own without hands-on guidance, has been shown to be more effective than passive teacher-guided learning. Inspired by the concept of self-directed human learning, we introduce the principal concept of Self-directed Machine Learning (SDML) and propose a framework for it. Specifically, we design SDML as a self-directed learning process guided by self-awareness, including internal awareness and external awareness. Through self-awareness and without human guidance, the SDML process benefits from self task selection, self data selection, self model selection, self optimization-strategy selection, and self evaluation-metric selection. Meanwhile, the learning performance of the SDML process serves as feedback to further improve self-awareness. We propose a mathematical formulation of SDML based on multi-level optimization. Furthermore, we present case studies and potential applications of SDML, followed by a discussion of future research directions. We expect that SDML could enable machines to conduct human-like self-directed learning and provide a new perspective towards artificial general intelligence.
(AI Open, vol. 3, pp. 58-70)
Citations: 3
Deep learning for fake news detection: A comprehensive survey
AI Open Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.09.001
Linmei Hu , Siqi Wei , Ziwang Zhao , Bin Wu
Abstract: The information age enables people to obtain news online through various channels, while also allowing false news to spread at unprecedented speed. Fake news has detrimental effects, as it impairs social stability and public trust, creating an increasing demand for fake news detection (FND). As deep learning (DL) has achieved tremendous success in various domains, it has also been leveraged for FND tasks, surpassing traditional machine-learning-based methods and yielding state-of-the-art performance. In this survey, we present a complete review and analysis of existing DL-based FND methods that focus on various features such as news content, social context, and external knowledge. We review the methods along the lines of supervised, weakly supervised, and unsupervised approaches. For each line, we systematically survey the representative methods utilizing different features. Then, we introduce several commonly used FND datasets and give a quantitative analysis of the performance of DL-based FND methods on these datasets. Finally, we analyze the remaining limitations of current approaches and highlight some promising future directions.
(AI Open, vol. 3, pp. 133-155)
Citations: 25
Human motion modeling with deep learning: A survey
AI Open Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2021.12.002
Zijie Ye, Haozhe Wu, Jia Jia
Abstract: The aim of human motion modeling is to understand human behaviors and generate realistic human motion, like that of real people, given different priors. With the development of deep learning, researchers have tended to leverage data-driven methods to improve on traditional motion modeling methods. In this paper, we present a comprehensive survey of recent human motion modeling research. We discuss three categories of research: human motion prediction, humanoid motion control, and cross-modal motion synthesis, and provide a detailed review of existing methods. Finally, we further discuss the remaining challenges in human motion modeling.
(AI Open, vol. 3, pp. 35-39)
Citations: 8
On the distribution alignment of propagation in graph neural networks
AI Open Pub Date : 2022-01-01 DOI: 10.1016/j.aiopen.2022.11.006
Qinkai Zheng , Xiao Xia , Kun Zhang , Evgeny Kharlamov , Yuxiao Dong
Abstract: Graph neural networks (GNNs) have been widely adopted for modeling graph-structured data. Most existing GNN studies have focused on designing different strategies to propagate information over the graph structure. After systematic investigation, we observe that the propagation step in GNNs matters, but the resulting performance improvement is insensitive to the location where it is applied. Our empirical examination further shows that the performance improvement brought by propagation mostly comes from a phenomenon of distribution alignment: propagation over graphs actually aligns the underlying distributions of the training and test sets. These findings are instrumental to understanding GNNs, e.g., why decoupled GNNs can work as well as standard GNNs.
(AI Open, vol. 3, pp. 218-228)
Citations: 0