International Conference on Neural Information Processing: Latest Publications

Multimodal Neural Network For Demand Forecasting
International Conference on Neural Information Processing | Pub Date: 2022-10-20 | DOI: 10.48550/arXiv.2210.11502
Nitesh Kumar, K. Dheenadayalan, Suprabath Reddy, Sumant Kulkarni

Abstract: Demand forecasting applications have benefited immensely from state-of-the-art deep learning methods for time series forecasting. Traditional uni-modal models are predominantly seasonality driven: they model demand as a function of historical sales along with information on holidays and promotional events. However, accurate and robust sales forecasting calls for accommodating multiple other factors, such as natural calamities, pandemics, and elections, that impact the demand for products and product categories in general. We propose a multi-modal sales forecasting network that combines real-life events from news articles with traditional data such as historical sales and holiday information. Further, we fuse information on general product trends published by Google Trends. Empirical results show statistically significant improvements in the SMAPE error metric, with an average improvement of 7.37% over existing state-of-the-art sales forecasting techniques on a real-world supermarket dataset.

Citations: 2
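The 7.37% gain above is reported in SMAPE, a scale-independent error metric common in demand forecasting. A minimal sketch of the usual symmetric form, in percent (the paper does not say which SMAPE variant it uses, so this is the common textbook definition):

```python
def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent.

    Uses the common definition |F - A| / ((|A| + |F|) / 2), averaged
    over all points; a pair of exact zeros contributes zero error.
    """
    terms = []
    for a, f in zip(actual, forecast):
        denom = (abs(a) + abs(f)) / 2
        terms.append(0.0 if denom == 0 else abs(f - a) / denom)
    return 100.0 * sum(terms) / len(terms)
```

Lower is better; a perfect forecast scores 0.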
Binary Orthogonal Non-negative Matrix Factorization
International Conference on Neural Information Processing | Pub Date: 2022-10-19 | DOI: 10.48550/arXiv.2210.10660
S. F. Hafshejani, D. Gaur, S. Hossain, R. Benkoczi

Abstract: We propose a method for computing a binary orthogonal non-negative matrix factorization (BONMF) for clustering and classification. The method is tested on several representative real-world data sets. The numerical results confirm that the method improves accuracy compared to related techniques. The proposed method is fast for training and classification, and space efficient.

Citations: 0
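In a binary orthogonal factorization, one factor typically acts as a cluster-indicator matrix: each row is one-hot, so distinct columns have disjoint supports and are therefore orthogonal. As an illustrative sketch (not the authors' algorithm), a non-negative factor can be binarized row-wise at its argmax to read off cluster assignments:

```python
def binarize_rows(W):
    """One-hot each row of a non-negative matrix at its argmax.

    The result is binary with one 1 per row, so distinct columns
    have disjoint supports, i.e. they are orthogonal.
    """
    out = []
    for row in W:
        j_max = max(range(len(row)), key=lambda j: row[j])
        out.append([1 if j == j_max else 0 for j in range(len(row))])
    return out
```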
DAGKT: Difficulty and Attempts Boosted Graph-based Knowledge Tracing
International Conference on Neural Information Processing | Pub Date: 2022-10-18 | DOI: 10.48550/arXiv.2210.15470
Rui Luo, Fei-Tsung Liu, Wen-yao Liang, Yuhong Zhang, Chenyang Bu, Xuegang Hu

Abstract: In the field of intelligent education, knowledge tracing (KT) has attracted increasing attention; it estimates and traces students' mastery of knowledge concepts in order to provide high-quality education. In KT, there are natural graph structures among questions and knowledge concepts, so some studies have explored graph neural networks (GNNs) to improve the performance of KT models that do not use graph structure. However, most of them ignore both the questions' difficulties and students' attempts at questions. In reality, questions covering the same knowledge concepts have different difficulties, and students' different attempts likewise reflect different levels of knowledge mastery. In this paper, we propose a difficulty and attempts boosted graph-based KT (DAGKT) that uses the rich information in students' records. Moreover, a novel method inspired by the F1 score is designed to establish the question similarity relationship. Extensive experiments on three real-world datasets demonstrate the effectiveness of the proposed DAGKT.

Citations: 1
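The abstract mentions a question-similarity measure "inspired by the F1 score" without giving details. One natural reading (an assumption on our part, not the paper's formula) is set-overlap F1 between the knowledge-concept sets of two questions, which reduces to the Dice coefficient:

```python
def f1_set_similarity(a, b):
    """F1 of set b treated as a 'prediction' of set a: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))
```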
Explainable Causal Analysis of Mental Health on Social Media Data
International Conference on Neural Information Processing | Pub Date: 2022-10-16 | DOI: 10.48550/arXiv.2210.08430
Chandni Saxena, Muskan Garg, G. Saxena

Abstract: With recent developments in social computing, natural language processing and clinical psychology, the social NLP research community is addressing the challenge of automated mental illness detection on social media. A recent extension to the problem of multi-class classification of mental health issues is identifying the cause behind the user's intention. However, multi-class causal categorization of mental health issues on social media faces a major challenge of wrong predictions due to overlapping causal explanations. There are two possible mitigation strategies for this problem: (i) resolving inconsistency among causal explanations and inappropriate human-annotated inferences in the dataset, and (ii) in-depth analysis of arguments and stances in self-reported text using discourse analysis. In this work, we hypothesise that if there is inconsistency among the F1 scores of different classes, there must be inconsistency among the corresponding causal explanations as well. We fine-tune classifiers and find explanations for multi-class causal categorization of mental illness on social media with the LIME and Integrated Gradients (IG) methods. We test our methods on the CAMS dataset and validate them against annotated interpretations. A key contribution of this work is finding the reason behind the inconsistent accuracy of multi-class causal categorization. The effectiveness of our methods is evident from the results obtained, with category-wise average scores of 81.29% and 0.906 using cosine similarity and word mover's distance, respectively.

Citations: 4
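The validation step above scores agreement between model explanations and annotated interpretations with cosine similarity (alongside word mover's distance). A sketch of the cosine part over sparse token-to-weight maps, the kind of output LIME-style explainers produce (the exact representation used by the authors is an assumption):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two sparse token -> weight mappings."""
    dot = sum(w * v.get(tok, 0.0) for tok, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)
```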
Neural Network Compression by Joint Sparsity Promotion and Redundancy Reduction
International Conference on Neural Information Processing | Pub Date: 2022-10-14 | DOI: 10.48550/arXiv.2210.07451
T. M. Khan, S. Naqvi, A. Robles-Kelly, E. Meijering

Abstract: Compression of convolutional neural network models has recently been dominated by pruning approaches. One class of previous work focuses solely on pruning unimportant filters to achieve network compression; another important direction, explored in isolation, is the design of sparsity-inducing constraints. This paper presents a novel training scheme based on composite constraints that prunes redundant filters and minimizes their effect on overall network learning via sparsity promotion. In contrast to prior works that employ pseudo-norm-based sparsity-inducing constraints, we propose a sparsity scheme based on gradient counting. Tests on several pixel-wise segmentation benchmarks show that the number of neurons and the memory footprint of networks at test time are significantly reduced without affecting performance. Two well-known architectures, MobileNetV3 and UNet, are used to evaluate the proposed scheme. Our compression method not only reduces the parameter count but also improves performance compared to MobileNetV3, which is an already optimized architecture.

Citations: 5
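The "gradient counting" criterion is described only at a high level. A plausible toy version (our assumption, not the paper's scheme) counts, per filter, how often its gradient magnitude exceeds a threshold across training steps, then keeps the most frequently updated filters:

```python
def update_grad_counts(counts, grad_mags, tau=1e-3):
    """Increment a filter's count when its gradient magnitude exceeds tau."""
    return [c + (1 if g > tau else 0) for c, g in zip(counts, grad_mags)]

def prune_mask(counts, keep_ratio=0.5):
    """Binary keep-mask for the top keep_ratio fraction of filters by count."""
    k = max(1, int(len(counts) * keep_ratio))
    kept = set(sorted(range(len(counts)), key=lambda i: -counts[i])[:k])
    return [1 if i in kept else 0 for i in range(len(counts))]
```

Filters that rarely receive meaningful gradient are, under this heuristic, the redundant ones to prune.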
IDPL: Intra-subdomain adaptation adversarial learning segmentation method based on Dynamic Pseudo Labels
International Conference on Neural Information Processing | Pub Date: 2022-10-07 | DOI: 10.48550/arXiv.2210.03435
Xuewei Li, Weilun Zhang, Jie Gao, Xuzhou Fu, Jian Yu

Abstract: Unsupervised domain adaptation (UDA) has been applied to image semantic segmentation to address the problem of domain shift. However, for difficult categories with poor recognition accuracy, segmentation results remain unsatisfactory. To this end, this paper proposes IDPL, an intra-subdomain adaptation adversarial learning segmentation method based on dynamic pseudo labels. The process consists of three steps. First, an instance-level pseudo label dynamic generation module fuses class matching information from global classes and local instances, adaptively generating an optimal threshold for each class and thereby obtaining high-quality pseudo labels. Second, a subdomain classifier module based on instance confidence dynamically divides the target domain into easy and difficult subdomains according to the relative proportion of easy and difficult instances. Finally, a subdomain adversarial learning module based on self-attention uses multi-head self-attention to pit the easy and difficult subdomains against each other at the class level, guided by the generated high-quality pseudo labels. It thereby focuses on mining the features of difficult categories in the high-entropy regions of target domain images, which promotes class-level conditional distribution alignment between the subdomains and improves the segmentation performance of difficult categories. For the difficult categories, experimental results show that IDPL significantly improves performance compared with other recent mainstream methods.

Citations: 1
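The first IDPL step, per-class adaptive thresholds for pseudo-label selection, can be sketched as follows. Using the mean class confidence as the threshold is our simplification; the paper fuses global class-level and local instance-level matching information:

```python
def class_thresholds(conf_by_class):
    """Per-class threshold = mean predicted confidence for that class
    (a simple stand-in for the paper's adaptive fusion scheme)."""
    return {c: sum(v) / len(v) for c, v in conf_by_class.items()}

def select_pseudo_labels(preds, thresholds):
    """Keep (class, confidence) predictions at or above their class threshold."""
    return [(c, p) for c, p in preds if p >= thresholds[c]]
```

High-confidence classes get high bars, so each class contributes only its most reliable pseudo labels.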
Double Attention-based Lightweight Network for Plant Pest Recognition
International Conference on Neural Information Processing | Pub Date: 2022-10-04 | DOI: 10.48550/arXiv.2210.09956
Sivasubramaniam Janarthan, S. Thuseethan, S. Rajasegarar, J. Yearwood

Abstract: Timely recognition of plant pests from field images is important for avoiding losses in crop yield. Traditional convolutional neural network-based deep learning models demand high computational capability and require many labelled samples per pest type for training. Existing lightweight network-based approaches, on the other hand, struggle to classify pests correctly because of common characteristics and high similarity among multiple plant pests. In this work, a novel double attention-based lightweight deep learning architecture is proposed to automatically recognize different plant pests. The lightweight network enables faster training on small data, while the double attention module increases performance by focusing on the most pertinent information. The proposed approach achieves 96.61%, 99.08% and 91.60% on three variants of two publicly available datasets with 5869, 545 and 500 samples, respectively. Moreover, comparison results reveal that the proposed approach consistently outperforms existing approaches on both small and large datasets.

Citations: 0
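The double attention module itself is not specified in the abstract. Generically, an attention block reweights features by softmax-normalized relevance scores and pools them, along these lines (a textbook sketch, not the paper's module):

```python
import math

def attention_pool(features, scores):
    """Weighted average of feature vectors under softmax(scores).

    features: list of equal-length vectors; scores: one relevance score each.
    """
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(features[0])
    return [sum(w * f[d] for w, f in zip(weights, features)) for d in range(dim)]
```

A "double" variant would apply such reweighting along two axes (e.g. spatial and channel), concentrating capacity on the pest-discriminative cues.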
Improving Document Image Understanding with Reinforcement Finetuning
International Conference on Neural Information Processing | Pub Date: 2022-09-26 | DOI: 10.48550/arXiv.2209.12561
Bao-Sinh Nguyen, Dung Tien Le, Hieu M. Vu, Tuan-Anh Dang Nguyen, Minh Le Nguyen, Hung Le

Abstract: Successful artificial intelligence systems often require large amounts of labeled data to extract information from document images. In this paper, we investigate improving the performance of such systems in understanding document images, especially when training data is limited. We address the problem with a novel finetuning method based on reinforcement learning. Our approach treats the information extraction model as a policy network and uses policy gradient training to update the model, maximizing combined reward functions that complement the traditional cross-entropy losses. Experiments on four datasets using labels and expert feedback demonstrate that our finetuning mechanism consistently improves a state-of-the-art information extractor, especially in the small-training-data regime.

Citations: 0
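The policy-gradient recipe above (treat the extractor as a policy, score its outputs with a reward, ascend the reward-weighted log-likelihood) can be illustrated on a toy two-action softmax policy. This is a generic REINFORCE sketch, not the paper's model or reward:

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_step(logits, action, reward, lr=0.1):
    """One REINFORCE update: logits += lr * reward * d log pi(action) / d logits."""
    probs = softmax(logits)
    return [
        z + lr * reward * ((1.0 if i == action else 0.0) - probs[i])
        for i, z in enumerate(logits)
    ]

random.seed(0)
logits = [0.0, 0.0]
for _ in range(200):
    probs = softmax(logits)
    action = 0 if random.random() < probs[0] else 1
    reward = 1.0 if action == 1 else -1.0  # toy reward: action 1 is "correct"
    logits = reinforce_step(logits, action, reward)
```

After training, the policy strongly prefers the rewarded action; in the paper the same gradient drives an information extractor toward outputs that labels or expert feedback reward.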
A Deep Investigation of RNN and Self-attention for the Cyrillic-Traditional Mongolian Bidirectional Conversion
International Conference on Neural Information Processing | Pub Date: 2022-09-24 | DOI: 10.48550/arXiv.2209.11963
Muhan Na, Rui Liu, Feilong, Guanglai Gao

Abstract: Cyrillic and Traditional Mongolian are the two main members of the Mongolian writing system. The Cyrillic-Traditional Mongolian Bidirectional Conversion (CTMBC) task includes two conversion processes: Cyrillic Mongolian to Traditional Mongolian (C2T) and Traditional Mongolian to Cyrillic Mongolian (T2C). Previous researchers adopted the traditional joint sequence model, since CTMBC is a natural sequence-to-sequence (Seq2Seq) modeling problem. Recent studies have shown that Recurrent Neural Network (RNN) and Self-attention (Transformer) based encoder-decoder models yield significant improvements in machine translation between major languages such as Mandarin, English and French. However, it remains an open question whether CTMBC quality can likewise be improved by the RNN and Transformer models. To answer this question, this paper investigates the utility of these two techniques for the CTMBC task, taking into account the agglutinative characteristics of the Mongolian language. We build encoder-decoder based CTMBC models on RNN and Transformer backbones and compare the different network configurations in depth. The experimental results show that both the RNN and Transformer models outperform the traditional joint sequence model, with the Transformer achieving the best performance. Compared with the joint sequence baseline, the word error rate (WER) of the Transformer decreased by 5.72% for C2T and 5.06% for T2C.

Citations: 0
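WER, the metric behind the 5.72% and 5.06% figures, is word-level edit distance divided by reference length. A standard dynamic-programming sketch:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                     # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                     # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1)
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, sub)
    return dp[len(ref)][len(hyp)] / len(ref)
```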
Boosting Star-GANs for Voice Conversion with Contrastive Discriminator
International Conference on Neural Information Processing | Pub Date: 2022-09-21 | DOI: 10.48550/arXiv.2209.10088
Shijing Si, Jianzong Wang, Xulong Zhang, Xiaoyang Qu, Ning Cheng, Jing Xiao

Abstract: Nonparallel multi-domain voice conversion methods such as StarGAN-VC have been widely applied in many scenarios. However, training these models is usually challenging due to their complicated adversarial network architectures. To address this, we leverage state-of-the-art contrastive learning techniques and incorporate an efficient Siamese network structure into the StarGAN discriminator. Our method, called SimSiam-StarGAN-VC, boosts training stability and effectively prevents discriminator overfitting during training. We conduct experiments on the Voice Conversion Challenge (VCC 2018) dataset, plus a user study, to validate the performance of our framework. Experimental results show that SimSiam-StarGAN-VC significantly outperforms existing StarGAN-VC methods on both objective and subjective metrics.

Citations: 1
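SimSiam-style training scores agreement between two branch outputs with negative cosine similarity, with one branch treated as a constant (stop-gradient). A minimal sketch of that loss; how the authors wire it into the StarGAN discriminator is not described in the abstract:

```python
import math

def neg_cosine(p, z):
    """SimSiam loss D(p, z) = -cos(p, z); z plays the stop-gradient branch."""
    dot = sum(a * b for a, b in zip(p, z))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_z = math.sqrt(sum(b * b for b in z))
    return -dot / (norm_p * norm_z)

def simsiam_loss(p1, z2, p2, z1):
    """Symmetrized SimSiam objective: 0.5 * D(p1, z2) + 0.5 * D(p2, z1)."""
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
```

Minimizing this pulls the two augmented views of the same utterance together, which is the regularizing signal credited with stabilizing discriminator training.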