Label-Aware Recurrent Reading for Multi-Label Classification
Sheng Ming, Huajun Liu, Ziming Luo, Peng Huang, Mark Junjie Li
2022 Asia Conference on Algorithms, Computing and Machine Learning (CACML), March 2022
DOI: 10.1109/CACML55074.2022.00091
Citations: 0
Abstract
Multi-label classification (MLC) is an essential branch of natural language processing in which a given instance may be associated with multiple labels. Recently, neural network approaches have exploited dependencies between labels and the instance, achieving state-of-the-art performance. However, existing methods ignore the hidden correlations between each document's semantic information and its labels. In this paper, inspired by the cognitive process of human reading, we propose a Label-Aware Recurrent Reading (LARD) network grounded in neuroscience. LARD models the MLC problem as a decision-making process of recurrent reading and constructs a label-aware document representation according to the top-down mechanism described in neuroscience. The model outputs predictions for all labels after each reading pass, and prediction accuracy improves over the course of recurrent reading. In addition, an attention mechanism dynamically adjusts the weights of words according to the top-down classification predictions, accounting for the different contributions of words to labels. Experiments show that our model outperforms existing models.
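The recurrent-reading loop the abstract describes can be sketched in a few lines: each pass turns the current label predictions into a top-down attention query, re-weights the words with it, pools a label-aware document vector, and predicts all labels again. This is a minimal NumPy sketch, not the authors' implementation; the dimensions, random projection matrices (`W_att`, `W_cls`), and the number of reading passes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical dimensions: 6 words, 16-dim embeddings, 4 labels.
n_words, d, n_labels = 6, 16, 4
words = rng.normal(size=(n_words, d))   # word embeddings of one document
W_att = rng.normal(size=(n_labels, d))  # maps label predictions to an attention query
W_cls = rng.normal(size=(d, n_labels))  # maps document representation to label scores

def recurrent_read(words, n_reads=3):
    """Simplified label-aware recurrent reading.

    Each pass: (1) derive a top-down attention query from the current
    label predictions, (2) re-weight the words by that query, (3) pool a
    label-aware document vector, (4) predict all labels again.
    """
    preds = np.full(n_labels, 0.5)  # uninformative initial predictions
    for _ in range(n_reads):
        query = preds @ W_att       # top-down signal from the predictions
        attn = softmax(words @ query)  # per-word attention weights
        doc = attn @ words          # label-aware document representation
        preds = sigmoid(doc @ W_cls)   # updated prediction for every label
    return preds

probs = recurrent_read(words)
print(probs)  # one probability per label
```

In this toy loop the word weights are recomputed on every pass from the previous predictions, which is the mechanism the abstract attributes to the top-down pathway; the real model would learn these projections end-to-end rather than sample them randomly.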