Feature selection for gene function prediction using multi-labelled lazy learning

Yuhai Liu, Guozheng Li, Hong-yu Zhang, Mary Yang, Jack Y. Yang

Int. J. Funct. Informatics Pers. Medicine, published 2008-11-24. DOI: 10.1504/IJFIPM.2008.021388
In multi-label learning, each instance in the training set is associated with a set of labels, and the task is to output, for each unseen instance, a label set whose size is not known a priori. In this paper, we propose a feature selection method for multi-label learning based on mutual information. Specifically, we use the distribution of mutual information to select features in multi-label problems. Our experiments were conducted on a multi-label lazy learning approach named ML-kNN, which is derived from the traditional k-Nearest Neighbour (kNN) algorithm. Experimental results on a real-world multi-label bioinformatics dataset show that ML-kNN with feature selection greatly outperforms the original ML-kNN algorithm.
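The pipeline the abstract describes can be sketched in two stages: score each feature by its mutual information with the labels, keep the top-scoring ones, then classify unseen instances by a kNN-style vote over the label sets of their nearest neighbours. The sketch below is not the authors' code; the binning scheme, the summed-MI scoring, and the simple majority-vote rule (a simplification of ML-kNN's MAP estimate) are illustrative assumptions.

```python
# Minimal sketch, not the paper's implementation: mutual-information
# feature selection followed by a simplified kNN multi-label predictor.
# ML-kNN proper uses a MAP rule with label priors; a plain majority
# vote is used here to keep the example short.
import numpy as np

def mutual_information(x, y, bins=4):
    """Empirical MI between a quantile-discretised feature column x and a binary label y."""
    cuts = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    x_binned = np.digitize(x, cuts)                  # bin index in 0..bins-1
    joint = np.zeros((bins, 2))
    for xv, yv in zip(x_binned, y):
        joint[xv, yv] += 1
    joint /= joint.sum()                             # joint distribution p(x, y)
    px = joint.sum(axis=1, keepdims=True)            # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)            # marginal p(y)
    nz = joint > 0                                   # avoid log(0) on empty cells
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def select_features(X, Y, n_keep):
    """Score each feature by its MI summed over all labels; keep the top n_keep indices."""
    scores = np.array([
        sum(mutual_information(X[:, j], Y[:, l]) for l in range(Y.shape[1]))
        for j in range(X.shape[1])
    ])
    return np.argsort(scores)[::-1][:n_keep]

def knn_multilabel_predict(X_train, Y_train, x, k=3):
    """Predict the label set of x by majority vote among its k nearest neighbours."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(dists)[:k]
    return (Y_train[nn].mean(axis=0) > 0.5).astype(int)
```

Under this scheme, feature selection runs once over the training set, after which prediction costs the same as plain kNN on the reduced feature matrix.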