Learning feature weights for similarity using genetic algorithms
N. Ishii, Yong Wang
Proceedings. IEEE International Joint Symposia on Intelligence and Systems (Cat. No.98EX174), 21 May 1998. DOI: 10.1109/IJSIS.1998.685412
This paper presents a GA-based method for learning the feature weights of a similarity function from similarity information. Such information comes in two kinds: qualitative similarity information, which gives the similarity between individual cases, and relative similarity information, which compares the similarities of two case pairs that share a common case. Genetic algorithms are applied to learn feature weights from this information, and the proposed algorithms handle both linear and nonlinear similarity functions. Experiments show that the learned weights remain accurate even when the given similarity information contains errors.
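To illustrate the general idea, the following is a minimal sketch of learning feature weights with a genetic algorithm from relative similarity information. It is not the authors' algorithm: the linear weighted similarity function, the GA operators (elitist truncation selection, one-point crossover, Gaussian mutation), and all function names and parameters are assumptions chosen for clarity. Each constraint `(a, b, c)` encodes one piece of relative similarity information: case `a` should be more similar to `b` than to `c`.

```python
import random

def weighted_similarity(w, x, y):
    # Linear weighted similarity (assumed form): the negated weighted
    # city-block distance, so larger values mean "more similar".
    return -sum(wi * abs(xi - yi) for wi, xi, yi in zip(w, x, y))

def fitness(w, cases, constraints):
    # Fitness = fraction of relative-similarity constraints satisfied.
    # A constraint (a, b, c) is satisfied when case a is judged more
    # similar to case b than to case c under the weights w.
    ok = sum(
        1
        for a, b, c in constraints
        if weighted_similarity(w, cases[a], cases[b])
        > weighted_similarity(w, cases[a], cases[c])
    )
    return ok / len(constraints)

def learn_weights(cases, constraints, n_features,
                  pop_size=30, generations=60, rng=None):
    # Simple elitist GA over weight vectors in [0, 1]^n_features.
    # Population size, generation count, and mutation scale are
    # illustrative choices, not values from the paper.
    rng = rng or random.Random(0)
    pop = [[rng.random() for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, cases, constraints), reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(n_features)       # mutate one gene
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda w: fitness(w, cases, constraints))
```

Because fitness only counts satisfied constraints, a few erroneous constraints lower the achievable maximum but do not derail the search, which is consistent with the paper's observation that learning tolerates errors in the given similarity information. A nonlinear similarity function could be substituted into `weighted_similarity` without changing the GA loop.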