Valid interpretation of feature relevance for linear data mappings
Benoît Frénay, Daniela Hofmann, Alexander Schulz, Michael Biehl, B. Hammer
{"title":"线性数据映射中特征相关性的有效解释","authors":"Benoît Frénay, Daniela Hofmann, Alexander Schulz, Michael Biehl, B. Hammer","doi":"10.1109/CIDM.2014.7008661","DOIUrl":null,"url":null,"abstract":"Linear data transformations constitute essential operations in various machine learning algorithms, ranging from linear regression up to adaptive metric transformation. Often, linear scalings are not only used to improve the model accuracy, rather feature coefficients as provided by the mapping are interpreted as an indicator for the relevance of the feature for the task at hand. This principle, however, can be misleading in particular for high-dimensional or correlated features, since it easily marks irrelevant features as relevant or vice versa. In this contribution, we propose a mathematical formalisation of the minimum and maximum feature relevance for a given linear transformation which can efficiently be solved by means of linear programming. We evaluate the method in several benchmarks, where it becomes apparent that the minimum and maximum relevance closely resembles what is often referred to as weak and strong relevance of the features; hence unlike the mere scaling provided by the linear mapping, it ensures valid interpretability.","PeriodicalId":117542,"journal":{"name":"2014 IEEE Symposium on Computational Intelligence and Data Mining (CIDM)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Valid interpretation of feature relevance for linear data mappings\",\"authors\":\"Benoît Frénay, Daniela Hofmann, Alexander Schulz, Michael Biehl, B. Hammer\",\"doi\":\"10.1109/CIDM.2014.7008661\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Linear data transformations constitute essential operations in various machine learning algorithms, ranging from linear regression up to adaptive metric transformation. Often, linear scalings are not only used to improve the model accuracy, rather feature coefficients as provided by the mapping are interpreted as an indicator for the relevance of the feature for the task at hand. This principle, however, can be misleading in particular for high-dimensional or correlated features, since it easily marks irrelevant features as relevant or vice versa. In this contribution, we propose a mathematical formalisation of the minimum and maximum feature relevance for a given linear transformation which can efficiently be solved by means of linear programming. 
We evaluate the method in several benchmarks, where it becomes apparent that the minimum and maximum relevance closely resembles what is often referred to as weak and strong relevance of the features; hence unlike the mere scaling provided by the linear mapping, it ensures valid interpretability.\",\"PeriodicalId\":117542,\"journal\":{\"name\":\"2014 IEEE Symposium on Computational Intelligence and Data Mining (CIDM)\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE Symposium on Computational Intelligence and Data Mining (CIDM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CIDM.2014.7008661\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE Symposium on Computational Intelligence and Data Mining (CIDM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIDM.2014.7008661","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Linear data transformations are essential operations in many machine learning algorithms, ranging from linear regression up to adaptive metric transformation. Often, linear scalings are used not only to improve model accuracy; the feature coefficients provided by the mapping are also interpreted as indicators of the relevance of each feature for the task at hand. This interpretation can be misleading, however, in particular for high-dimensional or correlated features, since it easily marks irrelevant features as relevant, or vice versa. In this contribution, we propose a mathematical formalisation of the minimum and maximum feature relevance for a given linear transformation, which can be solved efficiently by means of linear programming. We evaluate the method on several benchmarks, where it becomes apparent that the minimum and maximum relevance closely resemble what is often referred to as weak and strong relevance of the features; hence, unlike the raw scaling provided by the linear mapping, it ensures valid interpretability.
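The abstract's central computational claim, that the minimum and maximum relevance a feature can take under a fixed linear mapping can be bounded by linear programs, can be illustrated with a small sketch. The Python snippet below is not the authors' implementation; it is a minimal illustration under assumptions: the set of weight vectors considered equivalent is taken to be those that reproduce the original mapping on the data matrix and whose L1 norm stays within a budget, and a chosen coefficient is bounded over that polyhedron with scipy.optimize.linprog. The function name relevance_bounds and the slack parameter are hypothetical, not the paper's notation.

```python
# Minimal sketch (assumed formulation, not the paper's reference code):
# for a linear model with weights w0 on data X, weight vectors w with
# X w = X w0 realise the same mapping on the data; when features are
# correlated (X rank-deficient), this set is non-trivial, and the
# smallest/largest attainable |w_j| can be found by linear programming.
import numpy as np
from scipy.optimize import linprog


def relevance_bounds(X, w0, j, slack=0.0):
    """Min and max attainable |w_j| over all w with X w = X w0 and
    ||w||_1 <= (1 + slack) * ||w0||_1, via linear programs."""
    n, d = X.shape
    budget = (1.0 + slack) * np.abs(w0).sum()

    # Variables z = [w (d entries), u (d entries)] with u_i >= |w_i|.
    A_eq = np.hstack([X, np.zeros((n, d))])                  # X w = X w0
    b_eq = X @ w0
    I = np.eye(d)
    A_ub = np.vstack([
        np.hstack([I, -I]),                                  #  w - u <= 0
        np.hstack([-I, -I]),                                 # -w - u <= 0
        np.hstack([np.zeros((1, d)), np.ones((1, d))]),      # sum(u) <= budget
    ])
    b_ub = np.concatenate([np.zeros(2 * d), [budget]])
    bounds = [(None, None)] * d + [(0, None)] * d            # w free, u >= 0

    def solve(c):
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        assert res.success, res.message
        return res.fun

    # Minimum relevance: minimise u_j; at the optimum u_j = |w_j|.
    c_min = np.zeros(2 * d)
    c_min[d + j] = 1.0
    lo = solve(c_min)

    # Maximum relevance: max |w_j| = max(max w_j, max -w_j), i.e. two LPs.
    c = np.zeros(2 * d)
    c[j] = 1.0
    hi = max(-solve(-c), -solve(c))
    return lo, hi


# Toy example with two perfectly correlated features: the fitted weight
# can be shifted freely between them, so each one alone is dispensable.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 1))
X = np.hstack([x, x, rng.normal(size=(50, 1))])  # features 0 and 1 identical
w0 = np.array([0.5, 0.5, 1.0])
print(relevance_bounds(X, w0, j=0))              # approx. (0.0, 1.0)
```

In the toy run, feature 0 has minimum relevance 0 (its weight can be moved entirely onto the identical feature 1) but maximum relevance 1 (it can carry the whole shared weight), which mirrors the weak/strong relevance distinction the abstract refers to: a coefficient of 0.5 in the fitted mapping, taken at face value, would reveal neither fact.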