Knowledge-Aware Interpretable Recommender Systems
V. W. Anelli, Vito Bellini, T. D. Noia, E. Sciascio
In: Knowledge Graphs for eXplainable Artificial Intelligence. DOI: 10.3233/SSW200014
Recommender systems are everywhere, from e-commerce to streaming platforms. They help users lost in the maze of available information, items, and services to find their way. Over the years, approaches based on machine learning techniques have shown particularly good performance for top-N recommendation engines. Unfortunately, they mostly behave as black boxes and, even when they embed some form of description of the items to recommend, after the training phase they move such descriptions into a latent space, thus losing the explicit semantics of the recommended items. As a consequence, system designers struggle to provide satisfying explanations for the recommendation lists presented to end users. In this chapter, we describe two approaches to recommendation that make use of the semantics encoded in a knowledge graph to train interpretable models which keep the original semantics of the item descriptions, thus providing a powerful tool to automatically compute explainable results. The two methods rely on two completely different machine learning algorithms, namely factorization machines and autoencoder neural networks. We also show how to measure the interpretability of the models through the introduction of two metrics: semantic accuracy and robustness.
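As a rough illustration of the idea behind the first approach, the sketch below shows a factorization machine scored over features that retain their knowledge-graph meaning (user, item, and item properties). It is a minimal, generic FM, not the chapter's implementation; the feature encoding, the `explain` helper, and all names are assumptions made for the example.

```python
import numpy as np

# Minimal factorization-machine scorer over knowledge-graph-derived features.
# Feature names such as "user:42" or "dbo:director=Nolan" are illustrative
# placeholders, not the encoding used in the chapter.

class FactorizationMachine:
    def __init__(self, n_features, n_factors=10, seed=0):
        rng = np.random.default_rng(seed)
        self.w0 = 0.0                                            # global bias
        self.w = np.zeros(n_features)                            # per-feature linear weights
        self.V = rng.normal(0, 0.01, (n_features, n_factors))    # latent factors

    def predict(self, x):
        # x: binary/real feature vector (user id, item id, KG properties).
        linear = self.w0 + self.w @ x
        # Pairwise interactions in O(k*n):
        # 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]
        xv = x @ self.V
        pairwise = 0.5 * np.sum(xv ** 2 - (x ** 2) @ (self.V ** 2))
        return linear + pairwise

    def explain(self, x, feature_names, top_k=3):
        # Because KG features keep their original semantics, the largest
        # linear contributions of active features can be read off as
        # candidate explanations for the predicted score.
        contrib = self.w * x
        idx = np.argsort(-np.abs(contrib))[:top_k]
        return [(feature_names[i], float(contrib[i])) for i in idx if x[i] != 0]
```

The point of the sketch is only that, if each input dimension corresponds to an explicit knowledge-graph statement rather than a latent dimension, the learned weights remain attached to human-readable facts, which is what makes explanation extraction possible; the autoencoder-based approach in the chapter pursues the same goal with a different model and is not shown here.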