{"title":"基于强化学习的信任建立模型","authors":"Abdullah Aref, T. Tran","doi":"10.1109/Trustcom.2015.436","DOIUrl":null,"url":null,"abstract":"Trust is a complex, multifaceted concept that includes more than just evaluating others' honesty. Many trust evaluation models have been proposed and implemented in different areas, most of them focused on creating algorithms for trusters to model the honesty of trustees in order to make effective decisions about which trustees to select, where a rational truster is supposed to interact with the trustworthy ones. If interactions are based on trust, trustworthy trustees will have a greater impact on the results of interactions' results. Consequently, building a high trust may be an advantage for rational trustees. This work describes a Reinforcement Learning based Trust Establishment model (RLTE) that goes beyond trust evaluation to outline actions to direct trustees (instead of trusters). RLTE uses the retention of trusters and reinforcement learning to model trustors' behaviors. A trustee uses reinforcement learning to adjust the utility gain it provides when interacting with each truster. The trustee depends on the average number of transactions carried out by that truster, relative to the mean number of transactions performed by all trusters interacting with this trustee. The trustee accelerates or decelerates the adjustment of the utility gain based on the increase or decrease of the average retention rate of all trusters in the society, respectively. The proposed model does not depend on direct feedback, nor does it depend on the current reputation of trustees in the environment. Simulation results indicate that trustees empowered with the proposed model can be selected more by trusters.","PeriodicalId":277092,"journal":{"name":"2015 IEEE Trustcom/BigDataSE/ISPA","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"RLTE: A Reinforcement Learning Based Trust Establishment Model\",\"authors\":\"Abdullah Aref, T. Tran\",\"doi\":\"10.1109/Trustcom.2015.436\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Trust is a complex, multifaceted concept that includes more than just evaluating others' honesty. Many trust evaluation models have been proposed and implemented in different areas, most of them focused on creating algorithms for trusters to model the honesty of trustees in order to make effective decisions about which trustees to select, where a rational truster is supposed to interact with the trustworthy ones. If interactions are based on trust, trustworthy trustees will have a greater impact on the results of interactions' results. Consequently, building a high trust may be an advantage for rational trustees. This work describes a Reinforcement Learning based Trust Establishment model (RLTE) that goes beyond trust evaluation to outline actions to direct trustees (instead of trusters). RLTE uses the retention of trusters and reinforcement learning to model trustors' behaviors. A trustee uses reinforcement learning to adjust the utility gain it provides when interacting with each truster. The trustee depends on the average number of transactions carried out by that truster, relative to the mean number of transactions performed by all trusters interacting with this trustee. 
The trustee accelerates or decelerates the adjustment of the utility gain based on the increase or decrease of the average retention rate of all trusters in the society, respectively. The proposed model does not depend on direct feedback, nor does it depend on the current reputation of trustees in the environment. Simulation results indicate that trustees empowered with the proposed model can be selected more by trusters.\",\"PeriodicalId\":277092,\"journal\":{\"name\":\"2015 IEEE Trustcom/BigDataSE/ISPA\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE Trustcom/BigDataSE/ISPA\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/Trustcom.2015.436\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE Trustcom/BigDataSE/ISPA","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/Trustcom.2015.436","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
RLTE: A Reinforcement Learning Based Trust Establishment Model
Trust is a complex, multifaceted concept that involves more than just evaluating others' honesty. Many trust evaluation models have been proposed and implemented in different areas. Most of them focus on creating algorithms that let trusters model the honesty of trustees, so that trusters can make effective decisions about which trustees to select, on the assumption that a rational truster interacts with the trustworthy ones. If interactions are based on trust, trustworthy trustees will have a greater impact on the outcomes of those interactions. Consequently, building high trust can be an advantage for rational trustees. This work describes a Reinforcement Learning based Trust Establishment model (RLTE) that goes beyond trust evaluation to prescribe actions for trustees (rather than trusters). RLTE uses truster retention and reinforcement learning to model trusters' behavior. A trustee uses reinforcement learning to adjust the utility gain it provides when interacting with each truster. The adjustment depends on the average number of transactions carried out by that truster, relative to the mean number of transactions performed by all trusters interacting with this trustee. The trustee accelerates or decelerates the adjustment of the utility gain as the average retention rate of all trusters in the society increases or decreases, respectively. The proposed model depends neither on direct feedback nor on the current reputation of trustees in the environment. Simulation results indicate that trustees empowered with the proposed model are selected more often by trusters.
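The adjustment rule described in the abstract can be summarized in a short sketch. The Python below is an illustrative reconstruction, not the authors' implementation: the class name RLTETrustee, the parameters base_rate, accel and decel, the sign convention of the update, and the clamping of the gain to [0, 1] are all assumptions. Only the overall structure follows the abstract: a per-truster utility-gain adjustment driven by that truster's transaction count relative to the mean, with the step size accelerated or decelerated according to the change in the society-wide average retention rate.

```python
# Illustrative sketch of the RLTE adjustment rule (assumed names and update form).

class RLTETrustee:
    def __init__(self, base_rate=0.1, accel=1.5, decel=0.5,
                 min_gain=0.0, max_gain=1.0):
        self.base_rate = base_rate   # base step size for the RL-style update (assumed)
        self.accel = accel           # multiplier when average retention rises (assumed)
        self.decel = decel           # multiplier when average retention falls (assumed)
        self.min_gain, self.max_gain = min_gain, max_gain
        self.gain = {}               # utility gain currently offered to each truster
        self.tx_count = {}           # transactions completed per truster
        self.prev_avg_retention = None

    def record_transaction(self, truster_id):
        """Record one completed transaction with a truster."""
        self.tx_count[truster_id] = self.tx_count.get(truster_id, 0) + 1

    def adjust_gain(self, truster_id, avg_retention):
        """Adjust the utility gain offered to one truster.

        The signal compares this truster's transaction count with the mean over
        all trusters; the step size grows or shrinks depending on whether the
        society-wide average retention rate rose or fell since the last call.
        The direction of the update (more transactions -> higher gain) is an
        assumption; the paper only states that the adjustment depends on the
        relative transaction count.
        """
        counts = self.tx_count.values()
        mean_tx = sum(counts) / len(counts) if counts else 0.0
        own_tx = self.tx_count.get(truster_id, 0)
        # Positive if this truster transacts more than average, negative otherwise.
        signal = (own_tx - mean_tx) / (mean_tx + 1e-9)

        rate = self.base_rate
        if self.prev_avg_retention is not None:
            rate *= self.accel if avg_retention > self.prev_avg_retention else self.decel
        self.prev_avg_retention = avg_retention

        g = self.gain.get(truster_id, 0.5)
        g = min(self.max_gain, max(self.min_gain, g + rate * signal))
        self.gain[truster_id] = g
        return g


if __name__ == "__main__":
    trustee = RLTETrustee()
    trustee.record_transaction("truster_a")
    trustee.record_transaction("truster_a")
    trustee.record_transaction("truster_b")
    print(trustee.adjust_gain("truster_a", avg_retention=0.6))
    print(trustee.adjust_gain("truster_a", avg_retention=0.7))  # rising retention -> larger step
```

Note that, consistent with the abstract, the sketch uses no direct feedback from trusters and no reputation values; the only observable signals are transaction counts and the aggregate retention rate.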