Evaluating Human-Artificial Agent Decision Congruence in a Coordinated Action Task
Gaurav Patil, Phillip Bagala, Patrick Nalepka, Rachel W. Kallen, Michael J. Richardson
Proceedings of the 10th International Conference on Human-Agent Interaction, 2022-12-05. DOI: 10.1145/3527188.3563923
Recommender systems designed to augment human decision-making in multi-agent tasks need to recommend actions that not only align with the task goal but also maintain coordinative behaviors between agents. Further, if these systems are to be used for skill training, they need to impart implicit learning to their users. This work compared a recommender system trained using deep reinforcement learning to a heuristic-based system in recommending actions to human participants teaming with an artificial agent during a collaborative problem-solving task. In addition to evaluating task performance and learning, we also evaluate the extent to which human actions are congruent with the recommended actions.
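The abstract does not specify how decision congruence is computed; a minimal sketch, assuming congruence is simply the proportion of decisions in which the human's chosen action matches the system's recommendation, might look like the following (the function and variable names are illustrative, not taken from the paper):

```python
# Hypothetical sketch: decision congruence as the fraction of decisions
# in which the human's action matched the recommended action.
# Assumes actions are comparable discrete labels; not the authors' actual metric.

def decision_congruence(human_actions, recommended_actions):
    """Return the proportion of decisions where the human followed the recommendation."""
    if len(human_actions) != len(recommended_actions):
        raise ValueError("Action sequences must be the same length")
    if not human_actions:
        return 0.0
    matches = sum(h == r for h, r in zip(human_actions, recommended_actions))
    return matches / len(human_actions)

# Example: congruence over four discrete action choices (3 of 4 match -> 0.75)
print(decision_congruence(["A", "B", "B", "C"], ["A", "B", "C", "C"]))
```

Such a proportion-based measure could then be compared between the deep-reinforcement-learning-trained and heuristic-based recommenders, alongside task performance and learning outcomes.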