{"title":"Learning a Robot's Social Obligations from Comparisons of Observed Behavior","authors":"Colin Shea-Blymyer, Houssam Abbas","doi":"10.1109/ARSO51874.2021.9542846","DOIUrl":null,"url":null,"abstract":"We study the problem of learning a formal representation of a robot's social obligations from a human population's preferences. Rigorous system design requires a logical formalization of a robot's desired behavior, including the social obligations that constrain its actions. The preferences of the society hosting these robots are a natural source of these obligations. Thus we ask: how can we turn a popu-lation's preferences concerning robot behavior into a logico-mathematical specification that we can use to design the robot's controllers? We use non-deterministic weighted automata to model a robot's behavioral algorithms, and we use the deontic logic of Dominance Act Utilitarianism (DAU) to model the robot's social and ethical obligations. Given a set of automaton executions, and pair-wise comparisons between the executions, we develop simple algorithms to infer the automaton's weights, and compare them to existing methods; these weights are then turned into logical obligation formulas in DAU. We bound the sensitivity of the inferred weights to changes in the comparisons. 
We evaluate empirically the degree to which the obligations inferred from these various methods differ from each other.","PeriodicalId":156296,"journal":{"name":"2021 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Advanced Robotics and Its Social Impacts (ARSO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ARSO51874.2021.9542846","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
We study the problem of learning a formal representation of a robot's social obligations from a human population's preferences. Rigorous system design requires a logical formalization of a robot's desired behavior, including the social obligations that constrain its actions. The preferences of the society hosting these robots are a natural source of these obligations. Thus we ask: how can we turn a population's preferences concerning robot behavior into a logico-mathematical specification that we can use to design the robot's controllers? We use non-deterministic weighted automata to model a robot's behavioral algorithms, and we use the deontic logic of Dominance Act Utilitarianism (DAU) to model the robot's social and ethical obligations. Given a set of automaton executions, and pairwise comparisons between the executions, we develop simple algorithms to infer the automaton's weights, and compare them to existing methods; these weights are then turned into logical obligation formulas in DAU. We bound the sensitivity of the inferred weights to changes in the comparisons. We evaluate empirically the degree to which the obligations inferred from these various methods differ from each other.
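To make the weight-inference setting concrete, here is a minimal sketch of one simple way to recover per-transition weights from pairwise comparisons of executions. This is an illustration only, not the paper's algorithm: it assumes each execution is a sequence of transition indices, that an execution's score is the sum of its transition weights, and it fits the weights by margin-based least squares over the comparison constraints. The function name `infer_weights` and the fixed `margin` are hypothetical choices for this sketch.

```python
import numpy as np

def infer_weights(executions, comparisons, n_transitions, margin=1.0):
    """Hypothetical weight inference from pairwise preferences.

    executions: list of executions, each a list of transition indices.
    comparisons: list of (i, j) pairs meaning execution i is preferred
                 to execution j (should receive a higher total weight).
    Returns a weight vector of length n_transitions.
    """
    # Feature vector of an execution: how many times each transition occurs.
    counts = np.zeros((len(executions), n_transitions))
    for k, ex in enumerate(executions):
        for t in ex:
            counts[k, t] += 1.0

    # Each comparison (i, j) asks for w . counts[i] >= w . counts[j] + margin.
    # Encode the difference rows and solve in the least-squares sense.
    A = np.array([counts[i] - counts[j] for i, j in comparisons])
    b = np.full(len(comparisons), margin)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Usage: two transitions; runs using transition 0 are preferred, so its
# inferred weight should come out higher than transition 1's.
w = infer_weights(
    executions=[[0], [1], [0, 0]],
    comparisons=[(0, 1), (2, 0)],  # run 0 beats run 1; run 2 beats run 0
    n_transitions=2,
)
```

In this toy instance, the least-squares solution ranks transition 0 above transition 1, so every stated preference is reproduced by the recovered scores. A real treatment would also handle non-determinism in the automaton and noisy or inconsistent comparisons, which this sketch ignores.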