{"title":"[self.]: an Interactive Art Installation that Embodies Artificial Intelligence and Creativity: A Demonstration","authors":"A. Tidemann, Øyvind Brandtsegg","doi":"10.1145/2757226.2767691","DOIUrl":null,"url":null,"abstract":"This demonstration paper describes [self.], an open source art installation that embodies artificial intelligence in order to learn, react, respond and be creative in its environment. Biologically inspired models are implemented to achieve this behaviour. The robot is built using a moving head, projector, camera and microphones. No form of knowledge or grammar have been implemented in the AI, the entity learns everything via its own sensory channels, forming categories in a bottom-up fashion. The robot recognizes sounds, and is able to recognize similar sounds, link them with the corresponding faces, and use the knowledge of past experiences to form new sentences. It projects neural memories that represent an association between sound and video as experienced during interaction.","PeriodicalId":231794,"journal":{"name":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2015-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2757226.2767691","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
This demonstration paper describes [self.], an open source art installation that embodies artificial intelligence in order to learn, react, respond, and be creative in its environment. Biologically inspired models are implemented to achieve this behaviour. The robot is built from a moving head, a projector, a camera, and microphones. No form of knowledge or grammar has been implemented in the AI; the entity learns everything via its own sensory channels, forming categories in a bottom-up fashion. The robot recognizes sounds, groups similar sounds together, links them with the corresponding faces, and uses the knowledge of past experiences to form new sentences. It projects neural memories that represent an association between sound and video as experienced during interaction.
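The bottom-up category formation described above can be illustrated with a minimal sketch. The paper does not specify its learning algorithm (it uses biologically inspired models), so the incremental nearest-centroid clustering below, including the `categorize` function and the distance threshold, is purely a hypothetical illustration of how categories can emerge from sensory feature vectors without any predefined knowledge:

```python
import math

def categorize(features, threshold=1.0):
    """Bottom-up category formation: each feature vector joins the
    nearest existing category if it is close enough, otherwise it
    founds a new category. No categories are defined in advance;
    they emerge from the stream of sensory input."""
    categories = []  # each category: {"centroid": [...], "members": [...]}
    for f in features:
        best, best_dist = None, float("inf")
        for cat in categories:
            d = math.dist(cat["centroid"], f)  # Euclidean distance
            if d < best_dist:
                best, best_dist = cat, d
        if best is not None and best_dist <= threshold:
            # Close enough: absorb into the category and update its centroid.
            best["members"].append(f)
            n = len(best["members"])
            best["centroid"] = [
                sum(m[i] for m in best["members"]) / n
                for i in range(len(f))
            ]
        else:
            # Too far from everything seen so far: a new category is born.
            categories.append({"centroid": list(f), "members": [f]})
    return categories
```

In the installation, the same principle would apply to richer inputs (audio features linked with face images); here two well-separated clusters of 2-D points yield two categories.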