Personality trait estimation in group discussions using multimodal analysis and speaker embedding
Candy Olivia Mawalim, S. Okada, Y. Nakano, M. Unoki
Journal on Multimodal User Interfaces, 26(2), pp. 1-17. Published 2023-02-08.
DOI: 10.1007/s12193-023-00401-0
Journal Article. JCR Q3 (Computer Science, Artificial Intelligence), Impact Factor 2.2.
About the journal:
The Journal on Multimodal User Interfaces publishes work on the design, implementation and evaluation of multimodal interfaces. Research on multimodal interaction is inherently multidisciplinary, drawing on several fields including signal processing, human-machine interaction, computer science, cognitive science and ergonomics. The journal focuses on multimodal interfaces involving advanced modalities, the combination and fusion of several modalities, user-centric design, usability and architectural considerations. Use cases and descriptions of specific application areas are welcome, including, for example, e-learning, assistance, serious games, affective and social computing, and interaction with avatars and robots.