B. R. Barricelli, D. Fogli, Letizia Iemmolo, A. Locoro
Proceedings of the 2022 International Conference on Advanced Visual Interfaces
Published: 2022-06-06
DOI: 10.1145/3531073.3531168
Citations: 8
A Multi-Modal Approach to Creating Routines for Smart Speakers
Smart speakers can execute user-defined routines, namely, sequences of actions triggered by specific events or conditions. This paper presents a new approach to the creation of routines, which leverages the multi-modal features (vision, speech, and touch) offered by Amazon Alexa running on Echo Show devices. It then illustrates how end users found it easier to create routines with the proposed approach than with the usual interaction through the Alexa app.