{"title":"Managing Cold-Start Issues in Music Recommendation Systems: An Approach Based on User Experience","authors":"W. Assunção, R. Prates, L. Zaina","doi":"10.1145/3596454.3597180","DOIUrl":"https://doi.org/10.1145/3596454.3597180","url":null,"abstract":"Music recommendation systems have been widely used to suggest songs to users based on their listening history or interests. Traditionally, most recommender systems have focused on prediction accuracy without considering user experience (UX) in generating recommendations. In addition, there is also the problem of cold-start, which is when the system has new users and not enough data is available about them. This study presents a new approach for music recommendation based on user experience that explores the cold-start problem. We implemented our approach in a mobile application and evaluated the system’s communicability using the Intermediate Semiotic Inspection Method (ISIM). As a result, we identified three categories relevant to music recommendation systems: novelty in recommendations, continuous updates, and users’ interest in rating. In addition, we checked each participant’s understanding of the tool, which was generally very close to the intended proposal.","PeriodicalId":227076,"journal":{"name":"Companion Proceedings of the 2023 ACM SIGCHI Symposium on Engineering Interactive Computing Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123902967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Engineering User Interfaces with Beat Gestures","authors":"Maxime André, Anthony Bayet, Tobias Jetzen, Pierre Luycx, Maxime Cauz, Bruno Dumas","doi":"10.1145/3596454.3597187","DOIUrl":"https://doi.org/10.1145/3596454.3597187","url":null,"abstract":"Beat gestures are biphasic up-and-down or back-and-forth movements of hand(s) that are associated with a specific meaning, such as in speech, or without, such as in rhythmic commands. Incorporating beat gesture recognition into user interface engineering involves dynamic recognition of hand pose, identification of movement direction, and calculation of beat number and frequency. We demonstrate a game that uses beat gestures for musical rhythm learning. We aim to understand the impact of real-time embodiment and guidance visualizations synthesizing user hands and gestures and considering the distance between virtual and real worlds.","PeriodicalId":227076,"journal":{"name":"Companion Proceedings of the 2023 ACM SIGCHI Symposium on Engineering Interactive Computing Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129258964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtuality or Physicality? Supporting Memorization Through Augmented Reality Gamification","authors":"Yuchong Zhang, Adam Nowak, A. Romanowski, M. Fjeld","doi":"10.1145/3596454.3597183","DOIUrl":"https://doi.org/10.1145/3596454.3597183","url":null,"abstract":"Augmented reality (AR) is evolving to become a pervasive tool for interacting with virtual objects. We conducted a comparative study to explore the impact of virtuality and physicality in supporting human memorization through gamification. A head-mounted display (HMD) AR memory matching game and a corresponding physical version game with paper boards were harnessed. The proof-of-concept version was demonstrated in an initial user study (n=12) with counterbalancing design to determine that our proposed gamified HMD AR system with virtuality could support better human memorization compared to the physical version game in reducing task time, improving usability, becoming more recommendable, and decreasing cognitive task workload. The study was then followed by quantitative analysis of the respective four metrics: game completion time (GCT), system usability scale (SUS), recommendation level, and NASA task load index (TLX). A brief qualitative analysis is presented. The results show that in our case, the virtuality outperformed the physicality in supporting human memorization in a gamified context through HMD AR in an evident range.","PeriodicalId":227076,"journal":{"name":"Companion Proceedings of the 2023 ACM SIGCHI Symposium on Engineering Interactive Computing Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130257455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"2nd Workshop on Engineering Interactive Computing Systems for People with Disabilities","authors":"K. Oliveira, P. Forbrig, Isabelle Pecci","doi":"10.1145/3596454.3597194","DOIUrl":"https://doi.org/10.1145/3596454.3597194","url":null,"abstract":"The advances in the area of interactive systems are unquestionable. New multi-modal, multi-user, multi-device/screen interaction and interaction techniques, new development methods and processes to improve the development of interactive systems, and so on, have been widely proposed by the community. Using these approaches in the development of interactive systems for people with disabilities can be challenging and requires adapting, customizing, evolving and even defining new approaches. This is even more evident when advocating user-centered design. This second edition of this workshop aims to present and discuss the design, development, implementation, verification and validation of interactive systems for users with disabilities, whether permanent (visual, hearing, mobility impairments,...), evolutive (in the case of degenerative diseases such as Alzheimer and Parkinson) or temporary (situationally impaired people) both in the case of classical systems and in the context of ubiquitous and IoT applications.","PeriodicalId":227076,"journal":{"name":"Companion Proceedings of the 2023 ACM SIGCHI Symposium on Engineering Interactive Computing Systems","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121329457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crafting Interactive Experiences with Non-programmers","authors":"C. Greenhalgh","doi":"10.1145/3596454.3597174","DOIUrl":"https://doi.org/10.1145/3596454.3597174","url":null,"abstract":"The Mixed Reality Lab has a long history of creating public interactive experiences in collaboration with creative practitioners. Looking across four such experiences, this keynote explores the role of code (i.e., bespoke software) in making them possible, the practicalities of non-programmers “authoring” key parts of the experience, the relationship between coding and knowledge production, and the changing nature of technical responsibilities. As well as being personally and inherently satisfying, the practical realization of novel interactive systems manifests new creative “materials”, which open the door to new experiences and understandings of people and the world.","PeriodicalId":227076,"journal":{"name":"Companion Proceedings of the 2023 ACM SIGCHI Symposium on Engineering Interactive Computing Systems","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116636763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ethical Design for Wellbeing and Affective Health","authors":"C. Sas","doi":"10.1145/3596454.3597175","DOIUrl":"https://doi.org/10.1145/3596454.3597175","url":null,"abstract":"Emotional wellbeing and mental health are topics of much social significance, which are also reflected in the growing HCI work aimed to support them. Research in this area covers a broad space from affective computing to affective interaction approach, and the ethical design of wellbeing and mental health technologies has become much needed. This talk will provide design exemplars of technologies for wellbeing and mental health, with an emphasis on the importance of supporting emotional awareness and regulation. The talk will also highlight the value of existing research for articulating novel design implications for ethical wellbeing and mental health technologies.","PeriodicalId":227076,"journal":{"name":"Companion Proceedings of the 2023 ACM SIGCHI Symposium on Engineering Interactive Computing Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121892914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A User Preference and Intent Extraction Framework for Explainable Conversational Recommender Systems","authors":"Jieun Park, Sangyeon Kim, Sangwon Lee","doi":"10.1145/3596454.3597178","DOIUrl":"https://doi.org/10.1145/3596454.3597178","url":null,"abstract":"Conversational recommender systems (CRS) communicate with a user through natural language understanding to support the user finding necessary information. While the importance of user information extraction from a dialog is growing, previous systems rely on named-entity recognition to find out user preference based on deep learning methods. However, there is still scope for such recognition modules to perform better in terms of accuracy and richness of the elicited user preference information. Also, extracting user information solely depending on entities mentioned in user utterances might ignore contextual semantics. Besides, black-box recommender systems are widely used in previous CRSs whereas such methods undermine transparency and interpretability of recommended results. To alleviate these problems, we propose a novel framework to extract user preference and user intent and apply it to a recommender system. User preference is extracted from sets of an item feature entity detected by our item feature entity detection module and an estimated rating about each entity. Utilizing graph representation of user utterances, user intent is also elicited to consider the contextual semantic of each element word. Based on both outcomes, we implement recommendation by candidate selection and ranking, then provide explanation of the recommendation result to enhance interpretability and manipulability of the system. We illustrate how our framework works in practice by a sample conversation. Experiments present improvement and effectiveness of user information elicitation in recommendation.","PeriodicalId":227076,"journal":{"name":"Companion Proceedings of the 2023 ACM SIGCHI Symposium on Engineering Interactive Computing Systems","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131780064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive GUI Layout by Satisfying Fuzzy Constraints","authors":"T. Yanagida, J. Vanderdonckt, Nicolas Burny","doi":"10.1145/3596454.3597190","DOIUrl":"https://doi.org/10.1145/3596454.3597190","url":null,"abstract":"A graphical user interface is equipped with an adaptive layout when it holds the ability to dynamically adjust its layout and structure of widgets based on conditions that evolve over time coming from the end user, the platform used, and the environment. More specifically, such a layout automatically adapts itself depending on the window dimensions of the application, the screen resolution, and the screen size, thus posing a series of constraints that, sometimes, could not be entirely satisfied together at once. To cope with this variation, we formulate the adaptive GUI layout as a fuzzy constraint satisfaction problem in which the solver attempts to satisfy the most important constraints first, such as an appropriate widget selection, then the least important constraints after.","PeriodicalId":227076,"journal":{"name":"Companion Proceedings of the 2023 ACM SIGCHI Symposium on Engineering Interactive Computing Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134491244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Listen Veronica! Can You Give Me a Hand With This Bug?","authors":"J. P. Sáenz, Luigi De Russis","doi":"10.1145/3596454.3597179","DOIUrl":"https://doi.org/10.1145/3596454.3597179","url":null,"abstract":"Developing software implies looking for documentation, following tutorials, making implementation decisions, encountering errors, and overcoming them. Behind each aspect is the developer’s reasoning that, if not collected, is lost after the implementation. Conversely, if captured and linked to the code, the developers’ reasoning and motivations for each step they accomplish can become a valuable asset, meaningful for them and other developers. Looking for a mechanism to capture such knowledge seamlessly, we present Veronica. It is a conversational agent integrated directly into Visual Studio Code that, based on the developers’ self-explanatory reasoning, records memos and links them with the code they are writing. Furthermore, Veronica can interact with the web browser to automatically gather the sources consulted by the developer and attach them to the code. We validated our approach by conducting a usability study with eight participants that positively assessed the tool’s usefulness and suggested improvements in the graphical interface.","PeriodicalId":227076,"journal":{"name":"Companion Proceedings of the 2023 ACM SIGCHI Symposium on Engineering Interactive Computing Systems","volume":"497 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123505271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards A Visual Programming Tool to Create Deep Learning Models","authors":"Tommaso Calò, Luigi De Russis","doi":"10.1145/3596454.3597181","DOIUrl":"https://doi.org/10.1145/3596454.3597181","url":null,"abstract":"Deep Learning (DL) developers come from different backgrounds, e.g., medicine, genomics, finance, and computer science. To create a DL model, they must learn and use high-level programming languages (e.g., Python), thus needing to handle related setups and solve programming errors. This paper presents DeepBlocks, a visual programming tool that allows DL developers to design, train, and evaluate models without relying on specific programming languages. DeepBlocks works by building on the typical model structure: a sequence of learnable functions whose arrangement defines the specific characteristics of the model. We derived DeepBlocks’ design goals from a 5-participants formative interview, and we validated the first implementation of the tool through a typical use case. Results are promising and show that developers could visually design complex DL architectures.","PeriodicalId":227076,"journal":{"name":"Companion Proceedings of the 2023 ACM SIGCHI Symposium on Engineering Interactive Computing Systems","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126790519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}