Sustaining the Usefulness and Appeal of an Older Adult-led Makerspace through Developing and Adapting Resources
Ruipu Hu, Alisha Pradhan, Elizabeth Bonsignore, Amanda Lazar
Proceedings of the ACM on Human-Computer Interaction, vol. 8, pp. 1-29. Published 2024-01-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11404555/pdf/
Abstract: Researchers are studying makerspaces as one way to support older adults in learning about and using new technologies and tools. In this paper, through a long-term (34-month) ethnographic approach, we study the ways that older adults arranged sociotechnical resources to sustain the community use of a makerspace. Our analysis identifies three interconnected resources that were developed: an adaptive staffing approach that could withstand constant personnel shifts and shortages; structured activities to draw interest and overcome challenges associated with learning to use the machines; and reference materials to support individuals in independent usage of the space. We describe the issues that arose over time with each of these resource types, and how individuals affiliated with the makerspace adapted the resources to address them. In the discussion, we extend best practices by reflecting on strategies that worked well in the makerspace, such as drawing interest through introductory classes, as well as different purposes for reference materials to support technology use.

Hum-ble Beginnings: Developing Touch- and Proximity-Input-Based Interfaces for Zoo-Housed Giraffes’ Audio Enrichment
Alana Grant, Vilma Kankaanpää, Ilyena Hirskyj-Douglas
Proceedings of the ACM on Human-Computer Interaction. DOI: 10.1145/3626470. Published 2023-10-31.
Abstract: Though computer systems have entered widespread use for animals' enrichment in zoos, no interactive computer systems suited to giraffes have yet been developed. Hence, which input modes or audio stimuli giraffes might best utilise remains unknown. To address this issue and to develop such systems alongside the animals themselves and their keepers, researchers gathered requirements from zookeepers and from prototyping with giraffes, then created two interfaces, one touch-based and one proximity-based, that play giraffe-humming audio or white noise when activated. Over two months of observation, the giraffes used the proximity-based system more frequently than the touch-based one, but in shorter episodes. The study also highlighted the significance of considering user-specific needs in developing computer systems: the lack of preference for any specific audio type indicates that the audio stimuli chosen were inappropriate for these giraffes. In addition, the paper articulates several lessons that can be drawn from human-computer interaction when developing systems for animals and, in turn, what the findings mean for humans.

Cross-Domain Gesture Sequence Recognition for Two-Player Exergames using COTS mmWave Radar
Ahsan Jamal Akbar, Zhiyao Sheng, Qian Zhang, Dong Wang
Proceedings of the ACM on Human-Computer Interaction. DOI: 10.1145/3626477. Published 2023-10-31.
Abstract: Wireless-based gesture recognition provides an effective input method for exergames. However, previous wireless-based gesture recognition systems mainly recognize one primary user's gestures. In the multi-player scenario, the mutual interference between users makes it difficult to predict multiple players' gestures individually. To address this challenge, we propose a flexible FMCW-radar-based system, RFDual, which enables real-time cross-domain gesture sequence recognition for two players. To eliminate the mutual interference between users, we extract a new feature type, the biased range-velocity spectrum (BRVS), which depends only on a target user. We then propose customized preprocessing methods (cropping and stationary component removal) to produce environment-independent and position-independent inputs. To enhance RFDual's resistance to unseen users and articulation speeds, we design effective data augmentation methods: sequence concatenating and randomizing. RFDual is evaluated with a dataset containing only unseen gesture sequences and achieves a gesture error rate of 1.41%. Extensive experimental results show the robustness of RFDual for data in new domains, including new users, articulation speeds, positions, and environments. These results demonstrate the potential of RFDual in practical applications such as two-player exergames and gesture/activity recognition for drivers and passengers in a vehicle cabin.

Reality and Beyond: Proxemics as a Lens for Designing Handheld Collaborative Augmented Reality
Mille Skovhus Lunding, Jens Emil Sloth Grønbæk, Nicolai Grymer, Thomas Wells, Steven Houben, Marianne Graves Petersen
Proceedings of the ACM on Human-Computer Interaction. DOI: 10.1145/3626463. Published 2023-10-31.
Abstract: Augmented Reality (AR) has shown great potential for supporting co-located collaboration. Yet, it is rarely articulated in the design rationales of AR systems that they promote a certain socio-spatial configuration of the users. Learning from proxemics, we argue that such configurations enable and constrain different co-located spatial behaviors, with consequences for collaborative activities. We focus specifically on enabling different collaboration styles via the design of Handheld Collaborative Augmented Reality (HCAR) systems. Drawing upon notions of proxemics, we show how different HCAR designs enable different socio-spatial configurations. Through a design exploration, we demonstrate interaction techniques that expand on the notion of collaborative coupling styles by deliberately designing either to align with physical reality or to go beyond it. The main contributions are a proxemics-based conceptual lens and vocabulary for supporting interaction designers in being mindful of the proxemic consequences when developing handheld multi-user AR systems.

SeatmateVR: Proxemic Cues for Close Bystander-Awareness in Virtual Reality
Jingyi Li, Hyerim Park, Robin Welsch, Sven Mayer, Andreas Butz
Proceedings of the ACM on Human-Computer Interaction. DOI: 10.1145/3626474. Published 2023-10-31.
Abstract: Prior research explored ways to alert virtual reality users of bystanders entering the play area from afar. However, in confined social settings like sharing a couch with seatmates, bystanders' proxemic cues, such as distance, are limited during interruptions, posing challenges for proxemic-aware systems. To address this, we investigated three visualizations, using a 2D animoji, a fully-rendered avatar, and their combination, to gradually share bystanders' orientation and location during interruptions. In a user study (N=22), participants played virtual reality games while responding to questions from their seatmates. We found that the avatar preserved game experiences yet did not support the fast identification of seatmates as the animoji did. Instead, users preferred the mixed visualization, where they found the seatmate's orientation cues instantly in their view and were gradually guided to the person's actual location. We discuss implications for fine-grained proxemic-aware virtual reality systems to support interaction in constrained social spaces.

{"title":"1D-Touch: NLP-Assisted Coarse Text Selection via a Semi-Direct Gesture","authors":"Jiang, Peiling, Feng, Li, Sun, Fuling, Sarkar, Parakrant, Xia, Haijun, Liu, Can","doi":"10.1145/3626483","DOIUrl":"https://doi.org/10.1145/3626483","url":null,"abstract":"Existing text selection techniques on touchscreen focus on improving the control for moving the carets. Coarse-grained text selection on word and phrase levels has not received much support beyond word-snapping and entity recognition. We introduce 1D-Touch, a novel text selection method that complements the carets-based sub-word selection by facilitating the selection of semantic units of words and above. This method employs a simple vertical slide gesture to expand and contract a selection area from a word. The expansion can be by words or by semantic chunks ranging from sub-phrases to sentences. This technique shifts the concept of text selection, from defining a range by locating the first and last words, towards a dynamic process of expanding and contracting a textual semantic entity. To understand the effects of our approach, we prototyped and tested two variants: WordTouch, which offers a straightforward word-by-word expansion, and ChunkTouch, which leverages NLP to chunk text into syntactic units, allowing the selection to grow by semantically meaningful units in response to the sliding gesture. Our evaluation, focused on the coarse-grained selection tasks handled by 1D-Touch, shows a 20% improvement over the default word-snapping selection method on Android.","PeriodicalId":36902,"journal":{"name":"Proceedings of the ACM on Human-Computer Interaction","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135808583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WorldPoint: Finger Pointing as a Rapid and Natural Trigger for In-the-Wild Mobile Interactions","authors":"Daehwa Kim, Vimal Mollyn, Chris Harrison","doi":"10.1145/3626478","DOIUrl":"https://doi.org/10.1145/3626478","url":null,"abstract":"Pointing with one's finger is a natural and rapid way to denote an area or object of interest. It is routinely used in human-human interaction to increase both the speed and accuracy of communication, but it is rarely utilized in human-computer interactions. In this work, we use the recent inclusion of wide-angle, rear-facing smartphone cameras, along with hardware-accelerated machine learning, to enable real-time, infrastructure-free, finger-pointing interactions on today's mobile phones. We envision users raising their hands to point in front of their phones as a \"wake gesture\". This can then be coupled with a voice command to trigger advanced functionality. For example, while composing an email, a user can point at a document on a table and say \"attach\". Our interaction technique requires no navigation away from the current app and is both faster and more privacy-preserving than the current method of taking a photo.","PeriodicalId":36902,"journal":{"name":"Proceedings of the ACM on Human-Computer Interaction","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135928368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the Applicability of GUI-Based Steering Laws to VR Car Driving: A Case of Curved Constrained Paths","authors":"Shota Yamanaka, Takumi Takaku, Yukina Funazaki, Noboru Seto, Satoshi Nakamura","doi":"10.1145/3626466","DOIUrl":"https://doi.org/10.1145/3626466","url":null,"abstract":"Evaluating the validity of an existing user performance model in a variety of tasks is important for enhancing its applicability. The model studied in this work is the steering law for predicting the speed and time needed to perform tasks in which a cursor or a car passes through a constrained path. Previous HCI studies have refined this model to take additional path factors into account, but its applicability has only been evaluated in GUI-based environments such as those using mice or pen tablets. Accordingly, we conducted a user experiment with a driving simulator to measure the speed and time on curved roads and thus facilitate evaluation of models for pen-based path-steering tasks. The results showed that the best-fit models for speed and time had adjusted r^2 values of 0.9342 and 0.9723, respectively, for three road widths and eight curvature radii. While the models required some adjustments, the overall components of the tested models were consistent with those in previous pen-based experimental results. Our results demonstrated that user experiments to validate potential models based on pen-based tasks are effective as a pilot approach for driving tasks with more complex road conditions.","PeriodicalId":36902,"journal":{"name":"Proceedings of the ACM on Human-Computer Interaction","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135929883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D Finger Rotation Estimation from Fingerprint Images
Yongjie Duan, Jinyang Yu, Jianjiang Feng, Ke He, Jiwen Lu, Jie Zhou
Proceedings of the ACM on Human-Computer Interaction. DOI: 10.1145/3626467. Published 2023-10-31.
Abstract: Various touch-based interaction techniques have been developed to make interactions on mobile devices more effective, efficient, and intuitive. Finger orientation, in particular, has attracted much attention, since it intuitively adds three degrees of freedom (DOF) to two-dimensional (2D) touch points. The mapping of finger orientation can be either absolute or relative, with each suited to different interaction applications; however, only absolute orientation has been explored in prior work. Relative angles can be computed from two estimated absolute orientations, but higher accuracy can be expected by predicting relative rotation directly from the input images. Consequently, in this paper we propose to estimate complete 3D relative finger angles from two fingerprint images, which carry more information at a higher image resolution than capacitive images. For training and evaluation, we constructed a dataset of fingerprint image pairs with ground-truth 3D relative finger rotation angles. Experimental results on this dataset show that our method outperforms previous approaches based on absolute finger angle models. Further, extensive experiments explore the impact of image resolution, finger type, and rotation range on performance. A user study also examines the efficiency and precision of 3D relative finger orientation in a 3D object rotation task.

{"title":"SurfaceCast: Ubiquitous, Cross-Device Surface Sharing","authors":"Florian Echtler, Vitus Maierhöfer, Nicolai Brodersen Hansen, Raphael Wimmer","doi":"10.1145/3626475","DOIUrl":"https://doi.org/10.1145/3626475","url":null,"abstract":"Real-time online interaction is the norm today. Tabletops and other dedicated interactive surface devices with direct input and tangible interaction can enhance remote collaboration, and open up new interaction scenarios based on mixed physical/virtual components. However, they are only available to a small subset of users, as they usually require identical bespoke hardware for every participant, are complex to setup, and need custom scenario-specific applications. We present SurfaceCast, a software toolkit designed to merge multiple distributed, heterogeneous end-user devices into a single, shared mixed-reality surface. Supported devices include regular desktop and laptop computers, tablets, and mixed-reality headsets, as well as projector-camera setups and dedicated interactive tabletop systems. This device-agnostic approach provides a fundamental building block for exploration of a far wider range of usage scenarios than previously feasible, including future clients using our provided API. In this paper, we discuss the software architecture of SurfaceCast, present a formative user study and a quantitative performance analysis of our framework, and introduce five example application scenarios which we enhance through the multi-user and multi-device features of the framework. Our results show that the hardware- and content-agnostic architecture of SurfaceCast can run on a wide variety of devices with sufficient performance and fidelity for real-time interaction.","PeriodicalId":36902,"journal":{"name":"Proceedings of the ACM on Human-Computer Interaction","volume":"158 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135928213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}