{"title":"IUI 2022: 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, March 22 - 25, 2022","authors":"","doi":"10.1145/3490099","DOIUrl":"https://doi.org/10.1145/3490099","url":null,"abstract":"","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78970867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Employing Social Media to Improve Mental Health: Pitfalls, Lessons Learned, and the Next Frontier","authors":"M. Choudhury","doi":"10.1145/3490099.3519389","DOIUrl":"https://doi.org/10.1145/3490099.3519389","url":null,"abstract":"","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":"348 ","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91448305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IUI '21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021","authors":"","doi":"10.1145/3397481","DOIUrl":"https://doi.org/10.1145/3397481","url":null,"abstract":"","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81904467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Making Videos Accessible for Low Vision Screen Magnifier Users.","authors":"Ali Selman Aydin, Shirin Feiz, Vikas Ashok, I V Ramakrishnan","doi":"10.1145/3377325.3377494","DOIUrl":"10.1145/3377325.3377494","url":null,"abstract":"<p><p>People with low vision who use screen magnifiers to interact with computing devices find it very challenging to interact with dynamically changing digital content such as videos, since they do not have the luxury of time to manually move, i.e., pan the magnifier lens to different regions of interest (ROIs) or zoom into these ROIs before the content changes across frames. In this paper, we present SViM, a first of its kind screen-magnifier interface for such users that leverages advances in computer vision, particularly video saliency models, to identify salient ROIs in videos. SViM's interface allows users to zoom in/out of any point of interest, switch between ROIs via mouse clicks and provides assistive panning with the added flexibility that lets the user explore other regions of the video besides the ROIs identified by SViM. Subjective and objective evaluation of a user study with 13 low vision screen magnifier users revealed that overall the participants had a better user experience with SViM over extant screen magnifiers, indicative of the former's promise and potential for making videos accessible to low vision screen magnifier users.</p>","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":"2020 ","pages":"10-21"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7871698/pdf/nihms-1666230.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25358571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SaIL: Saliency-Driven Injection of ARIA Landmarks.","authors":"Ali Selman Aydin, Shirin Feiz, Vikas Ashok, I V Ramakrishnan","doi":"10.1145/3377325.3377540","DOIUrl":"https://doi.org/10.1145/3377325.3377540","url":null,"abstract":"<p><p>Navigating webpages with screen readers is a challenge even with recent improvements in screen reader technologies and the increased adoption of web standards for accessibility, namely ARIA. ARIA landmarks, an important aspect of ARIA, lets screen reader users access different sections of the webpage quickly, by enabling them to skip over blocks of irrelevant or redundant content. However, these landmarks are sporadically and inconsistently used by web developers, and in many cases, even absent in numerous web pages. Therefore, we propose SaIL, a scalable approach that automatically detects the important sections of a web page, and then injects ARIA landmarks into the corresponding HTML markup to facilitate quick access to these sections. The central concept underlying SaIL is visual saliency, which is determined using a state-of-the-art deep learning model that was trained on gaze-tracking data collected from sighted users in the context of web browsing. We present the findings of a pilot study that demonstrated the potential of SaIL in reducing both the time and effort spent in navigating webpages with screen readers.</p>","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":"2020 ","pages":"111-115"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3377325.3377540","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25368054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scene Text Access: A Comparison of Mobile OCR Modalities for Blind Users.","authors":"Leo Neat, Ren Peng, Siyang Qin, Roberto Manduchi","doi":"10.1145/3301275.3302271","DOIUrl":"https://doi.org/10.1145/3301275.3302271","url":null,"abstract":"<p><p>We present a study with seven blind participants using three different mobile OCR apps to find text posted in various indoor environments. The first app considered was Microsoft SeeingAI in its Short Text mode, which reads any text in sight with a minimalistic interface. The second app was Spot+OCR, a custom application that separates the task of text detection from OCR proper. Upon detection of text in the image, Spot+OCR generates a short vibration; as soon as the user stabilizes the phone, a high-resolution snapshot is taken and OCR-processed. The third app, Guided OCR, was designed to guide the user in taking several pictures in a 360° span at the maximum resolution available by the camera, with minimum overlap between pictures. Quantitative results (in terms of true positive ratios and traversal speed) were recorded. Along with the qualitative observation and outcomes from an exit survey, these results allow us to identify and assess the different strategies used by our participants, as well as the challenges of operating these systems without sight.</p>","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":"2019 ","pages":"197-207"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3301275.3302271","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41223152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a Generalizable Method for Detecting Fluid Intake with Wrist-Mounted Sensors and Adaptive Segmentation.","authors":"Keum San Chun, Ashley B Sanders, Rebecca Adaimi, Necole Streeper, David E Conroy, Edison Thomaz","doi":"10.1145/3301275.3302315","DOIUrl":"10.1145/3301275.3302315","url":null,"abstract":"<p><p>Over the last decade, advances in mobile technologies have enabled the development of intelligent systems that attempt to recognize and model a variety of health-related human behaviors. While automated dietary monitoring based on passive sensors has been an area of increasing research activity for many years, much less attention has been given to tracking fluid intake. In this work, we apply an adaptive segmentation technique on a continuous stream of inertial data captured with a practical, off-the-shelf wrist-mounted device to detect fluid intake gestures passively. We evaluated our approach in a study with 30 participants where 561 drinking instances were recorded. Using a leave-one-participant-out (LOPO), we were able to detect drinking episodes with 90.3% precision and 91.0% recall, demonstrating the generalizability of our approach. In addition to our proposed method, we also contribute an anonymized and labeled dataset of drinking and non-drinking gestures to encourage further work in the field.</p>","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":"2019 ","pages":"80-85"},"PeriodicalIF":0.0,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1145/3301275.3302315","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37194335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Providing Adaptive and Personalized Visual Support based on Behavioral Tracking of Children with Autism for Assessing Reciprocity and Coordination Skills in a Joint Attention Training Application","authors":"T. Tang, Pinata Winoto","doi":"10.1145/3180308.3180349","DOIUrl":"https://doi.org/10.1145/3180308.3180349","url":null,"abstract":"Recent works have demonstrated the applicability of the activity and behavioral pattern analysis mechanisms to assist therapists, care-givers and individuals with development disorders including those with autism spectrum disorder (ASD); the computational cost and sophistication of such behavioral modeling systems might prevent them from deploying. As such, in this paper, we proposed an easily deployable automatic system to train joint attention (JA) skills, assess the frequency and degree of reciprocity and provide visual cues accordingly. Our proposed approach is different from most of earlier attempts in that we do not capitalize the sophisticated feature-space construction methodology; instead, the simple design, in-game automatic data collection for adaptive visual supports offers hassle-free benefits especially for low-functioning ASD individuals and those with severe verbal impairments.","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":"7 1","pages":"40:1-40:2"},"PeriodicalIF":0.0,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80103481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Configurable and Contextually Expandable Interactive Picture Exchange Communication System (PECS) for Chinese Children with Autism","authors":"T. Tang, Pinata Winoto","doi":"10.1145/3180308.3180348","DOIUrl":"https://doi.org/10.1145/3180308.3180348","url":null,"abstract":"The electronic versions of PECS (picture exchange communication system) have been introduced to non-verbal children with autism spectrum disorder (ASD) in the past decade. In this paper, we discuss some related issues and propose the design of more versatile electronic PECS (ePECS) as a comprehensive language training tool.","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":"69 1","pages":"39:1-39:2"},"PeriodicalIF":0.0,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84258282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can We Predict the Scenic Beauty of Locations from Geo-tagged Flickr Images?","authors":"Ch. Md. Rakin Haider, Mohammed Eunus Ali","doi":"10.1145/3172944.3173000","DOIUrl":"https://doi.org/10.1145/3172944.3173000","url":null,"abstract":"In this work, we propose a novel technique to determine the aesthetic score of a location from social metadata of Flickr photos. In particular, we built machine learning classifiers to predict the class of a location where each class corresponds to a set of locations having equal aesthetic rating. These models are trained on two empirically build datasets containing locations in two different cities (Rome and Paris) where aesthetic ratings of locations were gathered from TripAdvisor.com. In this work we exploit the idea that in a location with higher aesthetic rating, it is more likely for an user to capture a photo and other users are more likely to interact with that photo. Our models achieved as high as 79.48% accuracy (78.60% precision and 79.27% recall) on Rome dataset and 73.78% accuracy(75.62% precision and 78.07% recall) on Paris dataset. The proposed technique can facilitate urban planning, tour planning and recommending aesthetically pleasing paths.","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":"32 1","pages":"653-657"},"PeriodicalIF":0.0,"publicationDate":"2018-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81330229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}