IUI. International Conference on Intelligent User Interfaces: Latest Publications

IUI 2022: 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, March 22-25, 2022
IUI. International Conference on Intelligent User Interfaces Pub Date: 2022-01-01 DOI: 10.1145/3490099
Citations: 0
Employing Social Media to Improve Mental Health: Pitfalls, Lessons Learned, and the Next Frontier
IUI. International Conference on Intelligent User Interfaces Pub Date: 2022-01-01 DOI: 10.1145/3490099.3519389
M. Choudhury
Citations: 0
IUI '21: 26th International Conference on Intelligent User Interfaces, College Station, TX, USA, April 13-17, 2021
IUI. International Conference on Intelligent User Interfaces Pub Date: 2021-01-01 DOI: 10.1145/3397481
Citations: 0
Towards Making Videos Accessible for Low Vision Screen Magnifier Users
IUI. International Conference on Intelligent User Interfaces Pub Date: 2020-03-01 DOI: 10.1145/3377325.3377494
Ali Selman Aydin, Shirin Feiz, Vikas Ashok, I V Ramakrishnan
People with low vision who use screen magnifiers find it very challenging to interact with dynamically changing digital content such as videos, since they do not have the time to manually pan the magnifier lens to different regions of interest (ROIs), or to zoom into these ROIs, before the content changes across frames. In this paper, we present SViM, a first-of-its-kind screen-magnifier interface for such users that leverages advances in computer vision, particularly video saliency models, to identify salient ROIs in videos. SViM's interface allows users to zoom in and out of any point of interest and to switch between ROIs via mouse clicks, and it provides assistive panning with the added flexibility of letting the user explore regions of the video beyond the ROIs identified by SViM. Subjective and objective evaluation in a user study with 13 low vision screen magnifier users revealed that participants had a better overall user experience with SViM than with extant screen magnifiers, indicative of its promise for making videos accessible to low vision screen magnifier users.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7871698/pdf/nihms-1666230.pdf
Citations: 0
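
The core mechanism described in the abstract above (panning a magnifier viewport to the most salient region of each frame) can be illustrated with a minimal Python sketch. The saliency map is assumed to come from a pretrained video-saliency model; the function name and zoom parameterization below are illustrative stand-ins, not SViM's actual API.

```python
import numpy as np

def magnifier_viewport(saliency, frame_w, frame_h, zoom=3.0):
    """Center a magnifier viewport on the most salient point of a frame.

    saliency: 2D array of per-pixel scores from a video-saliency model.
    Returns (x, y, w, h) of the zoomed region, clamped to frame bounds.
    """
    # The most salient pixel becomes the viewport center.
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Scale saliency-map coordinates up to the frame resolution.
    cx = int(cx * frame_w / saliency.shape[1])
    cy = int(cy * frame_h / saliency.shape[0])
    # At zoom z, the visible region is 1/z of the frame in each dimension.
    w, h = int(frame_w / zoom), int(frame_h / zoom)
    x = min(max(cx - w // 2, 0), frame_w - w)
    y = min(max(cy - h // 2, 0), frame_h - h)
    return x, y, w, h

# Toy 64x64 saliency map with a single hot spot.
sal = np.zeros((64, 64))
sal[40, 20] = 1.0
print(magnifier_viewport(sal, 1280, 720, zoom=4.0))  # -> (240, 360, 320, 180)
```

Recomputing this per frame (or per scene) gives the assistive panning behavior the abstract describes; the user's manual pan and zoom inputs would simply override the computed viewport.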
SaIL: Saliency-Driven Injection of ARIA Landmarks
IUI. International Conference on Intelligent User Interfaces Pub Date: 2020-03-01 DOI: 10.1145/3377325.3377540
Ali Selman Aydin, Shirin Feiz, Vikas Ashok, I V Ramakrishnan
Navigating webpages with screen readers remains a challenge even with recent improvements in screen reader technologies and the increased adoption of web accessibility standards, namely ARIA. ARIA landmarks, an important aspect of ARIA, let screen reader users access different sections of a webpage quickly by enabling them to skip over blocks of irrelevant or redundant content. However, these landmarks are used only sporadically and inconsistently by web developers, and they are absent from numerous web pages. We therefore propose SaIL, a scalable approach that automatically detects the important sections of a web page and then injects ARIA landmarks into the corresponding HTML markup to facilitate quick access to those sections. The central concept underlying SaIL is visual saliency, which is determined using a state-of-the-art deep learning model trained on gaze-tracking data collected from sighted users in the context of web browsing. We present the findings of a pilot study that demonstrated SaIL's potential to reduce both the time and effort spent navigating webpages with screen readers.
Citations: 11
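
The injection step that the SaIL abstract describes lends itself to a short sketch. The snippet below assumes BeautifulSoup and a list of CSS selectors standing in for the regions SaIL's saliency model would flag; it illustrates the idea of adding ARIA landmark attributes to existing markup and is not SaIL's actual code.

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def inject_landmarks(html, salient_selectors):
    """Mark predicted-important regions of a page as ARIA landmarks.

    salient_selectors: CSS selectors for the regions a saliency model
    flagged; they stand in for SaIL's deep-learning detector here.
    """
    soup = BeautifulSoup(html, "html.parser")
    for i, selector in enumerate(salient_selectors, start=1):
        for node in soup.select(selector):
            # role="region" plus an accessible name turns the element
            # into a landmark that screen readers can jump to.
            node["role"] = "region"
            if "aria-label" not in node.attrs:
                node["aria-label"] = f"Salient section {i}"
    return str(soup)

page = '<div id="news"><h2>Headlines</h2></div><div id="ads">...</div>'
print(inject_landmarks(page, ["#news"]))
```

Because the change is pure markup annotation, it can be applied by a browser extension or proxy without altering the page's visual rendering for sighted users.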
Scene Text Access: A Comparison of Mobile OCR Modalities for Blind Users
IUI. International Conference on Intelligent User Interfaces Pub Date: 2019-03-01 DOI: 10.1145/3301275.3302271
Leo Neat, Ren Peng, Siyang Qin, Roberto Manduchi
We present a study with seven blind participants using three different mobile OCR apps to find text posted in various indoor environments. The first app considered was Microsoft SeeingAI in its Short Text mode, which reads any text in sight with a minimalistic interface. The second app was Spot+OCR, a custom application that separates the task of text detection from OCR proper: upon detection of text in the image, Spot+OCR generates a short vibration, and as soon as the user stabilizes the phone, a high-resolution snapshot is taken and OCR-processed. The third app, Guided OCR, was designed to guide the user in taking several pictures spanning 360° at the maximum resolution available to the camera, with minimal overlap between pictures. Quantitative results (true positive ratios and traversal speed) were recorded. Together with qualitative observations and the outcomes of an exit survey, these results allow us to identify and assess the different strategies used by our participants, as well as the challenges of operating these systems without sight.
Citations: 12
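
Spot+OCR's two-stage interaction (vibrate when text is detected, capture once the phone is steady) can be summarized as a small event loop. The sketch below simulates the text detector, motion estimate, and OCR engine with stand-ins, since the study app's real hooks are not published in this listing.

```python
def spot_plus_ocr_loop(frames, motion, vibrate, run_ocr, motion_threshold=0.05):
    """Two-stage capture in the style of Spot+OCR: cue the user with a
    vibration while the live preview contains text, then take the
    high-resolution snapshot once the phone is held steady."""
    for frame, m in zip(frames, motion):
        if frame["has_text"]:
            vibrate()                 # short cue: text is somewhere in view
            if m < motion_threshold:  # phone stabilized -> capture and OCR
                return run_ocr(frame)
    return None                       # no stable view of text was obtained

# Simulated preview stream: text appears, then the user steadies the phone.
frames = [{"has_text": False}, {"has_text": True}, {"has_text": True}]
motion = [0.30, 0.20, 0.02]  # per-frame motion estimates, e.g. from the IMU
result = spot_plus_ocr_loop(frames, motion,
                            vibrate=lambda: print("bzz"),
                            run_ocr=lambda f: "EXIT 12B")
print(result)  # "bzz" is printed twice, then EXIT 12B
```

The split matters because continuous OCR on a shaky live preview produces noisy readings, whereas a single stabilized high-resolution snapshot gives the OCR engine its best input.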
Towards a Generalizable Method for Detecting Fluid Intake with Wrist-Mounted Sensors and Adaptive Segmentation
IUI. International Conference on Intelligent User Interfaces Pub Date: 2019-03-01 DOI: 10.1145/3301275.3302315
Keum San Chun, Ashley B Sanders, Rebecca Adaimi, Necole Streeper, David E Conroy, Edison Thomaz
Over the last decade, advances in mobile technologies have enabled the development of intelligent systems that attempt to recognize and model a variety of health-related human behaviors. While automated dietary monitoring based on passive sensors has been an area of increasing research activity for many years, much less attention has been given to tracking fluid intake. In this work, we apply an adaptive segmentation technique to a continuous stream of inertial data, captured with a practical off-the-shelf wrist-mounted device, to passively detect fluid intake gestures. We evaluated our approach in a study with 30 participants in which 561 drinking instances were recorded. Using a leave-one-participant-out (LOPO) evaluation, we detected drinking episodes with 90.3% precision and 91.0% recall, demonstrating the generalizability of our approach. In addition to our proposed method, we contribute an anonymized and labeled dataset of drinking and non-drinking gestures to encourage further work in the field.
Citations: 16
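
The pipeline in this abstract (segment the inertial stream, classify segments, validate with leave-one-participant-out) can be illustrated briefly. The sketch below uses a toy threshold-based segmenter and random stand-in features, so the segmentation rule and feature set are assumptions; the LOPO protocol itself, however, maps directly onto scikit-learn's LeaveOneGroupOut.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def segment(signal, threshold=1.5, min_len=5):
    """Toy segmentation: cut the inertial stream where its magnitude
    crosses a threshold, keeping sufficiently long segments. Assumes
    the stream starts in the inactive state."""
    active = np.abs(signal) > threshold
    edges = np.flatnonzero(np.diff(active.astype(int)))
    return [(a, b) for a, b in zip(edges[::2], edges[1::2]) if b - a >= min_len]

# One burst of motion in an otherwise quiet signal -> one segment.
print(segment(np.concatenate([np.zeros(20), 2 * np.ones(10), np.zeros(20)])))

# Stand-in features for segmented gesture windows: 30 "participants",
# 10 windows each (e.g. per-axis mean/std/energy in a real system).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = rng.integers(0, 2, size=300)       # 1 = drinking gesture
groups = np.repeat(np.arange(30), 10)  # participant id for each window

# Leave-one-participant-out evaluation, as reported in the paper.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         cv=LeaveOneGroupOut(), groups=groups)
print(f"LOPO accuracy: {scores.mean():.2f}")  # ~0.5 on random data
```

Grouping by participant is the crucial choice: it ensures no person's data appears in both training and test folds, which is what lets the reported precision and recall speak to generalization across users.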
Providing Adaptive and Personalized Visual Support based on Behavioral Tracking of Children with Autism for Assessing Reciprocity and Coordination Skills in a Joint Attention Training Application
IUI. International Conference on Intelligent User Interfaces Pub Date: 2018-03-05 DOI: 10.1145/3180308.3180349
T. Tang, Pinata Winoto
Recent work has demonstrated the applicability of activity and behavioral pattern analysis to assisting therapists, caregivers, and individuals with developmental disorders, including those with autism spectrum disorder (ASD); however, the computational cost and sophistication of such behavioral modeling systems can prevent their deployment. In this paper, we therefore propose an easily deployable automatic system that trains joint attention (JA) skills, assesses the frequency and degree of reciprocity, and provides visual cues accordingly. Our approach differs from most earlier attempts in that we do not rely on sophisticated feature-space construction; instead, the simple design and in-game automatic data collection for adaptive visual support offer hassle-free benefits, especially for low-functioning individuals with ASD and those with severe verbal impairments.
Citations: 1
A Configurable and Contextually Expandable Interactive Picture Exchange Communication System (PECS) for Chinese Children with Autism
IUI. International Conference on Intelligent User Interfaces Pub Date: 2018-03-05 DOI: 10.1145/3180308.3180348
T. Tang, Pinata Winoto
Electronic versions of PECS (the picture exchange communication system) have been introduced to non-verbal children with autism spectrum disorder (ASD) over the past decade. In this paper, we discuss some related issues and propose the design of a more versatile electronic PECS (ePECS) as a comprehensive language training tool.
Citations: 2
Can We Predict the Scenic Beauty of Locations from Geo-tagged Flickr Images?
IUI. International Conference on Intelligent User Interfaces Pub Date: 2018-03-05 DOI: 10.1145/3172944.3173000
Ch. Md. Rakin Haider, Mohammed Eunus Ali
In this work, we propose a novel technique to determine the aesthetic score of a location from the social metadata of Flickr photos. In particular, we build machine learning classifiers that predict the class of a location, where each class corresponds to a set of locations with equal aesthetic rating. These models are trained on two empirically built datasets covering locations in two cities (Rome and Paris), with the aesthetic ratings of locations gathered from TripAdvisor.com. We exploit the idea that at a location with a higher aesthetic rating, a user is more likely to capture a photo and other users are more likely to interact with that photo. Our models achieved up to 79.48% accuracy (78.60% precision and 79.27% recall) on the Rome dataset and 73.78% accuracy (75.62% precision and 78.07% recall) on the Paris dataset. The proposed technique can facilitate urban planning, tour planning, and the recommendation of aesthetically pleasing paths.
Citations: 1
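
As an illustration of the kind of classifier this abstract describes, the sketch below trains a model on synthetic per-location features and reports precision and recall. The feature names (photo count, photographer count, interaction statistics) are assumed stand-ins for the paper's Flickr social metadata, and the model family is a generic choice; the paper's actual features and classifiers may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic per-location features aggregated from geo-tagged photos:
# photo count, distinct photographers, mean views, mean favorites,
# mean comments (stand-ins for the paper's social metadata).
n = 400
X = rng.gamma(shape=2.0, scale=1.0, size=(n, 5))

# Toy ground truth encoding the paper's intuition: scenic locations
# attract more photos and more interaction with those photos.
weights = np.array([0.5, 0.8, 0.3, 0.9, 0.4])
y = (X @ weights + rng.normal(0.0, 1.0, n) > 4.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```

In the paper's setting, the labels would come from TripAdvisor ratings bucketed into classes rather than from a synthetic rule, but the train/evaluate structure is the same.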