Proceedings of the ACM Symposium on User Interface Software and Technology: Latest Publications

CookAR: Affordance Augmentations in Wearable AR to Support Kitchen Tool Interactions for People with Low Vision
Jaewook Lee, Andrew D Tjahjadi, Jiho Kim, Junpu Yu, Minji Park, Jiawen Zhang, Jon E Froehlich, Yapeng Tian, Yuhang Zhao
DOI: 10.1145/3654777.3676449 | Published: 2024-10-01 | Volume: 2024
Abstract: Cooking is a central activity of daily living, supporting independence as well as mental and physical health. However, prior work has highlighted key barriers for people with low vision (LV) to cook, particularly around safely interacting with tools, such as sharp knives or hot pans. Drawing on recent advancements in computer vision (CV), we present CookAR, a head-mounted AR system with real-time object affordance augmentations to support safe and efficient interactions with kitchen tools. To design and implement CookAR, we collected and annotated the first egocentric dataset of kitchen tool affordances, fine-tuned an affordance segmentation model, and developed an AR system with a stereo camera to generate visual augmentations. To validate CookAR, we conducted a technical evaluation of our fine-tuned model as well as a qualitative lab study with 10 LV participants for suitable augmentation design. Our technical evaluation demonstrates that our model outperforms the baseline on our tool affordance dataset, while our user study indicates a preference for affordance augmentations over traditional whole-object augmentations.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12279023/pdf/
Citations: 0
Accessible Gesture Typing on Smartphones for People with Low Vision
Dan Zhang, William H Seiple, Zhi Li, I V Ramakrishnan, Vikas Ashok, Xiaojun Bi
DOI: 10.1145/3654777.3676447 | Published: 2024-01-01 | Volume: 2024
Abstract: While gesture typing is widely adopted on touchscreen keyboards, its support for low vision users is limited. We have designed and implemented two keyboard prototypes, layout-magnified and key-magnified keyboards, to enable gesture typing for people with low vision. Both keyboards facilitate uninterrupted access to all keys while the screen magnifier is active, allowing people with low vision to input text with one continuous stroke. Furthermore, we have created a kinematics-based decoding algorithm to accommodate the typing behavior of people with low vision. This algorithm can decode the gesture input even if the gesture trace deviates from a pre-defined word template, and the starting position of the gesture is far from the starting letter of the target word. Our user study showed that the key-magnified keyboard achieved 5.28 words per minute, 27.5% faster than a conventional gesture typing keyboard with voice feedback.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11707649/pdf/
Citations: 0
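The decoding property described above, tolerating a gesture whose starting position is far from the target word's first letter, can be sketched with an offset-invariant shape match combined with a language-model prior. This is an illustrative reconstruction, not the paper's actual kinematics-based algorithm; the keyboard geometry, lexicon, and scoring are assumed for the example.

```python
import numpy as np

# Hypothetical key centers on a unit grid for a few letters; a real decoder
# would use the full keyboard geometry and a trained language model.
KEYS = {"c": (2.5, 2.0), "a": (0.5, 1.0), "t": (4.5, 0.0), "r": (3.5, 0.0)}

def word_template(word, n=32):
    """Ideal gesture trace for a word: key centers joined by a polyline,
    resampled to n equidistant points."""
    pts = np.array([KEYS[ch] for ch in word], dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    d = np.concatenate([[0.0], np.cumsum(seg)])          # arc length at each key
    t = np.linspace(0.0, d[-1], n)
    return np.column_stack([np.interp(t, d, pts[:, i]) for i in range(2)])

def shape_distance(trace, template):
    """Mean point-wise distance after subtracting each curve's centroid, so a
    gesture whose start is offset from the word's first letter still matches."""
    a = trace - trace.mean(axis=0)
    b = template - template.mean(axis=0)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def decode(trace, lexicon, lm_prior):
    """Pick the word minimizing shape distance minus a language-model bonus."""
    return min(lexicon,
               key=lambda w: shape_distance(trace, word_template(w))
                             - np.log(lm_prior[w]))
```

Because both curves are mean-centered before comparison, translating the whole gesture (the typical magnifier-offset case) leaves the score unchanged, while the language-model term breaks ties between shape-similar words.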
Voice and Touch Based Error-tolerant Multimodal Text Editing and Correction for Smartphones
Maozheng Zhao, Wenzhe Cui, I V Ramakrishnan, Shumin Zhai, Xiaojun Bi
DOI: 10.1145/3472749.3474742 | Published: 2021-10-01 | Volume: 2021 | Pages: 162-178
Abstract: Editing operations such as cut, copy, paste, and correcting errors in typed text are often tedious and challenging to perform on smartphones. In this paper, we present VT, a voice and touch-based multi-modal text editing and correction method for smartphones. To edit text with VT, the user glides over a text fragment with a finger and dictates a command, such as "bold" to change the format of the fragment, or the user can tap inside a text area and speak a command such as "highlight this paragraph" to edit the text. For text correcting, the user taps approximately at the area of the erroneous text fragment and dictates the new content for substitution or insertion. VT combines touch and voice inputs with language context such as a language model and phrase similarity to infer a user's editing intention, which can handle ambiguities and noisy input signals. This is a major advantage over existing error correction methods (e.g., iOS's Voice Control), which require precise cursor control or text selection. Our evaluation shows that VT significantly improves the efficiency of text editing and text correcting on smartphones over the touch-only method and the iOS Voice Control method. Our user studies showed that VT reduced the text editing time by 30.80% and text correcting time by 29.97% over the touch-only method, and reduced the text editing time by 30.81% and text correcting time by 47.96% over the iOS Voice Control method.
Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/02/ef/nihms-1777404.PMC8845054.pdf
Citations: 5
Modeling Touch Point Distribution with Rotational Dual Gaussian Model
Yan Ma, Shumin Zhai, I V Ramakrishnan, Xiaojun Bi
DOI: 10.1145/3472749.3474816 | Published: 2021-10-01 | Volume: 2021 | Pages: 1197-1209
Abstract: Touch point distribution models are important tools for designing touchscreen interfaces. In this paper, we investigate how the finger movement direction affects the touch point distribution, and how to account for it in modeling. We propose the Rotational Dual Gaussian model, a refinement and generalization of the Dual Gaussian model, to account for the finger movement direction in predicting touch point distribution. In this model, the major axis of the prediction ellipse of the touch point distribution is along the finger movement direction, and the minor axis is perpendicular to the finger movement direction. We also propose using projected target width and height, in lieu of nominal target width and height, to model touch point distribution. Evaluation on three empirical datasets shows that the new model reflects the observation that the touch point distribution is elongated along the finger movement direction, and outperforms the original Dual Gaussian model in all prediction tests. Compared with the original Dual Gaussian model, the Rotational Dual Gaussian model reduces the RMSE of touch error rate prediction from 8.49% to 4.95%, and more accurately predicts the touch point distribution in target acquisition. Using the Rotational Dual Gaussian model can also improve soft keyboard decoding accuracy on smartwatches.
Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/e0/88/nihms-1777409.PMC8853834.pdf
Citations: 0
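The model structure described above, a touch-point Gaussian whose major axis follows the finger movement direction, with each axis variance combining a target-size-dependent term and an absolute (fat-finger) term, can be sketched as follows. The parameter values alpha and sigma0 are illustrative placeholders, not the fitted parameters reported in the paper, and the Monte Carlo error-rate estimate is an assumption about how such a model would be used.

```python
import numpy as np

def touch_cov(theta, w_proj, h_proj, alpha=0.0075, sigma0=1.68):
    """Covariance of a Rotational-Dual-Gaussian-style touch model (sketch).
    Variance along each principal axis = alpha * (projected size)^2 + sigma0^2;
    the major axis is rotated to the movement direction theta (radians)."""
    var_major = alpha * w_proj ** 2 + sigma0 ** 2   # along movement direction
    var_minor = alpha * h_proj ** 2 + sigma0 ** 2   # perpendicular to it
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])                 # rotation to screen axes
    return R @ np.diag([var_major, var_minor]) @ R.T

def error_rate(theta, w, h, n=100_000, seed=0):
    """Monte Carlo estimate of the probability that a touch aimed at the
    center of a w x h target (in mm) lands outside it."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal([0.0, 0.0], touch_cov(theta, w, h), size=n)
    inside = (np.abs(pts[:, 0]) <= w / 2) & (np.abs(pts[:, 1]) <= h / 2)
    return 1.0 - inside.mean()
```

Sampling from the rotated covariance reproduces the paper's qualitative observation: the predicted touch cloud is an ellipse elongated along the movement direction, and predicted error rates fall as the target grows.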
UIST '21: The Adjunct Publication of the 34th Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 10-14, 2021
DOI: 10.1145/3474349 | Published: 2021-01-01
Citations: 0
UIST '21: The 34th Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 10-14, 2021
DOI: 10.1145/3472749 | Published: 2021-01-01
Citations: 0
Modeling Two Dimensional Touch Pointing
Yu-Jung Ko, Hang Zhao, Yoonsang Kim, I V Ramakrishnan, Shumin Zhai, Xiaojun Bi
DOI: 10.1145/3379337.3415871 | Published: 2020-10-01 | Volume: 2020 | Pages: 858-868
Abstract: Modeling touch pointing is essential to touchscreen interface development and research, as pointing is one of the most basic and common touch actions users perform on touchscreen devices. Finger-Fitts law [4] revised the conventional Fitts' law into a 1D (one-dimensional) pointing model for finger touch by explicitly accounting for the fat-finger ambiguity (absolute error) problem, which was unaccounted for in the original Fitts' law. We generalize Finger-Fitts law to 2D touch pointing by solving two critical problems. First, we extend two of the most successful 2D Fitts law forms to accommodate finger ambiguity. Second, we discovered that using nominal target width and height is a conceptually simple yet effective approach for defining amplitude and directional constraints for 2D touch pointing across different movement directions. The evaluation shows our derived 2D Finger-Fitts law models can be both principled and powerful. Specifically, they outperformed the existing 2D Fitts' laws, as measured by the regression coefficient and model selection information criteria (e.g., Akaike Information Criterion) considering the number of parameters. Finally, 2D Finger-Fitts laws also advance our understanding of touch pointing and thereby serve as the basis for touch interface designs.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8318005/pdf/nihms-1666148.pdf
Citations: 0
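The abstract's idea, deflating the nominal target size by an absolute touch-precision term before it enters a 2D Fitts law form, can be sketched as below. The Accot-Zhai weighted-Euclidean form, the 4.133 nominal-width-to-sigma conversion (from Fitts' 4% error convention), and the sigma_a and eta values are assumptions for illustration, not the paper's fitted models.

```python
import math

def ffitts_id_2d(A, W, H, sigma_a=2.1, eta=0.4):
    """Index of difficulty for 2D finger pointing (Finger-Fitts-style sketch).

    A: movement amplitude; W, H: nominal target width/height (same units).
    Each nominal size is converted to an implied touch standard deviation,
    the absolute (fat-finger) variance sigma_a^2 is subtracted, and the
    corrected sizes feed an Accot-Zhai-style weighted-Euclidean 2D form.
    """
    def effective(size):
        sigma_nominal = size / 4.133              # std. dev. implied by size
        var = sigma_nominal ** 2 - sigma_a ** 2   # remove absolute-error part
        if var <= 0:
            raise ValueError("target is smaller than the absolute touch precision")
        return 4.133 * math.sqrt(var)             # ambiguity-corrected size
    We, He = effective(W), effective(H)
    return math.log2(math.sqrt((A / We) ** 2 + eta * (A / He) ** 2) + 1)
```

Under this sketch, difficulty rises with amplitude and falls with target size, and because of the subtracted sigma_a^2, shrinking a target toward the finger's absolute precision inflates the ID much faster than the uncorrected Fitts form would.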
UIST '20 Adjunct: The 33rd Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 20-23, 2020
DOI: 10.1145/3379350 | Published: 2020-01-01
Citations: 1
Using Personal Devices to Facilitate Multi-user Interaction with Large Display Walls
Ulrich von Zadow
DOI: 10.1145/2815585.2815592 | Published: 2015-11-06 | Pages: 25-28
Abstract: Large display walls and personal devices such as smartphones have complementary characteristics. While large displays are well suited to multi-user interaction (potentially with complex data), they are inherently public and generally cannot present an interface adapted to the individual user. However, effective multi-user interaction in many cases depends on the ability to tailor the interface, to interact without interfering with others, and to access and possibly share private data. The combination with personal devices facilitates exactly this. Multi-device interaction concepts enable data transfer and include moving parts of UIs to the personal device. In addition, hand-held devices can be used to present personal views to the user. Our work will focus on using personal devices for true multi-user interaction with interactive display walls. It will cover appropriate interaction techniques as well as the technical foundation, and will be validated with corresponding application cases.
Citations: 4
Machine Intelligence and Human Intelligence
B. A. Y. Arcas
DOI: 10.1145/2807442.2814655 | Published: 2015-11-05 | Pages: 665
Abstract: There has been a stellar rise in computational power since 2006, in part thanks to GPUs, yet today we are, as an intelligent species, essentially singular. There are of course some other brainy species, like chimpanzees, dolphins, crows and octopuses, but if anything they only emphasize our unique position on Earth, as animals richly gifted with self-awareness, language, abstract thought, art, mathematical capability, science, technology and so on. Many of us have staked our entire self-concept on the idea that to be human is to have a mind, and that minds are the unique province of humans. For those of us who are not religious, this could be interpreted as the last bastion of dualism. Our economic, legal and ethical systems are also implicitly built around this idea. Now, we're well along the road to really understanding the fundamental principles of how a mind can be built, and Moore's Law will put brain-scale computing within reach this decade. (We need to put some asterisks next to Moore's Law, since we are already running up against certain limits in computational scale using our present-day approaches, but I'll stand behind the broader statement.) In this talk I will discuss the relationships between engineered neurally inspired systems and brains today, between humans and machines tomorrow, and how these relationships will alter user interfaces, software and technology.
Citations: 0