Proceedings of the 2014 ACM SIGCHI symposium on Engineering interactive computing systems: Latest Publications

A domain-specific textual language for rapid prototyping of multimodal interactive systems
Fredy Cuenca, J. V. D. Bergh, K. Luyten, K. Coninx
DOI: 10.1145/2607023.2607036 | Published: 2014-06-17
Abstract: There are currently toolkits that allow the specification of executable multimodal human-machine interaction models. Some provide domain-specific visual languages with which a broad range of interactions can be modeled, but at the expense of bulky diagrams. Others instead interpret concise specifications written in existing textual languages, even though their non-specialized notations prevent the productivity improvement achievable through domain-specific ones. We propose a domain-specific textual language and its supporting toolkit; together they overcome the shortcomings of the existing approaches while retaining their strengths. The language provides notations and constructs specially tailored to compactly declare the event patterns raised during the execution of multimodal commands. The toolkit detects the occurrence of these patterns and invokes the functionality of a back-end system in response.
Citations: 11
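To make the idea of compactly declared multimodal event patterns concrete, here is a minimal Python sketch of the general technique, not the authors' language or toolkit: a hypothetical "put that there" command is declared as an ordered list of predicates over speech and pointing events, and a matcher invokes a back-end callback once the whole pattern has been observed.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    modality: str   # e.g. "speech" or "pointing"
    value: str      # recognized word or target id

# A multimodal command declared as an ordered list of event predicates.
PutThatThere: List[Callable[[Event], bool]] = [
    lambda e: e.modality == "speech" and e.value == "put",
    lambda e: e.modality == "pointing",                      # object to move
    lambda e: e.modality == "speech" and e.value == "there",
    lambda e: e.modality == "pointing",                      # destination
]

def match(pattern, stream, on_complete):
    """Advance through the pattern as matching events arrive; fire the
    back-end callback once every step of the pattern has been observed."""
    step, bound = 0, []
    for event in stream:
        if pattern[step](event):
            bound.append(event)
            step += 1
            if step == len(pattern):
                on_complete(bound)
                step, bound = 0, []

if __name__ == "__main__":
    stream = [
        Event("speech", "put"), Event("pointing", "lamp"),
        Event("speech", "there"), Event("pointing", "table"),
    ]
    match(PutThatThere, stream,
          lambda evs: print("move", evs[1].value, "to", evs[3].value))
```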
Formal modelling of dynamic instantiation of input devices and interaction techniques: application to multi-touch interactions
A. Hamon, Philippe A. Palanque, Martin Cronel, Raphaël André, Eric Barboni, D. Navarre
DOI: 10.1145/2607023.2610286 | Published: 2014-06-17
Abstract: Representing the behavior of multi-touch interactive systems in a complete, concise and non-ambiguous way is still a challenge for formal description techniques. Indeed, multi-touch interactive systems embed specific constraints that are either cumbersome or impossible to capture with classical formal description techniques. This is due both to the idiosyncratic nature of multi-touch technology (e.g. the fact that each finger represents an input device and that gestures are performed directly on the surface without an additional instrument) and to the high dynamicity of the interactions usually encountered in this kind of system. This paper presents a formal description technique able to model multi-touch interactive systems. We focus the presentation on how to represent the dynamic instantiation of input devices (i.e. fingers) and how they can then be exploited dynamically to offer a multiplicity of interaction techniques which are also dynamically instantiated.
Citations: 7
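The abstract's key observation, that each finger acts as an input device which must be instantiated dynamically, can be illustrated with a small Python analogy (this is not the formal description technique the paper presents): a dispatcher creates a new handler object whenever an unknown finger id appears and disposes of it when the finger is lifted.

```python
class FingerHandler:
    """One handler instance per active finger; created and destroyed at runtime."""
    def __init__(self, finger_id):
        self.finger_id = finger_id
        self.path = []

    def move(self, x, y):
        self.path.append((x, y))

class TouchDispatcher:
    def __init__(self):
        self.active = {}   # finger id -> dynamically instantiated handler

    def touch_down(self, finger_id, x, y):
        # Dynamic instantiation: a new "input device" appears.
        handler = FingerHandler(finger_id)
        self.active[finger_id] = handler
        handler.move(x, y)

    def touch_move(self, finger_id, x, y):
        self.active[finger_id].move(x, y)

    def touch_up(self, finger_id):
        # The input device disappears; its handler is discarded.
        handler = self.active.pop(finger_id)
        print(f"finger {finger_id} traced {len(handler.path)} points")

if __name__ == "__main__":
    d = TouchDispatcher()
    d.touch_down(0, 10, 10); d.touch_down(1, 50, 50)   # two fingers -> two handlers
    d.touch_move(0, 12, 14); d.touch_up(0); d.touch_up(1)
```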
A gestural concrete user interface in MARIA
L. D. Spano, F. Paternò, G. Fenu
DOI: 10.1145/2607023.2610282 | Published: 2014-06-17
Abstract: In this paper, we describe a solution for engineering and modelling user interfaces that support input collected through gesture recognition hardware. We describe how we applied this approach by extending the MARIA UIDL, and how the modelling solution can be applied to other UI toolkits. In addition, we detail the model-to-code transformation for obtaining a running application through an example case study.
Citations: 0
The frameSoC software architecture for multiple-view trace data analysis
Generoso Pagano, Vania Marangozova-Martin
DOI: 10.1145/2607023.2610274 | Published: 2014-06-17
Abstract: Trace analysis graphical user environments have to provide different views on trace data in order to effectively support comprehension of the traced application's behavior. In this article we propose an open and modular software architecture, the FrameSoC workbench, which defines clear principles for view engineering and view consistency management. The FrameSoC workbench has been successfully applied in real trace analysis use cases.
Citations: 2
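As a generic illustration of view consistency management in a multiple-view tool (a sketch of the general pattern, not FrameSoC's actual architecture), the snippet below keeps several trace views synchronized by broadcasting selection changes over a shared bus.

```python
class ViewBus:
    """Broadcasts state changes (e.g. the selected time interval) to all views."""
    def __init__(self):
        self.views = []

    def register(self, view):
        self.views.append(view)

    def publish(self, source, selection):
        for view in self.views:
            if view is not source:        # avoid echoing back to the originator
                view.on_selection(selection)

class TraceView:
    def __init__(self, name, bus):
        self.name, self.bus = name, bus
        bus.register(self)

    def select(self, interval):
        print(f"{self.name}: user selected {interval}")
        self.bus.publish(self, interval)

    def on_selection(self, interval):
        print(f"{self.name}: refreshing to show {interval}")

if __name__ == "__main__":
    bus = ViewBus()
    gantt, histogram = TraceView("gantt", bus), TraceView("histogram", bus)
    gantt.select((0.0, 2.5))   # the histogram view follows automatically
```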
Presenting EveWorks, a framework for daily life event detection
Bruno Cardoso, T. Romão
DOI: 10.1145/2607023.2610279 | Published: 2014-06-17
Abstract: In this paper we present EveWorks, a new framework for the development of context-aware mobile applications, focused on the detection of events in people's daily lives. In our framework, events of interest are expressed through statements written in a simple domain-specific language that, because it is interpreted, allows an application's reactive behavior to be changed at runtime. Instead of focusing on programming with framework-specific components, our approach allows developers to express events in terms of more natural constructs: intervals of time during which some data invariants are true, articulated through the operators of James Allen's Interval Algebra.
Citations: 3
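Since the abstract grounds event definitions in intervals combined through Allen's Interval Algebra, a short worked example helps. The Python sketch below (plain code, not EveWorks' domain-specific language) represents intervals during which a data invariant holds and checks a few of Allen's thirteen relations between them; the example intervals are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """A span of time during which some data invariant is true."""
    start: float
    end: float

# A few of Allen's thirteen interval relations.
def before(a, b):   return a.end < b.start
def meets(a, b):    return a.end == b.start
def overlaps(a, b): return a.start < b.start < a.end < b.end
def during(a, b):   return b.start < a.start and a.end < b.end

if __name__ == "__main__":
    at_home   = Interval(18.0, 23.0)   # "location == home" held from 18:00 to 23:00
    phone_use = Interval(20.0, 21.0)   # "screen == on" held from 20:00 to 21:00
    commuting = Interval(17.0, 18.0)

    print(during(phone_use, at_home))   # True: phone used while at home
    print(meets(commuting, at_home))    # True: commute ended exactly when home began
    print(before(commuting, phone_use)) # True
```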
SecSpace: prototyping usable privacy and security for mixed reality collaborative environments
Derek F. Reilly, Mohamad H. Salimian, Bonnie MacKay, N. Mathiasen, W. K. Edwards, Juliano Franz
DOI: 10.1145/2607023.2607039 | Published: 2014-06-17
Abstract: Privacy mechanisms are important in mixed-presence (collocated and remote) collaborative systems. These systems try to achieve a sense of co-presence in order to promote fluid collaboration, yet it can be unclear how actions made in one location are manifested in the other. This ambiguity makes it difficult to share sensitive information with confidence, impacting the fluidity of the shared experience. In this paper, we focus on mixed reality approaches (blending physical and virtual spaces) for mixed presence collaboration. We present SecSpace, our software toolkit for usable privacy and security research in mixed reality collaborative environments. SecSpace permits privacy-related actions in either physical or virtual space to generate effects simultaneously in both spaces. These effects will be the same in terms of their impact on privacy but they may be functionally tailored to suit the requirements of each space. We detail the architecture of SecSpace and present three prototypes that illustrate the flexibility and capabilities of our approach.
Citations: 26
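To illustrate the core mechanism described in the abstract, a privacy action taken in one space producing a simultaneous, space-appropriate effect in the other, here is a schematic Python sketch; the mediator and effector names are hypothetical and do not reflect SecSpace's actual API.

```python
class PrivacyMediator:
    """Routes a privacy action to effectors in both the physical and the
    virtual space, letting each space realize it in its own way."""
    def __init__(self):
        self.effectors = []

    def register(self, effector):
        self.effectors.append(effector)

    def apply(self, action, subject):
        for effector in self.effectors:
            effector.handle(action, subject)

class PhysicalSpaceEffector:
    def handle(self, action, subject):
        if action == "hide":
            print(f"physical space: blanking {subject} on the tabletop display")

class VirtualSpaceEffector:
    def handle(self, action, subject):
        if action == "hide":
            print(f"virtual space: replacing {subject} with a redacted placeholder")

if __name__ == "__main__":
    mediator = PrivacyMediator()
    mediator.register(PhysicalSpaceEffector())
    mediator.register(VirtualSpaceEffector())
    mediator.apply("hide", "budget.pdf")   # one action, effects in both spaces
```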
Extracting behavioral information from electronic storyboards
Jason B. Forsyth, Thomas L. Martin
DOI: 10.1145/2607023.2607034 | Published: 2014-06-17
Abstract: In this paper we outline methods for extracting behavioral descriptions of interactive prototypes from electronic storyboards. This information is used to help interdisciplinary design teams evaluate potential ideas early in the design process. Using electronic storyboards provides a common descriptive medium where team members from different disciplinary backgrounds can collectively express the intended behavior of their prototype. The behavioral information is extracted by a combination of visual tags applied to elements of the storyboard, analysis of storyboard layout, and natural language processing of text written in the frames. We describe this process, provide a proof of concept example, and discuss design choices in developing this tool.
Citations: 7
Supporting design, prototyping, and evaluation of public display systems
M. Ostkamp, C. Kray
DOI: 10.1145/2607023.2607035 | Published: 2014-06-17
Abstract: Public displays have become ubiquitous in urban areas. They can efficiently deliver information to many people and increasingly also provide means for interaction. Designing, developing, and testing such systems can be challenging, particularly if a system consists of many displays in multiple locations. Deployment is costly and contextual factors such as placement within and interaction with the environment can have a major impact on the success of such systems. In this paper we propose a new prototyping and evaluation method for public display systems (PDS) that integrates augmented panoramic imagery and a light-weight, graph-based model to simulate PDS. Our approach facilitates low-effort, rapid design of interactive PDS and their evaluation. We describe a prototypical implementation and present an initial assessment based on a comparison with existing methods, our own experiences, and an example case study.
Citations: 15
XDKinect: development framework for cross-device interaction using kinect
Michael Nebeling, E. Teunissen, Maria Husmann, M. Norrie
DOI: 10.1145/2607023.2607024 | Published: 2014-06-17
Abstract: Interactive systems set in multi-device environments continue to attract increasing attention, prompting researchers to experiment with emerging technologies. This paper presents XDKinect, a lightweight framework that facilitates development of cross-device applications using Kinect to mediate user interactions. The main benefits of XDKinect include its simplicity, adaptability and extensibility based on a flexible client-server architecture. Our framework features a time-based API to handle full-body interactions, a multi-modal API to capture gesture and speech commands, an API to utilise proxemic awareness information, a cross-device communication API, and a settings API to optimise for particular application requirements. A study with developers was conducted to investigate the potential of these features in terms of ease of use, effectiveness and possible use in the future. We show several example applications of XDKinect, as well as discussing advantages and limitations of our framework as revealed by our user study and experiments.
Citations: 49
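A hedged sketch of what proxemics-aware cross-device mediation could look like (hypothetical class names and zone thresholds; XDKinect's real client-server API is not reproduced here): a server-side component classifies a tracked user's distance from the display into zones and notifies registered client devices whenever the zone changes.

```python
class XDServerSketch:
    """Hypothetical relay: maps the tracked user's distance to a proxemic
    zone and pushes zone changes to every registered client device."""
    ZONES = [(1.0, "personal"), (2.5, "social"), (float("inf"), "public")]

    def __init__(self):
        self.clients = []
        self.current_zone = None

    def register(self, client):
        self.clients.append(client)

    def on_skeleton_frame(self, distance_m):
        zone = next(name for limit, name in self.ZONES if distance_m <= limit)
        if zone != self.current_zone:
            self.current_zone = zone
            for client in self.clients:
                client.on_zone_change(zone)

class TabletClient:
    def on_zone_change(self, zone):
        print(f"tablet: switching UI for '{zone}' zone")

if __name__ == "__main__":
    server = XDServerSketch()
    server.register(TabletClient())
    for d in (3.0, 2.0, 0.8):          # user walking toward the display
        server.on_skeleton_frame(d)
```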
IceTT: a responsive visualization for task models
L. D. Spano, G. Fenu
DOI: 10.1145/2607023.2611452 | Published: 2014-06-17
Abstract: Task models are useful for designers and domain experts to describe the sequences of actions that need to be completed to reach a user's goal. Their hierarchical structure is usually visualized through a tree representation that, for large models, tends to grow horizontally, which reduces readability. In this paper we introduce a visualization based on icicle graphs, which adapts the task visualization to the screen width and is therefore suitable for displaying large models even on small screens.
Citations: 4
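The width-adaptive icicle layout can be sketched in a few lines of Python (a simplification for illustration, not IceTT's implementation): each task node is given a horizontal extent proportional to the number of leaf tasks beneath it, so the hierarchy always fits the available screen width instead of growing past it.

```python
def leaf_count(node):
    children = node.get("children", [])
    return 1 if not children else sum(leaf_count(c) for c in children)

def icicle_layout(node, x, width, depth=0, boxes=None):
    """Assign each task a box whose width is proportional to its share of
    leaves, so the tree scales to any screen width instead of overflowing."""
    if boxes is None:
        boxes = []
    boxes.append((node["name"], x, width, depth))
    cursor, total = x, leaf_count(node)
    for child in node.get("children", []):
        w = width * leaf_count(child) / total
        icicle_layout(child, cursor, w, depth + 1, boxes)
        cursor += w
    return boxes

if __name__ == "__main__":
    task_model = {"name": "Buy ticket", "children": [
        {"name": "Select trip", "children": [{"name": "Choose date"}, {"name": "Choose seat"}]},
        {"name": "Pay"},
    ]}
    for name, x, w, depth in icicle_layout(task_model, 0, 320):   # 320 px wide screen
        print(f"{'  ' * depth}{name}: x={x:.0f}, width={w:.0f}")
```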