Title: Counting on your fingertips: an exploration and analysis of actions in the Rich Touch space
Authors: Ramadevi Vennelakanti, A. Subramanian, S. Madhvanath, S. Subramanian
Venue: Proceedings of the 3rd Indian Conference on Human-Computer Interaction, 2011
DOI: 10.1145/2407796.2407800
Abstract: Although multi-touch technology and horizontal interactive surfaces have been around for a decade now, there is limited understanding of how users use the Rich Touch space and multiple fingers to manipulate objects on a table. In this paper, we describe the findings and insights from an observational study of how users manipulate photographs on a physical table surface. Through a detailed video analysis based on images captured from four distinct cameras, we investigate the various actions users perform and various aspects of these actions, such as the number of fingers, the space of action, and handedness. Our investigation shows that user interactions can be described in terms of a small set of actions, and that there are insightful patterns in how hands, and how many fingers, are used to carry out these actions. These insights may in turn inform the design of future interactive surfaces and improve the accuracy of interpreting these actions.

Title: KLM operator values for rural mobile phone user
Authors: Prabhath Gokarn, Kushal Gore, Devanuj, P. Doke, Sylvan Lobo, S. Kimbahune
Venue: Proceedings of the 3rd Indian Conference on Human-Computer Interaction, 2011
DOI: 10.1145/2407796.2407811
Abstract: The Keystroke-Level Model (KLM) is a simplified cognitive modelling technique. The values of its operators have been defined for keyboard/mouse-based interaction and literate Western users. We conjectured that the values of the operators, especially the mental operator, would change for semi-literate Indian users using mobile phones, given their diversity. We conducted tests with two user groups -- highly literate and semi-literate -- to derive KLM operator values. We found that the values of all the operators remain unaffected by literacy level; however, the mental operator varies with the complexity of the interface. While performing the analysis we also discovered certain qualitative aspects of mobile-based interaction, which we share in this paper. Our findings should aid the emerging rural mobile application HCI industry in India.

Title: A neurocomputational model of visual selective attention for human computer interface applications
Authors: K. Neokleous, M. Avraamides, Costas Neocleous, C. Schizas
Venue: Proceedings of the 3rd Indian Conference on Human-Computer Interaction, 2011
DOI: 10.1145/2407796.2407815
Abstract: This report presents an overview of an implemented neurocomputational model of visual selective attention. We briefly explain the basic components and neural interactions that comprise the model, and we discuss its possible applications for human-computer interaction.

{"title":"How are distributed groups affected by an imposed structuring of their decision-making process?","authors":"Anders Lundell, M. Hertzum","doi":"10.1145/2407796.2407802","DOIUrl":"https://doi.org/10.1145/2407796.2407802","url":null,"abstract":"Groups often suffer from ineffective communication and decision making. This experimental study compares distributed groups solving a preference task with support from either a communication system or a system providing both communication and a structuring of the decision-making process. Results show that groups using the latter system spend more time solving the task, spend more of their time on solution analysis, spend less of their time on disorganized activity, and arrive at task solutions with less extreme preferences. Thus, the type of system affects the decision-making process as well as its outcome. Notably, the task solutions arrived at by the groups using the system that imposes a structuring of the decision-making process show limited correlation with the task solutions suggested by the system on the basis of the groups' explicitly stated criteria. We find no differences in group influence, consensus, and satisfaction between groups using the two systems.","PeriodicalId":179432,"journal":{"name":"Proceedings of the 3rd Indian Conference on Human-Computer Interaction","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128186034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"While working around security","authors":"N. Mathiasen, M. G. Petersen, S. Bødker","doi":"10.1145/2407796.2407798","DOIUrl":"https://doi.org/10.1145/2407796.2407798","url":null,"abstract":"The title of this paper describes our work at two levels. First of all the paper discusses how users of IT deal with issues of IT security in their everyday life. Secondly, we discuss how the kind of understanding of IT security that comes out of careful analyses of use confronts the ways in which usable IT security is established in the literature. Recent literature has called for better conceptual models as starting point for improving IT security. In contrast to such models we propose to dress up designers by helping them understand better the work that goes into everyday security. The result is a methodological toolbox that helps address and design for usable and useful IT security. We deploy examples of analyses and design, carried out by ourselves and by others to fine-tune our design perspective; in particular we use examples from three current research projects.","PeriodicalId":179432,"journal":{"name":"Proceedings of the 3rd Indian Conference on Human-Computer Interaction","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126694733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Semiotic analysis combined with usability and ergonomic testing for evaluation of icons in medical user interface
Authors: G. Bhutkar, R. Poovaiah, D. Katre, Shekhar Karmarkar
Venue: Proceedings of the 3rd Indian Conference on Human-Computer Interaction, 2011
DOI: 10.1145/2407796.2407804
Abstract: In this research, we have evaluated the medical icons and iconic interfaces of touch-screen ventilator systems used in Intensive Care Units (ICUs). Precise communication through the iconic interface between the ventilator system and medical users such as physicians or nurses is critical to avoiding medical errors that may cost a patient's life. We used Usability Testing, a User Survey, Lexical Analysis, Semiotic Analysis, and Long-Distance Visibility Testing (an ergonomic aspect) in combination to evaluate the medical icons. The usability testing was performed through three icon tests -- Test without Context, Test with Context, and Test with Comparison. The lexical analysis was accompanied by a three-dimensional analysis in terms of semantics, syntactics, and pragmatics. It is evident that the evaluation of medical icons is very different from that of icons used in general software applications.

{"title":"MozArt: an immersive multimodal CAD system for 3D modeling","authors":"Anirudh Sharma, S. Madhvanath","doi":"10.1145/2407796.2407812","DOIUrl":"https://doi.org/10.1145/2407796.2407812","url":null,"abstract":"3D modeling has been revolutionized in recent years by the advent of computers. While computers have become much more affordable and accessible to the masses, computer modeling remains a complex task involving a steep learning curve and extensive training. In this paper we describe the MozArt Table, our effort to redefine the interface for computer modeling to make it more accessible to lay users. We have explored both the hardware and software aspects of the interface, specifically, the use of intuitive speech commands and multitouch gestures on an inclined interactive surface. The paper describes our approach, hardware setup and the technology used to make it work.","PeriodicalId":179432,"journal":{"name":"Proceedings of the 3rd Indian Conference on Human-Computer Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130829576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Context-aware technology for improving interaction in video-based agricultural extension
Authors: Natalie Linnell, Richard J. Anderson, Guy Bordelon, R. Gandhi, Bruce Hemingway, S. Nadagouda, K. Toyama
Venue: Proceedings of the 3rd Indian Conference on Human-Computer Interaction, 2011
DOI: 10.1145/2407796.2407799
Abstract: Our work explores how handheld technology can help mediators perform at a higher level when facilitating video material, using two novel interaction mechanisms. We describe work with Digital Green, an NGO using facilitated video for agricultural extension in rural India. During an investigation into the information needs of Digital Green facilitators, we found that novice facilitators benefited from targeted information presented during the video shows. Based on this finding, we built and field-tested two different solutions for delivering this information to the facilitator in real time during the video shows. The primary difference between the two was the mechanism used to synchronize the video with the device, allowing the user to interact with the device as an extension of the presentation system (e.g. TV/DVD player). One approach involves audio codes embedded in the video that are decoded on an Android smart phone using digital signal processing. The other approach was a custom-hardware "smart" remote control. We field-tested both devices for four weeks with Digital Green facilitators in northern Karnataka, and users stopped for and discussed most of the prompts. This field test established both approaches as viable for field use and identified a number of improvements for revised devices.
