{"title":"A trainable system for recognizing diagrammatic sketch languages","authors":"G. Costagliola, V. Deufemia, M. Risi","doi":"10.1109/VLHCC.2005.12","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.12","url":null,"abstract":"In this paper we propose an approach for constructing sketch parsers whose recognition accuracy and speed is significantly improved by acquiring information on the user's sketching style during a training phase. The construction process consists in specifying a sketch grammar description of the language syntax, automatically generating a parser from such specification, and let the user train the recognition system on a set of sketch sentences.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"46 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113993624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of jigsaw-puzzle-like 3D glyph to visualizing grammatical constraints","authors":"Noritaka Osawa","doi":"10.1109/VLHCC.2005.19","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.19","url":null,"abstract":"An extension of three-dimensional visualization using jigsaw-puzzle-like glyphs, or shapes, is presented as a means of representing grammatical constraints in programming. The visualization uses 3D glyphs such as wireframe and hole shapes, as well as convex and concave shapes. A semantic constraint, such as a type constraint in an assignment, is represented by an inclusive match between 3D glyphs. The use of wireframe and hole shapes enables representation of polymorphic constraints. An experimental evaluation showed that the 3D glyphs are easier to learn and are faster to answer than 2D glyphs and 1D symbol sequences.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114161960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Facilitators and inhibitors of end-user development by teachers in a school","authors":"S. Wiedenbeck","doi":"10.1109/VLHCC.2005.36","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.36","url":null,"abstract":"This paper describes the perceptions and attitudes of teachers who are end-user developers. A semi-structured interview of 22 teachers was carried out. It revealed motivational, situational, knowledge, and tool factors that affect their programming efforts. Five of the survey participants also carried out several program maintenance tasks in order to identify additional challenges in performing end-user programming tasks. The contribution of the study is an understanding of the facilitators and inhibitors of end-user development in a particular professional setting.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121160549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Applying eye-movememt tracking to program visualization","authors":"R. Bednarik, Niko Myller, E. Sutinen, M. Tukiainen","doi":"10.1109/VLHCC.2005.20","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.20","url":null,"abstract":"We describe an evaluation of a program animation tool, Jeliot 3. Eye movement data, interaction data, and audio and video protocols were recorded for sixteen participants comprehending three Java programs. In this paper, we report and analyze a subset of the captured data, the patterns of visual attention in time and space during animation of a program. We discuss the possible potentials and prospects of using eye-gaze as an interaction and evaluation modality for building intelligent, gaze-aware program animation tools.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117286831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resource-based models of visual interaction: understanding errors","authors":"Paolo Bottoni, S. Levialdi","doi":"10.1109/VLHCC.2005.55","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.55","url":null,"abstract":"We propose a resource-based model of human-computer interaction, and discuss how WIPPOG, a rule-based language for multiset transformations, can support such a model. WIPPOG is here adopted as an enactment mechanism for task models expressed by a visual notation. We exploit this resource-based perspective to model some common interaction errors and suggest possible prevention and mitigation strategies.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125651408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Formal integration of inheritance with typed attributed graph transformation for efficient VL definition and model manipulation","authors":"H. Ehrig, K. Ehrig, Ulrike Golas, G. Taentzer","doi":"10.1109/VLHCC.2005.37","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.37","url":null,"abstract":"Several approaches exist to define a visual language (VL). Among those the meta-modeling approach used to define the Unified Modeling Language (UML), and the graph transformation approach are very popular. Especially the combination of both, using meta-modeling to define the syntax of a VL and graph transformation for specifying model transformations has been considered conceptually and explored in a number of applications. A formal integration of both approaches has just been started by integrating classical algebraic graph grammars with a node type inheritance concept. In this paper, the integration of inheritance is extending to attributed graph transformation. More precisely, we define attributed type graphs with inheritance leading to a formal integration of inheritance with typed attributed graph transformation.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131034626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unified Modeling Language 2.0","authors":"Alexander Knapp, H. Störrle","doi":"10.1109/VLHCC.2005.65","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.65","url":null,"abstract":"Our goal is to provide participants of the tutorial with a working knowledge of UML. Thus, the whole course is based on a case study, and is accompanied with interactive exercises. We also consider commercial tools supporting UML 2.0, and provide an up-to-date assessment of industrial and academic experience with UML. We conclude with a assessment of the potential and future developments of the UML. O","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134393541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spoken programs","authors":"Andrew Begel, S. Graham","doi":"10.1109/VLHCC.2005.58","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.58","url":null,"abstract":"Programmers who suffer from repetitive stress injuries find it difficult to spend long amounts of time typing code. Speech interfaces can help developers reduce their dependence on typing. However, existing programming by voice techniques make it awkward for programmers to enter and edit program text. To design a better alternative, we conducted a study to learn how software developers naturally verbalize programs. We found that spoken programs are different from written programs in ways similar to the differences between spoken and written English; spoken programs contain lexical, syntactic and semantic ambiguities that do not appear in written programs. Using the results from this study, we designed Spoken Java, a semantically identical variant of Java that is easier to say out loud. Using Spoken Java, software developers can speak more naturally by verbalizing their program code as if they were reading it out loud. Spoken Java is analyzed by extending a conventional Java programming language analysis engine written in our Harmonia program analysis framework to support the kinds of ambiguities that arise from speech.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129425912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transformation of UML state machines for direct execution","authors":"Tim Schattkowsky, W. Müller","doi":"10.1109/VLHCC.2005.64","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.64","url":null,"abstract":"Executable UML models are nowadays gaining interest in embedded systems design. This domain is strongly devoted to the modeling of reactive behavior using StateChart variants. In this context, the direct execution of UML state machines is an interesting alternative to native code generation approaches since it significantly increases portability. However, fully featured UML 2.0 State Machines may contain a broad set of features with complex execution semantics that differ significantly from other StateChart variants. This makes their direct execution complex and inefficient. In this paper, we demonstrate how such state machines can be represented using a small subset of the UML state machine features that enables efficient execution. We describe the necessary model transformations in terms of graph transformations and discuss the underlying semantics and implications for execution.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129002456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Easing program comprehension by sharing navigation data","authors":"R. Deline, M. Czerwinski, G. Robertson","doi":"10.1109/VLHCC.2005.32","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.32","url":null,"abstract":"Large software projects often require a programmer to make changes to unfamiliar source code. This paper describes a set of tools, called Team Tracks, designed to ease program comprehension by showing the source code navigation patterns of fellow development team members. One technique shows a list of related items, given that the user is viewing a given method or class. Another technique shows the favorite classes, by showing a class hierarchy view that hides less frequently visited classes, methods, and members. Two user studies, a laboratory study and a field study, were run to evaluate the effectiveness of these techniques. The results of the two studies demonstrate that sharing navigation data can improve program comprehension and is subjectively preferred by users.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130013904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}