{"title":"Navigating software architectures with constant visual complexity","authors":"Wanchun Li, P. Eades, Seok-Hee Hong","doi":"10.1109/VLHCC.2005.52","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.52","url":null,"abstract":"Visualizing software architecture faces the challenges of both data complexity and visual complexity. This paper presents an approach for visualizing software architecture, which reduces data complexity using the clustered graph model and navigates pictures of clustered graphs with constant visual complexity. A graph drawing algorithm is introduced to generate visualizations of clustered graphs. A semantic fisheye view of a clustered graph is proposed for conserving constant visual complexity. Animation is used to present smooth transition of visualizations. A case study is investigated to navigate the architecture of the Compiler c488.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134475106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RGG+: an enhancement to the reserved graph grammar formalism","authors":"Xiaoqin Zeng, Kang Zhang, Jun Kong, G. Song","doi":"10.1109/VLHCC.2005.56","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.56","url":null,"abstract":"Enhancing the reserved graph grammar (RGG) formalism, this paper introduces a size-increasing condition on the structure of graph grammars' productions to simplify the definition of graph grammars, and a general parsing algorithm to extend the power of the RGG parsing algorithm.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132708820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HyperFlow: an integrated visual query and dataflow language for end-user information analysis","authors":"Dolev Dotan, R. Pinter","doi":"10.1109/VLHCC.2005.45","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.45","url":null,"abstract":"We present HyperFlow, a novel visual language for information analysis that combines features from visual dataflow and visual query languages into a unified framework. HyperFlow is designed to make it easier for users to retrieve, filter, and manipulate information, using databases alongside e.g. Web services, in a transparent, intuitive, reproducible and traceable manner. It allows users to visually design and execute information analysis processes in a single diagram. We present HyperFlow's constructs and describe the characteristics of a prototype interface we have implemented.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116640952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Achieving flexibility in direct-manipulation programming environments by relaxing the edit-time grammar","authors":"Benjamin E. Birnbaum, K. Goldman","doi":"10.1109/VLHCC.2005.15","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.15","url":null,"abstract":"Structured program editors can lower the entry barrier for beginning computer science students by preventing syntax errors. However, when editors force programs to be executable after every edit, a rigid development process results. We explore the use of a separate edit-time grammar that is more permissive than the runtime grammar. This helps achieve a balance between structured editing and flexibility, particularly in live development environments. JPie is a graphical programming environment that applies this separation to the live development of Java applications. We present the design goals for JPie's edit-time grammar and describe how its implementation supports a balance between structure and flexibility. As further illustration of the benefits of a relaxed edit-time grammar, we present \"mixed-mode editing,\" an integration of textual and graphical editing for added flexibility.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114848048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual specifications of correct spreadsheets","authors":"Robin Abraham, Martin Erwig, Steve Kollmansberger, Ethan Seifert","doi":"10.1109/VLHCC.2005.70","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.70","url":null,"abstract":"We introduce a visual specification language for spreadsheets that allows the definition of spreadsheet templates. A spreadsheet generator can automatically create Excel spreadsheets from these templates together with customized update operations. It can be shown that spreadsheets created in this way are free from a large class of errors, such as reference, omission, and type errors. We present a formal definition of the visual language for templates and describe the process of generating spreadsheets from templates. In addition, we present an editor for templates and analyze the editor using the cognitive dimensions framework.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114980546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visualizing what people are doing on the Web","authors":"S. Reiss, G. Eddon","doi":"10.1109/VLHCC.2005.71","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.71","url":null,"abstract":"What are people currently looking at in their Web browser? Do the patterns of pages change over time? Are changes periodic or just related to current events or other factors? We are developing a tool that attempts to provide insight into these and other questions. The tool sits on top of an Internet-scale programming backbone supports large numbers of simultaneous users. The tool itself provides a unique category-based visualization of browsing history and includes the necessary code to obtain the raw data.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115876446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A visually-specified code generator for Simulink/Stateflow","authors":"S. Neema, Z. Kalmár, Feng Shi, A. Vizhanyo, G. Karsai","doi":"10.1109/VLHCC.2005.14","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.14","url":null,"abstract":"Visual modeling languages are often used today in engineering domains, Mathworks' Simulink/Stateflow for simulation, signal processing and controls being the prime example. However, they are also becoming suitable for implementing other computational tasks, like model transformations. In this paper we briefly introduce GReAT: a visual language with simple, yet powerful semantics for implementing transformations on attributed, typed hypergraphs with the help of explicitly sequenced graph transformation rules. The main contribution of the paper is a Simulink/Stateflow code generator that generates executable code (running on a distributed platform) from the visual input models. The paper provides an overview of the algorithms used and their realization in GReAT.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120955338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Executable visual contracts","authors":"Marc Lohmann, Stefan Sauer, G. Engels","doi":"10.1109/VLHCC.2005.35","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.35","url":null,"abstract":"Design by contract (DbC) is widely acknowledged to be a powerful technique for creating reliable software. DbC allows developers to specify the behavior of an operation precisely by pre- and post-conditions. Existing DbC approaches predominantly use textual representations of contracts to annotate the actual program code with assertions. In the unified modeling language (UML), the textual object constraint language (OCL) supports the specification of preand post-conditions by constraining the model elements that occur in UML diagrams. However, textual specifications in OCL can become complex and cumbersome, especially for software developers who are typically not used to OCL. In this paper, we propose to specify the pre-and post-conditions of an operation visually by a pair of UML object diagrams (visual contract). We define a mapping of visual contracts into Java classes that are annotated with behavioral interface specifications in the Java modeling language (JML). The mapping supports testing the correctness of the implementation against the specification using JML tools, which include a runtime assertion checker. Thus we make the visual contracts executable.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128156821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Show Me! Guidelines for producing recorded demonstrations","authors":"C. Plaisant, B. Shneiderman","doi":"10.1109/VLHCC.2005.57","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.57","url":null,"abstract":"Although recorded demonstrations (screen capture animations with narration) have become a popular form of instruction for user interfaces, little work has been done to describe guidelines for their design. Based on our experience in several projects, we offer a starting set of guidelines for the design of visually appealing and cognitively effective recorded demonstrations. Technical guidelines encourage users to keep file sizes small, strive for universal usability, and ensure user control etc. and provide tips to achieve those goals. Content guidelines include: create short demonstrations that focus on tasks, highlight each step with auditory and visual cues, synchronize narration and animation carefully, and create demonstrations with a clear beginning, middle, and end.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124426812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A toolkit for addressing HCI issues in visual language environments","authors":"Emmanuel Pietriga","doi":"10.1109/VLHCC.2005.11","DOIUrl":"https://doi.org/10.1109/VLHCC.2005.11","url":null,"abstract":"As noted almost a decade ago, HCI (human-computer interaction) aspects of visual language environments are under-developed. This remains a fact, in spite of the central role played by user interfaces in the acceptance and usability of visual languages. We introduce ZVTM, a toolkit aimed at promoting the development of HCI aspects of visual environments by making the creation of interactive structured graphical editors easier, while favoring the rapid integration of novel interaction techniques such as zoomable user interfaces, distortion lenses, superimposed layers, and alternate scrolling and pointing methods.","PeriodicalId":241986,"journal":{"name":"2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133783972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}