{"title":"Improving API documentation for Java-like languages","authors":"Gilles Dubochet, Donna Malayeri","doi":"10.1145/1937117.1937120","DOIUrl":"https://doi.org/10.1145/1937117.1937120","url":null,"abstract":"The Javadoc paradigm for displaying API documentation to users is quite popular, with similar variants existing for many mainstream languages. However, two user interface design properties of Javadoc may reduce its utility when displaying documentation for APIs that make use of inheritance and parametric polymorphism. First, Javadoc does not show a flattened view of all members of a class or interface, but rather only those defined directly in the type. Second, and as a consequence, any methods whose types contain type parameters of a superclass will always be shown in the context of the superclass. That is, if the method C.m returns type T, subclasses of C will always see this parent signature, even if they instantiate T to a concrete type such as Integer.\u0000 We show that this situation arises often in some libraries, and present the results of a study that measures the usability consequences of these two Javadoc design decisions. Our results show that a user interface that shows instantiated type parameters for members is preferred over one that presents type parameters in the Javadoc style.","PeriodicalId":217446,"journal":{"name":"Workshop on Evaluation and Usability of Programming Languages and Tools","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130235434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hard-to-answer questions about code","authors":"Thomas D. Latoza, B. Myers","doi":"10.1145/1937117.1937125","DOIUrl":"https://doi.org/10.1145/1937117.1937125","url":null,"abstract":"To build new tools and programming languages that make it easier for professional software developers to create, debug, and understand code, it is helpful to better understand the questions that developers ask during coding activities. We surveyed professional software developers and asked them to list hard-to-answer questions that they had recently asked about code. 179 respondents reported 371 questions. We then clustered these questions into 21 categories and 94 distinct questions. The most frequently reported categories dealt with intent and rationale -- what does this code do, what is it intended to do, and why was it done this way? Many questions described very specific situations -- e.g., what does the code do when an error occurs, how to refactor without breaking callers, or the implications of a specific change on security. These questions revealed opportunities for both existing research tools to help developers and for developing new languages and tools that make answering these questions easier.","PeriodicalId":217446,"journal":{"name":"Workshop on Evaluation and Usability of Programming Languages and Tools","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125523816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GoHotDraw: evaluating the Go programming language with design patterns","authors":"Frank Schmager, N. Cameron, J. Noble","doi":"10.1145/1937117.1937127","DOIUrl":"https://doi.org/10.1145/1937117.1937127","url":null,"abstract":"Go, a new programming language backed by Google, has the potential for widespread use: it deserves an evaluation. Design patterns are records of idiomatic programming practice and inform programmers about good program design. In this study, we evaluate Go by implementing design patterns, and porting the \"pattern-dense\" drawing framework HotDraw into Go, producing GoHotDraw. We show how Go's language features affect the implementation of Design Patterns, identify some potential Go programming patterns, and demonstrate how studying design patterns can contribute to the evaluation of a programming language.","PeriodicalId":217446,"journal":{"name":"Workshop on Evaluation and Usability of Programming Languages and Tools","volume":"229 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123118274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward transforming freely available source code into usable learning materials for end-users","authors":"Paul A. Gross, Caitlin L. Kelleher","doi":"10.1145/1937117.1937123","DOIUrl":"https://doi.org/10.1145/1937117.1937123","url":null,"abstract":"The availability of example source code on the web presents an array of potential learning resources for any code consumer. However not all code consumers may find these resources usable. With end-user programmers increasingly relying on example code on the web, any difficulty can prevent these code resources from reaching their potential as learning materials for users who may see the greatest benefits: inexperienced end-users. In this paper, we discuss freely available source code's usability for end-users. We focus on one problem area: supporting inexperienced end-users in selecting relevant code sections from examples they find interesting. We discuss a user study to evaluate the adequacy of two tools that can support non-programmers in this code selection task, and highlight design guidelines for future tools. Finally, we identify further challenges in transforming example code into usable learning materials for all end-users.","PeriodicalId":217446,"journal":{"name":"Workshop on Evaluation and Usability of Programming Languages and Tools","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124045928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using CogTool to model programming tasks","authors":"R. Bellamy, Bonnie E. John, J. Richards, J. Thomas","doi":"10.1145/1937117.1937118","DOIUrl":"https://doi.org/10.1145/1937117.1937118","url":null,"abstract":"In this paper, we describe the use of CogTool, a tool that enables non-psychologists to create cognitive models of user tasks from which reliable estimates of skilled user task times can be derived. We show how CogTool was used to compare a new parallel programming toolkit built on Eclipse, with Vim, a programming editor typically used in command line environments. This comparison was conducted to evaluate new parallel/scientific systems as part of the US Defense Advanced Research Projects Agency's High Productivity Computing Systems initiative. Our models indicate that for the four tasks analyzed, the new Eclipse tools are faster than the command line environments. Surprisingly, our models also reveal that despite programmers' preference for keyboard interaction in command line environments, mouse-based interaction is sometimes faster.","PeriodicalId":217446,"journal":{"name":"Workshop on Evaluation and Usability of Programming Languages and Tools","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124949230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring the efficacy of code clone information: an empirical study","authors":"Deb Chatterji, Beverly Massengill, Jason Oslin, Jeffrey C. Carver, Nicholas A. Kraft","doi":"10.1145/1937117.1937121","DOIUrl":"https://doi.org/10.1145/1937117.1937121","url":null,"abstract":"Much recent research effort has been devoted to designing efficient code clone detection algorithms and tools. However, there has been little human-based empirical study of how the output of those tools is used by developers when performing maintenance tasks. In this study 43 computer science graduate students completed a bug localization task in which a clone was present while researchers observed their activities. The goal of the study was to understand how those developers use clone information to perform this task. The results of this study showed that participants who used the clone information correctly, i.e. they first identified a defect then used it to look for clones of the defect, were more effective than participants who used the information incorrectly. The results also showed that participants who had industrial experience were more effective than those without industrial experience.","PeriodicalId":217446,"journal":{"name":"Workshop on Evaluation and Usability of Programming Languages and Tools","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130027827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The API walkthrough method: a lightweight method for getting early feedback about an API","authors":"P. O'Callaghan","doi":"10.1145/1937117.1937122","DOIUrl":"https://doi.org/10.1145/1937117.1937122","url":null,"abstract":"We propose a method for evaluating the usability of an Application Programming Interface (API) in the context of MATLAB, a high-level programming language. The primary goal is to evaluate whether the participant can develop an accurate mental model of the API based on the code alone. Like traditional usability testing, this method takes place in a lab setting with a facilitator and observers, and a single participant is exposed to a prototype. Unlike traditional usability testing, the prototype is a static text document containing a series of programmatic statements. Rather than performing a task, the participant \"walks through\" the code line by line in an attempt to gain understanding of the system. Using standard usability testing protocols, the facilitators are able to assess whether the participant understands the API, as well as gather preference data between two designs.","PeriodicalId":217446,"journal":{"name":"Workshop on Evaluation and Usability of Programming Languages and Tools","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124170758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}