{"title":"A Report on Software Engineering Education Workshop (SEED) colocated with ISEC 2015","authors":"Bimlesh Wadhwa, Bharat Gera, A. Sureka","doi":"10.1145/2723742.2723769","DOIUrl":"https://doi.org/10.1145/2723742.2723769","url":null,"abstract":"SEED 2015, the 1st International Workshop on Software Engineering Education (co-located with the India Software Engineering Conference - ISEC 2015, 18-20 February), with the theme 'Evolving Body of Knowledge of Software Engineering - New Ideas and Innovative Methods in the Era of Cloud Computing, Big Data and Internet of Things (IoT)', aims to bring together Software Engineering (SE) educators and practitioners in a dialogue to build a shared understanding of the Software Engineering curriculum and of specific issues in the teaching and learning of Software Engineering with respect to the emerging topics of Cloud Computing, Big Data and the Internet of Things (IoT). SEED 2015 invited Position Papers (a maximum of 6 pages long) in the area of Software Engineering. The workshop accepted 7 papers and consisted of invited talks and group discussions in addition to the position paper presentations.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126069180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ahaan: Software Process Intelligence: Mining Software Process Data for Extracting Actionable Information","authors":"A. Sureka, Atul Kumar, Shrinath Gupta","doi":"10.1145/2723742.2723763","DOIUrl":"https://doi.org/10.1145/2723742.2723763","url":null,"abstract":"Software processes consist of a structured set of activities performed during the creation and maintenance of software products. The construction and subsequent maintenance of software are facilitated by several applications and tools. Some of these tools, such as Issue Tracking Systems (ITS) and Version Control Systems (VCS), can be classified as Process Aware Information Systems (PAIS), logging data that consists of events, activities, time-stamps, users or actors, and context-specific information. Such event or trace data generated by information systems used during software construction (as part of the software development process) contains valuable information which can be mined to gain useful insights and actionable information. Software Process Intelligence (SPI) is an emerging and evolving discipline involving the mining and analysis of software processes. It is modeled on the lines of Business Process Intelligence (BPI), but with a focus on software processes and their applicability in software systems. In this paper, we present a generic framework for Software Process Intelligence and some of its applications.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131722450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Approach of Generating Test Requirements for Agile Software Development","authors":"S. Tiwari, Atul Gupta","doi":"10.1145/2723742.2723761","DOIUrl":"https://doi.org/10.1145/2723742.2723761","url":null,"abstract":"Test driven development, an agile programming approach, has been advocated as a promising approach for developing quality code. This approach aims at developing minimal, non-redundant code by accepting only one test case at a time. However, little has been deliberated in the literature on how to systematically generate test cases for incremental coding. In this paper, we propose an approach for generating test requirements from use case specifications for test driven development. The approach consists of first constructing actor-oriented activity diagrams from use case specifications, and then generating systematic, prioritized test sequences from these activity diagrams for developing code incrementally. We demonstrate the applicability of the proposed approach using an illustrative example and discuss other related issues. We also present a comparative study of the effectiveness of test sequences generated by our approach against some other existing use case based test case generation approaches.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121853491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Will this be Quick?: A Case Study of Bug Resolution Times across Industrial Projects","authors":"Subhajit Datta, Prasanth Lade","doi":"10.1145/2723742.2723744","DOIUrl":"https://doi.org/10.1145/2723742.2723744","url":null,"abstract":"Resolution of problem tickets is a source of significant revenue in the worldwide software services industry. Due to the high volume of problem tickets in any large scale customer engagement, automated techniques are necessary to segregate related incoming tickets into groups. Existing techniques focus on this classification problem. In this paper, we present a case study built around the position that predicting the category of resolution times within a class of tickets and also the actual resolution times, is strongly beneficial to ticket resolution. We present an approach based on topic analysis to predict the category of resolution times of incoming tickets and validate it on a data-set of 49,000+ problem tickets across 14 classes from four real-life projects. To establish the effectiveness of our approach, we compare topic features with traditional features for both classification and regression problems. Our results indicate the promise of topic analysis based approaches for large scale problem ticket management.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134334700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing Dependencies in an Industrial Automation Engineering System","authors":"Atul Kumar, R. Jetley, Anil R. Nair, A. Rath, A. Vaidyanathan","doi":"10.1145/2723742.2723748","DOIUrl":"https://doi.org/10.1145/2723742.2723748","url":null,"abstract":"This paper presents an approach to compute and visualize dependencies between elements of an industrial automation system. The various system elements are represented by a network of vertices and edges in a graph, which in turn is used for computing dependencies, assessing the impact of a change and providing a mechanism for visualizing relationships between the individual elements. The final goal of this effort is to improve the engineering efficiency by providing a system that computes and visualizes the engineering project data dependencies and provides the user with a visually-driven rapid-search and navigation mechanism.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133132499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Verb-based Semantic Modelling and Analysis of Textual Requirements","authors":"Shubhashis Sengupta, Roshni Ramnani, Subhabrata Das, Anitha Chandran","doi":"10.1145/2723742.2723745","DOIUrl":"https://doi.org/10.1145/2723742.2723745","url":null,"abstract":"Automated machine analysis of natural language requirements poses several challenges. Complex requirements such as functional requirements and use cases are hard to parse and analyze, the language itself is un-constrained, the flow of requirements may be haphazard, and one requirement may contradict another - to name a few challenges. In this paper, we present a lightweight semantic modeling technique through natural language processing to filter requirements and create a semi-formal semantic network of requirement sentences. We employ novel techniques of classification of verbs used in requirements, semantic role labeling, discourse identification, and a few verb entailment and dependency relationships to generate a lightweight semantic network and critique the requirements. We discuss the design of the model and some early results obtained from analyzing real-life industrial requirements.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129433264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Test Generation Using Concolic Testing","authors":"Koushik Sen","doi":"10.1145/2723742.2723768","DOIUrl":"https://doi.org/10.1145/2723742.2723768","url":null,"abstract":"In this talk, I will discuss the recent advances and challenges in concolic testing and symbolic execution. Concolic testing, also known as directed automated random testing (DART) or dynamic symbolic execution, is an efficient way to automatically and systematically generate test inputs for programs. Concolic testing uses a combination of runtime symbolic execution and automated theorem proving techniques to automatically generate non-redundant and exhaustive test inputs. Concolic testing has inspired the development of several industrial and academic automated testing and security tools such as PEX, SAGE, and YOGI at Microsoft, Apollo at IBM, Conbol at Samsung, and CUTE, jCUTE, CATG, Jalangi, SPLAT, BitBlaze, jFuzz, Oasis, and SmartFuzz in academia. A central reason behind the wide adoption of concolic testing is that, while concolic testing uses program analysis and automated theorem proving techniques internally, it exposes a testing usage model that is familiar to most software developers. A key challenge for concolic testing techniques is scalability to large realistic programs: often the number of feasible execution paths of a program increases exponentially with the length of an execution path. I will describe MultiSE, a new technique for merging states incrementally during symbolic execution, without using auxiliary variables. The key idea of MultiSE is based on an alternative representation of the state, where we map each variable, including the program counter, to a set of guarded symbolic expressions called a value summary. MultiSE has several advantages over conventional DSE and state merging techniques: 1) value summaries enable sharing of symbolic expressions and path constraints along multiple paths, 2) value summaries avoid redundant execution, and 3) MultiSE does not introduce auxiliary symbolic values, which enables it to make progress even when merging values not supported by the constraint solver, such as floating-point or function values. We have implemented MultiSE for JavaScript programs in a publicly available open-source tool. Our evaluation of MultiSE on several programs shows that MultiSE can run significantly faster than traditional symbolic execution.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125080478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Applying Design Principles in Practice","authors":"Tushar Sharma, Ganesh Samarthyam, G. Suryanarayana","doi":"10.1145/2723742.2723764","DOIUrl":"https://doi.org/10.1145/2723742.2723764","url":null,"abstract":"It is widely recognized that the systematic application of relevant design principles plays a key role in taming the complexity involved in real-world software development. In our experience as consultant architects and trainers on software architecture and design in a corporate setting, we find that software engineers have a theoretical understanding of design principles but sorely lack knowledge about how to properly apply them in real-world situations to achieve high-quality design. This half-day tutorial is geared towards bridging this gap. It first introduces fundamental design principles (viz., abstraction, encapsulation, modularization, and hierarchy) and describes how these can be applied using a set of corresponding enabling techniques. Next, it discusses how design smells can be viewed as violations of these enabling techniques, and explains how these design smells can be refactored via the application of relevant design principles. Finally, it highlights useful tools, processes, and practical techniques that support the application of design principles in real-world software projects.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127756875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Naturalness of Natural Language Artifacts in Software","authors":"G. Sridhara, Vibha Sinha, Senthil Mani","doi":"10.1145/2723742.2723758","DOIUrl":"https://doi.org/10.1145/2723742.2723758","url":null,"abstract":"We present a study on the naturalness of the natural language artifacts in software. Naturalness is essentially repetitiveness or predictability. By natural language artifacts, we mean source code comments, revision history messages, bug reports and so on. We measure 'naturalness' using a standard measure, cross-entropy or perplexity, from the widely used N-gram models. Previously, Hindle et al. demonstrated empirically that source code is comparatively more repetitive or regular (i.e., more natural) than traditional English text. A question that logically follows from their work is the naturalness of other artifacts associated with software. We present our findings on source code comments, commit logs, bug reports, string messages and content from the popular question and answer forum, StackOverflow. Each of the artifacts that we examine is a natural language artifact associated with software. However, they do not exhibit the same amount of regularity (naturalness). Commit logs were the most regular, followed by string literal messages and source code comments. Content from StackOverflow (viz., titles, questions and answers) showed a behavior similar to traditional English text, i.e., comparatively less regularity. Bug reports from industrial projects exhibited more regularity than bug reports from open source projects, whose naturalness resembled that of typical English text. Our findings have implications for the feasibility of building tools such as comment and bug report completion engines. We describe a next-word prediction tool that we built using the N-gram language model. This tool achieved an accuracy ranging from 70 to 90% on commit messages in different projects. It also achieved an accuracy ranging from 56 to 78% on source comments. We also present a part-of-speech based analysis of words that are easy to predict and words that are difficult to predict.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114219924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evidence Engineering","authors":"A. Mockus","doi":"10.1145/2723742.2723767","DOIUrl":"https://doi.org/10.1145/2723742.2723767","url":null,"abstract":"The costs of computing have decreased a billion times over half a century. The focus of software engineering has consequently transformed from trying to squeeze as much as possible from every compute cycle and every bit of memory, to improving developer productivity, and, as of late, to engineering user experiences and behaviors. As computing becomes a commodity, software is omnipresent in all parts of life and, therefore, it either helps end users make decisions or it makes decisions for them. Because most users are not able to understand software systems or to articulate their needs, software systems have both to collect massive amounts of operational data related to user activities and to analyze and use that data to provide user experiences that lead to desired outcomes, e.g., increasing sales revenue or the quality of software (if the user happens to be a software developer). It no longer suffices to deliver software that provides, for example, an entry field for a specific piece of data. Instead, the software has to ensure that users can and will enter the relevant data, or it has to obtain the data by observing user behavior. Moreover, the software has to ensure that the resulting data reflects the intended quantities, and that the quality of that data is sufficient to make important decisions either automatically or with human support. Such software is engineered to provide accurate and actionable evidence and, therefore, requires novel approaches to design, implement, test, and operate it. The criteria for success demand much more than performing a well-defined task according to specification. Software has to provide evidence that is both accurate and also leads to the intended user behavior. In contexts where the desired user behaviors are relatively well defined, some existing software systems achieve these goals through detailed measurement of behavior and massive use of AB testing (in which two samples of users are provided slightly different versions of the software in order to estimate the effect these differences have on user activity). It is not clear if and how these approaches could generalize to settings where the desired behaviors are less clearly defined or vary among users. As operation and measurement become increasingly a part of software development, the separation between software tools and end-user software is increasingly blurred. Similarly, the measurement associated with the testing and use of software is increasingly becoming an integral part of the software delivered to users. Software engineering needs to catch up with these realities by adjusting the topics of its study. Software construction, development, build, delivery, and operation will become increasingly critical tools and an integral part of the software system. Simply concerning ourselves with architectures and languages to support scalable computation and storage will not be enough. Software systems will have to produce compelling evidence, not s","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133144233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}