{"title":"A Report on Modelling Symposium (ModSym)","authors":"Y. R. Reddy, V. Kulkarni","doi":"10.1145/2723742.2723770","DOIUrl":"https://doi.org/10.1145/2723742.2723770","url":null,"abstract":"The principal objective of the Modelling Symposium is to help form an active community working on various aspects of modelling and its applications. The focus is on problems that will become critical three to five years in the future; for example, problem spaces such as enterprises, systems, and cyber-physical systems will be given more importance. The event features a keynote and six invited talks delivered by speakers from academia and industry around the world. ModSym invited students and researchers to submit posters and extended abstracts.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125645880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hybrid Technique for Software Reliability Prediction","authors":"Jayadeep Pati, K. K. Shukla","doi":"10.1145/2723742.2723756","DOIUrl":"https://doi.org/10.1145/2723742.2723756","url":null,"abstract":"Reliability is an important factor of software quality, and accurate prediction of software reliability is a challenging task. Many models exist that predict reliability based on software testing activities. In particular, many software reliability growth models (SRGMs) have been developed, but they rely on unrealistic assumptions, are environment dependent, and their accuracy is questionable. In this paper, we use a time-series approach for software reliability prediction: an ensemble technique called hybrid ARIMA (ARIMA + NN), applied to real-life data on software failures. The paper also gives a comparative analysis of the forecasting performance of the hybrid ARIMA and ARIMA models. Empirical results indicate that a hybrid ARIMA model can improve prediction accuracy.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129761252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How the Repository Driven Test Automation (RDTA) will make test automation more efficient, easier & maintainable","authors":"D. Almog, Yaron Tsubery","doi":"10.1145/2723742.2723762","DOIUrl":"https://doi.org/10.1145/2723742.2723762","url":null,"abstract":"Many testing organizations find themselves lagging behind the development teams on the agile journey. We naturally demand easy integration between all developed software program elements, but do we treat our testing artifacts in the same manner? It is time to make the change: take this opportunity to prepare the right testing infrastructure, one that enables you to reuse and integrate all testing artifacts as pieces of the same puzzle, and to make test automation integrate and operate in a similar manner. The emerging mobile world makes this issue even harder: the versatile and speedy continuous integration it requires multiplies the demand for reuse and efficiency. Proposing the reuse of testing artifacts as a fundamental principle for creating test automation is what makes RDTA a unique approach to building up test automation infrastructure. Keeping attention on two questions, what will be done with your efforts later on, and how can someone else benefit from them, drove us to this new concept for forming test automation. The term software repository here refers to a storage location from which software packages or artifacts may be retrieved for reuse on another system or in other software products. This tutorial explores different aspects of reusing software test automation artifacts and elaborates several practical implications and changes that follow from implementing the new paradigm in a software development organization. A report on an empirical case study of implementing RDTA in a large organization concludes the tutorial, presenting the benefits and some of the obstacles facing this approach. 
The RDTA approach has yet to mature and be fully elaborated; this tutorial will present several new subjects that need to be formally and theoretically modeled and addressed","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128331727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Systemic Requirements of a Software Engineering Learning Environment","authors":"Kirti Garg, Vasudeva Varma","doi":"10.1145/2723742.2723757","DOIUrl":"https://doi.org/10.1145/2723742.2723757","url":null,"abstract":"Software Engineering (SE) educators worldwide are attempting to create learning environments that effectively achieve their desired learning objectives. However, additional needs exist that impact the learning process and the overall quality of a learning environment. We identified two sets of differentiating requirements, Climatic and Systemic, whose inclusion in design can lead to an effective, sustainable, and usable SE learning environment. In this paper, we describe the Systemic requirements, i.e., the desired system-wide capabilities that impact the sustainability of an SE learning environment by affecting its operationalization and use in the short and long term. We also discuss, through a few examples, the interactions between various differentiating requirements. Current SE course design and evaluation treat these as challenges to deal with later, instead of addressing them through conscientious design. Such courses find it hard to sustain and evolve over time, despite using powerful pedagogies. We intend to change this design approach by identifying and recording the various needs (as requirements) and their influence on the learning environment. 
Our aim is to draw attention to these differentiating requirements and to help educators look beyond learning objectives towards a more holistic and systematic design of SE learning environments.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130852591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pravaaha: Mining Android Applications for Discovering API Call Usage Patterns and Trends","authors":"Y. Lamba, Manisha Khattar, A. Sureka","doi":"10.1145/2723742.2723743","DOIUrl":"https://doi.org/10.1145/2723742.2723743","url":null,"abstract":"Software libraries and frameworks, consisting of a collection of Class and Interface definitions, provide a mechanism for code reuse by offering methods, APIs, components (generic functionality) and a support structure for developers to build applications, products and solutions. KitKat, Jelly Bean, Ice Cream Sandwich, Honeycomb and Gingerbread are different open-source versions of Android, one of the most popular mobile platforms in the world. In this paper, we present the results of our large-scale API usage analysis of Android applications, covering 1,120 open-source applications and 17.4 million lines of code. Our work is motivated by the need to mine actual Android API usage, frequent API call usage patterns and trends, in order to understand and generate empirical data on how developers use the mobile platform in their applications. Extracting popular and frequently invoked methods, API packages and API call-usage patterns is useful to both API Producers and API Consumers. For example, API Producers can view the quantitative data on API usage as feedback from users on the relevance, usability and applicability of the respective APIs. 
We conduct a series of experiments analysing Android platform API usage (usage of different packages, usage of methods, usage across categories) and present the results using graphs such as Bubble Charts, Radar Charts and Heat-Maps for effective visualization and for extraction of actionable information.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131885054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SARATHI: Characterization Study on Regression Bugs and Identification of Regression Bug Inducing Changes: A Case-Study on Google Chromium Project","authors":"Manisha Khattar, Y. Lamba, A. Sureka","doi":"10.1145/2723742.2723747","DOIUrl":"https://doi.org/10.1145/2723742.2723747","url":null,"abstract":"As a software system evolves, maintaining the system becomes increasingly difficult. Often, code changes or system patches cause an existing feature to misbehave or fail completely. An issue ticket reporting a defect in a feature that was working earlier is known as a Regression Bug. Running a test suite after every change, to validate newly added features and to catch faults introduced in previously working code, is impractical. As a result, by the time an issue is identified and reported, many changes have been made to the source code, which makes it very difficult for developers to find the regression bug inducing change. Regression bugs are considered inevitable in large and complex software systems [1]. Issue Tracking Systems (ITS) are applications for tracking and managing issue reports and archiving bug reports and feature enhancement requests. Version Control Systems (VCS) are source code control systems recording the author, timestamp, commit message and modified files of each change. We first conduct an in-depth characterization study of regression bugs by mining the issue tracking system dataset of a large and complex software system, the Google Chromium project. We then describe our solution approach for finding the regression bug inducing change, based on mining ITS and VCS data. 
We build a recommendation engine, Sarathi, to assist a bug fixer in locating a regression bug inducing change, and validate the system on the real-world Google Chromium project.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132885809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simplifying Web Programming","authors":"Nishant Sinha, Rezwana Karim, Monika Gupta","doi":"10.1145/2723742.2723750","DOIUrl":"https://doi.org/10.1145/2723742.2723750","url":null,"abstract":"Modern web programming is plagued by a jungle of heterogeneous programming frameworks and lacks adequate abstractions for end-to-end rapid, structured design and development. We studied the current problems faced by developers using an online survey, and found that integrating client-side interactivity with the back-end is a major source of inefficiency. Based on the reported issues, we developed a new programming environment, called WebNat, to reduce the burden of client-server programming. WebNat makes it easy to specify bindings of client-side views with server-side data and provides multiple abstractions that enable succinct specifications of interactive web applications. We conducted a user study to understand its usefulness and barriers to adoption. Our participants were able to learn and use WebNat in less than 2 hours, indicating a minimal learning curve. We also discovered that although novices embrace the system readily, experienced developers are more cautious about adopting a new web programming framework.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133144012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Object-Oriented Software Maintainability using Hybrid Neural Network with Parallel Computing Concept","authors":"L. Kumar, S. K. Rath","doi":"10.1145/2723742.2723752","DOIUrl":"https://doi.org/10.1145/2723742.2723752","url":null,"abstract":"Software maintenance is an important aspect of the software development life cycle; hence prior estimation of maintenance effort plays a vital role. Existing approaches for maintainability estimation are mostly based on regression analysis and neural networks, and numerous software metrics are used as input for estimation. In this study, Object-Oriented software metrics are considered as the requisite input data for designing a model that estimates the maintainability of Object-Oriented software. Models for estimating maintainability are designed using a parallel computing implementation of a Neuro-Genetic algorithm (a hybrid approach combining a neural network and a genetic algorithm). This technique is employed to estimate the software maintainability of two case studies, the User Interface System (UIMS) and the Quality Evaluation System (QUES). This paper also examines the effectiveness of feature reduction techniques such as rough set analysis (RSA) and principal component analysis (PCA). The results show that RSA and PCA obtain better results for UIMS and QUES, respectively. 
Further, it is observed that the parallel computing concept helps accelerate the training procedure of the neural network model.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128060507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lack of Conceptual Cohesion of Methods: A new alternative to Lack Of Cohesion of Methods","authors":"Vaibhav Jain, Arpit Gupta","doi":"10.1145/2723742.2723753","DOIUrl":"https://doi.org/10.1145/2723742.2723753","url":null,"abstract":"While often defined in informal ways, class cohesion reflects important properties of modules in a software system. High class cohesion is one of the desirable properties in Object Oriented (OO) analysis, as it supports program comprehension, testing, reusability and maintainability. Cohesion metrics have been used for quality assessment, fault prediction, software modularization, etc. Existing class cohesion metrics are largely based on structural information from the source code, such as attribute references in class methods, and reflect particular interpretations of cohesion. However, looking only at the structural aspect of cohesion is not sufficient to specify class cohesion completely and accurately, so other aspects of cohesion, such as the conceptual aspect, deserve attention. Yet only a few conceptual metrics have been proposed so far. In our work, we propose a new set of cohesion metrics named LCCM (Lack of Conceptual Cohesion of Methods). These metrics are a conceptual version of the widely used LCOM (Lack of Cohesion of Methods) metrics. LCOM metrics measure cohesion using structural information extracted entirely from the source code (e.g., attribute references in methods and method calls), capturing the degree to which the elements of a class belong together from a structural point of view. The proposed LCCM metrics instead use conceptual concerns embedded in source code entities to measure class cohesion. They are based on the analysis of latent topics embedded in the comments and identifiers in source code, extracted with Latent Dirichlet Allocation (LDA), a topic modeling tool. 
These topics are used by the proposed LCCM metrics to define similarity between the methods of a class, and on the basis of this similarity the LCCM metrics define the cohesion of the class. To verify the proposed metrics, a case study on an open-source Java software system, called Rhino, is performed. The case study indicates that the novel cohesion metrics capture different aspects of class cohesion compared to the existing cohesion metrics.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125154867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recommendation and Regression Test Suite Optimization Using Heuristic Algorithms","authors":"K. Prakash, S. Prasad, D. G. Krishna","doi":"10.1145/2723742.2723765","DOIUrl":"https://doi.org/10.1145/2723742.2723765","url":null,"abstract":"In the Software Development Life Cycle, testing is an integral and important phase. It is estimated that close to 45% of project cost is allocated to testing. Defect removal efficiency is directly proportional to the rigor of the testing and the number of test cycles. Given this prelude, an important optimization goal is to reduce testing time and cost without compromising on quality and coverage. We revisit this popular research and industry problem from a historical data perspective. To this end, we follow an approach that minimizes the available test suites and recommends N test cases based on multiple heuristics. The heuristics can be derived from the requirements and inputs of the Test Manager, Test Lead and/or Test Director. The N test cases to be recommended are derived by executing randomized algorithms such as Random Forest and Genetic Algorithm. These algorithms are fed with historically derived inputs such as test case execution frequency, test case failure patterns, changed feature patterns, and bug fixes and their associations. The recommended test suite is further optimized using a two-dimensional approach: 
test-case-specific vertical constraints, such as the distribution of environments and the distribution of features, and test suite composition parameters, such as golden test cases and sanity test cases, which serve as horizontal parameters.","PeriodicalId":288030,"journal":{"name":"Proceedings of the 8th India Software Engineering Conference","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127919858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}