{"title":"Prioritizing User Feedback from Twitter: A Survey Report","authors":"Emitzá Guzmán, M. Ibrahim, M. Glinz","doi":"10.1109/CSI-SE.2017.4","DOIUrl":"https://doi.org/10.1109/CSI-SE.2017.4","url":null,"abstract":"Twitter messages (tweets) contain important information for software and requirements evolution, such as feature requests, bug reports and feature shortcoming descriptions. For this reason, Twitter is an important source for crowd-based requirements engineering and software evolution. However, a manual analysis of this information is unfeasible due to the large number of tweets, its unstructured nature and varying quality. Therefore, automatic analysis techniques are needed for, e.g., summarizing, classifying and prioritizing tweets. In this work we present a survey with 84 software engineering practitioners and researchers that studies the tweet attributes that are most telling of tweet priority when performing software evolution tasks. We believe that our results can be used to implement mechanisms for prioritizing user feedback with social components. Thus, it can be helpful for enhancing crowd-based requirements engineering and software evolution.","PeriodicalId":431605,"journal":{"name":"2017 IEEE/ACM 4th International Workshop on CrowdSourcing in Software Engineering (CSI-SE)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132163304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crowd-Based Programming for Reactive Systems","authors":"D. Harel, Idan Heimlich, R. Marelly, Assaf Marron","doi":"10.1109/CSI-SE.2017.3","DOIUrl":"https://doi.org/10.1109/CSI-SE.2017.3","url":null,"abstract":"End-user applications aimed at the public in general (mobile and web applications, games, etc.) are usually developed with feedback from only a tiny fraction of the millions of intended users, and are thus built under significant uncertainty. The developer cannot really tell a priori which features the users will like, which they will dislike, and which ones will help create the desired outcome, such as high usage or increased revenue. In these cases, providing adaptive capabilities can be the key factor in the application's success. Existing self-adaptive techniques can provide some of the needed capabilities, but they too must be planned, and leave the developers, and much of the development process, \"out of the loop\". We propose a development environment that allows the wisdom of the crowd to influence the very structure and flow of the program being created, by voting upon behavioral choices as they are observed in early versions of the working program. The approach still allows the developers to retain known desired behaviors, and to enforce constraints on crowd-driven changes. The developers can also react to ongoing crowd-programmed feedback throughout the entire lifetime of the application.","PeriodicalId":431605,"journal":{"name":"2017 IEEE/ACM 4th International Workshop on CrowdSourcing in Software Engineering (CSI-SE)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133462477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preliminary Findings on Software Engineering Practices in Civic Hackathons","authors":"Kiev Gama","doi":"10.1109/CSI-SE.2017.5","DOIUrl":"https://doi.org/10.1109/CSI-SE.2017.5","url":null,"abstract":"Civic hackathons gained momentum in the last years, mainly propelled by city halls and government agencies as a way to explore public data repositories. These initiatives became an attempt to crowdsource the development of software applications targeting government transparency and urban life, under the smart cities umbrella. Some authors have been criticizing the results of these competitions, complaining about the usefulness and quality of the software that is produced. However, academic literature has much anecdotal evidence on that, being scarce on empirical analysis of civic hackathons. Therefore, we intended to gather preliminary data not only to help verifying those claims but also to understand how teams in these competitions are tackling the different activities in their software development process, from requirements to application release and maintenance. In this work, we present preliminary results of these findings.","PeriodicalId":431605,"journal":{"name":"2017 IEEE/ACM 4th International Workshop on CrowdSourcing in Software Engineering (CSI-SE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123963413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enriching Capstone Project-Based Learning Experiences Using a Crowdsourcing Recommender Engine","authors":"Juan Diaz-Mosquera, Pablo Sanabria, H. A. Neyem, Denis Parra, Jaime C. Navón","doi":"10.1109/CSI-SE.2017.1","DOIUrl":"https://doi.org/10.1109/CSI-SE.2017.1","url":null,"abstract":"Capstone project-based learning courses generate a suitable space where students can put into action knowledge specific to an area. In the case of Software Engineering (SE), students must apply knowledge at the level of Analysis, Design, Development, Implementation and Management of Software Projects. There is a large number of supportive resources for SE that one can find on the web, however, information overload ends up saturating the students who wish to find resources more accurate depending on their needs. This is why we propose a crowdsourcing recommender engine as part of an educational software platform. This engine based its recommendations on content from StackExchange posts using the project's profile in which a student is currently working. To generate the project's profile, our engine takes advantage of the information stored by students in the aforementioned platform. Content-based algorithms based on Okapi BM25 and Latent Dirichlet Allocation (LDA) are used to provide suitable recommendations. The evaluation of the engine was held with students from the capstone course in SE of the University Catholic of Chile. Results show that Cosine similarity over traditional bag-of-words TF-IDF content vectors yield interesting results, but they are outperformed by the integration of BM25 with LDA.","PeriodicalId":431605,"journal":{"name":"2017 IEEE/ACM 4th International Workshop on CrowdSourcing in Software Engineering (CSI-SE)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134080760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Good, the Bad and the Ugly: An Onboard Journey in Software Crowdsourcing Competitive Model","authors":"L. Machado, A. L. Zanatta, S. Marczak, R. Prikladnicki","doi":"10.1109/CSI-SE.2017.6","DOIUrl":"https://doi.org/10.1109/CSI-SE.2017.6","url":null,"abstract":"This paper reports on a study that aimed to characterize how crowd workers experienced for the first time the use of TopCoder, a crowdsourcing platform for software development that implements a competitive model. We explored how they perceived collaboration in this setting, what challenges they faced to perform a single task, and reflect upon their suggestions to overcome the challenges their experienced. More specifically, we asked graduate students to select a development challenge task, work on it, and submit their contribution to the platform. Early analysis of the results: (1) reveal the potential benefits of software crowdsourcing from the crowd perspective, (2) discuss collaboration in a competitive model, and (3) highlight that the onboarding process for newcomers is seen as challenging. We discuss our findings in light of current literature.","PeriodicalId":431605,"journal":{"name":"2017 IEEE/ACM 4th International Workshop on CrowdSourcing in Software Engineering (CSI-SE)","volume":"330 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115970175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Model Inspection with Crowdsourcing","authors":"D. Winkler, M. Sabou, S. Petrovic, Gisele Carneiro, Marcos Kalinowski, S. Biffl","doi":"10.1109/CSI-SE.2017.2","DOIUrl":"https://doi.org/10.1109/CSI-SE.2017.2","url":null,"abstract":"Traditional Software Inspection is a well-established approach to identify defects in software artifacts and models early and efficiently. However, insufficient method and tool support hinder efficient defect detection in large software models. Recent Human Computation and Crowdsourcing processes may help to overcome this limitation by splitting complex inspection artifacts into smaller parts including a better control over defect detection tasks and increasing the scalability of inspection tasks. Therefore, we introduce a Crowdsourcing-Based Inspection (CSI) process with tool support with focus on inspection teams and the quality of defect detection. We evaluate the CSI process in a feasibility study involving 63 inspectors using the CSI process and 12 inspectors using a traditional best-practice inspection process. The CSI process was found useful by the participants. Although the preliminary results of the study were promising, the CSI process should be further investigated with typical large software engineering models.","PeriodicalId":431605,"journal":{"name":"2017 IEEE/ACM 4th International Workshop on CrowdSourcing in Software Engineering (CSI-SE)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128347847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}