Empirical Software Engineering: Latest Articles

Common challenges of deep reinforcement learning applications development: an empirical study
IF 4.1 · CAS Tier 2 · Computer Science
Empirical Software Engineering | Pub Date: 2024-06-14 | DOI: 10.1007/s10664-024-10500-5
Mohammad Mehdi Morovati, Florian Tambon, Mina Taraghi, Amin Nikanjam, Foutse Khomh

Abstract: Machine Learning (ML) is increasingly being adopted in different industries. Deep Reinforcement Learning (DRL) is a subdomain of ML used to produce intelligent agents. Despite recent developments in DRL technology, the main challenges that developers face in the development of DRL applications are still unknown. To fill this gap, in this paper, we conduct a large-scale empirical study of 927 DRL-related posts extracted from Stack Overflow, the most popular Q&A platform in the software community. Through the process of labeling and categorizing extracted posts, we created a taxonomy of common challenges encountered in the development of DRL applications, along with their corresponding popularity levels. This taxonomy has been validated through a survey involving 65 DRL developers. Results show that at least 45% of developers experienced 18 of the 21 challenges identified in the taxonomy. The most frequent sources of difficulty during the development of DRL applications are Comprehension, API usage, and Design problems, while Parallel processing and DRL libraries/frameworks are classified as the most difficult challenges to address, with respect to the time required to receive an accepted answer. We hope that the research community will leverage this taxonomy to develop efficient strategies to address the identified challenges and improve the quality of DRL applications.

Citations: 0
Studying the explanations for the automated prediction of bug and non-bug issues using LIME and SHAP
IF 4.1 · CAS Tier 2 · Computer Science
Empirical Software Engineering | Pub Date: 2024-06-13 | DOI: 10.1007/s10664-024-10469-1
Lukas Schulte, Benjamin Ledel, Steffen Herbold

Abstract:
Context: The identification of bugs within issues reported to an issue tracking system is crucial for triage. Machine learning models have shown promising results for this task. However, we have only limited knowledge of how such models identify bugs. Explainable AI methods like LIME and SHAP can be used to increase this knowledge.
Objective: We want to understand whether explainable AI provides explanations that are reasonable to us as humans and align with our assumptions about the model's decision-making. We also want to know whether the quality of predictions is correlated with the quality of explanations.
Methods: We conduct a study in which we rate LIME and SHAP explanations based on how well they explain the outcome of an issue type prediction model, i.e., whether they align with our expectations and help us understand the underlying machine learning model.
Results: We found that both LIME and SHAP give reasonable explanations and that correct predictions are well explained. Further, we found that SHAP outperforms LIME due to lower ambiguity and higher contextuality, which can be attributed to the ability of the deep SHAP variant to capture sentence fragments.
Conclusion: We conclude that the model finds explainable signals for both bugs and non-bugs. We also recommend that research dealing with the quality of explanations for classification tasks report and investigate rater agreement, since the rating of explanations is highly subjective.

Citations: 0
How far are we with automated machine learning? Characterization and challenges of AutoML toolkits
IF 4.1 · CAS Tier 2 · Computer Science
Empirical Software Engineering | Pub Date: 2024-06-13 | DOI: 10.1007/s10664-024-10450-y
Md Abdullah Al Alamin, Gias Uddin

Abstract: Automated Machine Learning (AutoML) toolkits are low/no-code software that aim to democratize ML system application development by ensuring rapid prototyping of ML models and by enabling collaboration across different stakeholders in ML system design (e.g., domain experts, data scientists). It is thus important to know the state of current AutoML toolkits and the challenges ML practitioners face while using those toolkits. In this paper, we first offer a characterization of currently available AutoML toolkits by analyzing 37 top AutoML tools and platforms. We find that the top AutoML platforms are mostly cloud-based, and most of the tools are optimized for the adoption of shallow ML models. Second, we present an empirical study of 14.3K AutoML-related posts from Stack Overflow (SO) that we analyzed using the topic modeling algorithm LDA (Latent Dirichlet Allocation) to understand the challenges ML practitioners face while using the AutoML toolkits. We find 13 topics in the AutoML-related discussions in SO, grouped into four categories: MLOps (43% of all questions), Model (28% of questions), Data (27% of questions), and Documentation (2% of questions). Most questions are asked during the Model training (29%) and Data preparation (25%) phases. AutoML practitioners find the MLOps topic category most challenging, and topics related to the MLOps category are the most prevalent and popular for cloud-based AutoML toolkits. Based on our study findings, we provide 15 recommendations to improve the adoption and development of AutoML toolkits.

Citations: 0
An empirical study of fault localization in Python programs
IF 4.1 · CAS Tier 2 · Computer Science
Empirical Software Engineering | Pub Date: 2024-06-13 | DOI: 10.1007/s10664-024-10475-3
Mohammad Rezaalipour, Carlo A. Furia

Abstract: Despite Python's massive popularity as a programming language, especially in novel domains like data science programs, there is comparatively little research about fault localization that targets Python. Even though it is plausible that several findings about programming languages like C/C++ and Java, the most common choices for fault localization research, carry over to other languages, whether the dynamic nature of Python and how the language is used in practice affect the capabilities of classic fault localization approaches remain open questions to investigate. This paper is the first multi-family large-scale empirical study of fault localization on real-world Python programs and faults. Using Zou et al.'s recent large-scale empirical study of fault localization in Java (Zou et al. 2021) as the basis of our study, we investigated the effectiveness (i.e., localization accuracy), efficiency (i.e., runtime performance), and other features (e.g., different entity granularities) of seven well-known fault-localization techniques in four families (spectrum-based, mutation-based, predicate switching, and stack-trace based) on 135 faults from 13 open-source Python projects from the BugsInPy curated collection (Widyasari et al. 2020). The results replicate for Python several results known about Java, and shed light on whether Python's peculiarities affect the capabilities of fault localization. The replication package that accompanies this paper includes detailed data about our experiments, as well as the tool FauxPy that we implemented to conduct the study.

Citations: 0
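As background on the spectrum-based family studied here, this is a minimal sketch of the widely used Ochiai suspiciousness formula over invented per-line coverage counts. The paper's actual implementation is the authors' tool FauxPy; this snippet is not taken from it.

```python
import math

def ochiai(failing_cov, passing_cov, total_failing):
    """Ochiai suspiciousness: ef / sqrt(total_failing * (ef + ep)),
    where ef/ep = number of failing/passing tests that covered the line."""
    scores = {}
    for line in set(failing_cov) | set(passing_cov):
        ef = failing_cov.get(line, 0)
        ep = passing_cov.get(line, 0)
        denom = math.sqrt(total_failing * (ef + ep))
        scores[line] = ef / denom if denom else 0.0
    return scores

# Invented coverage: line number -> how many failing/passing tests executed it.
failing = {1: 2, 3: 2}   # line 3 is the hypothetical faulty line
passing = {1: 3, 2: 3}
scores = ochiai(failing, passing, total_failing=2)

# Rank lines from most to least suspicious.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # -> [3, 1, 2]
```

Line 3 ranks first because it is executed by every failing test and no passing test, which is exactly the signal spectrum-based techniques exploit.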
Utilization of pre-trained language models for adapter-based knowledge transfer in software engineering
IF 4.1 · CAS Tier 2 · Computer Science
Empirical Software Engineering | Pub Date: 2024-06-13 | DOI: 10.1007/s10664-024-10457-5
Iman Saberi, Fatemeh Fard, Fuxiang Chen

Abstract: Software Engineering (SE) Pre-trained Language Models (PLMs), such as CodeBERT, are pre-trained on large code corpora, and their learned knowledge has shown success in transferring into downstream tasks (e.g., code clone detection) through the fine-tuning of PLMs. In Natural Language Processing (NLP), an alternative for transferring the knowledge of PLMs is the adapter, a compact and parameter-efficient module that is inserted into a PLM. Although the use of adapters has shown promising results in many NLP-based downstream tasks, their application and exploration in SE-based downstream tasks are limited. Here, we study knowledge transfer using adapters on multiple downstream tasks, including cloze test, code clone detection, and code summarization. These adapters are trained on code corpora and are inserted into a PLM that is pre-trained on English corpora or code corpora; we call these PLMs NL-PLM and C-PLM, respectively. We observed an improvement in results using NL-PLM over a PLM that does not have adapters, suggesting that adapters can transfer and utilize useful knowledge from NL-PLM to SE tasks. The results are sometimes on par with or exceed those of C-PLM, while being more efficient in terms of the number of parameters and training time. Interestingly, adapters inserted into a C-PLM generally yield better results than a traditionally fine-tuned C-PLM. Our results open new directions to build more compact models for SE tasks.

Citations: 0
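To make the adapter idea concrete, the standard bottleneck design (down-projection, nonlinearity, up-projection, residual connection) can be sketched in a few lines of NumPy. The dimensions and the near-zero initialization follow the common Houlsby-style recipe and are assumptions here, not details taken from this paper.

```python
import numpy as np

class BottleneckAdapter:
    """Residual bottleneck adapter, as typically inserted into a frozen
    transformer layer; only the two small projections are trained."""

    def __init__(self, hidden=768, bottleneck=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w_down = rng.normal(0.0, 0.02, (hidden, bottleneck))
        self.w_up = np.zeros((bottleneck, hidden))  # near-identity at init

    def __call__(self, h):
        # h: (batch, hidden) activations from the frozen PLM layer.
        return h + np.maximum(h @ self.w_down, 0.0) @ self.w_up

adapter = BottleneckAdapter()
h = np.random.default_rng(1).normal(size=(4, 768))
out = adapter(h)  # identical to h at initialization (zero up-projection)

# 2 * hidden * bottleneck trainable parameters, a small fraction of the
# hidden * hidden weights in even a single attention projection matrix.
n_params = adapter.w_down.size + adapter.w_up.size
print(out.shape, n_params)
```

The zero-initialized up-projection makes the adapted layer start as an identity function, so inserting the adapter cannot degrade the frozen PLM before training begins.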
Adoption of automated software engineering tools and techniques in Thailand
IF 4.1 · CAS Tier 2 · Computer Science
Empirical Software Engineering | Pub Date: 2024-06-10 | DOI: 10.1007/s10664-024-10472-6
Chaiyong Ragkhitwetsagul, Jens Krinke, Morakot Choetkiertikul, Thanwadee Sunetnanta, Federica Sarro

Abstract: Readiness for the adoption of Automated Software Engineering (ASE) tools and techniques can vary according to the size and maturity of software companies. ASE tools and techniques have been adopted by large or ultra-large software companies. However, little is known about the adoption of ASE tools and techniques in small and medium-sized software enterprises (SSMEs) in emerging countries, and the challenges faced by such companies. We study the adoption of ASE tools and techniques for software measurement, static code analysis, continuous integration, and software testing, and the respective challenges faced by software developers in Thailand, a developing country with a growing software economy that, as in other developing countries, mainly consists of SSMEs. Based on the answers of 103 Thai participants in an online survey, we found that Thai software developers are somewhat familiar with ASE tools and agree that adopting such tools would be beneficial. Most developers do not use software measurement or static code analysis tools due to a lack of knowledge or experience, but agree that their use would be beneficial. Continuous integration tools have been used with some difficulties. Lastly, although automated testing tools are adopted despite several serious challenges, many developers still test their software manually. We call for ASE tools to be made easier to use, in order to lower the barrier to their adoption in SSMEs in developing countries.

Citations: 0
Understanding the characteristics and the role of visual issue reports
IF 4.1 · CAS Tier 2 · Computer Science
Empirical Software Engineering | Pub Date: 2024-06-10 | DOI: 10.1007/s10664-024-10459-3
Hiroki Kuramoto, Dong Wang, Masanari Kondo, Yutaro Kashiwa, Yasutaka Kamei, Naoyasu Ubayashi

Abstract: Issue reports are a pivotal interface between developers and users for receiving information about bugs in their products. In practice, reproducing those bugs is challenging, since issue reports often contain incorrect information or lack sufficient information. Furthermore, the poor quality of issue reports can delay the entire bug-fixing process. To enhance bug comprehension and facilitate bug reproduction, GitHub Issues allows users to embed visuals such as images and videos to complement the textual description. Hence, we conduct an empirical study on 34 active GitHub repositories to quantitatively analyze the difference between visual issue reports and non-visual ones, and to qualitatively analyze the characteristics of visuals and their usage across bug types. Our results show that visual issue reports have a significantly higher probability of reporting bugs. Visual reports also tend to receive the first comment and complete the conversation in a relatively shorter time. Visuals are frequently used to present the program behavior and the user interface, with the major purpose of introducing problems in reports. Additionally, we observe that visuals are commonly used to report GUI-related bugs, but are rarely used to report configuration bugs in comparison to non-visual issue reports. To summarize, our work highlights the role visuals play in the bug-fixing process and lays the foundation for future research to support bug comprehension by exploiting visuals.

Citations: 0
Toward effective secure code reviews: an empirical study of security-related coding weaknesses
IF 4.1 · CAS Tier 2 · Computer Science
Empirical Software Engineering | Pub Date: 2024-06-08 | DOI: 10.1007/s10664-024-10496-y
Wachiraphan Charoenwet, Patanamon Thongtanunam, Van-Thuan Pham, Christoph Treude

Abstract: Identifying security issues early is encouraged to reduce the latent negative impacts on software systems. Code review is a widely used method that allows developers to manually inspect modified code, catching security issues during a software development cycle. However, existing code review studies often focus on known vulnerabilities, neglecting coding weaknesses, which can introduce real-world security issues that are more visible through code review. The practices of code reviews in identifying such coding weaknesses are not yet fully investigated. To better understand this, we conducted an empirical case study in two large open-source projects, OpenSSL and PHP. Based on 135,560 code review comments, we found that reviewers raised security concerns in 35 out of 40 coding weakness categories. Surprisingly, some coding weaknesses related to past vulnerabilities, such as memory errors and resource management, were discussed less often than the vulnerabilities. Developers attempted to address raised security concerns in many cases (39%-41%), but a substantial portion was merely acknowledged (30%-36%), and some went unfixed due to disagreements about solutions (18%-20%). This highlights that coding weaknesses can slip through code review even when identified. Our findings suggest that reviewers can identify various coding weaknesses leading to security issues during code reviews. However, these results also reveal shortcomings in current code review practices, indicating the need for more effective mechanisms or support for increasing awareness of security issue management in code reviews.

Citations: 0
The untold impact of learning approaches on software fault-proneness predictions: an analysis of temporal aspects
IF 4.1 · CAS Tier 2 · Computer Science
Empirical Software Engineering | Pub Date: 2024-06-08 | DOI: 10.1007/s10664-024-10454-8
Mohammad Jamil Ahmad, Katerina Goseva-Popstojanova, Robyn R. Lutz

Abstract: This paper aims to improve software fault-proneness prediction by investigating the unexplored effects on classification performance of the temporal decisions made by practitioners and researchers regarding (i) the interval for which they will collect longitudinal features (software metrics data), and (ii) the interval for which they will predict software bugs (the target variable). We call these specifics of the data used for training and of the target variable being predicted the learning approach, and explore the impact of the two most common learning approaches on the performance of software fault-proneness prediction, both within a single release of a software product and across releases. The paper presents empirical results from a study based on data extracted from 64 releases of twelve open-source projects. Results show that the learning approach has a substantial, and typically unacknowledged, impact on classification performance. Specifically, we show that one learning approach leads to significantly better performance than the other, both within-release and across-releases. Furthermore, this paper uncovers that, for within-release predictions, the difference in classification performance is due to different levels of class imbalance in the two learning approaches. Our findings show that improved specification of the learning approach is essential to understanding and explaining the performance of fault-proneness prediction models, as well as to avoiding misleading comparisons among them. The paper concludes with practical recommendations and research directions based on our findings toward improved software fault-proneness prediction.

Citations: 0
Challenges, adaptations, and fringe benefits of conducting software engineering research with human participants during the COVID-19 pandemic
IF 4.1 · CAS Tier 2 · Computer Science
Empirical Software Engineering | Pub Date: 2024-06-07 | DOI: 10.1007/s10664-024-10490-4
Anuradha Madugalla, Tanjila Kanij, Rashina Hoda, Dulaji Hidellaarachchi, Aastha Pant, Samia Ferdousi, John Grundy

Abstract: The COVID-19 pandemic changed the way we live, the way we work, and the way we conduct research. With the restrictions of lockdowns and social distancing, many software engineering researchers, especially those whose studies depend on human participants, experienced various impacts. We conducted a mixed-methods study to understand the extent of this impact. Through a detailed survey of 89 software engineering researchers working with human participants around the world and nine follow-up interviews, we identified the key challenges faced, the adaptations made, and the surprising fringe benefits of conducting research involving human participants during the pandemic. Our findings also revealed that, in retrospect, many researchers did not wish to revert to the old ways of conducting human-oriented research. Based on our analysis and insights, we share recommendations on how to conduct remote studies with human participants effectively in an increasingly hybrid world, when face-to-face engagement is not possible or where remote participation is preferred.

Citations: 0