{"title":"Missing data and careless responses: recommendations for instructional communication","authors":"Zac D. Johnson","doi":"10.1080/03634523.2023.2171445","DOIUrl":null,"url":null,"abstract":"Data collection is, without question, a resource intensive process. Unfortunately, many survey responses are returned incomplete, or individuals respond carelessly. These issues are exacerbated by the increase in online data collection, which often results in lower response rates and higher instances of careless respondents than paper-andpencil surveys, which are not without their own drawbacks (Lefever et al., 2007; Nichols & Edlund, 2020). The issues of missing data and careless responses ultimately equate to more sunk costs for researchers only for the data to be incomplete or otherwise problematic. Notably, these issues are accompanied by higher rates of type I or type II error (see Allison, 2003), meaning that claims drawn from these datasets may not be easily replicated due to faulty parameter estimates related to the original dataset. These issues hinder the ability for researchers to more deeply explore the relationship between communication and learning. Thankfully, there are strategies that quantitative researchers may utilize to address these issues, and in so doing more thoroughly and accurately ascertain communication’s relationship to learning. Each of the following methodological strategies is largely absent from the current instructional communication research canon and is relatively accessible. First, instructional communication researchers should begin by considering the length of their measurement instruments. As our methods have grown more sophisticated, we have included more and more in our models and research questions; each additional construct equates to more items to which participants must read and respond. Scholars routinely consider four, five, or even more variables, resulting in participants being asked to provide upwards of 100 responses (e.g., Schrodt et al., 2009; Sidelinger et al., 2011). Participants lose interest and stop responding carefully or stop responding entirely; this, as described above, is a significant problem. Thus, instructional communication scholars should consider shortening measurement instruments (see Raykov et al., 2015). Perhaps we do not need 18 items to assess teacher confirmation (Ellis, 2000) or teacher credibility (Teven & McCroskey, 1997); perhaps far fewer items would suffice while maintaining validity. Shorter instruments would help to address some of the issues underlying missing data and careless responses. Additionally, shorter instruments may afford researchers the opportunity to consider more complex relationships between additional variables without overburdening participants. 
A reconsideration of these scales validity may also reveal factor structures that are more accurate representations of communication related to instruction (Reise, 2012).","PeriodicalId":47722,"journal":{"name":"COMMUNICATION EDUCATION","volume":"72 1","pages":"194 - 196"},"PeriodicalIF":0.9000,"publicationDate":"2023-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"COMMUNICATION EDUCATION","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/03634523.2023.2171445","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMMUNICATION","Score":null,"Total":0}
Abstract
Data collection is, without question, a resource-intensive process. Unfortunately, many survey responses are returned incomplete, or individuals respond carelessly. These issues are exacerbated by the increase in online data collection, which often yields lower response rates and more careless respondents than paper-and-pencil surveys (which have drawbacks of their own; Lefever et al., 2007; Nichols & Edlund, 2020). Missing data and careless responses ultimately mean more sunk costs for researchers, only for the resulting data to be incomplete or otherwise problematic. Notably, these issues are accompanied by higher rates of Type I or Type II error (see Allison, 2003), meaning that claims drawn from these datasets may not be easily replicated because the parameter estimates from the original dataset are faulty. These issues hinder researchers' ability to explore the relationship between communication and learning more deeply. Thankfully, there are strategies that quantitative researchers may use to address these issues and, in so doing, more thoroughly and accurately ascertain communication's relationship to learning. Each of the following methodological strategies is largely absent from the current instructional communication research canon and is relatively accessible.

First, instructional communication researchers should begin by considering the length of their measurement instruments. As our methods have grown more sophisticated, we have included more and more in our models and research questions; each additional construct means more items that participants must read and answer. Scholars routinely consider four, five, or even more variables, resulting in participants being asked to provide upwards of 100 responses (e.g., Schrodt et al., 2009; Sidelinger et al., 2011). Participants lose interest and stop responding carefully, or stop responding entirely; this, as described above, is a significant problem. Thus, instructional communication scholars should consider shortening measurement instruments (see Raykov et al., 2015). Perhaps we do not need 18 items to assess teacher confirmation (Ellis, 2000) or teacher credibility (Teven & McCroskey, 1997); perhaps far fewer items would suffice while maintaining validity. Shorter instruments would help address some of the issues underlying missing data and careless responses. Additionally, shorter instruments may afford researchers the opportunity to consider more complex relationships among additional variables without overburdening participants. A reconsideration of these scales' validity may also reveal factor structures that more accurately represent communication related to instruction (Reise, 2012).
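To make the screening the essay recommends concrete, the sketch below flags the two problems discussed above: per-respondent missingness and straight-lined answer patterns, the latter via a "longstring" index (the longest run of identical consecutive responses). This is a minimal illustration, not a procedure from the article; the data, item names, and cutoffs are hypothetical and would need justification in practice.

```python
# Minimal sketch (not from the article): screen survey data for missingness
# and careless responding. Data, item names, and cutoffs are hypothetical.
import numpy as np
import pandas as pd

def longstring(values: np.ndarray) -> int:
    """Length of the longest run of identical consecutive responses."""
    longest = current = 1
    for prev, curr in zip(values[:-1], values[1:]):
        current = current + 1 if curr == prev else 1
        longest = max(longest, current)
    return longest

# Hypothetical responses: 4 participants x 8 Likert-type items (1-5).
items = [f"item{i}" for i in range(1, 9)]
data = pd.DataFrame(
    [[3, 4, 3, 5, 2, 4, 3, 4],                      # varied, complete
     [4, 4, 4, 4, 4, 4, 4, 4],                      # straight-lined
     [2, np.nan, 2, 4, np.nan, 3, np.nan, np.nan],  # heavy missingness
     [5, 4, 5, 3, 4, 5, 4, 3]],
    columns=items,
)

missing_rate = data.isna().mean(axis=1)
runs = data.apply(lambda r: longstring(r.dropna().to_numpy()), axis=1)

# Assumed cutoffs for illustration only; real thresholds need justification.
flags = pd.DataFrame({
    "missing_rate": missing_rate,
    "longstring": runs,
    "flag_missing": missing_rate > 0.25,
    "flag_careless": runs >= len(items) // 2,
})
print(flags)
```

Flagged cases can then be inspected or excluded before analysis, rather than silently degrading parameter estimates as the essay warns.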
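On the instrument-shortening point, one quick (again hypothetical, not the article's procedure) check is to compare the internal consistency of a full scale against a candidate short form, here via Cronbach's alpha computed from simulated single-factor data:

```python
# Hypothetical sketch: compare Cronbach's alpha for a full scale versus a
# candidate short form. The data and the chosen item subset are illustrative.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Simulate 200 respondents on a 10-item scale driven by one latent trait.
trait = rng.normal(size=(200, 1))
full = trait + rng.normal(scale=0.8, size=(200, 10))

short = full[:, :4]  # a hypothetical 4-item short form

print(f"alpha (10 items): {cronbach_alpha(full):.2f}")
print(f"alpha (4 items):  {cronbach_alpha(short):.2f}")
```

In this simulated case the 4-item form retains acceptable reliability, consistent with the essay's suggestion that far fewer items may suffice; with real scales, short-form selection would also need the validity evidence the essay calls for (see Raykov et al., 2015).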
Journal description:
Communication Education is a peer-reviewed publication of the National Communication Association. Communication Education publishes original scholarship that advances understanding of the role of communication in the teaching and learning process in diverse spaces, structures, and interactions, within and outside of academia. Communication Education welcomes scholarship from diverse perspectives and methodologies, including quantitative, qualitative, and critical/textual approaches. All submissions must be methodologically rigorous and theoretically grounded and geared toward advancing knowledge production in communication, teaching, and learning.

Scholarship in Communication Education addresses the intersections of communication, teaching, and learning related to topics and contexts that include but are not limited to:

• student/teacher relationships
• student/teacher characteristics
• student/teacher identity construction
• student learning outcomes
• student engagement
• diversity, inclusion, and difference
• social justice
• instructional technology/social media
• the basic communication course
• service learning
• communication across the curriculum
• communication instruction in business and the professions
• communication instruction in civic arenas

In addition to articles, the journal will publish occasional scholarly exchanges on topics related to communication, teaching, and learning, such as:

• Analytic review articles: agenda-setting pieces including examinations of key questions about the field
• Forum essays: themed pieces for dialogue or debate on current communication, teaching, and learning issues