{"title":"资料缺失与反应疏忽:教学沟通之建议","authors":"Zac D. Johnson","doi":"10.1080/03634523.2023.2171445","DOIUrl":null,"url":null,"abstract":"Data collection is, without question, a resource intensive process. Unfortunately, many survey responses are returned incomplete, or individuals respond carelessly. These issues are exacerbated by the increase in online data collection, which often results in lower response rates and higher instances of careless respondents than paper-andpencil surveys, which are not without their own drawbacks (Lefever et al., 2007; Nichols & Edlund, 2020). The issues of missing data and careless responses ultimately equate to more sunk costs for researchers only for the data to be incomplete or otherwise problematic. Notably, these issues are accompanied by higher rates of type I or type II error (see Allison, 2003), meaning that claims drawn from these datasets may not be easily replicated due to faulty parameter estimates related to the original dataset. These issues hinder the ability for researchers to more deeply explore the relationship between communication and learning. Thankfully, there are strategies that quantitative researchers may utilize to address these issues, and in so doing more thoroughly and accurately ascertain communication’s relationship to learning. Each of the following methodological strategies is largely absent from the current instructional communication research canon and is relatively accessible. First, instructional communication researchers should begin by considering the length of their measurement instruments. As our methods have grown more sophisticated, we have included more and more in our models and research questions; each additional construct equates to more items to which participants must read and respond. Scholars routinely consider four, five, or even more variables, resulting in participants being asked to provide upwards of 100 responses (e.g., Schrodt et al., 2009; Sidelinger et al., 2011). Participants lose interest and stop responding carefully or stop responding entirely; this, as described above, is a significant problem. Thus, instructional communication scholars should consider shortening measurement instruments (see Raykov et al., 2015). Perhaps we do not need 18 items to assess teacher confirmation (Ellis, 2000) or teacher credibility (Teven & McCroskey, 1997); perhaps far fewer items would suffice while maintaining validity. Shorter instruments would help to address some of the issues underlying missing data and careless responses. Additionally, shorter instruments may afford researchers the opportunity to consider more complex relationships between additional variables without overburdening participants. A reconsideration of these scales validity may also reveal factor structures that are more accurate representations of communication related to instruction (Reise, 2012).","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Missing data and careless responses: recommendations for instructional communication\",\"authors\":\"Zac D. Johnson\",\"doi\":\"10.1080/03634523.2023.2171445\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Data collection is, without question, a resource intensive process. Unfortunately, many survey responses are returned incomplete, or individuals respond carelessly. 
These issues are exacerbated by the increase in online data collection, which often results in lower response rates and higher instances of careless respondents than paper-andpencil surveys, which are not without their own drawbacks (Lefever et al., 2007; Nichols & Edlund, 2020). The issues of missing data and careless responses ultimately equate to more sunk costs for researchers only for the data to be incomplete or otherwise problematic. Notably, these issues are accompanied by higher rates of type I or type II error (see Allison, 2003), meaning that claims drawn from these datasets may not be easily replicated due to faulty parameter estimates related to the original dataset. These issues hinder the ability for researchers to more deeply explore the relationship between communication and learning. Thankfully, there are strategies that quantitative researchers may utilize to address these issues, and in so doing more thoroughly and accurately ascertain communication’s relationship to learning. Each of the following methodological strategies is largely absent from the current instructional communication research canon and is relatively accessible. First, instructional communication researchers should begin by considering the length of their measurement instruments. As our methods have grown more sophisticated, we have included more and more in our models and research questions; each additional construct equates to more items to which participants must read and respond. Scholars routinely consider four, five, or even more variables, resulting in participants being asked to provide upwards of 100 responses (e.g., Schrodt et al., 2009; Sidelinger et al., 2011). Participants lose interest and stop responding carefully or stop responding entirely; this, as described above, is a significant problem. Thus, instructional communication scholars should consider shortening measurement instruments (see Raykov et al., 2015). Perhaps we do not need 18 items to assess teacher confirmation (Ellis, 2000) or teacher credibility (Teven & McCroskey, 1997); perhaps far fewer items would suffice while maintaining validity. Shorter instruments would help to address some of the issues underlying missing data and careless responses. Additionally, shorter instruments may afford researchers the opportunity to consider more complex relationships between additional variables without overburdening participants. 
A reconsideration of these scales validity may also reveal factor structures that are more accurate representations of communication related to instruction (Reise, 2012).\",\"PeriodicalId\":0,\"journal\":{\"name\":\"\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0,\"publicationDate\":\"2023-03-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/03634523.2023.2171445\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/03634523.2023.2171445","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
摘要
毫无疑问,数据收集是一个资源密集的过程。不幸的是,许多调查回复是不完整的,或者个人回答不认真。这些问题因在线数据收集的增加而加剧,这往往导致较低的回复率和比纸笔调查更粗心的受访者,这并非没有自己的缺点(Lefever等人,2007;Nichols & Edlund, 2020)。缺少数据和粗心大意的回答问题最终等同于研究人员更多的沉没成本,因为数据不完整或有其他问题。值得注意的是,这些问题伴随着更高的I型或II型错误率(见Allison, 2003),这意味着由于与原始数据集相关的错误参数估计,从这些数据集得出的索赔可能不容易复制。这些问题阻碍了研究者更深入地探索交流与学习之间的关系。值得庆幸的是,定量研究人员可以利用一些策略来解决这些问题,从而更彻底、更准确地确定交流与学习的关系。以下每一种方法策略在当前的教学传播研究经典中基本上都是缺失的,并且相对容易获得。首先,教学交际研究者应该从考虑测量工具的长度开始。随着我们的方法越来越复杂,我们在模型和研究问题中加入了越来越多的内容;每增加一个结构就意味着参与者必须阅读和回应更多的内容。学者们通常会考虑4个、5个甚至更多的变量,导致参与者被要求提供100个以上的回答(例如,Schrodt等人,2009;Sidelinger et al., 2011)。参与者失去兴趣,不再认真回应或完全停止回应;如上所述,这是一个重大问题。因此,教学交流学者应该考虑缩短测量工具(见Raykov et al., 2015)。也许我们不需要18个项目来评估教师的确认(Ellis, 2000)或教师的可信度(Teven & mcroskey, 1997);也许在保持有效性的同时,更少的项目就足够了。较短的工具将有助于解决数据缺失和草率反应背后的一些问题。此外,较短的工具可以使研究人员有机会考虑额外变量之间更复杂的关系,而不会使参与者负担过重。重新考虑这些量表的效度也可能揭示出更准确地表征与教学相关的交流的因素结构(Reise, 2012)。
Missing data and careless responses: recommendations for instructional communication

Zac D. Johnson · doi: 10.1080/03634523.2023.2171445 · published 2023-03-16
Data collection is, without question, a resource-intensive process. Unfortunately, many survey responses are returned incomplete, or individuals respond carelessly. These issues are exacerbated by the increase in online data collection, which often results in lower response rates and a higher incidence of careless responding than paper-and-pencil surveys (which are not without their own drawbacks; Lefever et al., 2007; Nichols & Edlund, 2020). Missing data and careless responses ultimately mean more sunk costs for researchers, only for the resulting data to be incomplete or otherwise problematic. Notably, these issues are accompanied by higher rates of type I or type II error (see Allison, 2003), meaning that claims drawn from these datasets may be difficult to replicate because the original parameter estimates were faulty. These issues hinder researchers' ability to explore the relationship between communication and learning more deeply. Thankfully, there are strategies that quantitative researchers can use to address these issues, and in so doing more thoroughly and accurately ascertain communication's relationship to learning. Each of the following methodological strategies is largely absent from the current instructional communication research canon and is relatively accessible.

First, instructional communication researchers should begin by considering the length of their measurement instruments. As our methods have grown more sophisticated, we have included more and more constructs in our models and research questions; each additional construct adds items that participants must read and respond to. Scholars routinely consider four, five, or even more variables, resulting in participants being asked to provide upwards of 100 responses (e.g., Schrodt et al., 2009; Sidelinger et al., 2011). Participants lose interest and stop responding carefully, or stop responding entirely; this, as described above, is a significant problem. Thus, instructional communication scholars should consider shortening measurement instruments (see Raykov et al., 2015). Perhaps we do not need 18 items to assess teacher confirmation (Ellis, 2000) or teacher credibility (Teven & McCroskey, 1997); perhaps far fewer items would suffice while maintaining validity. Shorter instruments would help to address some of the issues underlying missing data and careless responses. Additionally, shorter instruments may afford researchers the opportunity to consider more complex relationships between additional variables without overburdening participants. A reconsideration of these scales' validity may also reveal factor structures that are more accurate representations of communication related to instruction (Reise, 2012).
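As a concrete illustration of the two ideas above, the following Python sketch (not part of the original essay) flags potentially careless respondents with a simple longstring index and then compares Cronbach's alpha for a full 18-item scale against a hypothetical 6-item short form. The simulated data, column names, item subset, and cutoff are all illustrative assumptions, not prescriptions from the source.

```python
# A minimal sketch, assuming Likert-type survey data: (1) screen for careless
# responding via a longstring index, (2) check whether a hypothetical short
# form retains acceptable reliability. Cutoffs and item choices are illustrative.
import numpy as np
import pandas as pd


def longstring(row: pd.Series) -> int:
    """Length of the longest run of identical answers in one response record.
    Very long runs (e.g., straight-lining '4' across many items) can signal
    careless responding."""
    values = row.to_numpy()
    longest = current = 1
    for prev, curr in zip(values, values[1:]):
        current = current + 1 if curr == prev else 1
        longest = max(longest, current)
    return longest


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)


# Simulated data: 200 respondents, an 18-item scale on a 1-7 Likert range,
# driven by one latent factor so the items correlate.
rng = np.random.default_rng(42)
latent = rng.normal(4, 1, size=(200, 1))
responses = np.clip(np.rint(latent + rng.normal(0, 0.8, size=(200, 18))), 1, 7)
df = pd.DataFrame(responses, columns=[f"item{i + 1}" for i in range(18)])

# Step 1: flag careless respondents before analysis (cutoff is an assumption).
df["longstring"] = df.apply(longstring, axis=1)
flagged = df["longstring"] >= 12
clean = df.loc[~flagged, df.columns[:-1]]

# Step 2: compare reliability of the full scale and a hypothetical short form.
full_alpha = cronbach_alpha(clean)
short_alpha = cronbach_alpha(clean[[f"item{i}" for i in (1, 4, 7, 10, 13, 16)]])
print(f"flagged {flagged.sum()} respondents; "
      f"alpha full = {full_alpha:.2f}, short = {short_alpha:.2f}")
```

In practice, items for a short form would be selected from factor loadings and content coverage rather than an arbitrary subset, in line with the scale-shortening procedures the essay points to (Raykov et al., 2015).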