Survey practice: Latest Articles

Survey Practice in Non-Survey-Literate Populations: Lessons Learned from a Cognitive Interview Study in Brazil
Survey practice · Pub Date: 2025-03-01 (Epub: 2025-03-02) · DOI: 10.29115/SP-2024-0035
Meredith Massey
Abstract: Survey research relies on cooperation and coordination between researchers and respondents. Survey-literate respondents possess a level of understanding of the survey process that facilitates their participation. Non-survey-literate respondents, on the other hand, are less equipped to provide accurate responses, which can lead to increased survey error. In this article, we present findings from a cognitive interview project conducted in Brazil to illustrate potential barriers to respondent participation and demonstrate how these barriers contribute to response error. To test questions on inclusive education, researchers from the Collaborating Center for Questionnaire Design and Evaluation Research at the National Center for Health Statistics facilitated the collection of data through 80 cognitive interviews conducted in various neighborhoods in Rio de Janeiro, Brazil. Recruitment targeted caregivers of children with and without disabilities, and respondents were predominantly female with low literacy levels. The results indicated that respondents with limited familiarity with the survey process (non-survey-literate) struggled to orient themselves to the survey task. They faced challenges in choosing response options, understanding scale relationships, and interpreting vocabulary. Additionally, many respondents expressed a need to share salient details of their lives, which the survey was not designed to capture. Understanding these barriers to participation and identifying ways to mitigate them can help reduce survey error, particularly in vulnerable populations.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11987661/pdf/
Citations: 0
Examining the Link between the ‘Middle Means Typical’ Heuristic and Answer Behavior
Survey practice · Pub Date: 2024-07-17 · DOI: 10.29115/sp-2024-0009
Ádám Stefkovics, Jan Karem Höhne
Abstract: Question interpretation in web surveys may depend not only on textual content but also on visual design. Research has shown that respondents make use of interpretative heuristics when answering questions, potentially influencing their answer behavior. In this study, we investigate the implications of the ‘middle means typical’ (MMT) heuristic, which suggests that respondents perceive the middle option of a scale as the most typical one. For this purpose, we use data from a survey experiment embedded in the probability-based German Internet Panel (N = 4,679), varying the inclusion of a non-substantive “Don’t know” option (with or without separation from the substantive options) and scale polarity (unipolar or bipolar). The four questions under investigation were adopted from the Big Five inventory, dealing with agreeableness and openness. The results suggest that the MMT heuristic has a minor impact on answer behavior, as the separation of non-substantive options did not affect answer distributions or response times (as a measure of response effort). However, scale polarity influenced answer behavior and response times. As observed in previous studies, unipolar scales elicited more middle answers and bipolar scales elicited more positive answers. Bipolar scales also resulted in longer response times. Although design violations of the MMT heuristic do not seem to impact answer behavior, we still recommend exercising caution when designing scales with non-substantive options. We also highlight the necessity of testing scales that differ with respect to polarity.
Citations: 0
Best Practices for Measuring Skin Color in Surveys
Survey practice · Pub Date: 2024-05-23 · DOI: 10.29115/sp-2024-0005
Rachel A. Gordon, Amelia R. Branigan, M. Khan, Johanna G. Nunez
Abstract: Surveys that assess skin color support evidence building about colorism and related systemic inequalities that affect health and wellbeing. Methodologists have a growing set of choices for such assessments, including an expanding array of digital images for rating scales and increasingly cost-effective handheld mechanical devices based on color science. Guidance is needed for choosing among these options. We used data from a diverse sample of 102 college students to produce new empirical evidence and practical guidance. We compared three handheld devices that ranged in price, considering variations in their reliabilities and how their results differed by where on the body and with what device settings readings were taken. We also offered evidence on how reliably interviewers and participants could choose from a large array of color swatches varying in skin undertone (redness, yellowness) as well as skin shade (lightness to darkness). Overall, the results were promising, demonstrating that modern handheld devices and rating scales could be used feasibly and reliably. For instance, results demonstrated that just one or two device readings were needed at any given location, and the device readings and rating scale scores similarly captured the relative darkness of skin. In other cases, recommendations were less certain. For instance, skin undertones of redness and yellowness were more sensitive to device choices and body locations. We encourage future studies that pursue why such variability exists and for which substantive questions it matters most.
Citations: 0
Applying Machine Learning to Survey Question Assessment
Survey practice · Pub Date: 2024-05-09 · DOI: 10.29115/sp-2024-0006
Ting Yan, Hanyu Sun, Anil Battalahalli
Abstract: Sun and Yan (2023) described a Computer-Assisted Recorded Interviewing (CARI) machine learning (ML) pipeline that processes 100% of recorded interviews quickly and inexpensively. The CARI ML pipeline automatically identifies recordings at higher risk of being falsified or of exhibiting undesirable interviewer behaviors. This paper describes an extension to the pipeline that can be used to automatically detect survey questions at higher risk of poor performance. A proof-of-concept study showed that the enhanced pipeline was able to detect the worst-performing items as judged by experts. The results demonstrate the potential of the enhanced pipeline to screen and select problematic items for conventional behavior coding and to improve the efficiency of using CARI for question evaluation and testing.
Citations: 0
Processes and Challenges Associated with Conducting Survey Research in Conflict Zones
Survey practice · Pub Date: 2024-04-10 · DOI: 10.29115/sp-2023-0032
William L. Rosenberg
Abstract: A wide variety of organizations, domestic and international, conduct surveys in conflict zones, and their timely findings are essential for policy makers. Survey research in conflict zones is an essential tool for policy makers in government, humanitarian, private-sector, and non-governmental organizations to develop and evaluate programs, services, and policies. Such research is often even more challenging in less developed areas of the world due to issues related to access, technology, and staffing. Other challenges include population movements, the lack of societal data structures such as a recent census or communication systems such as telephones, as well as issues related to sampling, design, and the omnipresent issue of security for researchers and respondents. Often, researchers must make trade-offs between internal and external validity, tempered by considerations of cost, speed, and accuracy. Policy makers need to feel comfortable with the reliability and generalizability of the findings they receive or develop. Researchers collecting data in conflict zones must strive to use appropriate techniques to quantify the data and evaluate the results. Furthermore, flexibility and adaptability are important concepts to consider when conducting research in conflict zones.
Citations: 0
Recruiting non-respondents for a conversation about reasons for non-response: A description and evaluation
Survey practice · Pub Date: 2024-03-14 · DOI: 10.29115/sp-2024-0001
M. Haan, Vera Toepoel, Y. Ongena, Björn Janssen
Abstract: The decline in survey response rates has become a significant concern in survey research. To address this issue and gain a deeper understanding of non-participation, a research approach was pursued involving semi-structured interviews with non-respondents, guided by survey theories and practical experiences from Statistics Netherlands (SN). Experienced SN interviewers visited 2,763 addresses of individuals who initially did not respond to a survey request from one of six different SN surveys. Non-responding individuals were given another opportunity to participate in an SN survey, either immediately at their doorstep or by scheduling an appointment. In case of refusal, interviewers presented the option to engage in a qualitative interview study led by the University of Groningen (UG). However, despite concerted efforts, interviewers could not convince any non-respondents to participate in the UG interview study about survey experiences. The inclusion of personal information on a form, particularly a required signature, led to increased refusals and privacy concerns. An evaluation during an expert meeting with SN interviewers suggested three potential strategies for future recruitment efforts: exploring a collaborative approach between SN interviewers and UG researchers, considering incentives, and developing a tailored fact sheet for the UG study to enhance its relevance to potential respondents.
Citations: 0
Keep the noise down: On the performance of automatic speech recognition of voice-recordings in web surveys
Survey practice · Pub Date: 2024-02-29 · DOI: 10.29115/sp-2023-0022
Katharina Meitinger, Sabien van der Sluis, Matthias Schonlau
Abstract: Voice recordings are increasingly implemented in web surveys, but the resulting audio data need to be transcribed before analysis. Since manual coding is too time- and work-intensive, researchers often rely on automatic speech recognition (ASR) systems to transcribe the voice recordings. However, ASR tools may produce partly incorrect transcriptions and potentially change the content of responses. If ASR performance (i.e., accuracy and validity) differs by subgroup and contextual factors, a bias is introduced into the analysis of open-ended questions. We assessed the impact of sociodemographic and contextual factors on the accuracy and validity of ASR transcriptions with data from the Longitudinal Internet Studies for the Social Sciences (LISS) panel collected in December 2020. We find that background noise reduces the accuracy and validity of ASR transcriptions. In addition, validity improved when the respondent was alone during the survey. Fortunately, we did not find any evidence of systematic differences across subgroups (age, sex, education), devices, or respondent location.
Citations: 0
Testing Theoretical Communication Strategies on Survey Email Outreach to South Carolina Manufacturing Firms
Survey practice · Pub Date: 2024-02-15 · DOI: 10.29115/sp-2023-0031
Jason Kosakow
Abstract: Survey practitioners have conducted numerous experiments to understand why people participate in surveys. Although there is a significant body of literature addressing survey response theories, we do not know whether these hold for email recruitment. As the internet becomes more widely used in survey administration, it is important to understand how to communicate when soliciting a survey response by email. This paper tests two communication theories – Social Exchange theory and Pre-Suasion theory – on how the subject line and content of an email affect a person’s willingness to open the email and, ultimately, to join a business survey panel. Results suggest that subject lines produced according to the Social Exchange and Pre-Suasion theories perform similarly in email open rates. Both communication strategies also perform similarly in converting a business into the survey panel. There was some evidence that Social Exchange theory was more effective at converting businesses with five or fewer employees; however, conversion rates were low in this experiment.
Citations: 0
Large sampling errors when using the Unmatched Count Technique to estimate prevalence: A simulation study
Survey practice · Pub Date: 2024-02-01 · DOI: 10.29115/sp-2024-0002
Zachary Neal
Abstract: The Unmatched Count Technique (UCT) is a method for ensuring respondent anonymity and thereby providing an unbiased estimate of the prevalence of a characteristic in a population. I illustrate that under realistic conditions UCT estimates can have ten times more sampling error than estimates derived from direct questions, and that UCT estimates can take nonsensical negative values. Therefore, the UCT should be used with caution.
Citations: 0
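The UCT estimator described in this abstract is the difference in mean item counts between a control group (innocuous items only) and a treatment group (innocuous items plus the sensitive item). A minimal simulation sketch can illustrate why this difference-in-means estimator is so noisy; the list design below (4 innocuous items, each held with probability 0.5, 10% true prevalence) is an assumed illustrative setup, not the exact design from the paper:

```python
import random
import statistics

def simulate_uct(n=1000, prevalence=0.10, n_innocuous=4, p_innocuous=0.5,
                 reps=2000, seed=42):
    """Compare UCT prevalence estimates with direct-question estimates.

    Returns two lists of `reps` estimates each. Assumed design: each
    innocuous item applies with probability p_innocuous; the treatment
    group's list adds one sensitive item held with probability `prevalence`.
    """
    rng = random.Random(seed)
    uct_estimates, direct_estimates = [], []
    for _ in range(reps):
        # Control group reports only its count of innocuous items.
        control = [sum(rng.random() < p_innocuous for _ in range(n_innocuous))
                   for _ in range(n // 2)]
        # Treatment group's count also includes the sensitive item.
        treatment = [sum(rng.random() < p_innocuous for _ in range(n_innocuous))
                     + (rng.random() < prevalence)
                     for _ in range(n // 2)]
        # UCT estimator: difference in mean counts between the groups.
        uct_estimates.append(statistics.mean(treatment) - statistics.mean(control))
        # Direct question (assuming truthful answers) for comparison.
        direct_estimates.append(sum(rng.random() < prevalence for _ in range(n)) / n)
    return uct_estimates, direct_estimates

uct, direct = simulate_uct()
sd_uct, sd_direct = statistics.stdev(uct), statistics.stdev(direct)
print(f"UCT SE ~ {sd_uct:.4f}, direct SE ~ {sd_direct:.4f}, "
      f"ratio ~ {sd_uct / sd_direct:.1f}")
print(f"negative UCT estimates: {sum(e < 0 for e in uct)} of {len(uct)}")
```

Because the estimator inherits the full variance of the innocuous-item counts from both groups, its sampling error is several times that of a direct question at the same sample size, and with low true prevalence some replications yield negative estimates, consistent with the abstract's warning.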