Caught in the Act: Detecting Respondent Deceit and Disinterest in On-Line Surveys. A Case Study Using Facial Expression Analysis

Social Marketing Quarterly · IF 2.3 · Q3 · BUSINESS · Published 2022-02-05 · DOI: 10.1177/15245004221074403
R. W. Hammond, Claudia Parvanta, R. Zemen
{"title":"在线调查中发现被调查者的欺骗和不感兴趣。使用面部表情分析的案例研究","authors":"R. W. Hammond, Claudia Parvanta, R. Zemen","doi":"10.1177/15245004221074403","DOIUrl":null,"url":null,"abstract":"Background Much social marketing research is done on-line recruiting participants through Amazon Mechanical Turk, vetted panel vendors, social media, or community sources. When compensation is offered, care must be taken to distinguish genuine respondents from those with ulterior motives. Focus of the Article We present a case study based on unanticipated empirical observations made while evaluating perceived effectiveness (PE) ratings of anti-tobacco public service announcements (PSAs) using facial expression (FE) analysis (pretesting). Importance to the Social Marketing Field This study alerts social marketers to the risk and impact of disinterest or fraud in compensated on-line surveys. We introduce FE analysis to detect and remove bad data, improving the rigor and validity of on-line data collection. We also compare community (free) and vetted panel (fee added) recruitment in terms of usable samples. Methods We recruited respondents through (Community) sources and through a well-known (Panel) vendor. Respondents completed a one-time, random block design Qualtrics® survey that collected PE ratings and recorded FE in response to PSAs. We used the AFFDEX® feature of iMotions® to calculate respondent attention and expressions; we also visually inspected respondent video records. Based on this quan/qual analysis, we divided 501 respondents (1503 observations) into three groups: (1) Those demonstrably watching PSAs before rating them (Valid), (2) those who were inattentive but completed the rating tasks (Disinterested), and (3) those employing various techniques to game the system (Deceitful). We used one-way analysis of variance (ANOVA) of attention (head positioning), engagement (all facial expressions), and specific facial expressions (FE) to test the likelihood a respondent fell into one of the three behavior groups. Results PE ratings: The Community pool (N = 92) was infiltrated by Deceitful actors (58%), but the remaining 42% was “attentive” (i.e., no disinterest). The Panel pool (N = 409) included 11% deceitful and 2% disinterested respondents. Over half of the PSAs change rank order when deceitful responses are included in the Community sample. The smaller proportion of Deceitful and Disinterested (D&D) respondents in the Panel affected 2 (out of 12) videos. In both samples, the effect was to lower the PE ranking of more diverse and “locally made” PSAs. D&D responses clustered tightly to the mean values, believed to be an artefact of “professional” test taking behavior. FE analysis: The combined Valid sample was more attentive (87.2% of the time) compared to Disinterested (51%) or Deceitful (41%) (ANOVA F = 195.6, p < .001). Models using “engagement” and specific Fes (“cheek raise and smirk”) distinguished Valid from D&D responses. Recommendations False PE pretesting scores waste social marketing budgets and could have disastrous results. Risk can be reduced by using vetted panels with a trade-off that community sources may produce more authentically interested respondents. Ways to make surveys more tamper-evident, with and without webcam recording, are provided as well as procedures to clean data. Check data before compensating respondents! Limitations This was an accidental finding in a parent study. The study required computers which potentially biased the pool of survey respondents. 
The community pool is smaller than the panel group, limiting statistical power.","PeriodicalId":46085,"journal":{"name":"Social Marketing Quarterly","volume":null,"pages":null},"PeriodicalIF":2.3000,"publicationDate":"2022-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Caught in the Act: Detecting Respondent Deceit and Disinterest in On-Line Surveys. A Case Study Using Facial Expression Analysis\",\"authors\":\"R. W. Hammond, Claudia Parvanta, R. Zemen\",\"doi\":\"10.1177/15245004221074403\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Background Much social marketing research is done on-line recruiting participants through Amazon Mechanical Turk, vetted panel vendors, social media, or community sources. When compensation is offered, care must be taken to distinguish genuine respondents from those with ulterior motives. Focus of the Article We present a case study based on unanticipated empirical observations made while evaluating perceived effectiveness (PE) ratings of anti-tobacco public service announcements (PSAs) using facial expression (FE) analysis (pretesting). Importance to the Social Marketing Field This study alerts social marketers to the risk and impact of disinterest or fraud in compensated on-line surveys. We introduce FE analysis to detect and remove bad data, improving the rigor and validity of on-line data collection. We also compare community (free) and vetted panel (fee added) recruitment in terms of usable samples. Methods We recruited respondents through (Community) sources and through a well-known (Panel) vendor. Respondents completed a one-time, random block design Qualtrics® survey that collected PE ratings and recorded FE in response to PSAs. We used the AFFDEX® feature of iMotions® to calculate respondent attention and expressions; we also visually inspected respondent video records. Based on this quan/qual analysis, we divided 501 respondents (1503 observations) into three groups: (1) Those demonstrably watching PSAs before rating them (Valid), (2) those who were inattentive but completed the rating tasks (Disinterested), and (3) those employing various techniques to game the system (Deceitful). We used one-way analysis of variance (ANOVA) of attention (head positioning), engagement (all facial expressions), and specific facial expressions (FE) to test the likelihood a respondent fell into one of the three behavior groups. Results PE ratings: The Community pool (N = 92) was infiltrated by Deceitful actors (58%), but the remaining 42% was “attentive” (i.e., no disinterest). The Panel pool (N = 409) included 11% deceitful and 2% disinterested respondents. Over half of the PSAs change rank order when deceitful responses are included in the Community sample. The smaller proportion of Deceitful and Disinterested (D&D) respondents in the Panel affected 2 (out of 12) videos. In both samples, the effect was to lower the PE ranking of more diverse and “locally made” PSAs. D&D responses clustered tightly to the mean values, believed to be an artefact of “professional” test taking behavior. FE analysis: The combined Valid sample was more attentive (87.2% of the time) compared to Disinterested (51%) or Deceitful (41%) (ANOVA F = 195.6, p < .001). Models using “engagement” and specific Fes (“cheek raise and smirk”) distinguished Valid from D&D responses. Recommendations False PE pretesting scores waste social marketing budgets and could have disastrous results. 
Risk can be reduced by using vetted panels with a trade-off that community sources may produce more authentically interested respondents. Ways to make surveys more tamper-evident, with and without webcam recording, are provided as well as procedures to clean data. Check data before compensating respondents! Limitations This was an accidental finding in a parent study. The study required computers which potentially biased the pool of survey respondents. The community pool is smaller than the panel group, limiting statistical power.\",\"PeriodicalId\":46085,\"journal\":{\"name\":\"Social Marketing Quarterly\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2022-02-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Social Marketing Quarterly\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1177/15245004221074403\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"BUSINESS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Social Marketing Quarterly","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1177/15245004221074403","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"BUSINESS","Score":null,"Total":0}
Citations: 1

Abstract

Background
Much social marketing research is done on-line, recruiting participants through Amazon Mechanical Turk, vetted panel vendors, social media, or community sources. When compensation is offered, care must be taken to distinguish genuine respondents from those with ulterior motives.

Focus of the Article
We present a case study based on unanticipated empirical observations made while evaluating perceived effectiveness (PE) ratings of anti-tobacco public service announcements (PSAs) using facial expression (FE) analysis (pretesting).

Importance to the Social Marketing Field
This study alerts social marketers to the risk and impact of disinterest or fraud in compensated on-line surveys. We introduce FE analysis to detect and remove bad data, improving the rigor and validity of on-line data collection. We also compare community (free) and vetted panel (fee added) recruitment in terms of usable samples.

Methods
We recruited respondents through community sources and through a well-known panel vendor. Respondents completed a one-time, random block design Qualtrics® survey that collected PE ratings and recorded FE in response to PSAs. We used the AFFDEX® feature of iMotions® to calculate respondent attention and expressions; we also visually inspected respondent video records. Based on this quantitative/qualitative analysis, we divided 501 respondents (1,503 observations) into three groups: (1) those demonstrably watching PSAs before rating them (Valid), (2) those who were inattentive but completed the rating tasks (Disinterested), and (3) those employing various techniques to game the system (Deceitful). We used one-way analysis of variance (ANOVA) of attention (head positioning), engagement (all facial expressions), and specific facial expressions (FEs) to test the likelihood that a respondent fell into one of the three behavior groups.

Results
PE ratings: The Community pool (N = 92) was infiltrated by Deceitful actors (58%), but the remaining 42% were “attentive” (i.e., no disinterest). The Panel pool (N = 409) included 11% deceitful and 2% disinterested respondents. Over half of the PSAs changed rank order when deceitful responses were included in the Community sample. The smaller proportion of Deceitful and Disinterested (D&D) respondents in the Panel affected 2 (out of 12) videos. In both samples, the effect was to lower the PE ranking of more diverse and “locally made” PSAs. D&D responses clustered tightly around the mean values, believed to be an artefact of “professional” test-taking behavior. FE analysis: The combined Valid sample was more attentive (87.2% of the time) than the Disinterested (51%) or Deceitful (41%) groups (ANOVA F = 195.6, p < .001). Models using “engagement” and specific FEs (“cheek raise” and “smirk”) distinguished Valid from D&D responses.

Recommendations
False PE pretesting scores waste social marketing budgets and could have disastrous results. Risk can be reduced by using vetted panels, with the trade-off that community sources may produce more authentically interested respondents. Ways to make surveys more tamper-evident, with and without webcam recording, are provided, as well as procedures for cleaning data. Check data before compensating respondents!

Limitations
This was an accidental finding in a parent study. The study required computers, which potentially biased the pool of survey respondents. The community pool is smaller than the panel group, limiting statistical power.
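The group comparison at the heart of the Results (attention of Valid vs. Disinterested vs. Deceitful respondents) is a standard one-way ANOVA, and the analysis is easy to reproduce in outline. Below is a minimal Python sketch, not the authors' code: the attention scores are synthetic, loosely centered on the group means reported in the abstract (87.2%, 51%, 41%), and the group sizes are illustrative only.

```python
# Minimal sketch of the one-way ANOVA described in the abstract:
# does mean attention differ across the Valid, Disinterested, and
# Deceitful groups? All data below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-respondent attention: fraction of viewing time with
# the head oriented toward the screen, clipped to the valid [0, 1] range.
valid = np.clip(rng.normal(0.872, 0.10, 350), 0.0, 1.0)
disinterested = np.clip(rng.normal(0.51, 0.15, 30), 0.0, 1.0)
deceitful = np.clip(rng.normal(0.41, 0.15, 120), 0.0, 1.0)

# One-way ANOVA: tests whether at least one group mean differs.
f_stat, p_value = stats.f_oneway(valid, disinterested, deceitful)
print(f"F = {f_stat:.1f}, p = {p_value:.3g}")
```

In a real pipeline, per-respondent attention would come from the AFFDEX® head-position metrics rather than a random generator, and respondents falling into the low-attention groups would be reviewed (and their data excluded) before compensation is approved, as the Recommendations advise.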
Source journal: Social Marketing Quarterly · CiteScore 4.30 · Self-citation rate 16.70% · Articles published: 21