EXPRESS: Where A-B Testing Goes Wrong: How Divergent Delivery Affects What Online Experiments Cannot (and Can) Tell You about How Customers Respond to Advertising

IF 11.5 | CAS Tier 1 (Management) | JCR Q1 (BUSINESS)
Michael Braun, Eric M. Schwartz
{"title":"快讯A-B测试的误区:差异化交付如何影响在线实验无法(和能够)告诉您的客户对广告的反应","authors":"Michael Braun, Eric M. Schwartz","doi":"10.1177/00222429241275886","DOIUrl":null,"url":null,"abstract":"Marketers use online advertising platforms to compare user responses to different ad content. But platforms’ experimentation tools deliver different ads to distinct and undetectably optimized mixes of users that vary across ads, even during the test. Because exposure to ads in the test is non-random, the estimated comparisons confound the effect of the ad content with the effect of algorithmic targeting. This means experimenters may not be learning what they think they are learning from ad A-B tests. The authors document these “divergent delivery” patterns during an online experiment for the first time. They explain how algorithmic targeting, user heterogeneity, and data aggregation conspire to confound the magnitude, and even the sign, of ad A-B test results. Analytically, the paper extends the potential outcomes model of causal inference to treat random assignment of ads and user exposure to ads as separate experimental design elements. Managerially, the authors explain why platforms lack incentives to allow experimenters to untangle the effects of ad content from proprietary algorithmic selection of users when running A-B tests. Given that experimenters have diverse reasons for comparing user responses to ads, the authors offer tailored prescriptive guidance to experimenters based on their specific goals.","PeriodicalId":16152,"journal":{"name":"Journal of Marketing","volume":"7 1","pages":""},"PeriodicalIF":11.5000,"publicationDate":"2024-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"EXPRESS: Where A-B Testing Goes Wrong: How Divergent Delivery Affects What Online Experiments Cannot (and Can) Tell You about How Customers Respond to Advertising\",\"authors\":\"Michael Braun, Eric M. Schwartz\",\"doi\":\"10.1177/00222429241275886\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Marketers use online advertising platforms to compare user responses to different ad content. But platforms’ experimentation tools deliver different ads to distinct and undetectably optimized mixes of users that vary across ads, even during the test. Because exposure to ads in the test is non-random, the estimated comparisons confound the effect of the ad content with the effect of algorithmic targeting. This means experimenters may not be learning what they think they are learning from ad A-B tests. The authors document these “divergent delivery” patterns during an online experiment for the first time. They explain how algorithmic targeting, user heterogeneity, and data aggregation conspire to confound the magnitude, and even the sign, of ad A-B test results. Analytically, the paper extends the potential outcomes model of causal inference to treat random assignment of ads and user exposure to ads as separate experimental design elements. Managerially, the authors explain why platforms lack incentives to allow experimenters to untangle the effects of ad content from proprietary algorithmic selection of users when running A-B tests. 
Given that experimenters have diverse reasons for comparing user responses to ads, the authors offer tailored prescriptive guidance to experimenters based on their specific goals.\",\"PeriodicalId\":16152,\"journal\":{\"name\":\"Journal of Marketing\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":11.5000,\"publicationDate\":\"2024-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Marketing\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.1177/00222429241275886\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"BUSINESS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Marketing","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1177/00222429241275886","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BUSINESS","Score":null,"Total":0}
Citations: 0

Abstract

Marketers use online advertising platforms to compare user responses to different ad content. But platforms’ experimentation tools deliver different ads to distinct and undetectably optimized mixes of users that vary across ads, even during the test. Because exposure to ads in the test is non-random, the estimated comparisons confound the effect of the ad content with the effect of algorithmic targeting. This means experimenters may not be learning what they think they are learning from ad A-B tests. The authors document these “divergent delivery” patterns during an online experiment for the first time. They explain how algorithmic targeting, user heterogeneity, and data aggregation conspire to confound the magnitude, and even the sign, of ad A-B test results. Analytically, the paper extends the potential outcomes model of causal inference to treat random assignment of ads and user exposure to ads as separate experimental design elements. Managerially, the authors explain why platforms lack incentives to allow experimenters to untangle the effects of ad content from proprietary algorithmic selection of users when running A-B tests. Given that experimenters have diverse reasons for comparing user responses to ads, the authors offer tailored prescriptive guidance to experimenters based on their specific goals.
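The confound has a compact description in generic potential-outcomes notation (a sketch using standard causal-inference symbols, not the paper's own formalism): let $Y_i(a)$ denote user $i$'s potential response to ad $a \in \{A, B\}$, $Z_i$ the randomized ad assignment, and $D_i(a) \in \{0, 1\}$ the platform's algorithmic decision to expose user $i$ when assigned ad $a$. The naive A-B contrast compares only the users each arm's algorithm chose to expose,

\[ \hat{\Delta} = \mathbb{E}[Y_i(A) \mid Z_i = A, D_i(A) = 1] - \mathbb{E}[Y_i(B) \mid Z_i = B, D_i(B) = 1], \]

which differs from the causal estimand $\mathbb{E}[Y_i(A) - Y_i(B)]$ whenever $D_i(a)$ depends both on the ad $a$ and on user traits correlated with the potential responses: the two conditional expectations then average over different exposed populations.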
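A short simulation makes the possible sign flip concrete. Everything below is a hypothetical illustration (segment labels, click rates, and delivery shares are invented for this sketch, not taken from the paper): ad A beats ad B within every user segment, yet the aggregate comparison favors B.

import numpy as np

# Minimal simulation of "divergent delivery"; all numbers are illustrative.
rng = np.random.default_rng(0)

# Within each segment, ad A truly has the higher click rate.
p_click = {"A": {"young": 0.030, "old": 0.012},
           "B": {"young": 0.020, "old": 0.008}}

# The platform delivers each ad to a different (unobserved) user mix:
# ad A mostly reaches "old" users, ad B mostly "young" users.
share_young = {"A": 0.10, "B": 0.90}

n = 500_000  # impressions per arm
for ad in ("A", "B"):
    is_young = rng.random(n) < share_young[ad]
    p = np.where(is_young, p_click[ad]["young"], p_click[ad]["old"])
    ctr = (rng.random(n) < p).mean()
    print(f"ad {ad}: observed CTR ~ {ctr:.4f}")

# Expected aggregates: A ~ 0.10*0.030 + 0.90*0.012 = 0.0138
#                      B ~ 0.90*0.020 + 0.10*0.008 = 0.0188

Because each arm's clicks are averaged over a different user mix, the naive comparison concludes B > A, reversing the true within-segment ordering (a Simpson's-paradox-style confounding of ad content with targeting).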
Source journal: Journal of Marketing
CiteScore: 24.10
Self-citation rate: 5.40%
Articles published: 49
About the journal: Founded in 1936, the Journal of Marketing (JM) serves as a premier outlet for substantive research in marketing. JM is dedicated to developing and disseminating knowledge about real-world marketing questions, catering to scholars, educators, managers, policy makers, consumers, and other global societal stakeholders. Over the years, JM has played a crucial role in shaping the content and boundaries of the marketing discipline.