Automated Business Decision-Making Using Generative AI in Online A/B Testing: Comparative Analysis With Human Decision-Making
Changhak Sunwoo; Hyunjin Kwon; Jong Min Kim; Ho-Hyun Lim; Yongwoo Kim; Dongwook Hwang; Jingoo Kim
IEEE Access, vol. 13, pp. 124530-124542. Published 2025-07-15.
DOI: 10.1109/ACCESS.2025.3588480. https://ieeexplore.ieee.org/document/11079579/
Citations: 0
Abstract
Online A/B testing is widely used as an experimental methodology for product improvement and business optimization. However, interpreting experimental results often involves subjective judgment and biases from experiment designers, which can undermine the reliability and reproducibility of test outcomes. In particular, experiment designers frequently exhibit inconsistent decision-making when dealing with neutral results—cases where neither statistically significant positive nor negative effects are observed. This study aims to explore the feasibility of automating A/B test decision-making using Generative AI and empirically analyze how well AI decisions align with those of experiment designers and experts. Utilizing 1,407 experimental cases from 48 companies on the Hackle online experimentation platform, the study compares decision-making outcomes between experiment designers and Generative AI, analyzing agreement rates and identifying patterns across companies. Statistical analyses, including chi-square tests and inter-rater agreement evaluation, were employed to assess differences and reliability. The findings indicate meaningful discrepancies between AI and experiment designers but demonstrate that AI decisions closely align with expert judgments. These results suggest that Generative AI can serve as a complementary tool to enhance the consistency and reliability of A/B test result interpretation.
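The agreement analysis described in the abstract can be illustrated with a short sketch. The Python snippet below is hypothetical: it assumes each experiment receives one of three decision labels ("adopt", "reject", "retest"; the paper's actual label set and data are not reproduced here) and computes the raw agreement rate, Cohen's kappa, and a chi-square test of independence between the human and AI decisions.

# A hypothetical sketch of the agreement analysis described above:
# comparing decisions from experiment designers and a generative-AI
# rater over the same A/B tests. The three-label decision set and the
# toy data below are assumptions, not the paper's actual dataset.
from collections import Counter
from scipy.stats import chi2_contingency

# One decision per experiment: keep the variant, drop it, or rerun.
human = ["adopt", "reject", "retest", "adopt", "retest", "adopt"]
ai    = ["adopt", "reject", "adopt",  "adopt", "retest", "reject"]

labels = sorted(set(human) | set(ai))

# Contingency table: rows are human decisions, columns are AI decisions.
table = [[sum(h == r and a == c for h, a in zip(human, ai))
          for c in labels] for r in labels]

# Chi-square test of independence between the two raters' decisions.
chi2, p_value, dof, _ = chi2_contingency(table)

# Cohen's kappa: observed agreement corrected for chance agreement.
n = len(human)
p_obs = sum(h == a for h, a in zip(human, ai)) / n
h_counts, a_counts = Counter(human), Counter(ai)
p_exp = sum(h_counts[l] * a_counts[l] for l in labels) / n**2
kappa = (p_obs - p_exp) / (1 - p_exp)

print(f"agreement={p_obs:.2f}  kappa={kappa:.2f}  chi2 p={p_value:.3f}")

With scipy installed, the snippet prints the observed agreement rate, kappa, and chi-square p-value for the toy data; kappa near 1 indicates agreement well beyond chance, while values near 0 suggest the two raters decide independently.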
IEEE Access | Computer Science, Information Systems | Engineering, Electrical & Electronic
CiteScore
9.80
Self-citation rate
7.70%
Articles published
6673
Review time
6 weeks
Journal introduction:
IEEE Access® is a multidisciplinary, open access (OA), applications-oriented, all-electronic archival journal that continuously presents the results of original research or development across all of IEEE's fields of interest.
IEEE Access will publish articles that are of high interest to readers, original, technically correct, and clearly presented. Supported by author publication charges (APC), its hallmarks are a rapid peer review and publication process with open access to all readers. Unlike IEEE's traditional Transactions or Journals, reviews are "binary": reviewers either accept or reject an article in the form it is submitted, in order to achieve rapid turnaround. Especially encouraged are submissions on:
Multidisciplinary topics, or applications-oriented articles and negative results that do not fit within the scope of IEEE's traditional journals.
Practical articles discussing new experiments or measurement techniques, and interesting solutions to engineering problems.
Development of new or improved fabrication or manufacturing techniques.
Reviews or survey articles of new or evolving fields oriented to assist others in understanding the new area.