Safety vs. Performance: How Multi-Objective Learning Reduces Barriers to Market Entry

Meena Jagadeesan, Michael I. Jordan, Jacob Steinhardt
arXiv:2409.03734 (arXiv - ECON - General Economics), published 2024-09-05. Citations: 0.

Abstract

Emerging marketplaces for large language models and other large-scale machine learning (ML) models appear to exhibit market concentration, which has raised concerns about whether there are insurmountable barriers to entry in such markets. In this work, we study this issue from both an economic and an algorithmic point of view, focusing on a phenomenon that reduces barriers to entry. Specifically, an incumbent company risks reputational damage unless its model is sufficiently aligned with safety objectives, whereas a new company can more easily avoid reputational damage. To study this issue formally, we define a multi-objective high-dimensional regression framework that captures reputational damage, and we characterize the number of data points that a new company needs to enter the market. Our results demonstrate how multi-objective considerations can fundamentally reduce barriers to entry -- the required number of data points can be significantly smaller than the incumbent company's dataset size. En route to proving these results, we develop scaling laws for high-dimensional linear regression in multi-objective environments, showing that the scaling rate becomes slower when the dataset size is large, which could be of independent interest.
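To make the multi-objective regression setting concrete, the following is a minimal toy sketch (not the paper's exact framework): a single linear model must trade off a "performance" objective against a "safety" objective, scalarized by a weight `lam`. All variable names (`theta_perf`, `theta_safe`, `lam`) and the specific data-generating process are illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: two ground-truth parameter vectors, one optimal for
# performance and one for safety alignment. A company's model is a single
# linear predictor that must serve both objectives.
rng = np.random.default_rng(0)
d, n = 50, 200                                  # dimension, dataset size
theta_perf = rng.normal(size=d) / np.sqrt(d)    # performance-optimal params
theta_safe = rng.normal(size=d) / np.sqrt(d)    # safety-aligned params

X = rng.normal(size=(n, d))                     # shared design matrix
y_perf = X @ theta_perf + 0.1 * rng.normal(size=n)
y_safe = X @ theta_safe + 0.1 * rng.normal(size=n)

def fit(lam):
    """Minimize lam*||X t - y_safe||^2 + (1-lam)*||X t - y_perf||^2.

    Setting the gradient to zero gives X^T X t = X^T (lam*y_safe +
    (1-lam)*y_perf), so the scalarized problem reduces to ordinary
    least squares against the blended targets.
    """
    y = lam * y_safe + (1 - lam) * y_perf
    return np.linalg.lstsq(X, y, rcond=None)[0]

for lam in (0.0, 0.5, 1.0):
    t = fit(lam)
    print(f"lam={lam}: "
          f"dist to perf optimum {np.linalg.norm(t - theta_perf):.3f}, "
          f"dist to safety optimum {np.linalg.norm(t - theta_safe):.3f}")
```

As `lam` moves from 0 to 1, the estimate slides from the performance optimum toward the safety optimum, which is the kind of trade-off curve the reputational-damage constraint forces an incumbent to sit on; the paper's contribution is characterizing how much data a new entrant needs at its own point on such a curve.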