Safety vs. Performance: How Multi-Objective Learning Reduces Barriers to Market Entry
Meena Jagadeesan, Michael I. Jordan, Jacob Steinhardt
arXiv:2409.03734 (arXiv - ECON - General Economics), published 2024-09-05
Abstract
Emerging marketplaces for large language models and other large-scale machine learning (ML) models appear to exhibit market concentration, which has raised concerns about whether there are insurmountable barriers to entry in such markets. In this work, we study this issue from both an economic and an algorithmic point of view, focusing on a phenomenon that reduces barriers to entry. Specifically, an incumbent company risks reputational damage unless its model is sufficiently aligned with safety objectives, whereas a new company can more easily avoid reputational damage. To study this issue formally, we define a multi-objective high-dimensional regression framework that captures reputational damage, and we characterize the number of data points that a new company needs to enter the market. Our results demonstrate how multi-objective considerations can fundamentally reduce barriers to entry -- the required number of data points can be significantly smaller than the incumbent company's dataset size. En route to proving these results, we develop scaling laws for high-dimensional linear regression in multi-objective environments, showing that the scaling rate becomes slower when the dataset size is large, which could be of independent interest.
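
The abstract does not spell out the regression model, so the following is only a hypothetical toy illustration of the multi-objective setup it describes: one linear model is fit against two correlated linear targets, a "performance" target and a "safety" target, and its error with respect to the safety target is compared against a reputational threshold. The dimension d, noise level, weighting, ridge penalty, threshold tau, and the correlation between the two targets are all assumptions chosen for the sketch, not values from the paper.

```python
# A minimal, self-contained sketch (NOT the paper's actual framework): a toy
# multi-objective linear regression with a "performance" target and a
# correlated "safety" target. We track how the parameter error with respect
# to each target shrinks with the number of samples n, and compare the
# safety error against an assumed reputational threshold tau.

import numpy as np

rng = np.random.default_rng(0)

d = 200            # ambient dimension (assumed)
noise_std = 0.1    # label noise level (assumed)
tau = 0.3          # reputational threshold on the safety error (assumed)

# Ground-truth directions: the safety target is partially aligned with the
# performance target, so a single model can serve both reasonably well.
theta_perf = rng.normal(size=d) / np.sqrt(d)
theta_safe = 0.8 * theta_perf + 0.6 * rng.normal(size=d) / np.sqrt(d)


def fit_multi_objective(n, weight=0.5, lam=10.0):
    """Ridge fit on a weighted sum of the two squared losses.

    Solves argmin_w (1 - weight) * ||X w - y_perf||^2
                  + weight      * ||X w - y_safe||^2 + lam * ||w||^2,
    which has the closed form below because the weights sum to one.
    """
    X = rng.normal(size=(n, d))
    y_perf = X @ theta_perf + noise_std * rng.normal(size=n)
    y_safe = X @ theta_safe + noise_std * rng.normal(size=n)
    A = X.T @ X + lam * np.eye(d)
    b = (1 - weight) * (X.T @ y_perf) + weight * (X.T @ y_safe)
    w = np.linalg.solve(A, b)
    # Squared parameter error w.r.t. each target; with isotropic features this
    # is proportional to the population excess prediction risk.
    return np.sum((w - theta_perf) ** 2), np.sum((w - theta_safe) ** 2)


for n in [100, 200, 400, 1600, 6400]:
    perf_err, safe_err = fit_multi_objective(n)
    ok = "yes" if safe_err <= tau else "no"
    print(f"n={n:5d}  perf_err={perf_err:.3f}  safe_err={safe_err:.3f}  below tau? {ok}")
```

In this toy, both errors fall quickly with n and then flatten once the estimator approaches the weighted combination of the two targets; that plateau is only a loose analogue of, not the same thing as, the slower large-dataset scaling regime the abstract refers to.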