First, Do No Harm: Algorithms, AI, and Digital Product Liability
Marc J. Pfeiffer
arXiv - QuantFin - Economics, 2023-11-17
DOI: arxiv-2311.10861 (https://doi.org/arxiv-2311.10861)
Abstract
The ethical imperative for technology should be "first, do no harm." But digital innovations like AI and social media increasingly enable societal harms, from bias to misinformation. As these technologies grow ubiquitous, we need solutions to address their unintended consequences. This report proposes a model to incentivize developers to prevent foreseeable algorithmic harms by expanding negligence and product liability law. Digital product developers would be incentivized to mitigate potential algorithmic risks before deployment, to protect themselves and their investors. Standards and penalties would be set in proportion to harm, and insurers would require harm mitigation during development as a condition of coverage. This shifts tech ethics from "move fast and break things" to "first, do no harm." The details would need careful refinement among stakeholders to enact reasonable guardrails without stifling innovation, and policy and harm-prevention frameworks would likely evolve over time. Similar accountability schemes have helped address workplace, environmental, and product safety. Introducing negligence liability for algorithmic harm would acknowledge the real societal costs of unethical technology. The timing is right for reform: this proposal provides a model to steer the digital revolution toward human rights and dignity. Harm prevention must be prioritized over reckless growth, and vigorous liability policies are essential to stop technologists from breaking things.