{"title":"The American Way — Until Machine Learning Algorithm Beats the Law? Algorithmic Consumer Credit Scoring in the EU and US","authors":"A. Gikay","doi":"10.2139/ssrn.3671488","DOIUrl":null,"url":null,"abstract":"Algorithmic consumer credit scoring has caused anxiety among scholars and policy makers. After a significant legislative effort by the European Union, the General Data Protection Regulation (GDPR) containing provisions tailored to automated decision-making (ADM) was implemented. When the EU Commission and the US Department of Commerce negotiated for US organizations to whom data from EU data controller is transferred to comply with the key principles of EU Data Protection Law under the EU-US Privacy Shield (PS) Framework, the Department of Commerce refused to incorporate the GDPR principles governing ADM in the PS Framework. The EU Commission accepted this refusal reasoning that where US companies make automated decisions with respect to EU data subjects, such as in consumer credit risk scoring, there are laws in the US that protect the consumer from adverse decisions. This view contradicts recommendations for implementing GDPR-Inspired law in the US to tackle the challenges of automated consumer credit scoring. \n \nThis article argues that despite the different approaches to the regulation of automated consumer credit scoring in the EU and the US, consumers are similarly protected in both jurisdictions. US consumer credit laws have the necessary flexibility to ensure that adverse automated decisions are tackled effectively. By analyzing statutes, cases, and empirical evidences, the article demonstrates that the seemingly comprehensive legal rules governing ADM in the GDPR do not make the EU consumers better off. In addition, the challenges presented by the increasing sophistication of Artificial Intelligence (AI) and Machine Learning(ML), consumers in both jurisdictions in a similarly vulnerable position as neither jurisdiction is equipped to tackle decisions made by autonomous, unpredictable and unexplainable algorithms. This is consistent with the EU Commission’s white paper on AI which acknowledges some of the flaws in the GDPR and envisions legislative reform. \n \nIn view of the limits of the existing legal rules in addressing ML decisions and the need to strike a balance between encouraging innovation and consumer protection, the article proposes risk-based approach to regulation and regulatory sandboxing as good starting points.","PeriodicalId":385192,"journal":{"name":"LSN: Other Consumer Credit Issues (Sub-Topic)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"LSN: Other Consumer Credit Issues (Sub-Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/ssrn.3671488","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Algorithmic consumer credit scoring has caused anxiety among scholars and policy makers. After a significant legislative effort by the European Union, the General Data Protection Regulation (GDPR), containing provisions tailored to automated decision-making (ADM), was implemented. When the EU Commission and the US Department of Commerce negotiated for US organizations receiving data from EU data controllers to comply with the key principles of EU data protection law under the EU-US Privacy Shield (PS) Framework, the Department of Commerce refused to incorporate the GDPR principles governing ADM into the PS Framework. The EU Commission accepted this refusal, reasoning that where US companies make automated decisions with respect to EU data subjects, such as in consumer credit risk scoring, there are laws in the US that protect consumers from adverse decisions. This view contradicts recommendations for implementing GDPR-inspired law in the US to tackle the challenges of automated consumer credit scoring.
This article argues that despite the different approaches to the regulation of automated consumer credit scoring in the EU and the US, consumers are similarly protected in both jurisdictions. US consumer credit laws have the necessary flexibility to ensure that adverse automated decisions are tackled effectively. By analyzing statutes, cases, and empirical evidence, the article demonstrates that the seemingly comprehensive legal rules governing ADM in the GDPR do not make EU consumers better off. In addition, the challenges presented by the increasing sophistication of Artificial Intelligence (AI) and Machine Learning (ML) leave consumers in both jurisdictions in a similarly vulnerable position, as neither jurisdiction is equipped to tackle decisions made by autonomous, unpredictable, and unexplainable algorithms. This is consistent with the EU Commission's white paper on AI, which acknowledges some of the flaws in the GDPR and envisions legislative reform.
In view of the limits of the existing legal rules in addressing ML decisions and the need to strike a balance between encouraging innovation and protecting consumers, the article proposes a risk-based approach to regulation and regulatory sandboxing as good starting points.