Algorithmic management: Assessing the impacts of AI at work
Authors: Aislinn Kelly-Lyth, Anna Thomas
Journal: European Labour Law Journal (Q2, LAW; Impact Factor 1.1)
Published: 2023-05-10
DOI: 10.1177/20319525231167478 (https://doi.org/10.1177/20319525231167478)
Citations: 1
Abstract
Algorithmic outputs are increasingly shaping the employee experience, presenting a host of risks and impacts with far-reaching consequences. This contribution considers how algorithmic impact assessments should complement, as well as inform, an overarching ‘top-down’ framework for the governance of algorithmic management systems. While generalised obligations are crucial, identifying risk mitigations on a case-by-case basis can provide significant added value by (i) identifying and evaluating risks and impacts, and facilitating context-specific responses; (ii) striking a balance between generalised requirements and complete self-regulation; and (iii) ensuring that due regard to anticipated impacts and risk mitigation is built in from the design and development stages, through to deployment in the workplace. The criteria for an effective impact assessment obligation in the algorithmic management context are identified, including the appropriate stages, actors, and procedure. The Good Work Charter, which operates as a synthesis of legal principles, rights, and obligations, as well as ethical principles as they apply to the workplace, is proposed as an assessment framework. Finally, the article compares the proposed model with the existing obligation to carry out data protection impact assessments for high-risk data processing. The shortcomings of the latter obligation are explored, and a legislative approach to avoid duplication is proposed.