Labor automation for fair cooperation: Why and how machines should provide meaningful work for all

IF 1.1 · Philosophy (CAS Tier 3) · Q3 ETHICS
Denise Celentano
By affecting work, resources, organizations, and people's lives, automation processes can disrupt the basic structure of society. Nonetheless, we may benefit from this disruption, as automation may offer opportunities to make social cooperation fairer. Just as philosophers have addressed the problem of which values and principles should regulate the distribution of goods, so we may consider the problem of the values and principles guiding technological change with regard to work. Indeed, automation is often addressed from a distributive perspective. A prevailing concern in the debate is to ensure, through unconditional redistributive policies, that the technologically unemployed will not lose access to income, while some have suggested policies like a "robot tax" to disincentivize companies' investment in labor-saving devices. While crucial given the massive increase in profits afforded by automation and the inequalities that go with it, concerns about income are not the only ones raised by automation. Without underestimating their relevance, in this article I leave aside problems of income to focus on automation from the perspective of work. That is, my concern here is social cooperation from the perspective of contribution rather than distribution, within a framework that may be called technological contributive justice. Where UBI advocates expect everyone to benefit from automation in their income, the contributive perspective postulates that everyone should benefit from automation in their work.

There are three main reasons behind this shift. First, even in a world in which income were unconditionally accessible to all, there would remain the problem of how to fairly organize the un-automated socially necessary labor (e.g., waste collection, care work). I call this the "somebody's got to do it" problem. It cannot be solved by merely reallocating income, because it concerns the division of labor itself and its norms. Second, by conceptualizing social cooperation only as a matter of markets and distribution but not production, we fail to see what happens with regard to what people do besides what they own. But this matters too when it comes to pursuing our life plans (see Section 4), as well as for the effects on our aims, aspirations, and character (see Section 3.1). Finally, even if work were completely automatable, we would most likely still consider it undesirable to fully automate certain tasks, such as child care or teaching.

On the other hand, normative thinking about automation often takes the form of utopias of "full automation." Recent examples include ideas of "fully automated luxury communism" (Bastani, 2019) or "post-work" views. A fully automated world, Danaher (2019b) argues, would allow us to pursue a life free from the pressures of economic demands and to enjoy activities for their own sake, much like playing games. From the premise that work is "structurally bad," he draws the conclusion that we should retreat from work, and the prospect of full automation is convenient to this purpose. This is what he calls a "withdrawal" strategy, one that disengages from work and its demands altogether.

While captivating, post-work arguments rely on a debatable premise: that work will, in fact, end. This is by no means certain and, as such, not falsifiable. There is much controversy around this issue and several nuances to consider. In this article, I develop the point that, given this uncertainty, we should redirect our attention to the opportunities that hybrid cooperation may bring about for a fairer society. This requires us to reframe the problem of automation from one of whether future joblessness will take place and is desirable to one of the preferable ways to realize hybrid cooperation.

Thus, in this article I pursue an alternative "transformative" strategy, aimed at changing the structures of work instead of merely withdrawing from them. To this end, I explore an alternative ideal, which I call "fair hybrid cooperation," according to which we should arrange the automation processes that affect labor in a way that provides meaningful work for all. This ideal assumes an always-evolving, hybrid cooperation scenario between humans and machines rather than one of "human obsolescence" (Danaher, 2019a). In this ideal, machines help us realize forms of cooperation that make meaningful work possible for all, rather than reserving it only for the few, as is currently the case, or anticipating a mere replacement scenario.

The article proceeds as follows. First, I discuss the efficiency motives driving current automation processes (Section 2). Then, I explain why efficiency should not be the overriding value orienting automation choices and why the motives of automation deserve further normative scrutiny (Section 3). Automation processes, I argue, are part of the basic structure of society, to which considerations of justice apply. Given the structural interdependence between humans and technology in social cooperation, we may refer to it as a system of "hybrid cooperation." Following on from this, I consider "fair hybrid cooperation" as an alternative value co-orienting automation priorities (Section 3). Fair hybrid cooperation is achieved when the organizational arrangements between humans and machines—and between humans themselves as a result of technological disruption—do not hinder, and preferably enable, workers' experience of certain primary qualities in their activity (Section 4). Finally (Section 5), I provide some practical examples through a "fair hybrid cooperation test" to show how this ideal may play out in the real world. I then consider alternative cooperative imaginaries inspired by this ideal, focusing on the case of nurse bots. Section 6 concludes.

The costs of automation have dropped dramatically since the beginning of the computing era. As Nordhaus (2007, p. 1) points out, "depending on the standard used, computer performance has improved since manual computing by a factor between 1.7 trillion and 76 trillion." Companies thus have quite a strong incentive to substitute human labor: it costs less and produces more.

Technological innovation has historically served ends of efficiency, which are often in conflict with the quality of workers' cooperation. Think of Adam Smith's (1776) classic reference to the pin factory. The detailed division of labor allowed by technological innovation in the 18th century brought about a dramatic increase in productivity. There was a price to pay, though: what Smith called the "stultification" of workers, trapped in a series of mindless tasks that ultimately degraded their intelligence and autonomy. Later on, Frederick Taylor's "scientific management" introduced the meticulous quantification of workers' input to maximize productive output, thereby reducing workers to cogs in a machine. Notoriously, no consideration for human benefits formed part of the Taylorist experiment. As captured in Taylor's (1919) own words, the goal was different: "in the past the man has been first; in the future the system must be first." Despite massive organizational transformations, automation's core rationale has not changed since that time. Efficiency might only derivatively benefit workers: that is, whether efficiency will benefit workers does not automatically follow from the general concern for efficiency in itself.

To be sure, this is not to deny that many technological innovations already serve other values as well. Doctors rely on sophisticated AI devices to optimize diagnostics and surgery, for example. In such cases (and others could be cited), automation serves further purposes, such as better healthcare services for patients. Therefore, strictly speaking, economic efficiency is not the only value being pursued. Thus, my argument is not entirely foreign to certain existing automation practices; rather, it explicitly articulates that economic efficiency should not be the exclusive, overarching value pursued by automation choices. The idea is that distinctive concerns for the benefits of workers, which are often left out of the picture, ought to be considered. When faced with opportunities for technological change, we ought to consider not merely productivity gains and performance optimization but also whether such changes will make cooperation fairer for workers.

A tendency can sometimes be identified in public debates whereby automation is naturalized: technological change is presented as a sort of natural process, much like an uncontrollable calamity, despite being the result of human choices (and of socio-structural processes cumulatively perpetuated by human choices). Naturalizing automation keeps it outside the realm of moral inquiry and the demands of justice. We consider worthy of normative inquiry what we acknowledge as resulting from human choices and social processes, much as we do with taxes, social biases, and all sorts of policies. Theories have argued for the redistribution of goods based on values such as equality, fairness, and human capabilities, presupposing that the way in which goods are distributed depends on human decisions. There is no inherent reason why the same should not be done in the context of work and technological change, in terms of preferable ways to realize labor automation. This requires that we de-naturalize our discourses around automation, fully recognizing the human drive and social genesis of technological change, thereby making space to question its driving motives and to expand their scope. Before addressing the idea of fair cooperation, let me articulate why economic efficiency should not be the overarching value and, more broadly, why it is appropriate to include automation in normative considerations.

To begin with, automation has a fully human and social genesis. Hence, it falls within the scope of normative inquiry, in which we question motives and ends and deliberate among the most desirable ones based on reasons. Furthermore, as I will argue shortly, automation belongs to the basic structure of society, to which—following John Rawls—considerations of justice apply. Automation affects how we organize cooperation and as such alters organizational forms, which are themselves part of the basic structure. Thus, automation affects workers' chances to pursue their life plans, as much as other institutions of the basic structure do. A further reason concerns the consequences of automation: allowing automation to be driven merely by economic efficiency can exacerbate social inequalities and power imbalances. While I will not expand on this point here, arguments could be made that these undesirable outcomes matter as well (e.g., Marmot et al., 1997).

Contributive primary qualities (CPQs) refer to qualities of the relation between workers and their work activity that workers should experience in order to pursue their conception of the good life. As is well known, Rawls' primary goods are all-purpose means necessary for everyone to pursue their life plans. No less than primary goods, however, what we do and how we do it affects our ability to pursue our life plans.

Research has shown that work also significantly affects us outside of the workplace, including our cognitive abilities and overall personality (see for instance Kohn & Schooler, 1978, 1982; Marmot et al., 1997). If work affects our being, it has the power to affect our ability to pursue our life plans overall. The rationale behind CPQs is that when our work activity hinders the experience of these qualities, our ability to pursue ends is impaired. Therefore, organizational forms involving divisions of labor between humans and machines that do not hinder, and preferably enable, CPQs are preferable.

These qualities are primarily relational in nature and organizationally embedded, rather than "goods" to be redistributed or "possessed." They emerge from the relation between workers, their work activity, and the organizational form. Examples of CPQs are security, self-direction, self-development, dignity, and recognition. In what follows, I articulate the essential features and rationale of each CPQ.

In this section I take up a few examples to give a clearer picture of how this ideal might play out in the real world. I show how the criterion provided can be used to assess existing organizational forms. In the next section, I show how it can be used to shape possible new cooperative imaginaries.

In current organizational arrangements, CPQs are highly segregated, reserved for a small portion of workers. Think of "ghost work" (Gray & Suri, 2019): invisible human labor operating behind the scenes of AI. It includes figures such as "data janitors" (Irani, 2019), who spend long hours labeling images and cleansing the internet of inappropriate content. Click farms and crowd work are living examples of how cooperative arrangements prioritizing the system over the human are by no means confined to the past. As a rather taskified form of work, with no security, little to no room for self-development and self-direction (except, to an extent, in time management), and invisible and therefore not susceptible to recognition, crowd work does not pass the fair hybrid cooperation test. According to our standard, it is thus objectionable and should be changed in ways more conducive to the CPQs.

Let us now consider automated management in the gig economy, particularly in the ride-hailing sector, to see whether it meets the criteria of fair hybrid cooperation. To begin with, most drivers and riders do not benefit from any kind of job security, as in several cases they are not even recognized as workers. In fact, in most countries, companies such as Uber frame them as "partners" or "independent contractors." Hence, workers bear the entire burden of the risks associated with the service they provide. They have, however, some room for self-direction. The aspect in which they enjoy the most self-direction tends to be time management: they decide when and for how long to work. Nonetheless, the algorithm nudges them to work longer hours, via notifications that promise higher earnings in certain areas and at certain times. Likewise, workers are constantly tracked, and their data is used both to monitor and to control their behavior, besides being a source of value extraction in itself. By declining a few orders in a row, they risk being banned from the app or ranked lower by the algorithm. A few negative reviews from passengers may lead to similar sanctions. These aspects suggest strong forms of control hindering self-direction. Finally, these highly controlling features do not seem to fit well with the dignitarian norms mentioned above. A slogan used by protestors—"We are drivers, not Uber's tools!"—is telling. It suggests a sense of being treated as "mere means" by the company. As for self-development, complaints about the repetitiveness and monotony of this job seem not particularly salient, so this CPQ might not be lacking.

While it may be very profitable for the company and convenient for customers, the automation of management here is not arranged in a way that enables fair hybrid cooperation. To pass the test, this organizational form should be rearranged so as to enable job security, by formally recognizing gig labor as work and therefore providing workers with contractual and social protections, and room for self-direction, for example by limiting datafication, nudges, and sanctions. Such changes may benefit the relational qualities as well.

Besides assessing existing organizational forms, the fair hybrid cooperation ideal can help us build alternative organizational arrangements yet to be realized. It can serve our organizational imagination in pursuing fairer forms of cooperation in an increasingly hybrid world. As an example of a positive exploration of the ideal, in what follows I consider the potential of care work automation to fulfill this purpose.

To develop an ideal of "fair hybrid cooperation," I have argued for the de-naturalization of automation and for the importance of questioning its driving motives. While economic efficiency is one of the main drivers of automation, this article has discussed fair hybrid cooperation as an alternative value to orient labor automation choices. Labor automation processes are part of the basic structure of society, to which considerations of justice apply. Given its structural interdependence with technology, social cooperation may be said to be hybrid. As a process altering this relation, automation raises normative considerations. The contributive primary qualities provide a criterion to normatively assess existing organizational forms and to envisage preferable cooperative arrangements. Fair hybrid cooperation is meant to expand the normative vocabulary at our disposal and to provide practical orientation in labor automation decisions. This perspective shifts the focus of the debate from ethically desirable lifestyles in a supposedly workless future to the enabling potential of technology for fair cooperation. In its current forms, technology-driven change in work practices does not benefit everyone. Many are left with only the crumbs of automation, perpetuating a scenario of meaningful work for the few. The fair hybrid cooperation ideal aims instead at reconciling technological change with the goal of making meaningful work available to all.

The author has no conflict of interest for the submitted article.

Journal of Social Philosophy, 55(1), pp. 25–43. Published 2023-08-25. DOI: 10.1111/josp.12548. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/josp.12548
In its current forms, technology-driven changes in work practices do not benefit everyone. Some are reserved for the crumbs of automation, perpetuating a scenario of meaningful work for the few. The fair hybrid cooperation ideal aims instead at reconciling technological change with the goal of making meaningful work available for all.</p><p>The author has no conflict of interest for the submitted article.</p>\",\"PeriodicalId\":46756,\"journal\":{\"name\":\"Journal of Social Philosophy\",\"volume\":\"55 1\",\"pages\":\"25-43\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2023-08-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/josp.12548\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Social Philosophy\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/josp.12548\",\"RegionNum\":3,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Social Philosophy","FirstCategoryId":"98","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/josp.12548","RegionNum":3,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ETHICS","Score":null,"Total":0}
Received: November 14, 2022 · Revised: July 27, 2023 · Accepted: August 7, 2023
Labor automation for fair cooperation: Why and how machines should provide meaningful work for all

By affecting work, resources, organizations, and people's lives, automation processes can be disruptive of the basic structure of society. Nonetheless, we may benefit from this disruption, as automation may offer opportunities to make social cooperation fairer. Just as philosophers have addressed the problem of which values and principles should regulate the distribution of goods, so we may consider the problem of the values and principles guiding technological change with regard to work. Indeed, automation is often addressed from a distributive perspective. A prevailing concern in the debate is about making sure, through unconditional redistributive policies, that the technologically unemployed will not lose access to income, while some have suggested policies like a "robot tax" to disincentivize companies' investment in labor-saving devices. While crucial given the massive increase in profits afforded by automation and the inequalities that go with it, concerns about income are not the only ones raised by automation. Without underestimating their relevance, in this article I leave aside problems about income to focus on automation from the perspective of work. That is, my concern here is with social cooperation from the perspective of contribution instead of distribution, within a framework that may be called technological contributive justice. If advocates of universal basic income (UBI) expect everyone to benefit from automation in their income, the contributive perspective postulates that everyone should benefit from automation in their work.

There are three main reasons behind this shift. First, even in a world in which income were unconditionally accessible to all, there would be the problem of how to fairly organize the un-automated socially necessary labor (e.g., waste collection, care work, etc.). I call this the "somebody's got to do it" problem. It cannot be solved by merely reallocating income, because it concerns the division of labor itself and its norms. Second, by conceptualizing social cooperation only as a matter of markets and distribution but not production, we lose sight of what people do, and not only of what they own. But this matters too when it comes to pursuing our life plans (see Section 4), as well as for the effects of work on our aims, aspirations, and character (see Section 3.1). Finally, even if work were completely automatable, we would most likely still consider it undesirable to fully automate certain tasks, such as child care or teaching.

On the other hand, normative thinking about automation often takes the form of utopias of "full automation." Recent examples include ideas of "fully automated luxury communism" (Bastani, 2019) or "post-work" views. A fully automated world, Danaher (2019b) argues, would allow us to pursue a life free from the pressures of economic demands and to enjoy activities for their own sake, much like playing games. From the premise that work is "structurally bad," he draws the conclusion that we should retreat from work, and the prospect of full automation is convenient for this purpose. This is what he calls a "withdrawal" strategy, which disengages from work and its demands altogether.

While captivating, post-work arguments rely on a debatable premise: that work will, in fact, end. This is by no means certain and, as such, is not falsifiable. There is much controversy around this issue and there are several nuances to consider. In this article, I develop the point that given this uncertainty, we should redirect our attention to the opportunities that hybrid cooperation may bring about for a fairer society. This requires us to reframe the problem of automation from one of whether future joblessness will take place and is desirable, to one of the preferable ways to realize hybrid cooperation.

Thus, in this article I pursue an alternative "transformative" strategy, aimed at changing the structures of work instead of merely withdrawing from them. To this end, I explore an alternative ideal, which I call "fair hybrid cooperation," according to which we should arrange the automation processes that affect labor in a way that provides meaningful work for all. This ideal assumes an always-evolving, hybrid cooperation scenario between humans and machines rather than one of "human obsolescence" (Danaher, 2019a). In this ideal, machines help us realize forms of cooperation that make meaningful work possible for all, rather than reserving it only for the few, as is currently the case, or anticipating a mere replacement scenario.

The article proceeds as follows. First, I discuss the efficiency motives driving current automation processes (Section 2). Then, I explain why efficiency should not be the overriding value orienting automation choices and why the motives of automation deserve further normative scrutiny (3). Automation processes, I argue, are part of the basic structure of society, to which considerations of justice apply. Given the structural interdependence between humans and technology in social cooperation, we may refer to it as a system of “hybrid cooperation.” Following on from this, I consider “fair hybrid cooperation” as an alternative value co-orienting automation priorities (3). Fair hybrid cooperation is achieved when the organizational arrangements between humans and machines—and between humans themselves as a result of technological disruption—do not hinder, and preferably enable, workers' experience of certain primary qualities in their activity (4). Finally (5), I provide some practical examples through a “fair hybrid cooperation test” to show how this ideal may play out in the real world. I then consider alternative cooperative imaginaries inspired by this ideal, focusing on the case of nurse bots. Section 6 concludes.

The costs of automation have dropped dramatically since the beginning of the computing era. As Nordhaus (2007, p. 1) points out, "depending on the standard used, computer performance has improved since manual computing by a factor between 1.7 trillion and 76 trillion." Companies thus have quite a strong incentive to substitute machines for human labor: they cost less and produce more.

Technological innovation has historically served ends of efficiency, which are often in conflict with the quality of workers' cooperation. Think of Adam Smith's (1776) classic reference to the pin factory. The detailed division of labor allowed by technological innovation in the 18th century brought about a dramatic increase in productivity. There was a price to pay, though: what Smith called the "stultification" of workers, trapped in a series of mindless tasks that ultimately degraded their intelligence and autonomy. Later on, Frederick Taylor's "scientific management" introduced the meticulous quantification of workers' input to maximize productive output, thereby reducing workers to cogs in a machine. Notoriously, no consideration for human benefits formed part of the Taylorist experiment. As captured by Taylor's (1919) own words, the goal was different: "in the past the man has been first; in the future the system must be first." Despite massive organizational transformations, automation's core rationale has not changed since that time. Efficiency might only derivatively benefit workers: that is, whether efficiency will benefit workers does not automatically follow from the general concern for efficiency in itself.

To be sure, this is not to deny that many technological innovations already serve other values as well. Doctors rely on sophisticated AI devices to optimize diagnostics and surgery, for example. In such cases (and others could be cited), automation serves other purposes such as better healthcare services for patients. Therefore, strictly speaking, economic efficiency is not the only value being pursued. Thus, my argument is not entirely foreign to certain existing automation practices; rather, it explicitly articulates that economic efficiency should not be the exclusive, overarching value being pursued by automation choices. The idea is that distinctive concerns for the benefits of workers ought to be considered, which are often left out of the picture. When faced with opportunities for technological change, we ought to consider not merely productivity gains and performance optimization but also whether such changes will make cooperation fairer for workers.

A tendency can sometimes be identified in public debates whereby automation is naturalized: technological change is presented as a sort of natural process, much like an uncontrollable calamity, despite its being the result of human choices (and of socio-structural processes cumulatively perpetuated by human choices). Naturalizing automation entails keeping it outside the realm of moral inquiry and demands of justice. We consider as worthy of normative inquiry what we acknowledge as resulting from human choices and social processes, much as we do with taxes, social biases, and all sorts of policies. Theories have argued for the redistribution of goods based on values such as equality, fairness, and human capabilities, presupposing that the way in which goods are distributed depends on human decisions. There is no inherent reason why the same should not be done in the context of work and technological change, in terms of preferable ways to realize labor automation. This requires that we de-naturalize our discourses around automation, fully recognizing the human drive and social genesis of technological change and thereby making space to question its driving motives and to expand their scope. Before addressing the idea of fair cooperation, let me articulate why economic efficiency should not be the overarching value and, more broadly, why it is appropriate to include automation in normative considerations.

To begin with, automation has a fully human and social genesis. Hence, it falls within the scope of normative inquiry, in which we question motives and ends and deliberate among the most desirable ones based on reasons. Furthermore, as I will argue shortly, automation belongs to the basic structure of society, to which—following John Rawls—considerations of justice apply. Automation affects how we organize cooperation and as such alters organizational forms which are themselves also part of the basic structure. Thus, automation affects workers' chances to pursue their life plans, as much as other institutions of the basic structure. A further reason concerns the consequences of automation: allowing automation to be driven merely by economic efficiency can exacerbate social inequalities and power imbalances. While I will not expand on this point here, arguments could be made that these undesirable outcomes matter as well (e.g., Marmot et al., 1997).

Contributive primary qualities (CPQs) are qualities of the relation between workers and their work activity that workers should experience in order to pursue their conception of the good life. As is well known, Rawls' primary goods are all-purpose means necessary for everyone to pursue their life plans. No less than primary goods, however, what we do and how we do it affects our ability to pursue our life plans too.

Research has shown that work significantly also affects us outside of the workplace, including in our cognitive abilities and overall personality (see for instance Kohn & Schooler, 1978, 1982; Marmot et al., 1997). If work affects our being, it has the power to impact our ability to pursue our life plans overall. The rationale behind CPQs is that when our work activity hinders the experience of these qualities, our ability to pursue ends is severed. Therefore, organizational forms involving divisions of labor between humans and machines that do not hinder, and preferably enable, CPQs are preferable.

These qualities are primarily relational in nature and organizationally embedded rather than “goods” to be redistributed or “possessed.” They emerge from the relation between workers, their work activity, and the organizational form. Examples of CPQs are: Security, self-direction, self-development, dignity, and recognition. In what follows, I articulate the essential features and rationale of each CPQ.

In this section I take up a few examples to give a clearer picture of how this ideal might play out in the real world. I show how the criteria provided can be used to assess existing organizational forms. In the next section, I show how they can be used to shape new possible cooperative imaginaries.

In current organizational arrangements, CPQs are highly segregated, reserved for a small portion of workers. Think of "ghost work" (Gray & Suri, 2019): invisible human labor operating behind the scenes of AI. It includes figures such as "data janitors" (Irani, 2019) who spend long hours labeling images, for example, and cleansing the internet of inappropriate content. Click-farms and crowd-work are living examples of how cooperative arrangements prioritizing the system over the human are by no means confined to the past. As a rather taskified form of work, benefiting from no security, with little to no room for self-development and self-direction (except, to an extent, in terms of time management), invisible and therefore not susceptible to recognition, crowd-work does not pass the fair hybrid cooperation test. According to our standard, it is thus objectionable and should be changed in a way that is more conducive to the CPQs.

Let us now consider automated management in the gig economy, particularly in the ride-hailing sector, to see whether it meets the criteria of fair hybrid cooperation. To begin with, most drivers and riders do not benefit from any kind of job security, as in several cases they are not even recognized as workers. In fact, in most countries, companies such as Uber frame them as "partners" or "independent contractors." Hence, workers bear the entire burden of the risks associated with the service they provide. They have, however, some room for self-direction. The aspect in which they enjoy most self-direction tends to be time management: they decide when and for how long to work. Nonetheless, the algorithm nudges them to work longer hours, via notifications that promise higher earnings in certain areas and at certain times. Likewise, workers are constantly tracked, and their data is used to both monitor and control their behavior, besides being a source of value extraction itself. If they decline a few orders in a row, they risk being banned from the app or ranked lower by the algorithm. A few negative reviews by passengers may lead to similar sanctions. These aspects suggest strong forms of control hindering self-direction. Finally, these highly controlling features do not sit well with the dignitarian norms mentioned above. A slogan used by protestors—"We are drivers, not Uber's tools!"—is telling. It suggests a sense of being treated as "mere means" by the company. As for self-development, complaints about the repetitiveness and monotony of this job do not seem particularly prominent, so this CPQ might not be lacking.

While it may be very profitable for the company and convenient for customers, the automation of management here is not arranged in a way that enables fair hybrid cooperation. In order to pass the test, this organizational form should be rearranged so as to enable job security by formally recognizing gig labor as work and therefore providing workers with contractual and social protections; and room for self-direction, for example by limiting datafication, nudges, and sanctions. Such changes may benefit the relational qualities as well.

Besides assessing existing organizational forms, the fair hybrid cooperation ideal can help us build alternative organizational arrangements, yet to be realized. It can serve our organizational imagination to pursue fairer forms of cooperation in an increasingly hybrid world. As an example of a positive exploration of the ideal, in what follows I consider the potential of care work automation to fulfill this purpose.

To develop an ideal of "fair hybrid cooperation," I have argued for the de-naturalization of automation and for the importance of questioning its driving motives. While economic efficiency is one of the main drivers of automation, this article has discussed fair hybrid cooperation as an alternative value to orient labor automation choices. In fact, labor automation processes are part of the basic structure of society, to which considerations of justice apply. Given its structural interdependence with technology, social cooperation may be said to be hybrid. As a process altering this relation, automation raises normative considerations. The contributive primary qualities provide a criterion to normatively assess existing organizational forms and to envisage preferable cooperative arrangements. Fair hybrid cooperation is meant to expand the normative vocabulary at our disposal and to provide practical orientation when it comes to labor automation decisions. This perspective shifts the focus of the debate from ethically desirable lifestyles in a supposedly workless future to the enabling potential of technology for fair cooperation. In their current forms, technology-driven changes in work practices do not benefit everyone. Some workers are left only the crumbs of automation, perpetuating a scenario of meaningful work for the few. The fair hybrid cooperation ideal aims instead at reconciling technological change with the goal of making meaningful work available for all.

The author has no conflict of interest for the submitted article.
