Privacy, Publicity, and the Right to Be Forgotten

Hannah Carnegy-Arbuthnott
{"title":"Privacy, Publicity, and the Right to Be Forgotten","authors":"Hannah Carnegy-Arbuthnott","doi":"10.1111/jopp.12308","DOIUrl":null,"url":null,"abstract":"<p>Many of us will remember the sense of foreboding induced by the simple threat, usually uttered by a teacher: ‘It will go on your permanent record’. This administrative bogeyman exploits our early awareness of the importance of being able to leave some things in the past. While some of the things we do may go on public record at various points in our lives, it used to be possible to comfort ourselves with the thought that these would soon be buried deep in the archives, where most people would not care to look. In the digital age, however, search engines do the digging and can serve up previously long-forgotten results on a simple search of a person's name.</p><p>This article takes up the question of what kinds of claims we have against information being dug up from our past. Specifically, it focuses on information that is legitimately a matter of public record. When some information has been stored in an archive and has, in principle, been accessible to anyone, what claims, if any, do we have against that information being brought back to light? Many people would find it intrusive for someone to dig through archives and publicize afresh the information they found out about you there, but it is not clear what the basis of such a complaint would be. Moreover, it is this kind of complaint that is provided protection through data-protection provisions colloquially referred to as the right to be forgotten. Such measures are typically framed as privacy protections. 
While some have argued that we can have privacy rights over information that has been made public,1 I argue that appeals to privacy fail in cases concerning information that is legitimately a matter of public record.</p><p>Paying attention to the reasons we have to object to the dredging up of outdated information reveals a new category of claims that are distinct from claims to privacy, but serve the same general interest in self-presentation that privacy scholars have long been concerned with. I call these claims against distortion. We can understand such claims as falling under a general principle of reputational control embedded in the historic right of personality. That general principle provides the basis for claims to privacy, claims against defamation, and, as I will argue, claims against distortion.</p><p>The purpose of invoking the right of personality is to identify a general principle embedded in that concept, and to then draw out a taxonomy of claims that serve that principle in distinct ways. One advantage of separating these three categories of claims is that it allows us to retain clear boundaries around the concept of privacy, which has often come under fire for suffering from a plethora of meanings.2</p><p>On my account, claims to privacy are claims to prevent certain information from becoming publicly available beyond one's audience of choice. Privacy provides one aspect of reputational control by allowing us to decide who we share various aspects of oneself with. Claims against defamation are claims against having false statements made about oneself in a way that is detrimental to one's reputation. Claims against distortion are claims against having one's public profile distorted in ways that involve the dissemination of true information. 
Such distortion can happen when true information from someone's past is presented in a way that suggests it would be appropriate to hold them accountable for it, when it is no longer appropriate to do so.</p><p>Identifying the category of claims against distortion and analysing its place alongside claims to privacy and claims against defamation allows us to broaden the scope of the justificatory basis of data-protection provisions. In doing so, it widens the scope of debates about when and why we have reason to give individuals more control over their personal information. Data-protection provisions have tended to be framed and defended as measures which protect the privacy rights of individuals. There is an important place for data-protection regulations which serve that purpose. However, not all data-protection regulations can be understood or justified as privacy protections. Trying to analyse all data protections through the lens of privacy therefore unduly constrains our thinking about how such regulation may be justified on the basis of providing important protections to the individual.</p><p>The main argument proceeds by examining the data-protection provisions known as the ‘right to be forgotten’. Specifically, it addresses the aspect of this regulation that gives individuals claims to the erasure of certain results that are displayed when their name is entered into a search engine. The aspect of the right to be forgotten that requires explaining is the rationale for erasing search-engine results that point to some content, while leaving the same content publicly available at the original source. Privacy-based approaches fail to adequately explain the purpose that is served by this aspect of the right to forgotten, as it has been implemented under the EU's General Data Protection Regulation (GDPR). 
This is because what is at stake in the case of the curation of a person's search profile is not always what information is publicly available, but how that information is framed or contextualized. When a search engine serves outdated or irrelevant search results for a person's name, this can amount to a distortion of their public profile. We have reason to care about these practices, and give individuals claims against them, but such claims are not based in a right to privacy. They are better understood as claims against distortion.</p><p>While the right to be forgotten is a relatively narrow aspect of data-protection regulation, it shares a feature that is common to many concerns about our current situation with respect to the collection and use of personal data. It is a case in which the information in question has already been made public, and either cannot or should not be made entirely private again. And yet, concerns remain about the erosion of the individual's control over how their personal information is presented to others. Under these circumstances, we need to find a way to shut the stable door after the horse has already bolted. Claims against distortion provide a normative basis for regulations aimed at providing such control.</p><p>What is colloquially referred to as the right to be forgotten is not a general right as such, but a set of provisions laid out in the GDPR.3 Article 17 of the GDPR sets out a ‘right to erasure’, which gives data subjects the right to obtain from data controllers the erasure of personal data concerning the data subject. 
When relevant criteria are met, such as when the data is no longer necessary to the purposes for which it was collected, the data controller must respond to erasure requests and delete the data in question without undue delay.</p><p>One of the controversial aspects of this legislation is that search-engine providers such as Google are considered to be data controllers, and search-engine results are classed as personal data under certain circumstances.4 This means that an individual can request the removal of certain search results that are displayed when her name is typed into Google as a search term, and Google is obliged to remove them if the relevant conditions set out in Article 17 of the GDPR apply.</p><p>This aspect of the right to be forgotten has come under intense criticism from search-engine operators, media commentators, and scholars alike.5 Critics have argued that such measures go too far in curtailing freedom of information, especially when the search results in question pertain to information that was lawfully made public. It has been suggested that allowing individuals to make private information that has already been in the public domain, and indeed remains publicly accessible through other means, is futile at best, and at worst based on confused or incoherent premises.6 Finally, the way in which the right has been implemented has been criticized almost to the point of ridicule for being counterproductive on its own terms.</p><p>The case that formed the basis of the landmark ruling on the matter by the Court of Justice of the European Union (CJEU) is the prime target of those charges of inconsistency. The plaintiff in the case, Mario Costeja González, had requested the erasure of details of a foreclosure notice issued against him, which had been published on the web pages of the newspaper <i>La Vanguardia</i>. 
The court decided that the link to the original newspaper record that featured as a search result on Google when Costeja's name was entered as a search term should be erased. However, <i>La Vanguardia</i> was not required to remove the original article from its website. This inconsistency in the application of the rules of erasure between the original source and the Google search-results page is a principal bone of contention in analyses of the ruling.</p><p>Moreover, in an ironic twist, the case became subject to a phenomenon known as the Streisand effect—when the attempt to censor or protect private information results in it becoming much more widely publicized.7 The landmark ruling was widely reported. As a result, now when you enter Costeja's name as a search term on Google, the first page of search results is entirely populated with links to reports about the ruling, which contain reference to the facts of the case, including mention of the original foreclosure notice Costeja was seeking to have ‘forgotten’. Because the facts of the legal case are a relevant and sufficiently weighty matter of public interest, it has not been deemed appropriate for these search results to be erased, but nor has the judgment about the erasure of the original link to the foreclosure notice been reversed.</p><p>Defending these aspects of the ruling through the lens of privacy presents a challenge, given that the information in question remains in the public domain via the original source, and has subsequently been amplified through the Streisand effect. I will argue that the ruling can nevertheless be interpreted as serving an important interest in self-presentation if we understand it as based on a claim against distortion, rather than a claim to privacy. This interpretation allows us to explain away the seeming inconsistency in treating the original source and the Google link differently with respect to erasure. 
While the Streisand effect is likely to be a rare side-effect particular to the landmark case, analysing this aspect of the case helps to illustrate the rationale for treating search-result links differently from source material. It sheds light on the importance of how information is framed in a given context, over and above the question of whether some piece of information is in the public domain. This idea is central to the case for claims against distortion.</p><p>Before making the positive case for a distortion-based justification for the right to be forgotten, further argument is required to motivate the move beyond privacy-based justifications. Several scholars have made the case in favour of the right to be forgotten by drawing on the idea that the forgetting of information over time has long been an important aspect of our cultural practices. Such gradual forgetting, it is argued, has been crucial to our ability as individuals to lead adequately autonomous lives, unencumbered by mistakes or embarrassing mishaps from our past. From this perspective, the widespread accessibility of old information made possible in the digital age represents a radical and destabilizing shift, which threatens our ability to live our lives on our own terms as the years go by.8</p><p>Proposed solutions to this problem have focused on the principles and pragmatics of how to strike the right balance between the protection of individual privacy on the one hand, and the public interest in information on the other, once we've recognized that the weighting of this balance might shift over time. 
Viktor Mayer-Schönberger, for example, argues that information should have an expiry date after which it automatically gets deleted.9 Less radically, Giovanni Sartor makes the case that regulators should direct data controllers as to when the balance has shifted such that information warrants deletion.10 On the basis of an argument from moral autonomy, Meg Leta Jones has argued that we should move towards a model of information stewardship for individual users, allowing them ultimate control over their online profiles.11</p><p>I share the general concern that the widespread accessibility of outdated personal information may lead to a stifling social environment in which people are constantly reminded of and held back by events from their past. I am also sympathetic to the idea that individuals should have more control over their personal information profiles. However, to the extent that the focus in this domain has remained within the framework of assessing whether information in various cases should be ‘public’ (accessible) or ‘private’ (erased), this framing of the debate still implies a univocal approach to erasure across different information sources.</p><p>The univocal implications of the privacy framing are demonstrated by Robert Post in his critique of the <i>Google Spain</i> ruling.12 Post argues that the decision conflated two conflicting senses of the right to be forgotten, one based on a bureaucratic principle of data protection, and the other on the notion of dignitary privacy. According to Post, the bureaucratic principle assumes that data are always collected for instrumentally specified purposes, and warrant erasure when no longer required for those purposes. Dignitary privacy rights, by contrast, seek to ‘define and enforce social norms of respectful expression’.13 Protecting dignitary privacy requires a balance to be struck between protecting the dignity of the individual against the public interest in freedom of expression. 
Post points out that the public interest extends broadly to information that ‘can or will become part of the agenda of public action’, as well as to ‘maintaining the integrity of the structure of communication that makes public discourse possible’.14 Post argues that Google forms part of that structure of communication. The ruling therefore should have followed a principle of dignitary privacy. According to a principle of dignitary privacy, the standard by which to assess whether some information ought to be accessible to the public is the extent to which that information is deemed offensive or newsworthy. If deemed sufficiently offensive to cause harm to the dignity of the individual, the individual has a privacy claim against its publication.</p><p>In the context of dignitary privacy, according to Post, ‘what matters is not the content of information, but the context and meaning of particular communicative acts’.15 A key tenet of Post's argument is that Google links carry no further meaning over and above the content of the underlying web pages to which they point. As a result, they cannot be deemed any more or less offensive or newsworthy than that original content. On Post's account, the court should therefore have applied the same outcome to Google search results as to the underlying websites to which they point.</p><p>Post's point about the contextual meaning of communicative acts is an important one. He interprets the significance of context in terms of the appropriateness of the audience to which the information is exposed, as evidenced by the following example: ‘It may be humiliating to reveal the CT scan of a private person to the general public, but unobjectionable to provide it to a doctor for evaluation’.16</p><p>I will argue in Section III that search-result links do carry meaning over and above the content to which they point, and that this gives us reason in some cases to treat them differently from that underlying content. 
However, the way in which they convey such contextual meaning is not by virtue of reaching a different or wider audience than the original source material. The contextual meaning of such communicative acts can thus fall foul of standards of offensiveness and give rise to concerns about dignitary harms in a way that is not adequately captured by the concept of privacy. The reason for this is that the concept of privacy is constrained to concerns about the appropriateness of the audience to which some information is exposed. Instead of expanding the concept of privacy, we have good reason to keep it so bounded, and instead expand the category of claims that function in the service of protecting individuals' interests in self-presentation.</p><p>In order to bring out the way in which the concept of privacy turns on the question of audience exposure, it will be instructive to consider key conceptions of privacy. Conceptions of privacy standardly invoke a distinction between public and private spheres.17 On this general view, it is assumed that a person's privacy interest is frustrated whenever some information that ought to be restricted to a specific audience crosses over into a more public domain. For example, in Judith Jarvis Thomson's influential view of privacy rights, once a person has made some information public, whether intentionally or unintentionally, they are considered to have thereby waived their privacy right over that information.18 It is this feature of approaches to privacy that makes it difficult to make sense of the outcome in <i>Google Spain</i>, if we interpret the data-protection measures in question as aimed at protecting a privacy right.</p><p>This general structuring idea underpins a range of approaches to privacy: privacy as control, privacy as accessibility, and privacy as contextual integrity. 
The conception of privacy as control paints a picture of the individual as sovereign over her personal domain, with her consent required to let others in, at her own discretion.19 In doing so, it most directly invokes a sharp distinction between the private sphere of the individual as a domain that requires protection against unwanted encroachment from the public.</p><p>The conception of privacy as accessibility departs from the notion that privacy is best served when the individual has ultimate control over the flow of information between the private and public spheres. Arguments in this camp point to the fact that the value of privacy can be undermined when individuals willingly open the floodgates, especially if the resulting dissolution of their private sphere is irreversible.20 This represents a more paternalistic stance towards protecting privacy, but one nevertheless based on the premise that some things should be kept within a private sphere protected from public scrutiny.</p><p>The third approach, privacy as contextual integrity, more directly challenges the simple picture of private and public as two distinct spheres. Instead, it emphasizes privacy as a right to an appropriate flow of information, with different information being appropriately known by different parties in different social contexts.21 This approach rejects the idea that the dividing line of privacy is between activities that are carried out in private spheres such as the home or within the family, and those carried out in public. It does so to make space for the idea that certain ways of collecting personal information can constitute violations of privacy even where the individuals in question are acting in public. However, in positing the idea of an appropriate flow of information, it nevertheless draws on the same general structuring idea of different spheres of accessibility of information. 
Each distinct social context in which some information is appropriately known can thus be thought of as its own restricted sphere as against the rest of the public who are not included in that particular context.</p><p>While each of these approaches to privacy offers its own take on how best to think about the divide between public and private in any given context, they all share the idea that some privacy interest is frustrated when information becomes publicized in a context in which it ought to be restricted to a smaller or different group of people. One response to this might be to seek to expand the concept of privacy beyond this framing, in order to capture concerns about the ways in which different sources present information to us, over and above the question of the audiences to which they make it available. Such concerns about the contextual presentation of information ultimately explain what is at stake in cases like <i>Google Spain</i>.</p><p>However, expanding the concept of privacy to attempt to capture these concerns risks undermining its coherence, leaving it open to the critique that it is an ill-defined term used to cover a range of disparate interests.22 This gives us good reason to keep the concept of privacy bounded and specific, instead of expanding it to capture all the situations that give rise to concerns about reputational control. In keeping the concept of privacy bounded to the question of audience exposure, I follow Carissa Véliz's description of privacy as</p><p>This general structuring framework of the concept of privacy underpins the contention that the ruling in <i>Google Spain</i> was inconsistent, if based on a privacy claim. 
If privacy is about keeping information restricted to appropriately restricted spheres, then it is hard to see how one could justify treating Google search results differently from the original source material they point to, when that original source material is in principle publicly accessible to the same audience. It would be an even harder task to justify the differential treatment of Google links pointing to different sources, when those sources contain the same informational content, as happened with respect to the links to newspaper articles reporting the details of the <i>Google Spain</i> case compared to the links to the original foreclosure notice in <i>La Vanguardia</i>. If we cannot justify these elements of the right to be forgotten by appeal to the importance of privacy, we need to find another way to justify the measures implemented in <i>Google Spain</i> or accept that the case was dealt with inconsistently.</p><p>A separate question that arises in response to understanding privacy as functioning to keep information restricted to appropriate spheres is whether we can have claims to privacy over information that has already been made public. Benedict Rumbold and James Wilson have argued that people can retain a right to privacy over information which has been made public.25 They argue that one infringes another person's privacy right when one deduces some piece of information about that person that they did not intend to reveal, from information which they did intentionally make public. While it may be possible to apply privacy rights in this way to information that has been made public in the specific way identified by Rumbold and Wilson, the cases relevant to the right to be forgotten do not fit that mould. Instead, they are cases where some piece of information has intentionally been made public, either willingly by the data subject in the first instance, or by virtue of it being a legitimate matter of public record. 
The particular challenge this article takes up is whether there is any justification for treating different sources for reporting that information differently with respect to erasure. Rumbold and Wilson's argument thus does not apply to the features of the right to be forgotten that are of central concern to this article.</p><p>An alternative path to justifying the right to be forgotten can be drawn from the principles underpinning the more general concept of the right of personality. To the extent that this alternative justification stands up to scrutiny, it extends the theoretical tools we can draw on to defend various aspects of data-protection provisions beyond the constraints of what privacy-based arguments alone can offer.</p><p>The argument is not meant to assume that the measures applied in <i>Google Spain</i> were intuitively correct, and that there must be an explanation for their consistency. Rather, the aim is to show that there is a plausible path to justifying data-protection measures which need not draw on a right to privacy, and provides a more coherent way to make sense of the measures applied in <i>Google Spain</i>.</p><p>The origins of a right to be forgotten pertaining to personal data can be traced back at least as far as a legislative project developed in France in 2010.26 The French legal concept of a <i>droit à l'oubli</i>, sometimes translated rather dramatically into English as a right to oblivion, has historical precedent in setting standards for the way in which spent criminal convictions are expunged from the public record after a certain amount of time has elapsed. 
Within the legal tradition of continental Europe, the right to be forgotten stems from the right of personality, which has historically been conceived of as protecting the dignity, honour, and right to private life of individuals.27</p><p>As Rolf Weber points out, the right of personality is a right that incorporates a number of different concepts and terminologies, and is mainly intended to protect the moral and legal integrity of the person. It delineates a sphere of privacy as part of that goal. It is in this context that the right to be forgotten and the way it has been enshrined in the provisions of the GDPR has been interpreted and implemented as a privacy right.</p><p>However, as argued above, the framing of the right to be forgotten as a privacy right constrains the scope of data-protection provisions that can be included in such a right. In order to provide a better justificatory basis for those data-protection provisions, we need to shift focus away from a narrow framing around privacy, and instead lean on some of the other concepts invoked by the right of personality to motivate and justify the claims in question.</p><p>In doing so, the aim is not to provide a historical account of the legal origins of these existing data-protection provisions. Rather, it is to provide a philosophical analysis of the principles already embedded in the right of personality which can provide foundations for the right to be forgotten, in a way that provides a path to justifying a broader range of data-protection provisions under that umbrella than a focus on privacy alone. More specifically, the task will be to outline three different categories of claims which serve to protect against different threats to reputational control. 
These are claims to privacy, claims against defamation, and claims against distortion.</p><p>The concepts of dignity and honour embedded in the right of personality are broad ones, as is the goal of protecting the ‘moral and legal integrity of the person’. When it comes to regulating the sharing of information, what is at stake is the integrity of a person's public persona. The key battleground over the integrity of one's public persona is located in the gap between how other people see, understand, and talk about us, compared to how we would like to present ourselves to others. The question this raises is how much control over the shaping of one's reputation to leave in the hands of individuals, and how much to allow to be shaped by others.</p><p>This idea of reputational control embedded in the more general right of personality has been invoked in support of privacy protections. Consider the following arguments that have been offered in defence of the importance of privacy. As we saw, Warren and Brandeis argued that the right to privacy was based on a principle of inviolate personality, and was required to secure peace of mind for the individual.28 Contemporary accounts have argued that privacy rights are required to enable valuable social relationships, allowing us to regulate the boundaries between intimate relationships and more distanced ones. Relatedly, appeals are made to our interest in having a reasonable degree of control over how one presents to others, or to make decisions about one's own life, protected from the interference and scrutiny of others, with the suggestion that these are key aspects of living an autonomous life.29</p><p>A common thread in the literature on privacy is thus the idea that privacy protects an interest in self-presentation. 
Marmor, for example, argues that the distinctive thing about privacy rights is that they safeguard our interest in ‘having a reasonable measure of control over ways in which we present ourselves to others and the ability to present different aspects of ourselves, and what is ours, to different people’.30 The second part of Marmor's formulation of the interest reveals again the general framing of privacy in terms of the appropriateness of the audience to which some Information is exposed. Marmor explains the distinctiveness of privacy by reference to the importance of being able to conduct intimate relationships, and the formulation of the second part of the interest is key to that case.</p><p>However, if we focus on the more general interest in having control over how we present to others (as expressed in the first half of Marmor's formulation), we can notice that privacy plays a relatively narrow role in safeguarding that interest. The distinct way in which it does so is by giving us a reasonable measure of control over which audiences we share different aspects of our lives with. That more general interest in self-presentation, however, can be undermined by ways of sharing information which do not infringe any privacy claims. Some ways of presenting information can constitute a claim-infringing distortion of one's reputation, even where the content of the information in question is already legitimately in the public domain.</p><p>Given the focus on the interest in self-presentation, it is no surprise that scholars have been concerned with different ways of distorting or manipulating a person's public presentation, and have sought to capture these under the umbrella of privacy rights. 
Daniel Solove, for example, includes distortion in his taxonomy of privacy.31 Solove distinguishes distortion from disclosure as follows:</p><p>However, as argued above, expanding the concept of privacy to include distortion and defamation risks playing into the hands of critics who complain that privacy suffers from an embarrassment of meanings. Having identified a general concern to protect the dignity and honour of the individual, we are better off identifying the distinct categories of claims which serve that aim by protecting against different types of threats to one's reputation. We can do this by separating claims to privacy, claims against defamation, and claims against distortion. These claims that come under the general right of personality can be thought of as institutional provisions which help to construct the juridical concept of the individual. They play a role in delineating the boundaries of the domain of individual authority as against the claims of the collective with respect to the shaping of our public personas. They do so by providing distinct elements of reputational control to individuals.</p><p>While Solove defines distortion as involving the dissemination of false or misleading information, I reserve the term distortion to refer to cases that involve the dissemination of true information in ways that nevertheless distort a person's public information profile. To make the case for claims against distortion, I provide an analysis of the way in which various ways of distorting a person's public information profile are distinct from defamation, but function to undermine the same general interest which claims against defamation serve to protect. 
An analysis of the function and significance of search-engine results sheds light on the importance of claims against distortion and the types of context in which they arise.</p><p>We can think of the search results that are served when someone types your name into a search engine as being an aspect of your public information profile. A person's online search profile forms an increasingly significant part of their public profile, in a way that can affect important parts of their lives. Employers and universities use search engines to research and assess prospective candidates, to an extent that has led to a burgeoning industry in online reputation-management services aimed at keeping clients' top search results appropriate to the professional image they wish to project.33 The same goes for assessing the suitability of a prospective romantic partner.34 ‘Googling’ other people has become an accepted social practice in contexts where we seek to make an assessment of someone's competence or character, and we take search engines such as Google to be useful tools in helping us to make some of those initial assessments.</p><p>Not only do our social practices reflect this, but Google's own strategy for delivering search results is explicitly to deliver a ranking by relevance to the search term in question. The results are also intended to be tailored to the purposes of the searcher, based on the profile that Google has built up about them through their other internet activity.35 The results that are served by a search engine operator when a person's name is entered as a search term are therefore not a neutral reference index. They form a curated profile of the search subject deemed by the search engine operator's algorithm to be most relevant to the purposes of the searcher. 
As such, this profile carries communicative content over and above the content of the information it points to.</p><p>In light of this, I suggest that when outdated or irrelevant results are featured prominently in the list of content served by a search engine when a person's name is used as a search term (that is, featured as highly relevant results), this can amount to a distortion of that person's public information profile. It is a distortion insofar as outdated or irrelevant results are presented as relevant to the searcher's assessment of the search subject. We have reason to object to this in the same way we would have reason to object to someone digging up some information from an old archive and re-publicizing it. However, further explanation is required as to what the nature of that objection is.</p><p>Claims against distortion contrast importantly with claims against defamation. Much as in the case of privacy, however, paying attention to the normative underpinnings of claims against defamation proves instructive in carving out a role for claims against distortion. In contrasting distortion and defamation, the specific conception of defamation I draw on is Arthur Ripstein's. Ripstein argues that the law against defamation serves to protect each person's entitlement that no other person should determine his or her standing in the eyes of others.36 The right is an inherently relational one, setting out the limits of what others may say about you. It arises from the principle that a person should only be held accountable for actions which are properly attributable to them.</p><p>The interest in reputation which a right against defamation protects is fundamentally juridical. What Ripstein means by this is that it is based on a normative idea that is inherent in private law as a system of individual responsibility and relational rights. 
The interest in reputation is not one that is contingently taken up by the law against defamation and weighed against other competing interests. It is baked into the very structure of private law itself. As such, Ripstein argues that the right to one's reputation is ‘at the root’ of a system of ordered liberty.</p><p>From this perspective, the notion of an interest in reputation is not merely about protecting the vanity of individuals or their sense of honour. Rights which protect the interest in reputation serve a more fundamental purpose in carving out the balance between the control that individuals have over their own lives as against the encroachments of other members of society. They play a part in ensuring that the control that individuals have over their own lives is never unduly outweighed by more powerful parties. Claims against distortion of one's public information profile have a part to play in protecting that same interest in reputation. They can do so by setting out the boundaries of reputational control with respect to practices of accountability.</p><p>A clear distinction between the forms of distortion mentioned above and defamation is that the information presented by the search engine in the cases in question is, strictly speaking, attributable to the data subject. However, if we draw on one of the principles underpinning defamation, that people should only be held accountable for things that are properly attributable to them, we can make headway by focusing on how practices of accountability might bear on a person's right to reputation.37</p><p>A closer analysis of normative practices for holding people accountable for what they've done will help to flesh out the claim that when outdated or irrelevant search results are featured prominently, that can amount to a distortion of a person's online public profile. 
It will do so by providing an explanation of why the framing of the information in the context of search results matters, beyond the mere question of whether or not the information contained therein is publicly available.</p><p>A key feature of the <i>Google Spain</i> ruling was that the link to the original foreclosure notice was removed from the search results associated with Costeja's name, while the original source material remained in place. Additionally, as a result of the Streisand effect, other search results which contained the same information presented in a different context were allowed to remain. A key point at issue is therefore whether individuals have claims to control how information about them that is not private, but a matter of public record, is presented.</p><p>While privacy pertains to the question of the audience to which some information is made available, and defamation hangs in part on whether some claim about a person is truly attributable to them, claims against distortion, I suggest, hang on the relevance of the contextual presentation of some information to our practices of accountability.</p><p>The case for allowing individuals some measure of control over their public information profile builds from the idea that the principles embedded in the right of personality are about ensuring adequate protection of individuals' control over how to present themselves to others. Our systems of reputation are not just about what is known about a person, but also how we are held accountable for what is known about us. Drawing on Nagel, we can invoke the importance that ‘no more should be subjected to the demands of public response than is necessary for the requirements of collective life’.38</p><p>The question of what is subjected to the demands of public response, and how, goes beyond what is true or false about a person, or whether certain information is private or made public. 
Our practices of accountability form an important part of the context that individuals must navigate in presenting themselves to others. And the question of what things they can be appropriately held accountable for bears on a person's ability to navigate valuable social relationships. We therefore have reason to interrogate how presenting outdated information about a person as salient in a given context could undermine or disrupt our practices of accountability in concerning ways.</p><p>For the purposes of this article, I'll focus on a particular aspect of our norms around accountability. This is the idea that there exist various social norms around what information about a person it is acceptable to publicly acknowledge in various social contexts, even if that information is common knowledge. There are some situations where everyone knows some piece of information about a person, and everyone knows that everyone knows this, and yet it would not be polite to mention the information in question, nor to expect the other person to answer for it. Our social norms are such that we would not deem it appropriate in such circumstances to hold someone accountable for the thing they are known to have done.</p><p>Nagel gives a vivid example of this phenomenon. Two academics, one of whom has just written a cutting review of the other's book, meet at a party. What Nagel calls ‘conventions of reticence’ in this case require both to avoid the contentious subject, and to engage in small talk on other topics rather than enter into acrimonious debate over the review. Nagel discusses the importance of such conventions of reticence in terms of the public/private divide.</p><p>When a search-engine operator presents old information as a prominent search result, that implies the relevance of the result to the purposes for which the search operator knows we use its function (including, for example, assessing a prospective date or employee). 
To the extent that such rankings signal a degree of relevance that cuts against norms of accountability, we can judge that they amount to a distortion of a person's public profile. They do so because they deliver an implicit message that it would be appropriate to hold someone accountable for the information in question. The distortion consists in calling attention to something that it is no longer reasonable to expect a person to answer for, even if that information is a matter of public record.</p><p>An important caveat is required here, which is that claims against distortion will only apply in cases where the information in question is of a kind that would in principle be appropriate to hold someone accountable for, and the appropriateness of holding them so accountable has elapsed. This caveat is required to account for a principle which holds that people cannot be morally assessed for things that are due to factors beyond their control. Without taking account of this principle, the category of claims against distortion would extend too far, to cover search results detailing information about a person that is outside of their control, such as eye colour or place of birth.40 We can exclude such results from being subject to claims against distortion because such information falls outside the scope of norms of accountability. Because it falls outside the norms of accountability, the inclusion of such information within the search results associated with a person's name will not convey the implicit meaning that this is relevant information by which to hold a person accountable or judge their character, though it might imply relevance to the searcher's personal preferences in selecting a romantic partner, for example.</p><p>If there is an important interest in reputational control, we have reason to be wary about practices that increase the social pressure for people to be held to account for things which it is no longer reasonable to hold them accountable for. 
This gives us some reason to defend the idea that individuals have claims against such distortion of their online public profiles. Claims against distortion serve an interest similar to that served by claims against defamation, but do so in a distinct way, by focusing on questions of accountability, rather than attributability.</p><p>Focusing on distortion from the perspective of accountability, as I've suggested here, also provides an alternative explanation of the claims and duties involved in the kinds of cases discussed by Rumbold and Wilson.41 Their argument about privacy rights over public information has the consequence that there may be many cases in which one infringes a person's privacy right by unintentionally deducing something that a person did not intend to reveal from information which they willingly made public. They note that this is a bullet they are willing to bite. Shifting the focus from claims to privacy to claims against distortion offers a way to avoid that bullet in some of the cases that Rumbold and Wilson address. One would not infringe another person's claim against distortion simply by coming to know the information in question, but only by holding the person to account for it, or by disclosing it to others in certain ways.</p><p>Of course, much of the determination of whether a particular ranking of a search result amounts to distortion will come down to an assessment of whether or not it transgresses the social norms of accountability in the relevant context.42 To illustrate by way of example, I provide a sketch of how this assessment might be applied in the <i>Google Spain</i> case.</p><p>Take, for example, a person who had Googled Costeja ahead of a first date. It has become completely socially acceptable to conduct a Google search of someone before a date, and perhaps even to admit to the other person that one had done so. 
It may even be deemed acceptable within current norms of accountability to ask one's date about some of the results found in the course of that search.</p><p>On the other hand, if one had gone to the effort of trawling through old newspaper archives to dig up information, our conventions of reticence would probably make it gauche at best to mention that to one's date, or to expect them to answer questions about what had been dug up. From that perspective, we can think of Google's prominent ranking of the outdated information from the newspaper archive as cutting against the social norms by which we take it to be appropriate to hold people accountable in various contexts.</p><p>An important complication arises here. Given the fact that it is now acceptable to Google someone before meeting them, we might think that this has already caused a shift in our social norms of accountability, such that it is deemed generally appropriate to bring up even very old information, which is now readily available through search engines. We therefore cannot make a straightforward assessment of whether any given search result is irrelevant by reference to existing social norms, precisely because the set of information presented in search results pages is part of the shifting landscape of those very social norms.</p><p>There are two considerations to put forward in response to this point. 
The first is that there is already a wider cultural conversation over the extent to which it is appropriate for people to be publicly held to account for bad behaviour from their past, in an age where it is increasingly easy to find the skeletons in people's digital wardrobes.43 I take this to be part of a public reckoning around our collective norms of accountability, and the extent to which we have reason to resist too radical a shift in those norms.</p><p>While it is overly simplistic to think that we can assess standards of relevance or irrelevance of information by reference to existing social norms of accountability as though those are static, we can nevertheless recognize the importance of making such assessments in light of the standards of accountability we ought to strive for or preserve. Once we recognize the importance of that, it's a short step to proposing that some of our institutional structures and regulatory practices might have a role in seeking to stabilize the norms we have reason to preserve by setting out the boundaries of claims against distortion. One candidate principle to be guided by in the determination of those boundaries is the general liberal principle highlighted by Nagel of ensuring that the demands of collective life do not become too stifling on the individual.</p><p>That principle provides a general guide, rather than a precise recipe for determining the boundaries in every case. One aspect to assess is the severity of transgressions of existing norms of accountability, and the extent to which those transgressions encroach on an individual's reputational control. Another aspect will be to assess the effects of existing norms, relative to how much they encroach in general on the ability of individuals to present themselves to others on their own terms. There is a parallel here to the question of how to strike the balance between privacy claims and the public interest in the assessment of what is deemed ‘newsworthy’. 
The determination of newsworthiness relies on a combination of considerations, including the factual question of what is of interest to the public, as well as the normative question of what information the public ought to have access to as a matter of public good.44 In assessing the extent to which information-sharing practices risk becoming too stifling to the individual, we'll need to rely on a sociologically informed assessment of the effects of existing accountability norms combined with a normative assessment of how best to balance the public benefit of those information-sharing practices against the importance of securing sufficient reputational control for individuals.</p><p>Second, we have reason to raise concerns about differential abilities of individuals to exert control over their public information profiles, depending on the resources at their disposal. While the configuration of results yielded for any given search term is determined by the search-engine operators' algorithms, it is possible for companies or individuals to exert some degree of control over their search results through a process of search-engine optimization. For those who have enough time, resources, and expertise, this allows them to push inconvenient or embarrassing search results down the search-results ranking for a given search term (for example, their name), by ensuring that other, more positive content is ranked more highly. The fact that some people can engage online reputation-management services to manipulate their own search pages in this way, while others lack the resources to do so, gives us further reason to support data-protection regulation that would give everyone more equitable protection for claims against distortion. 
The effectiveness of such practices, however, does indicate that claims against distortion may be met by requiring search engines to display certain search results less prominently in the list of results, rather than requiring the removal of the search result in all cases.45</p><p>A broader concern is that search-engine operators have acquired the power not only to distort the public profiles of individuals, but also to shift the very conditions in which we shape our practices of accountability. Giving individuals more control over their public information profiles represents one way to resist such power becoming too concentrated in the hands of a relatively small number of large corporations.</p><p>I have argued for a category of claims against distortion that are distinct from claims against privacy and claims against defamation, but serve the same general interest in reputational control. Claims against distortion apply in cases where some true information that is legitimately a matter of public record is communicated in a way that implies that it is relevant information by which to hold a person accountable, in contexts where that information is outdated or irrelevant to existing norms of accountability.</p><p>The approach I've outlined provides a case for giving individuals protection for claims against the distortion of their online public profiles, and provides some guidance on how to make the assessment of which types of information presentation amount to distortion. 
A question remains over how this is supposed to help to resolve the seeming inconsistency in <i>Google Spain</i>.</p><p>The three things that were supposed to be in tension in that case were (1) that Google was required to remove the link to the original newspaper article from their search results page for Costeja's name; (2) that the original article was allowed to remain on the newspaper's website; and (3) that subsequent search result links to newspaper articles reporting the CJEU ruling, including all the details of the original debt foreclosure, were not required to be removed. The case in favour of extending claims against the distortion of one's public information profile by reference to normative standards of accountability helps us to explain away the tension here.</p><p>With respect to the original newspaper article, allowing it to remain on the internet, but not directly accessible via a simple Google search of Costeja's name would ensure that anybody who wanted to seek it out would have to dig quite deep. While that level of investigation may be appropriate or even necessary in certain restricted contexts, it would not be deemed appropriate in many of the social contexts in which it might be normal to Google someone's name (such as dating or deciding whether to invite someone to interview after assessing their job application). Ensuring that such information remains accessible, but relatively hard to find, is compatible with preserving social norms around conventions of reticence and accountability, as outlined above.</p><p>As regards the links to newspaper articles reporting the CJEU case being allowed to remain prominently listed, in light of the argument I've proposed about accountability, we can notice that the framing of the information in question will matter to our assessment of its relevance in the search rankings. 
It matters, from this perspective, that these articles embed the information about the foreclosure in the context of reporting the facts of the <i>Google Spain</i> case. That is, they explicitly mention that the original foreclosure notice was subject to a court ruling in which it was deemed to be sufficiently irrelevant or outdated to warrant erasure from the search results associated with Costeja's name.</p><p>While this result still allows the information about the original foreclosure notice to be readily accessible in the public sphere, it ensures that it is framed in a way that simultaneously signals the fact that it is not the kind of information that is appropriate to hold a person accountable for in the many contexts in which we might conduct a quick Google search of that person's name. In that way, this outcome can also be read as a coherent way to protect individuals from having their public information profiles distorted, by taking measures which seek to reinforce or preserve norms of accountability.</p><p>A possible objection is that while such measures may indeed project some public message about the relevance of information to what we hold people accountable for, this provides no guarantee that people will refrain from holding us accountable for the things in question. This objection could be pressed most strongly by those who would advocate an analysis of accountability and blame in terms of the emotional responses or reactive attitudes that are warranted in response to a person's wrongdoing.46 Even if people recognize that it would not be appropriate to mention a person's particular misdemeanour in a given context, they may nevertheless be holding the relevant attitudes towards them, and in that sense be holding them accountable in their judgements of the person's character. 
And of course, there would be nothing to stop them from modifying their own behaviour accordingly, for example by cancelling a date.</p><p>It is certainly true that we cannot legislate for the attitudes of others. As Ripstein points out in his discussion of the right against defamation, a right to reputation is not a right that others think well of you. It is a right that nobody else should determine your reputation in the eyes of others. Widening our perspective on the right to reputation to take into consideration the importance of accountability over time, alongside the narrower question of attributability, introduces a grey area on the question of what constitutes someone else interfering with or distorting one's reputation. However, engaging in a careful assessment of existing social practices around what we publicly deem it acceptable for people to be held accountable for, and which of these practices ought to be preserved or protected, provides at least some indication of what data-protection legislation might do in service of that goal.</p><p>What I've suggested is that it is possible to justify data-protection regulations that give people claims against distortions of their public information profiles, and in doing so aim to stabilize or protect norms of accountability. Doing so goes at least some way to ensuring that individuals retain a reasonable degree of control over their reputation as against entities like search-engine operators. In the face of the increasing influence that tech companies have over how we present to others, this provides an institutional counterbalance to put some of that reputational control back into the hands of individuals.</p><p>I hope to have shown that moving beyond the constraints of a focus on privacy rights alone allows us to make a case for a broader range of data-protection provisions. We can do so by appeal to the importance of claims against distortion alongside claims to privacy and claims against defamation. 
Thinking more carefully about how norms of accountability bear on our assessment of what counts as a distortion of someone's public profile allows us to make a case for extending the right to reputation beyond the narrower standard of attributability on which claims against defamation are based. It does so in a way that addresses concerns about the increasing influence that search-engine operators have over our online public profiles. A key advantage of this approach is that it provides a path to defending and implementing measures that go some way toward giving individuals control over their personal information even in cases where, from the perspective of the privacy/publicity divide, the cat is already out of the bag.</p><p>None relevant.</p><p>There are no potential conflicts of interest relevant to this article.</p><p>The author declares human ethics approval was not needed for this study.</p>

Journal of Political Philosophy, vol. 31, no. 4 (2023), pp. 494–516. Published 14 September 2023. DOI: 10.1111/jopp.12308.

Abstract

Many of us will remember the sense of foreboding induced by the simple threat, usually uttered by a teacher: ‘It will go on your permanent record’. This administrative bogeyman exploits our early awareness of the importance of being able to leave some things in the past. While some of the things we do may go on public record at various points in our lives, it used to be possible to comfort ourselves with the thought that these would soon be buried deep in the archives, where most people would not care to look. In the digital age, however, search engines do the digging and can serve up previously long-forgotten results on a simple search of a person's name.

This article takes up the question of what kinds of claims we have against information being dug up from our past. Specifically, it focuses on information that is legitimately a matter of public record. When some information has been stored in an archive and has, in principle, been accessible to anyone, what claims, if any, do we have against that information being brought back to light? Many people would find it intrusive for someone to dig through archives and publicize afresh the information they found out about you there, but it is not clear what the basis of such a complaint would be. Moreover, it is this kind of complaint that is provided protection through data-protection provisions colloquially referred to as the right to be forgotten. Such measures are typically framed as privacy protections. While some have argued that we can have privacy rights over information that has been made public,1 I argue that appeals to privacy fail in cases concerning information that is legitimately a matter of public record.

Paying attention to the reasons we have to object to the dredging up of outdated information reveals a new category of claims that are distinct from claims to privacy, but serve the same general interest in self-presentation that privacy scholars have long been concerned with. I call these claims against distortion. We can understand such claims as falling under a general principle of reputational control embedded in the historic right of personality. That general principle provides the basis for claims to privacy, claims against defamation, and, as I will argue, claims against distortion.

The purpose of invoking the right of personality is to identify a general principle embedded in that concept, and to then draw out a taxonomy of claims that serve that principle in distinct ways. One advantage of separating these three categories of claims is that it allows us to retain clear boundaries around the concept of privacy, which has often come under fire for suffering from a plethora of meanings.2

On my account, claims to privacy are claims to prevent certain information from becoming publicly available beyond one's audience of choice. Privacy provides one aspect of reputational control by allowing us to decide with whom we share various aspects of ourselves. Claims against defamation are claims against having false statements made about oneself in a way that is detrimental to one's reputation. Claims against distortion are claims against having one's public profile distorted in ways that involve the dissemination of true information. Such distortion can happen when true information from someone's past is presented in a way that suggests it would be appropriate to hold them accountable for it, when it is no longer appropriate to do so.

Identifying the category of claims against distortion and analysing its place alongside claims to privacy and claims against defamation allows us to broaden the scope of the justificatory basis of data-protection provisions. In doing so, it widens the scope of debates about when and why we have reason to give individuals more control over their personal information. Data-protection provisions have tended to be framed and defended as measures which protect the privacy rights of individuals. There is an important place for data-protection regulations which serve that purpose. However, not all data-protection regulations can be understood or justified as privacy protections. Trying to analyse all data protections through the lens of privacy therefore unduly constrains our thinking about how such regulation may be justified on the basis of providing important protections to the individual.

The main argument proceeds by examining the data-protection provisions known as the ‘right to be forgotten’. Specifically, it addresses the aspect of this regulation that gives individuals claims to the erasure of certain results that are displayed when their name is entered into a search engine. The aspect of the right to be forgotten that requires explaining is the rationale for erasing search-engine results that point to some content, while leaving the same content publicly available at the original source. Privacy-based approaches fail to adequately explain the purpose that is served by this aspect of the right to be forgotten, as it has been implemented under the EU's General Data Protection Regulation (GDPR). This is because what is at stake in the case of the curation of a person's search profile is not always what information is publicly available, but how that information is framed or contextualized. When a search engine serves outdated or irrelevant search results for a person's name, this can amount to a distortion of their public profile. We have reason to care about these practices, and give individuals claims against them, but such claims are not based in a right to privacy. They are better understood as claims against distortion.

While the right to be forgotten is a relatively narrow aspect of data-protection regulation, it shares a feature that is common to many concerns about our current situation with respect to the collection and use of personal data. It is a case in which the information in question has already been made public, and either cannot or should not be made entirely private again. And yet, concerns remain about the erosion of the individual's control over how their personal information is presented to others. Under these circumstances, we need to find a way to shut the stable door after the horse has already bolted. Claims against distortion provide a normative basis for regulations aimed at providing such control.

What is colloquially referred to as the right to be forgotten is not a general right as such, but a set of provisions laid out in the GDPR.3 Article 17 of the GDPR sets out a ‘right to erasure’, which gives data subjects the right to obtain from data controllers the erasure of personal data concerning the data subject. When relevant criteria are met, such as when the data is no longer necessary to the purposes for which it was collected, the data controller must respond to erasure requests and delete the data in question without undue delay.

One of the controversial aspects of this legislation is that search-engine providers such as Google are considered to be data controllers, and search-engine results are classed as personal data under certain circumstances.4 This means that an individual can request the removal of certain search results that are displayed when her name is typed into Google as a search term, and Google is obliged to remove them if the relevant conditions set out in Article 17 of the GDPR apply.

This aspect of the right to be forgotten has come under intense criticism from search-engine operators, media commentators, and scholars alike.5 Critics have argued that such measures go too far in curtailing freedom of information, especially when the search results in question pertain to information that was lawfully made public. It has been suggested that allowing individuals to make private information that has already been in the public domain, and indeed remains publicly accessible through other means, is futile at best, and at worst based on confused or incoherent premises.6 Finally, the way in which the right has been implemented has been criticized almost to the point of ridicule for being counterproductive on its own terms.

The case that formed the basis of the landmark ruling on the matter by the Court of Justice of the European Union (CJEU) is the prime target of those charges of inconsistency. The plaintiff in the case, Mario Costeja González, had requested the erasure of details of a foreclosure notice issued against him, which had been published on the web pages of the newspaper La Vanguardia. The court decided that the link to the original newspaper record that featured as a search result on Google when Costeja's name was entered as a search term should be erased. However, La Vanguardia was not required to remove the original article from its website. This inconsistency in the application of the rules of erasure between the original source and the Google search-results page is a principal bone of contention in analyses of the ruling.

Moreover, in an ironic twist, the case became subject to a phenomenon known as the Streisand effect—when the attempt to censor or protect private information results in it becoming much more widely publicized.7 The landmark ruling was widely reported. As a result, now when you enter Costeja's name as a search term on Google, the first page of search results is entirely populated with links to reports about the ruling, which contain reference to the facts of the case, including mention of the original foreclosure notice Costeja was seeking to have ‘forgotten’. Because the facts of the legal case are a relevant and sufficiently weighty matter of public interest, it has not been deemed appropriate for these search results to be erased, but nor has the judgment about the erasure of the original link to the foreclosure notice been reversed.

Defending these aspects of the ruling through the lens of privacy presents a challenge, given that the information in question remains in the public domain via the original source, and has subsequently been amplified through the Streisand effect. I will argue that the ruling can nevertheless be interpreted as serving an important interest in self-presentation if we understand it as based on a claim against distortion, rather than a claim to privacy. This interpretation allows us to explain away the seeming inconsistency in treating the original source and the Google link differently with respect to erasure. While the Streisand effect is likely to be a rare side-effect particular to the landmark case, analysing this aspect of the case helps to illustrate the rationale for treating search-result links differently from source material. It sheds light on the importance of how information is framed in a given context, over and above the question of whether some piece of information is in the public domain. This idea is central to the case for claims against distortion.

Before making the positive case for a distortion-based justification for the right to be forgotten, further argument is required to motivate the move beyond privacy-based justifications. Several scholars have made the case in favour of the right to be forgotten by drawing on the idea that the forgetting of information over time has long been an important aspect of our cultural practices. Such gradual forgetting, it is argued, has been crucial to our ability as individuals to lead adequately autonomous lives, unencumbered by mistakes or embarrassing mishaps from our past. From this perspective, the widespread accessibility of old information made possible in the digital age represents a radical and destabilizing shift, which threatens our ability to live our lives on our own terms as the years go by.8

Proposed solutions to this problem have focused on the principles and pragmatics of how to strike the right balance between the protection of individual privacy on the one hand, and the public interest in information on the other, once we've recognized that the weighting of this balance might shift over time. Viktor Mayer-Schönberger, for example, argues that information should have an expiry date after which it automatically gets deleted.9 Less radically, Giovanni Sartor makes the case that regulators should direct data controllers as to when the balance has shifted such that information warrants deletion.10 On the basis of an argument from moral autonomy, Meg Leta Jones has argued that we should move towards a model of information stewardship for individual users, allowing them ultimate control over their online profiles.11

I share the general concern that the widespread accessibility of outdated personal information may lead to a stifling social environment in which people are constantly reminded of and held back by events from their past. I am also sympathetic to the idea that individuals should have more control over their personal information profiles. However, insofar as the debate remains framed around assessing whether information in various cases should be ‘public’ (accessible) or ‘private’ (erased), it still implies a univocal approach to erasure across different information sources.

The univocal implications of the privacy framing are demonstrated by Robert Post in his critique of the Google Spain ruling.12 Post argues that the decision conflated two conflicting senses of the right to be forgotten, one based on a bureaucratic principle of data protection, and the other on the notion of dignitary privacy. According to Post, the bureaucratic principle assumes that data are always collected for instrumentally specified purposes, and warrant erasure when no longer required for those purposes. Dignitary privacy rights, by contrast, seek to ‘define and enforce social norms of respectful expression’.13 Protecting dignitary privacy requires a balance to be struck between protecting the dignity of the individual against the public interest in freedom of expression. Post points out that the public interest extends broadly to information that ‘can or will become part of the agenda of public action’, as well as to ‘maintaining the integrity of the structure of communication that makes public discourse possible’.14 Post argues that Google forms part of that structure of communication. The ruling therefore should have followed a principle of dignitary privacy. According to a principle of dignitary privacy, the standard by which to assess whether some information ought to be accessible to the public is the extent to which that information is deemed offensive or newsworthy. If deemed sufficiently offensive to cause harm to the dignity of the individual, the individual has a privacy claim against its publication.

In the context of dignitary privacy, according to Post, ‘what matters is not the content of information, but the context and meaning of particular communicative acts’.15 A key tenet of Post's argument is that Google links carry no further meaning over and above the content of the underlying web pages to which they point. As a result, they cannot be deemed any more or less offensive or newsworthy than that original content. On Post's account, the court should therefore have applied the same outcome to Google search results as to the underlying websites to which they point.

Post's point about the contextual meaning of communicative acts is an important one. He interprets the significance of context in terms of the appropriateness of the audience to which the information is exposed, as evidenced by the following example: ‘It may be humiliating to reveal the CT scan of a private person to the general public, but unobjectionable to provide it to a doctor for evaluation’.16

I will argue in Section III that search-result links do carry meaning over and above the content to which they point, and that this gives us reason in some cases to treat them differently from that underlying content. However, the way in which they convey such contextual meaning is not by virtue of reaching a different or wider audience than the original source material. The contextual meaning of such communicative acts can thus fall foul of standards of offensiveness and give rise to concerns about dignitary harms in a way that is not adequately captured by the concept of privacy. The reason for this is that the concept of privacy is constrained to concerns about the appropriateness of the audience to which some information is exposed. Instead of expanding the concept of privacy, we have good reason to keep it so bounded, and instead expand the category of claims that function in the service of protecting individuals' interests in self-presentation.

In order to bring out the way in which the concept of privacy turns on the question of audience exposure, it will be instructive to consider key conceptions of privacy. Conceptions of privacy standardly invoke a distinction between public and private spheres.17 On this general view, it is assumed that a person's privacy interest is frustrated whenever some information that ought to be restricted to a specific audience crosses over into a more public domain. For example, in Judith Jarvis Thomson's influential view of privacy rights, once a person has made some information public, whether intentionally or unintentionally, they are considered to have thereby waived their privacy right over that information.18 It is this feature of approaches to privacy that makes it difficult to make sense of the outcome in Google Spain, if we interpret the data-protection measures in question as aimed at protecting a privacy right.

This general structuring idea underpins a range of approaches to privacy: privacy as control, privacy as accessibility, and privacy as contextual integrity. The conception of privacy as control paints a picture of the individual as sovereign over her personal domain, with her consent required to let others in, at her own discretion.19 In doing so, it most directly invokes a sharp distinction between the private sphere of the individual as a domain that requires protection against unwanted encroachment from the public.

The conception of privacy as accessibility rejects the notion that privacy is best served when the individual has ultimate control over the flow of information between the private and public spheres. Arguments in this camp point to the fact that the value of privacy can be undermined when individuals willingly open the floodgates, especially if the resulting dissolution of their private sphere is irreversible.20 This represents a more paternalistic stance towards protecting privacy, but one nevertheless based on the premise that some things should be kept within a private sphere protected from public scrutiny.

The third approach, privacy as contextual integrity, more directly challenges the simple picture of private and public as two distinct spheres. Instead, it emphasizes privacy as a right to an appropriate flow of information, with different information being appropriately known by different parties in different social contexts.21 This approach rejects the idea that the dividing line of privacy is between activities that are carried out in private spheres such as the home or within the family, and those carried out in public. It does so to make space for the idea that certain ways of collecting personal information can constitute violations of privacy even where the individuals in question are acting in public. However, in positing the idea of an appropriate flow of information, it nevertheless draws on the same general structuring idea of different spheres of accessibility of information. Each distinct social context in which some information is appropriately known can thus be thought of as its own restricted sphere as against the rest of the public who are not included in that particular context.

While each of these approaches to privacy offers its own take on how best to think about the divide between public and private in any given context, they all share the idea that some privacy interest is frustrated when information becomes publicized in a context in which it ought to be restricted to a smaller or different group of people. One response to this might be to seek to expand the concept of privacy beyond this framing, in order to capture concerns about the ways in which different sources present information to us, over and above the question of the audiences to which they make it available. Such concerns about the contextual presentation of information ultimately explain what is at stake in cases like Google Spain.

However, expanding the concept of privacy to attempt to capture these concerns risks undermining its coherence, leaving it open to the critique that it is an ill-defined term used to cover a range of disparate interests.22 This gives us good reason to keep the concept of privacy bounded and specific, instead of expanding it to capture all the situations that give rise to concerns about reputational control. In keeping the concept of privacy bounded to the question of audience exposure, I follow Carissa Véliz's description of privacy as

This general structuring framework of the concept of privacy underpins the contention that the ruling in Google Spain was inconsistent, if based on a privacy claim. If privacy is about keeping information confined to appropriately restricted spheres, then it is hard to see how one could justify treating Google search results differently from the original source material they point to, when that original source material is in principle publicly accessible to the same audience. It would be an even harder task to justify the differential treatment of Google links pointing to different sources, when those sources contain the same informational content, as happened with respect to the links to newspaper articles reporting the details of the Google Spain case compared to the links to the original foreclosure notice in La Vanguardia. If we cannot justify these elements of the right to be forgotten by appeal to the importance of privacy, we need to find another way to justify the measures implemented in Google Spain or accept that the case was dealt with inconsistently.

A separate question that arises in response to understanding privacy as functioning to keep information restricted to appropriate spheres is whether we can have claims to privacy over information that has already been made public. Benedict Rumbold and James Wilson have argued that people can retain a right to privacy over information which has been made public.25 They argue that one infringes another person's privacy right when one deduces some piece of information about that person that they did not intend to reveal, from information which they did intentionally make public. While it may be possible to apply privacy rights in this way to information that has been made public in the specific way identified by Rumbold and Wilson, the cases relevant to the right to be forgotten do not fit that mould. Instead, they are cases where some piece of information has intentionally been made public, either willingly by the data subject in the first instance, or by virtue of it being a legitimate matter of public record. The particular challenge this article takes up is whether there is any justification for treating different sources for reporting that information differently with respect to erasure. Rumbold and Wilson's argument thus does not apply to the features of the right to be forgotten that are of central concern to this article.

An alternative path to justifying the right to be forgotten can be drawn from the principles underpinning the more general concept of the right of personality. To the extent that this alternative justification stands up to scrutiny, it extends the theoretical tools we can draw on to defend various aspects of data-protection provisions beyond the constraints of what privacy-based arguments alone can offer.

The argument is not meant to assume that the measures applied in Google Spain were intuitively correct, and that there must be an explanation for their consistency. Rather, the aim is to show that there is a plausible path to justifying data-protection measures which need not draw on a right to privacy, and which provides a more coherent way to make sense of the measures applied in Google Spain.

The origins of a right to be forgotten pertaining to personal data can be traced back at least as far as a legislative project developed in France in 2010.26 The French legal concept of a droit à l'oubli, sometimes translated rather dramatically into English as a right to oblivion, has historical precedent in setting standards for the way in which spent criminal convictions are expunged from the public record after a certain amount of time has elapsed. Within the legal tradition of continental Europe, the right to be forgotten stems from the right of personality, which has historically been conceived of as protecting the dignity, honour, and right to private life of individuals.27

As Rolf Weber points out, the right of personality incorporates a number of different concepts and terminologies, and is mainly intended to protect the moral and legal integrity of the person. It delineates a sphere of privacy as part of that goal. It is in this context that the right to be forgotten, as enshrined in the provisions of the GDPR, has been interpreted and implemented as a privacy right.

However, as argued above, the framing of the right to be forgotten as a privacy right constrains the scope of data-protection provisions that can be included in such a right. In order to provide a better justificatory basis for those data-protection provisions, we need to shift focus away from a narrow framing around privacy, and instead lean on some of the other concepts invoked by the right of personality to motivate and justify the claims in question.

In doing so, the aim is not to provide a historical account of the legal origins of these existing data-protection provisions. Rather, it is to provide a philosophical analysis of the principles already embedded in the right of personality which can provide foundations for the right to be forgotten, in a way that provides a path to justifying a broader range of data-protection provisions under that umbrella than a focus on privacy alone. More specifically, the task will be to outline three different categories of claims which serve to protect against different threats to reputational control. These are claims to privacy, claims against defamation, and claims against distortion.

The concepts of dignity and honour embedded in the right of personality are broad ones, as is the goal of protecting the ‘moral and legal integrity of the person’. When it comes to regulating the sharing of information, what is at stake is the integrity of a person's public persona. The key battleground over the integrity of one's public persona is located in the gap between how other people see, understand, and talk about us, compared to how we would like to present ourselves to others. The question this raises is how much control over the shaping of one's reputation to leave in the hands of individuals, and how much to allow to be shaped by others.

This idea of reputational control embedded in the more general right of personality has been invoked in support of privacy protections. Consider the following arguments that have been offered in defence of the importance of privacy. As we saw, Warren and Brandeis argued that the right to privacy was based on a principle of inviolate personality, and was required to secure peace of mind for the individual.28 Contemporary accounts have argued that privacy rights are required to enable valuable social relationships, allowing us to regulate the boundaries between intimate relationships and more distanced ones. Relatedly, appeals are made to our interest in having a reasonable degree of control over how we present ourselves to others, or in making decisions about our own lives, protected from the interference and scrutiny of others, with the suggestion that these are key aspects of living an autonomous life.29

A common thread in the literature on privacy is thus the idea that privacy protects an interest in self-presentation. Marmor, for example, argues that the distinctive thing about privacy rights is that they safeguard our interest in ‘having a reasonable measure of control over ways in which we present ourselves to others and the ability to present different aspects of ourselves, and what is ours, to different people’.30 The second part of Marmor's formulation of the interest reveals again the general framing of privacy in terms of the appropriateness of the audience to which some information is exposed. Marmor explains the distinctiveness of privacy by reference to the importance of being able to conduct intimate relationships, and the formulation of the second part of the interest is key to that case.

However, if we focus on the more general interest in having control over how we present to others (as expressed in the first half of Marmor's formulation), we can notice that privacy plays a relatively narrow role in safeguarding that interest. The distinct way in which it does so is by giving us a reasonable measure of control over which audiences we share different aspects of our lives with. That more general interest in self-presentation, however, can be undermined by ways of sharing information which do not infringe any privacy claims. Some ways of presenting information can constitute a claim-infringing distortion of one's reputation, even where the content of the information in question is already legitimately in the public domain.

Given the focus on the interest in self-presentation, it is no surprise that scholars have been concerned with different ways of distorting or manipulating a person's public presentation, and have sought to capture these under the umbrella of privacy rights. Daniel Solove, for example, includes distortion in his taxonomy of privacy.31 Solove distinguishes distortion from disclosure as follows:

However, as argued above, expanding the concept of privacy to include distortion and defamation risks playing into the hands of critics who complain that privacy suffers from an embarrassment of meanings. Having identified a general concern to protect the dignity and honour of the individual, we are better off identifying the distinct categories of claims which serve that aim by protecting against different types of threats to one's reputation. We can do this by separating claims to privacy, claims against defamation, and claims against distortion. These claims that come under the general right of personality can be thought of as institutional provisions which help to construct the juridical concept of the individual. They play a role in delineating the boundaries of the domain of individual authority as against the claims of the collective with respect to the shaping of our public personas. They do so by providing distinct elements of reputational control to individuals.

While Solove defines distortion as involving the dissemination of false or misleading information, I reserve the term distortion to refer to cases that involve the dissemination of true information in ways that nevertheless distort a person's public information profile. To make the case for claims against distortion, I provide an analysis of the way in which various ways of distorting a person's public information profile are distinct from defamation, but function to undermine the same general interest which claims against defamation serve to protect. An analysis of the function and significance of search-engine results sheds light on the importance of claims against distortion and the types of context in which they arise.

We can think of the search results that are served when someone types your name into a search engine as being an aspect of your public information profile. A person's online search profile forms an increasingly significant part of their public profile, in a way that can affect important parts of their life. Employers and universities use search engines to research and assess prospective candidates, to an extent that has led to a burgeoning industry in online reputation-management services aimed at keeping clients' top search results appropriate to the professional image they wish to project.33 The same goes for assessing the suitability of a prospective romantic partner.34 ‘Googling’ other people has become an accepted social practice in contexts where we seek to make an assessment of someone's competence or character, and we take search engines such as Google to be useful tools in helping us to make some of those initial assessments.

Not only do our social practices reflect this, but Google's own strategy is explicitly to rank search results by relevance to the search term in question. The results are also intended to be tailored to the purposes of the searcher, based on the profile that Google has built up about them through their other internet activity.35 The results served by a search-engine operator when a person's name is entered as a search term are therefore not a neutral reference index. They form a curated profile of the search subject, deemed by the operator's algorithm to be most relevant to the purposes of the searcher. As such, this profile carries communicative content over and above the content of the information it points to.

In light of this, I suggest that when outdated or irrelevant results are featured prominently in the list of content served by a search engine when a person's name is used as a search term (that is, featured as highly relevant results), this can amount to a distortion of that person's public information profile. It is a distortion insofar as outdated or irrelevant results are presented as relevant to the searcher's assessment of the search subject. We have reason to object to this in the same way we would have reason to object to someone digging up some information from an old archive and re-publicizing it. However, further explanation is required as to what the nature of that objection is.

Claims against distortion contrast importantly with claims against defamation. Much as in the case of privacy, however, paying attention to the normative underpinnings of claims against defamation proves instructive in carving out a role for claims against distortion. In contrasting distortion and defamation, the specific conception of defamation I draw on is Arthur Ripstein's. Ripstein argues that the law against defamation serves to protect each person's entitlement that no other person should determine his or her standing in the eyes of others.36 The right is an inherently relational one, setting out the limits of what others may say about you. It arises from the principle that a person should only be held accountable for actions which are properly attributable to them.

The interest in reputation which a right against defamation protects is fundamentally juridical. What Ripstein means by this is that it is based on a normative idea that is inherent in private law as a system of individual responsibility and relational rights. The interest in reputation is not one that is contingently taken up by the law against defamation and weighed against other competing interests. It is baked into the very structure of private law itself. As such, Ripstein argues that the right to one's reputation is ‘at the root’ of a system of ordered liberty.

From this perspective, the notion of an interest in reputation is not merely about protecting the vanity of individuals or their sense of honour. Rights which protect the interest in reputation serve a more fundamental purpose in carving out the balance between the control that individuals have over their own lives as against the encroachments of other members of society. They play a part in ensuring that the control that individuals have over their own lives is never unduly outweighed by more powerful parties. Claims against distortion of one's public information profile have a part to play in protecting that same interest in reputation. They can do so by setting out the boundaries of reputational control with respect to practices of accountability.

A clear distinction between the forms of distortion mentioned above and defamation is that the information presented by the search engine in the cases in question is, strictly speaking, attributable to the data subject. However, if we draw on one of the principles underpinning defamation, that people should only be held accountable for things that are properly attributable to them, we can make headway by focusing on how practices of accountability might bear on a person's right to reputation.37

A closer analysis of normative practices for holding people accountable for what they've done will help to flesh out the claim that when outdated or irrelevant search results are featured prominently, that can amount to a distortion of a person's online public profile. It will do so by providing an explanation of why the framing of the information in the context of search results matters, beyond the mere question of whether or not the information contained therein is publicly available.

A key feature of the Google Spain ruling was that the link to the original foreclosure notice was removed from the search results associated with Costeja's name, while the original source material remained in place. Additionally, as a result of the Streisand effect, other search results which contained the same information presented in a different context were allowed to remain. A key point at issue is therefore whether individuals have claims to control how information about them that is not private, but a matter of public record, is presented.

While privacy pertains to the question of the audience to which some information is made available, and defamation hangs in part on whether some claim about a person is truly attributable to them, claims against distortion, I suggest, hang on the relevance of the contextual presentation of some information to our practices of accountability.

The case for allowing individuals some measure of control over their public information profile builds from the idea that the principles embedded in the right of personality are about ensuring adequate protection of individuals' control over how to present themselves to others. Our systems of reputation are not just about what is known about a person, but also how we are held accountable for what is known about us. Drawing on Nagel, we can invoke the importance that ‘no more should be subjected to the demands of public response than is necessary for the requirements of collective life’.38

The question of what is subjected to the demands of public response, and how, goes beyond what is true or false about a person, or whether certain information is private or made public. Our practices of accountability form an important part of the context that individuals must navigate in presenting themselves to others. And the question of what things they can be appropriately held accountable for bears on a person's ability to navigate valuable social relationships. We therefore have reason to interrogate how presenting outdated information about a person as salient in a given context could undermine or disrupt our practices of accountability in concerning ways.

For the purposes of this article, I'll focus on a particular aspect of our norms around accountability. This is the idea that there exist various social norms around what information about a person it is acceptable to publicly acknowledge in various social contexts, even if that information is common knowledge. There are some situations where everyone knows some piece of information about a person, and everyone knows that everyone knows this, and yet it would not be polite to mention the information in question, nor to expect the other person to answer for it. Our social norms are such that we would not deem it appropriate in such circumstances to hold someone accountable for the thing they are known to have done.

Nagel gives a vivid example of this phenomenon. Two academics, one of whom has just written a cutting review of the other's book, meet at a party. What Nagel calls ‘conventions of reticence’ in this case require both to avoid the contentious subject, and to engage in small talk on other topics rather than enter into acrimonious debate over the review. Nagel discusses the importance of such conventions of reticence in terms of the public/private divide.

When a search-engine operator presents old information as a prominent search result, that implies the relevance of the result to the purposes for which the operator knows we use its service (including, for example, assessing a prospective date or employee). To the extent that such rankings signal a degree of relevance that cuts against norms of accountability, we can judge that they amount to a distortion of a person's public profile. They do so because they deliver an implicit message that it would be appropriate to hold someone accountable for the information in question. The distortion consists in calling attention to something that it is no longer reasonable to expect a person to answer for, even if that information is a matter of public record.

An important caveat is required here, which is that claims against distortion will only apply in cases where the information in question is of a kind that would in principle be appropriate to hold someone accountable for, and the appropriateness of holding them so accountable has elapsed. This caveat is required to account for a principle which holds that people cannot be morally assessed for things that are due to factors beyond their control. Without taking account of this principle, the category of claims against distortion would extend too far, to cover search results detailing information about a person that is outside of their control, such as eye colour or place of birth.40 We can exclude such results from being subject to claims against distortion because such information falls outside the scope of norms of accountability. Since it falls outside those norms, the inclusion of such information within the search results associated with a person's name will not convey the implicit message that this is relevant information by which to hold a person accountable or judge their character, though it might imply relevance to the searcher's personal preferences in selecting a romantic partner, for example.

If there is an important interest in reputational control, we have reason to be wary about practices that increase the social pressure for people to be held to account for things which it is no longer reasonable to hold them accountable for. This gives us some reason to defend the idea that individuals have claims against such distortion of their online public profiles. Claims against distortion serve the same interest as claims against defamation, but do so in a distinct way, by focusing on questions of accountability rather than attributability.

Focusing on distortion from the perspective of accountability, as I've suggested here, also provides an alternative explanation of the claims and duties involved in the kinds of cases discussed by Rumbold and Wilson.41 Their argument about privacy rights over public information has the consequence that there may be many cases in which one infringes a person's privacy right by unintentionally deducing something that a person did not intend to reveal from information which they willingly made public. They note that this is a bullet they are willing to bite. Shifting the focus from claims to privacy to claims against distortion offers a way to avoid that bullet in some of the cases that Rumbold and Wilson address. One would not infringe another person's claim against distortion simply by coming to know the information in question, but only by holding the person to account for it, or by disclosing it to others in certain ways.

Of course, much of the determination of whether a particular ranking of a search result amounts to distortion will come down to an assessment of whether or not it transgresses the social norms of accountability in the relevant context.42 To illustrate by way of example, I provide a sketch of how this assessment might be applied in the Google Spain case.

Take, for example, a person who had Googled Costeja ahead of a first date. It has become completely socially acceptable to conduct a Google search of someone before a date, and perhaps even to admit to the other person that one had done so. It may even be deemed acceptable within current norms of accountability to ask one's date about some of the results found in the course of that search.

On the other hand, if one had gone to the effort of trawling through old newspaper archives to dig up information, our conventions of reticence would probably make it gauche at best to mention that to one's date, or to expect them to answer questions about what had been dug up. From that perspective, we can think of Google's prominent ranking of the outdated information from the newspaper archive as cutting against the social norms by which we take it to be appropriate to hold people accountable in various contexts.

An important complication arises here. Given the fact that it is now acceptable to Google someone before meeting them, we might think that this has already caused a shift in our social norms of accountability, such that it is deemed generally appropriate to bring up even very old information, which is now readily available through search engines. We therefore cannot make a straightforward assessment of whether any given search result is irrelevant by reference to existing social norms, precisely because the set of information presented in search results pages is part of the shifting landscape of those very social norms.

There are two considerations to put forward in response to this point. The first is that there is already a wider cultural conversation over the extent to which it is appropriate for people to be publicly held to account for bad behaviour from their past, in an age where it is increasingly easy to find the skeletons in people's digital wardrobes.43 I take this to be part of a public reckoning around our collective norms of accountability, and the extent to which we have reason to resist too radical a shift in those norms.

While it is overly simplistic to think that we can assess standards of relevance or irrelevance of information by reference to existing social norms of accountability as though those are static, we can nevertheless recognize the importance of making such assessments in light of the standards of accountability we ought to strive for or preserve. Once we recognize the importance of that, it's a short step to proposing that some of our institutional structures and regulatory practices might have a role in seeking to stabilize the norms we have reason to preserve by setting out the boundaries of claims against distortion. One candidate principle to be guided by in the determination of those boundaries is the general liberal principle highlighted by Nagel of ensuring that the demands of collective life do not become too stifling on the individual.

That principle provides a general guide, rather than a precise recipe for determining the boundaries in every case. One aspect to assess is the severity of transgressions of existing norms of accountability, and the extent to which those transgressions encroach on an individual's reputational control. Another aspect will be to assess the effects of existing norms, relative to how much they encroach in general on the ability of individuals to present themselves to others on their own terms. There is a parallel here to the question of how to strike the balance between privacy claims and the public interest in the assessment of what is deemed ‘newsworthy’. The determination of newsworthiness relies on a combination of considerations, including the factual question of what is of interest to the public, as well as the normative question of what information the public ought to have access to as a matter of public good.44 In assessing the extent to which information-sharing practices risk becoming too stifling to the individual, we'll need to rely on a sociologically informed assessment of the effects of existing accountability norms combined with a normative assessment of how best to balance the public benefit of those information-sharing practices against the importance of securing sufficient reputational control for individuals.

Second, we have reason to raise concerns about differential abilities of individuals to exert control over their public information profiles, depending on the resources at their disposal. While the configuration of results yielded for any given search term is determined by the search-engine operators' algorithms, it is possible for companies or individuals to exert some degree of control over their search results through a process of search-engine optimization. For those who have enough time, resources, and expertise, this allows them to push inconvenient or embarrassing search results down the search-results ranking for a given search term (for example, their name), by ensuring that other, more positive content is ranked more highly. The fact that some people can engage online reputation-management services to manipulate their own search pages in this way, while others lack the resources to do so, gives us further reason to support data-protection regulation that would give everyone more equitable protection for claims against distortion. The effectiveness of such practices, however, does indicate that claims against distortion may be met by requiring search engines to display certain search results less prominently in the list of results, rather than requiring the removal of the search result in all cases.45

A broader concern is that search-engine operators have acquired the power not only to distort the public profiles of individuals, but also to shift the very conditions in which we shape our practices of accountability. Giving individuals more control over their public information profiles represents one way to resist such power becoming too concentrated in the hands of a relatively small number of large corporations.

I have argued for a category of claims against distortion that are distinct from claims to privacy and claims against defamation, but serve the same general interest in reputational control. Claims against distortion apply in cases where some true information that is legitimately a matter of public record is communicated in a way that implies that it is relevant information by which to hold a person accountable, in contexts where that information is outdated or irrelevant according to existing norms of accountability.

The approach I've outlined provides a case for giving individuals protection for claims against the distortion of their online public profiles, and provides some guidance on how to make the assessment of which types of information presentation amount to distortion. A question remains over how this is supposed to help to resolve the seeming inconsistency in Google Spain.

The three things that were supposed to be in tension in that case were (1) that Google was required to remove the link to the original newspaper article from their search results page for Costeja's name; (2) that the original article was allowed to remain on the newspaper's website; and (3) that subsequent search result links to newspaper articles reporting the CJEU ruling, including all the details of the original debt foreclosure, were not required to be removed. The case in favour of extending claims against the distortion of one's public information profile by reference to normative standards of accountability helps us to explain away the tension here.

With respect to the original newspaper article, allowing it to remain on the internet, but not directly accessible via a simple Google search of Costeja's name would ensure that anybody who wanted to seek it out would have to dig quite deep. While that level of investigation may be appropriate or even necessary in certain restricted contexts, it would not be deemed appropriate in many of the social contexts in which it might be normal to Google someone's name (such as dating or deciding whether to invite someone to interview after assessing their job application). Ensuring that such information remains accessible, but relatively hard to find, is compatible with preserving social norms around conventions of reticence and accountability, as outlined above.

As regards the links to newspaper articles reporting the CJEU case being allowed to remain prominently listed, in light of the argument I've proposed about accountability, we can notice that the framing of the information in question will matter to our assessment of its relevance in the search rankings. It matters, from this perspective, that these articles embed the information about the foreclosure in the context of reporting the facts of the Google Spain case. That is, they explicitly mention that the original foreclosure notice was subject to a court ruling in which it was deemed to be sufficiently irrelevant or outdated to warrant erasure from the search results associated with Costeja's name.

While this result still allows the information about the original foreclosure notice to be readily accessible in the public sphere, it ensures that it is framed in a way that simultaneously signals the fact that it is not the kind of information that is appropriate to hold a person accountable for in the many contexts in which we might conduct a quick Google search of that person's name. In that way, this outcome can also be read as a coherent way to protect individuals from having their public information profiles distorted, by taking measures which seek to reinforce or preserve norms of accountability.

A possible objection is that while such measures may indeed project some public message about the relevance of information to what we hold people accountable for, this provides no guarantee that people will refrain from holding us accountable for the things in question. This objection could be pressed most strongly by those who would advocate an analysis of accountability and blame in terms of the emotional responses or reactive attitudes that are warranted in response to a person's wrongdoing.46 Even if people recognize that it would not be appropriate to mention a person's particular misdemeanour in a given context, they may nevertheless be holding the relevant attitudes towards them, and in that sense be holding them accountable in their judgements of the person's character. And of course, there would be nothing to stop them from modifying their own behaviour accordingly, for example by cancelling a date.

It is certainly true that we cannot legislate for the attitudes of others. As Ripstein points out in his discussion of the right against defamation, a right to reputation is not a right that others think well of you. It is a right that nobody else should determine your reputation in the eyes of others. Widening our perspective on the right to reputation to take into consideration the importance of accountability over time, alongside the narrower question of attributability, introduces a grey area on the question of what constitutes someone else interfering with or distorting one's reputation. However, engaging in a careful assessment of existing social practices around what we publicly deem it acceptable for people to be held accountable for, and which of these practices ought to be preserved or protected, provides at least some indication of what data-protection legislation might do in service of that goal.

What I've suggested is that it is possible to justify data-protection regulations that give people claims against distortions of their public information profiles, and in doing so aim to stabilize or protect norms of accountability. Doing so goes at least some way to ensuring that individuals retain a reasonable degree of control over their reputation as against entities like search-engine operators. In the face of the increasing influence that tech companies have over how we present to others, this provides an institutional counterbalance to put some of that reputational control back into the hands of individuals.

I hope to have shown that moving beyond the constraints of a focus on privacy rights alone allows us to make a case for a broader range of data-protection provisions. We can do so by appeal to the importance of claims against distortion alongside claims to privacy and claims against defamation. Thinking more carefully about how norms of accountability bear on our assessment of what counts as a distortion of someone's public profile allows us to make a case for extending the right to reputation beyond the narrower standard of attributability on which claims against defamation are based. It does so in a way that addresses concerns about the increasing influence that search-engine operators have over our online public profiles. A key advantage of this approach is that it provides a path to defending and implementing measures that go some way to giving individuals control over their personal information, even in cases where, from the perspective of the privacy/publicity divide, the cat is already out of the bag.

None relevant.

There are no potential conflicts of interest relevant to this article.

The author declares human ethics approval was not needed for this study.

Journal: The Journal of Political Philosophy is an international journal devoted to the study of theoretical issues arising out of moral, legal and political life. It welcomes, and hopes to foster, work cutting across a variety of disciplinary concerns, among them philosophy, sociology, history, economics and political science. The journal encourages new approaches, including (but not limited to): feminism; environmentalism; critical theory, post-modernism and analytical Marxism; social and public choice theory; law and economics, critical legal studies and critical race studies; and game theoretic, socio-biological and anthropological approaches to politics. It also welcomes work in the history of political thought which builds to a larger philosophical point and work in the philosophy of the social sciences and applied ethics with broader political implications.