Proceedings of the 2022 Symposium on Computer Science and Law (November 2022)

"Toward Architecture-Driven Interdisciplinary Research: Learnings from a Case Study of COVID-19 Contact Tracing Apps"
Authors: Fabian Burmeister, Mickey Zar, Tilo Böhmann, N. Elkin-Koren, Christian Kurtz, W. Schulz
DOI: 10.1145/3511265.3550451
Abstract: This paper explores the use of an architectural perspective to study complex data ecosystems and to facilitate a normative discourse on such ecosystems. It argues that an architectural perspective helps bridge discursive and methodological gaps between information systems (IS) research and legal studies. Combining architectural and normative perspectives is a novel interdisciplinary research approach that provides a framework for analyzing techno-legal contexts. The merits and challenges of this approach are demonstrated and discussed using the example of COVID-19 contact tracing apps. We conceptualize our results on three levels of knowledge: the first is knowledge of the exemplary contact tracing app we studied and its ecosystem; the second is knowledge of the architectural meta-model we used, its benefits, and its shortcomings; and the third is knowledge of the interdisciplinary research process of acquiring common knowledge shared by IS scholars and legal experts.
"Multi-Regulation Computing: Examining the Legal and Policy Questions That Arise From Secure Multiparty Computation"
Authors: Julissa Milligan Walsh, Mayank Varia, A. Cohen, A. Sellars, Azer Bestavros
DOI: 10.1145/3511265.3550445
Abstract: This work examines privacy laws and regulations that limit disclosure of personal data, and explores whether and how these restrictions apply when participants use cryptographically secure multi-party computation (MPC). By protecting data during use, MPC offers the promise of conducting data science in a way that (in some use cases) meets or even exceeds most people's conceptions of data privacy. With MPC, it is possible to correlate individual records across multiple datasets without revealing the underlying records, to conduct aggregate analysis across datasets which parties are otherwise unwilling to share for competitive reasons, and to analyze aggregate statistics across datasets which no individual party may lawfully hold. However, most adoptions of MPC to date involve data that is not subject to privacy protection under the law. We posit that a major impediment to the adoption of MPC, on the data that society has deemed most worthy of protection, is the difficulty of mapping this new technology onto the design principles of data privacy laws. While a computer scientist might reasonably believe that transforming any data analysis into its privacy-protective variant using MPC is a clear win, we show in this work that the technological guarantees of MPC do not directly imply compliance with privacy laws. Specifically, a lawyer will likely want to ask several important questions about the preconditions that are necessary for MPC to succeed, the risk that data might inadvertently or maliciously be disclosed to someone other than the output party, and what recourse to take if this bad event occurs. We have two goals for this work: explaining why the privacy law questions are nuanced and that the lawyer is correct to proceed cautiously, and providing a framework that lawyers can use to reason systematically about whether and how MPC implicates data privacy laws in the context of a specific use case. Our framework revolves around three questions: a definitional question on whether the encodings still constitute "personal data," a process question about whether the act of executing MPC constitutes a data disclosure event, and a liability question about what happens if something goes wrong. We conclude by providing advice to regulators and suggestions to early adopters to spur uptake of MPC. It is our hope that this work provides the first step toward a methodology that organizations can use when contemplating the use of MPC.
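The abstract above notes that MPC can compute aggregate statistics without revealing the underlying records. A minimal sketch of the idea, not taken from the paper, is additive secret sharing: each party splits its input into random shares that sum to the input, so any proper subset of shares reveals nothing, yet the shares of all inputs can be combined to reconstruct only the total. The hospital scenario and all values below are hypothetical.

```python
import random

MODULUS = 2**61 - 1  # arithmetic is done in a finite group

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares; any n-1 shares look uniformly random."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def secure_sum(inputs: list[int]) -> int:
    """Each party shares its input with every other party; each party then
    locally sums the shares it holds and publishes only that partial sum.
    Adding the partial sums reconstructs the total, and nothing else."""
    n = len(inputs)
    all_shares = [share(v, n) for v in inputs]
    partials = [sum(all_shares[i][j] for i in range(n)) % MODULUS for j in range(n)]
    return sum(partials) % MODULUS

# Hypothetical per-hospital patient counts no single party may pool in the clear.
print(secure_sum([120, 75, 203]))  # 398
```

The legal questions the paper raises attach precisely to the artifacts in this sketch: whether the random-looking shares are still "personal data," and whether exchanging them constitutes a disclosure.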
{"title":"Some Misconceptions about Software in the Copyright Literature","authors":"J. Bloch, Pamela Samuelson","doi":"10.1145/3511265.3550449","DOIUrl":"https://doi.org/10.1145/3511265.3550449","url":null,"abstract":"The technical complexity and functionality of computer programs have made it difficult for courts to apply conventional copyright concepts, such as the idea/expression distinction, in the software copyright case law. This has created fertile ground for significant misconceptions. In this paper, we identify fourteen such misconceptions that arose during the lengthy course of the Google v. Oracle litigation. Most of these misconceptions concern application programming interfaces (APIs). We explain why these misconceptions were strategically significant in Oracle's lawsuit, rebut them, and urge lawyers and computer scientists involved in software copyright litigation to adopt and insist on the use of terminology that is technically sound and unlikely to perpetuate these misconceptions.","PeriodicalId":254114,"journal":{"name":"Proceedings of the 2022 Symposium on Computer Science and Law","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123986991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blind Justice: Algorithms and Neutrality in the Case of Redistricting","authors":"M. Duchin, D. Spencer","doi":"10.1145/3511265.3550440","DOIUrl":"https://doi.org/10.1145/3511265.3550440","url":null,"abstract":"In several areas of law and public policy, there have been longstanding dreams that computers can secure decisionmaking that takes only some things into account, while remaining demonstrably neutral to other factors. In 2022, the U.S. Supreme Court will consider mandating race-neutrality in multiple domains, notably in college admissions and redistricting. In this piece, we clarify the real and imagined uses of computers in redistricting, considering their application for optimization approaches and, more recently, for representative sampling. The current pitch to the Court for a race-blind Voting Rights Act is discussed at length.","PeriodicalId":254114,"journal":{"name":"Proceedings of the 2022 Symposium on Computer Science and Law","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131511178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Redress for Dark Patterns Privacy Harms? A Case Study on Consent Interactions","authors":"Johanna Gunawan, C. Santos, Irene Kamara","doi":"10.1145/3511265.3550448","DOIUrl":"https://doi.org/10.1145/3511265.3550448","url":null,"abstract":"Internet users are constantly subjected to incessant demands for attention in a noisy digital world. Countless inputs compete for the chance to be clicked, to be seen, and to be interacted with, and they can deploy tactics that take advantage of behavioral psychology to 'nudge' users into doing what they want. Some nudges are benign; others deceive, steer, or manipulate users, as the U.S. FTC Commissioner says, \"into behavior that is profitable for an online service, but often harmful to [us] or contrary to [our] intent\". These tactics are dark patterns, which are manipulative and deceptive interface designs used at-scale in more than ten percent of global shopping websites and more than ninety-five percent of the most popular apps in online services. Literature discusses several types of harms caused by dark patterns that includes harms of a material nature, such as financial harms, or anticompetitive issues, as well as harms of a non-material nature, such as privacy invasion, time loss, addiction, cognitive burdens, loss of autonomy, and emotional or psychological distress. 
Through a comprehensive literature review of this scholarship and case law analysis conducted by our interdisciplinary team of HCI and legal scholars, this paper investigates whether harms caused by such dark patterns could give rise to redress for individuals subject to dark pattern practices using consent interactions and the GDPR consent requirements as a case study.","PeriodicalId":254114,"journal":{"name":"Proceedings of the 2022 Symposium on Computer Science and Law","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122932537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cryptography, Trust and Privacy: It's Complicated","authors":"Ero Balsa, H. Nissenbaum, Sunoo Park","doi":"10.1145/3511265.3550443","DOIUrl":"https://doi.org/10.1145/3511265.3550443","url":null,"abstract":"Privacy technologies support the provision of online services while protecting user privacy. Cryptography lies at the heart of many such technologies, creating remarkable possibilities in terms of functionality while offering robust guarantees of data confidentiality. The cryptography literature and discourse often represent that these technologies eliminate the need to trust service providers, i.e., they enable users to protect their privacy even against untrusted service providers. Despite their apparent promise, privacy technologies have seen limited adoption in practice, and the most successful ones have been implemented by the very service providers these technologies purportedly protect users from. The adoption of privacy technologies by supposedly adversarial service providers highlights a mismatch between traditional models of trust in cryptography and the trust relationships that underlie deployed technologies in practice. Yet this mismatch, while well known to the cryptography and privacy communities, remains relatively poorly documented and examined in the academic literature---let alone broader media. This paper aims to fill that gap. Firstly, we review how the deployment of cryptographic technologies relies on a chain of trust relationships embedded in the modern computing ecosystem, from the development of software to the provision of online services, that is not fully captured by traditional models of trust in cryptography. 
Secondly, we turn to two case studies---web search and encrypted messaging---to illustrate how, rather than removing trust in service providers, cryptographic privacy technologies shift trust to a broader community of security and privacy experts and others, which in turn enables service providers to implicitly build and reinforce their trust relationship with users. Finally, concluding that the trust models inherent in the traditional cryptographic paradigm elide certain key trust relationships underlying deployed cryptographic systems, we highlight the need for organizational, policy, and legal safeguards to address that mismatch, and suggest some directions for future work.","PeriodicalId":254114,"journal":{"name":"Proceedings of the 2022 Symposium on Computer Science and Law","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124433098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"The Case for Establishing a Collective Perspective to Address the Harms of Platform Personalization"
Authors: Ayelet Gordon-Tapiero, Alexandra Wood, Katrina Ligett
DOI: 10.1145/3511265.3550450
Abstract: Personalization on digital platforms drives a broad range of harms, including misinformation, manipulation, social polarization, subversion of autonomy, and discrimination. In recent years, policymakers, civil society advocates, and researchers have proposed a wide range of interventions to address these challenges. In this article, we argue that the emerging toolkit reflects an individualistic view of both personal data and data-driven harms that will likely be inadequate to address growing harms in the global data ecosystem. We maintain that interventions must be grounded in an understanding of the fundamentally collective nature of data, wherein platforms leverage complex patterns of behaviors and characteristics observed across a large population to draw inferences and make predictions about individuals. Using the lens of the collective nature of data, we evaluate various approaches to addressing personalization-driven harms currently under consideration. This lens also allows us to frame concrete guidance for future legislation in this space and advocate meaningful transparency that goes far beyond current proposals. We offer a roadmap for what meaningful transparency must constitute: a collective perspective providing a third party with ongoing insight into the information gathered and observed about individuals and how it correlates with any personalized content they receive, across a large, representative population. These insights would enable the third party to understand, identify, quantify, and address cases of personalization-driven harms. We discuss how such transparency can be achieved without sacrificing privacy and provide guidelines for legislation to support the development of this proposal.
"Bridging the Computer Science -- Law Divide: Recommendations from the Front Lines"
Authors: Azer Bestavros, Stacey L. Dogan, Paul Ohm, A. Sellars
DOI: 10.1145/3511265.3550497
Abstract: Many pressing societal questions can be answered only by bringing experts from different disciplines together. Questions around misinformation and disinformation, platform power, surveillance capitalism, information privacy, and algorithmic bias, among many others, reside at the intersection of computer science and law. We need to develop institutions that bring computer scientists and legal scholars together to work on issues like these, and to train new innovators, thought leaders, counselors, and policymakers with hybrid training in both disciplines. In universities, the disciplines of Computer Science (CS) and Law are separated by many wide chasms. Differences in standards, language, methods, and culture impede professors and other academic researchers who want to collaborate with colleagues on the other side of this divide. Universities place CS and Law in different schools, on different campuses, on different calendars, and so on. Researchers in the two disciplines face differing incentives and reward structures for publishing, teaching, funding, and service.
"Using Zero-Knowledge to Reconcile Law Enforcement Secrecy and Fair Trial Rights in Criminal Cases"
Authors: Dor Bitan, R. Canetti, S. Goldwasser, Rebecca Wexler
DOI: 10.1145/3511265.3550452
Abstract: The use of hidden investigative software to collect evidence of crimes presents courts with a recurring dilemma: On the one hand, there is often clear public interest in keeping the software hidden to preserve its effectiveness in fighting crime. On the other hand, criminal defendants have rights to inspect and challenge the full evidence against them, including law enforcement's investigative methods. In fact, in the U.S. adversarial legal system, the defendant's rights to scrutinize the government's tools are crucial to the truth-seeking process and to keeping law enforcement conduct lawful and constitutional. Presently, courts balance these conflicting interests on a case-by-case basis through evidentiary privilege law, often voicing their frustration with the challenging dilemma they face. We demonstrate how judicious use of a sophisticated cryptographic tool called Zero-Knowledge Proofs (ZKPs) could help to mitigate this dilemma: Based on actual court cases where evidence was collected using modified peer-to-peer software, we demonstrate how law enforcement could, in these cases, augment their investigative software with a ZKP-based mechanism that would allow them to later provide full responses to challenges made by a defense expert, and allow a defense expert to independently verify law enforcement claims, while keeping the software hidden. We demonstrate the technical feasibility of our mechanism via a proof-of-concept implementation. We also propose legal analysis that justifies its use, discusses its merits, and considers the legal implications that the very existence of such a mechanism might have, even in cases where it has not been used. Our proof-of-concept may also extend to other verification dilemmas in the legal landscape.
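The paper's mechanism is far more elaborate, but the core ZKP idea, proving a claim about a secret without revealing the secret, can be illustrated with a classic Schnorr proof of knowledge made non-interactive via the Fiat-Shamir heuristic. This is a textbook sketch, not the authors' construction, and the toy-sized parameters below are for illustration only (a real deployment needs cryptographically large groups).

```python
import hashlib
import random

# Toy group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup mod p.
p, q, g = 1019, 509, 4

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = random.randrange(q)
    t = pow(g, r, p)                                    # commitment
    c = int(hashlib.sha256(f"{t}|{y}".encode()).hexdigest(), 16) % q  # Fiat-Shamir challenge
    s = (r + c * x) % q                                 # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Check g^s == t * y^c, which holds iff the prover knew x (soundness up to 1/q)."""
    c = int(hashlib.sha256(f"{t}|{y}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(137)   # 137 plays the role of the hidden secret
print(verify(y, t, s))  # True
```

The transcript (y, t, s) convinces a verifier, here standing in for a defense expert, that the prover possesses the secret, while the secret itself never leaves the prover.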
{"title":"The Privacy-Fairness-Accuracy Frontier: A Computational Law & Economics Toolkit for Making Algorithmic Tradeoffs","authors":"Aniket Kesari","doi":"10.1145/3511265.3550437","DOIUrl":"https://doi.org/10.1145/3511265.3550437","url":null,"abstract":"Both law and computer science are concerned with developing frameworks for protecting privacy and ensuring fairness. Both fields often consider these two values separately and develop legal doctrines and machine learning metrics in isolation from one another. Yet, privacy and fairness values can conflict, especially when considered alongside the accuracy of an algorithm. The computer science literature often treats this problem as an \"impossibility theorem\" - we can have privacy or fairness but not both. Legal doctrine is similarly constrained by a focus on the inputs to a decision - did the decisionmaker intend to use information about protected attributes. Despite these challenges, there is a way forward. The law has integrated economic frameworks to consider tradeoffs in other domains, and a similar approach can clarify policymakers' thinking around balancing accuracy, privacy, and fairnesss. This piece illustrates this idea by using a law & economics lens to formalize the notion of a Privacy-Fairness-Accuracy frontier, and demonstrating this framework on a consumer lending dataset. 
An open-source Python software library and GUI will be made available.","PeriodicalId":254114,"journal":{"name":"Proceedings of the 2022 Symposium on Computer Science and Law","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126707136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
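The paper's library is not yet available, but one edge of the frontier it formalizes, the privacy-accuracy tradeoff, can be sketched with the standard Laplace mechanism from differential privacy: stronger privacy (smaller epsilon) forces more noise into a released statistic, degrading its accuracy. The data below is synthetic and the scenario hypothetical; this is an illustration of the tradeoff, not the paper's toolkit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic consumer-lending-style incomes, clipped to a known bounded range.
incomes = rng.normal(50_000, 15_000, size=1_000).clip(0, 100_000)

def dp_mean(values: np.ndarray, epsilon: float,
            lo: float = 0.0, hi: float = 100_000.0) -> float:
    """Differentially private mean via the Laplace mechanism.
    For bounded data, one person changes the mean by at most (hi - lo) / n,
    so that is the sensitivity that calibrates the noise."""
    scale = (hi - lo) / (len(values) * epsilon)
    return float(values.mean() + rng.laplace(0.0, scale))

true_mean = float(incomes.mean())
for eps in (0.01, 0.1, 1.0):
    err = abs(dp_mean(incomes, eps) - true_mean)
    print(f"epsilon={eps:<5} |error|={err:,.0f}")
# Smaller epsilon (stronger privacy) yields larger expected error:
# one point on the privacy-accuracy edge of the frontier.
```

A full frontier analysis in the paper's spirit would add a fairness axis, for example a group disparity metric on lending decisions, and trace how all three quantities move against each other.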