Addressing Information Asymmetry in Legal Disputes through Data-Driven Law Firm Rankings
Alexandre Mojon, Robert Mahari, Sandro Claudio Lera
arXiv - CS - Computers and Society, 2024-08-29, arXiv:2408.16863

Abstract: Legal disputes are on the rise, contributing to growing litigation costs. Parties in these disputes must select a law firm to represent them; however, public rankings of law firms are based on reputation and, we find, have little correlation with actual litigation outcomes, giving parties with more experience and inside knowledge an advantage. To enable litigants to make informed decisions, we present a novel dataset of 310,876 U.S. civil lawsuits and apply an algorithm that generalizes the Bradley-Terry model to assess law firm effectiveness. We find that our outcome-based ranking system accounts for future performance better than traditional reputation-based rankings, which often fail to reflect future legal performance. Moreover, this predictability decays to zero as the number of interactions between law firms increases, providing new evidence in the long-standing debate about whether litigation win rates approach 50% as information asymmetry diminishes. By prioritizing empirical results, our approach aims to provide a more equitable assessment of law firm quality, challenging existing prestige-focused metrics and leveling the playing field between litigants.
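The abstract above mentions an algorithm that generalizes the Bradley-Terry model; the paper's specific generalization is not described here, so the sketch below shows only the standard Bradley-Terry estimator, fitted with the classic minorization-maximization iteration, as a point of reference. All data and firm names are made up for illustration.

```python
from collections import defaultdict

def bradley_terry(outcomes, iters=200):
    """Estimate standard Bradley-Terry strengths from pairwise outcomes.

    outcomes: list of (winner, loser) pairs, e.g. litigation results
    between opposing firms. Returns a dict of strength scores; the
    modeled probability that i beats j is p[i] / (p[i] + p[j]).
    """
    wins = defaultdict(float)      # total wins per firm
    matches = defaultdict(float)   # match counts per unordered pair
    firms = set()
    for winner, loser in outcomes:
        wins[winner] += 1
        matches[frozenset((winner, loser))] += 1
        firms.update((winner, loser))

    p = {f: 1.0 for f in firms}    # uniform starting strengths
    for _ in range(iters):
        new_p = {}
        for i in firms:
            # MM update: wins_i / sum_j n_ij / (p_i + p_j)
            denom = sum(
                matches[frozenset((i, j))] / (p[i] + p[j])
                for j in firms if j != i
            )
            new_p[i] = wins[i] / denom if denom > 0 else p[i]
        total = sum(new_p.values())  # normalize to fix the overall scale
        p = {f: v * len(firms) / total for f, v in new_p.items()}
    return p

# Hypothetical mini-example: A beats B twice, A beats C, B beats C.
scores = bradley_terry([("A", "B"), ("A", "B"), ("A", "C"), ("B", "C")])
ranking = sorted(scores, key=scores.get, reverse=True)
```

With these toy outcomes the recovered ranking is A, then B, then C, matching the intuition that firms are ordered by head-to-head results rather than by any external reputation signal.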
JINet: easy and secure private data analysis for everyone
Giada Lalli, James Collier, Yves Moreau, Daniele Raimondi
arXiv - CS - Computers and Society, 2024-08-29, arXiv:2408.16402

Abstract: JINet is a web-browser-based platform intended to democratise access to advanced clinical and genomic data analysis software. It hosts numerous data analysis applications that run in the safety of each user's web browser, without the data ever leaving their machine. JINet promotes collaboration, standardisation and reproducibility by sharing scripts rather than data, creating a self-sustaining community in which users and developers of data analysis tools interact through JINet's interoperability primitives.
Defining Interoperability: a universal standard
Giada Lalli
arXiv - CS - Computers and Society, 2024-08-29, arXiv:2408.16411

Abstract: Interoperability is crucial for modern scientific advancement, yet its fragmented definitions across domains hinder researchers' ability to reap its rewards. This paper proposes a new, universal definition by tracing the evolution of interoperability and identifying the challenges posed by varying definitions. The proposed definition addresses these inconsistencies, offering a robust solution applicable across diverse fields. Adopting this unified approach will enhance global collaboration and drive innovation by removing the obstacles posed by conflicting or incomplete definitions.
Ethical AI Governance: Methods for Evaluating Trustworthy AI
Louise McCormack, Malika Bendechache
arXiv - CS - Computers and Society, 2024-08-28, arXiv:2409.07473

Abstract: Trustworthy Artificial Intelligence (TAI) integrates ethics aligned with human values, examining their influence on AI behaviour and decision-making. Primarily dependent on self-assessment, TAI evaluation aims to ensure ethical standards and safety in AI development and usage. This paper reviews the current TAI evaluation methods in the literature and offers a classification, contributing to the understanding of self-assessment methods in this field.
Navigating Design Science Research in mHealth Applications: A Guide to Best Practices
Avnish Singh Jat, Tor-Morten Grønli, George Ghinea
arXiv - CS - Computers and Society, 2024-08-28, arXiv:2409.07470

Abstract: The rapid proliferation of mobile devices and advancements in wireless technologies have given rise to a new era of healthcare delivery through mobile health (mHealth) applications. Design Science Research (DSR) is a widely used research paradigm that aims to create and evaluate innovative artifacts to solve real-world problems. This paper presents a comprehensive framework for employing DSR in mHealth application projects to address healthcare challenges and improve patient outcomes. We discuss various DSR principles and methodologies, highlighting their applicability and importance in developing and evaluating mHealth applications. Furthermore, we present several case studies to exemplify the successful implementation of DSR in mHealth projects and provide practical recommendations for researchers and practitioners.
Navigating the Future of Education: Educators' Insights on AI Integration and Challenges in Greece, Hungary, Latvia, Ireland and Armenia
Evangelia Daskalaki, Katerina Psaroudaki, Paraskevi Fragopoulou
arXiv - CS - Computers and Society, 2024-08-28, arXiv:2408.15686

Abstract: Understanding teachers' perspectives on AI in Education (AIEd) is crucial for its effective integration into the educational framework. This paper aims to explore how teachers currently use AI and how it can enhance the educational process. We conducted a cross-national study spanning Greece, Hungary, Latvia, Ireland, and Armenia, surveying 1754 educators through an online questionnaire, addressing three research questions.

Our first research question examines educators' understanding of AIEd, their skepticism, and its integration within schools. Most educators report a solid understanding of AI and acknowledge its potential risks. AIEd is primarily used for educator support and engaging students. However, concerns exist about AI's impact on fostering critical thinking and exposing students to biased data. The second research question investigates student engagement with AI tools from educators' perspectives. Teachers indicate that students use AI mainly to manage their academic workload, while outside school, AI tools are primarily used for entertainment. The third research question addresses future implications of AI in education. Educators are optimistic about AI's potential to enhance educational processes, particularly through personalized learning experiences. Nonetheless, they express significant concerns about AI's impact on cultivating critical thinking and ethical issues related to potential misuse.

There is a strong emphasis on the need for professional development through training seminars, workshops, and online courses to integrate AI effectively into teaching practices. Overall, the findings highlight a cautious optimism among educators regarding AI in education, alongside a clear demand for targeted professional development to address concerns and enhance skills in using AI tools.
Responsible AI for Test Equity and Quality: The Duolingo English Test as a Case Study
Jill Burstein, Geoffrey T. LaFlair, Kevin Yancey, Alina A. von Davier, Ravit Dotan
arXiv - CS - Computers and Society, 2024-08-28, arXiv:2409.07476

Abstract: Artificial intelligence (AI) creates opportunities for assessments, such as efficiencies for item generation and scoring of spoken and written responses. At the same time, it poses risks (such as bias in AI-generated item content). Responsible AI (RAI) practices aim to mitigate risks associated with AI. This chapter addresses the critical role of RAI practices in achieving test quality (appropriateness of test score inferences) and test equity (fairness to all test takers). To illustrate, the chapter presents a case study using the Duolingo English Test (DET), an AI-powered, high-stakes English language assessment. The chapter discusses the DET RAI standards, their development, and their relationship to domain-agnostic RAI principles. Further, it provides examples of specific RAI practices, showing how these practices meaningfully address the principles of validity and reliability, fairness, privacy and security, and transparency and accountability, to ensure test equity and quality.
AI, Climate, and Transparency: Operationalizing and Improving the AI Act
Nicolas Alder, Kai Ebert, Ralf Herbrich, Philipp Hacker
arXiv - CS - Computers and Society, 2024-08-28, arXiv:2409.07471

Abstract: This paper critically examines the AI Act's provisions on climate-related transparency, highlighting significant gaps and challenges in its implementation. We identify key shortcomings, including the exclusion of energy consumption during AI inference, the lack of coverage for indirect greenhouse gas emissions from AI applications, and the lack of a standard reporting methodology. The paper proposes a novel interpretation to bring inference-related energy use back within the Act's scope and advocates for public access to climate-related disclosures to foster market accountability and public scrutiny. Cumulative server-level energy reporting is recommended as the most suitable method. We also suggest broader policy changes, including sustainability risk assessments and renewable energy targets, to better address AI's environmental impact.
Verification methods for international AI agreements
Akash R. Wasil, Tom Reed, Jack William Miller, Peter Barnett
arXiv - CS - Computers and Society, 2024-08-28, arXiv:2408.16074

Abstract: What techniques can be used to verify compliance with international agreements about advanced AI development? In this paper, we examine 10 verification methods that could detect two types of potential violations: unauthorized AI training (e.g., training runs above a certain FLOP threshold) and unauthorized data centers. We divide the verification methods into three categories: (a) national technical means (methods requiring minimal or no access from suspected non-compliant nations), (b) access-dependent methods (methods that require approval from the nation suspected of unauthorized activities), and (c) hardware-dependent methods (methods that require rules around advanced hardware). For each verification method, we provide a description, historical precedents, and possible evasion techniques. We conclude by offering recommendations for future work related to the verification and enforcement of international AI governance agreements.
Comparing diversity, negativity, and stereotypes in Chinese-language AI technologies: a case study on Baidu, Ernie and Qwen
Geng Liu, Carlo Alberto Bono, Francesco Pierri
arXiv - CS - Computers and Society, 2024-08-28, arXiv:2408.15696

Abstract: Large Language Models (LLMs) and search engines have the potential to perpetuate biases and stereotypes by amplifying existing prejudices in their training data and algorithmic processes, thereby influencing public perception and decision-making. While most work has focused on Western-centric AI technologies, we study Chinese-based tools by investigating social biases embedded in the major Chinese search engine, Baidu, and two leading LLMs, Ernie and Qwen. Leveraging a dataset of 240 social groups across 13 categories describing Chinese society, we collect over 30k views encoded in the aforementioned tools by prompting them for candidate words describing such groups. We find that language models exhibit a larger variety of embedded views compared to the search engine, although Baidu and Qwen generate negative content more often than Ernie. We also find a moderate prevalence of stereotypes embedded in the language models, many of which potentially promote offensive and derogatory views. Our work highlights the importance of promoting fairness and inclusivity in AI technologies with a global perspective.
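The pipeline described in the abstract above (collect candidate descriptor words per social group, then compare how often each tool produces negative content) can be outlined as follows. The abstract does not specify the authors' prompts, lexicon, or scoring rule, so every name, word list, and number below is a hypothetical stand-in, not the paper's actual method or data.

```python
# Hypothetical sketch of per-tool negativity scoring. The lexicon and the
# collected candidate words are invented placeholders for illustration only.
NEGATIVE_LEXICON = {"lazy", "rude", "dishonest"}  # stand-in sentiment lexicon

def negativity_rate(candidate_words):
    """Fraction of candidate descriptor words flagged as negative."""
    if not candidate_words:
        return 0.0
    hits = sum(1 for w in candidate_words if w.lower() in NEGATIVE_LEXICON)
    return hits / len(candidate_words)

# views[tool][group] = candidate words the tool returned for that group
views = {
    "tool_a": {"group_1": ["friendly", "lazy"], "group_2": ["smart"]},
    "tool_b": {"group_1": ["rude", "dishonest"], "group_2": ["kind"]},
}

# Average the per-group rates so large groups don't dominate the comparison.
rates = {
    tool: sum(negativity_rate(words) for words in groups.values()) / len(groups)
    for tool, groups in views.items()
}
```

Averaging per group rather than pooling all words is one reasonable design choice here; the paper may well aggregate differently.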