The Cambridge Handbook of Responsible Artificial Intelligence: Latest Publications

AI-Supported Brain-Computer Interfaces and the Emergence of 'Cyberbilities'
The Cambridge Handbook of Responsible Artificial Intelligence. DOI: 10.1017/9781009207898.033
Boris Eßmann, O. Müller
Citations: 0
From Corporate Governance to Algorithm Governance: Artificial Intelligence as a Challenge for Corporations and Their Executives
The Cambridge Handbook of Responsible Artificial Intelligence. DOI: 10.1017/9781009207898.026
Jan Lieder
Abstract: Every generation has its topic: the topic of our generation is digitalization. At present, we are all witnessing the so-called Industrial Revolution 4.0. This revolution is characterized by the use of a whole range of new digital technologies that can be combined in a variety of ways.
Citations: 0
Towards a Global Artificial Intelligence Charter
The Cambridge Handbook of Responsible Artificial Intelligence. DOI: 10.1017/9781009207898.013
T. Metzinger
Abstract: It is now time to move the ongoing public debate on artificial intelligence (AI) into the political institutions themselves. Many experts believe that we are confronted with an inflection point in history during the next decade, and that there is a closing time window regarding the applied ethics of AI. Political institutions must therefore produce and implement a minimal but sufficient set of ethical and legal constraints for the beneficial use and future development of AI. They must also create a rational, evidence-based process of critical discussion aimed at continuously updating, improving, and revising this first set of normative constraints. Given the current situation, the default outcome is that the values guiding AI development will be set by a very small number of human beings, by large private corporations, and by military institutions. Therefore, one goal is to proactively integrate as many perspectives as possible, and in a timely manner.
Citations: 5
Medical AI: Key Elements at the International Level
The Cambridge Handbook of Responsible Artificial Intelligence. DOI: 10.1017/9781009207898.030
Fruzsina Molnár-Gábor, J. Giesecke
Citations: 0
Artificial Moral Agents: Conceptual Issues and Ethical Controversy
The Cambridge Handbook of Responsible Artificial Intelligence. DOI: 10.1017/9781009207898.005
Catrin Misselhorn
Citations: 2
Artificial Intelligence and the Past, Present, and Future of Democracy
The Cambridge Handbook of Responsible Artificial Intelligence. DOI: 10.1017/9781009207898.009
Mathias Risse
Abstract: Langdon Winner's classic essay 'Do Artifacts Have Politics?' resists a widespread but naïve view of the role of technology in human life: that technology is neutral and everything depends on use. He does so without enlisting an overbearing determinism that makes technology the sole engine of change. Instead, Winner distinguishes two ways for artefacts to have 'political qualities'. First, devices or systems might be means for establishing patterns of power or authority, but the design is flexible: such patterns can turn out one way or another. An example is traffic infrastructure, which can assist many people but can also keep parts of the population in subordination, say, if they cannot reach suitable workplaces. Secondly, devices or systems are strongly, perhaps unavoidably, tied to certain patterns of power. Winner's example is atomic energy, which requires industrial, scientific, and military elites to provide and protect energy sources. Artificial Intelligence (AI), I argue, is political the way traffic infrastructure is: it can greatly strengthen democracy, but only with the right efforts. Understanding 'the politics of AI' is crucial since Xi Jinping's China loudly champions one-party rule as a better fit for our digital century. AI is a key component in the contest between authoritarian and democratic rule. Unlike conventional programs, AI algorithms learn by themselves. Programmers provide data, which a set of methods known as machine learning analyzes for trends and inferences. Owing to their sophistication and sweeping applications, these technologies are poised to dramatically alter our world. Specialized AI is already broadly deployed. At the high end, one may think of AI mastering chess or Go. More commonly we encounter it in smartphones (Siri, Google Translate, curated newsfeeds), home devices (Alexa, Google Home, Nest), personalized customer services, or GPS systems. Specialized AI is used by law enforcement and the military, in browser searching, advertising and entertainment (e.g., recommender systems), medical diagnostics, logistics, and finance (from assessing credit to flagging transactions), in speech recognition producing transcripts, and in trade bots using market data for predictions, but also in music creation and article drafting (e.g., GPT-3's text generator writing posts or code). Governments track people using AI in facial, voice, or gait recognition. Smart cities analyze traffic data in real time or design services. COVID-19 accelerated the use of AI in drug discovery. Natural language …
Citations: 8
Liability for Artificial Intelligence: The Need to Address Both Safety Risks and Fundamental Rights Risks
The Cambridge Handbook of Responsible Artificial Intelligence. DOI: 10.1017/9781009207898.016
C. Wendehorst
Citations: 1
Differences That Make a Difference: Computational Profiling and Fairness to Individuals
The Cambridge Handbook of Responsible Artificial Intelligence. DOI: 10.1017/9781009207898.019
W. Hinsch
Citations: 0