arXiv - CS - Computers and Society: Latest Publications

Bias Begets Bias: The Impact of Biased Embeddings on Diffusion Models
arXiv - CS - Computers and Society · Pub Date: 2024-09-15 · DOI: arxiv-2409.09569
Sahil Kuchlous, Marvin Li, Jeffrey G. Wang
Abstract: With the growing adoption of Text-to-Image (TTI) systems, the social biases of these models have come under increased scrutiny. Herein we conduct a systematic investigation of one such source of bias for diffusion models: embedding spaces. First, because traditional classifier-based fairness definitions require true labels not present in generative modeling, we propose statistical group fairness criteria based on a model's internal representation of the world. Using these definitions, we demonstrate theoretically and empirically that an unbiased text embedding space for input prompts is a necessary condition for representationally balanced diffusion models, meaning that the distribution of generated images satisfies diversity requirements with respect to protected attributes. Next, we investigate the impact of biased embeddings on evaluating the alignment between generated images and prompts, a process commonly used to assess diffusion models. We find that biased multimodal embeddings such as CLIP can result in lower alignment scores for representationally balanced TTI models, thus rewarding unfair behavior. Finally, we develop a theoretical framework through which biases in alignment evaluation can be studied, and we propose bias mitigation methods. By specifically adopting the perspective of embedding spaces, we establish new fairness conditions for diffusion model development and evaluation.
Citations: 0
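The representational-balance condition described in this abstract can be illustrated with a toy check: given protected-attribute labels for a batch of generated images, measure how far their empirical distribution is from a target (e.g., uniform) distribution. This is only a sketch under assumptions — the function name and the use of total-variation distance are illustrative choices, not the paper's formal criteria.

```python
from collections import Counter

def representational_imbalance(attrs, target=None):
    """Total-variation distance between the empirical distribution of a
    protected attribute over generated images and a target distribution.
    0.0 means perfectly balanced; higher means more imbalanced.
    (Illustrative check only -- not the paper's formal fairness criteria.)"""
    counts = Counter(attrs)
    n = len(attrs)
    values = set(counts) | set(target or {})
    if target is None:  # default target: uniform over observed values
        target = {v: 1.0 / len(values) for v in values}
    return 0.5 * sum(abs(counts.get(v, 0) / n - target.get(v, 0.0))
                     for v in values)

# A 60/40 split against a uniform two-group target:
print(round(representational_imbalance(["a"] * 60 + ["b"] * 40), 3))  # 0.1
```

A TTI model would pass this toy check when, for each protected attribute, the score stays below some tolerance across a large sample of generations.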
LabellessFace: Fair Metric Learning for Face Recognition without Attribute Labels
arXiv - CS - Computers and Society · Pub Date: 2024-09-14 · DOI: arxiv-2409.09274
Tetsushi Ohki, Yuya Sato, Masakatsu Nishigaki, Koichi Ito
Abstract: Demographic bias is one of the major challenges for face recognition systems. The majority of existing studies on demographic bias depend heavily on specific demographic groups or demographic classifiers, making it difficult to address performance for unrecognised groups. This paper introduces "LabellessFace", a novel framework that mitigates demographic bias in face recognition without requiring the demographic group labels typically needed for fairness considerations. We propose a novel fairness enhancement metric called the class favoritism level, which assesses the extent of favoritism towards specific classes across the dataset. Leveraging this metric, we introduce the fair class margin penalty, an extension of existing margin-based metric learning. This method dynamically adjusts learning parameters based on class favoritism levels, promoting fairness across all attributes. By treating each class as an individual in face recognition systems, we facilitate learning that minimizes biases in authentication accuracy among individuals. Comprehensive experiments demonstrate that our proposed method is effective at enhancing fairness while maintaining authentication accuracy.
Citations: 0
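A rough sketch of the idea behind a class favoritism level and a fair class margin penalty (the paper's exact definitions differ; every name and formula here is an illustrative assumption): score each class by how tightly its embeddings cluster around their centroid, then shrink the margin for favored classes and enlarge it for disfavored ones.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def class_favoritism(embeddings_by_class):
    """Per-class mean cosine similarity of samples to their class centroid --
    a stand-in for a 'class favoritism level' (assumption for illustration)."""
    levels = {}
    for cls, embs in embeddings_by_class.items():
        dim = len(embs[0])
        centroid = [sum(e[i] for e in embs) / len(embs) for i in range(dim)]
        levels[cls] = sum(cosine(e, centroid) for e in embs) / len(embs)
    return levels

def fair_margins(levels, base_margin=0.5, strength=0.2):
    """Give classes above the mean favoritism level a smaller margin and
    classes below it a larger one, echoing a fair class margin penalty."""
    mean = sum(levels.values()) / len(levels)
    return {cls: base_margin - strength * (lv - mean)
            for cls, lv in levels.items()}
```

In a real metric-learning loop these per-class margins would replace the single fixed margin of a method like ArcFace, updated as favoritism estimates change during training.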
Quantitative Insights into Language Model Usage and Trust in Academia: An Empirical Study
arXiv - CS - Computers and Society · Pub Date: 2024-09-13 · DOI: arxiv-2409.09186
Minseok Jung, Aurora Zhang, Junho Lee, Paul Pu Liang
Abstract: Language models (LMs) are revolutionizing knowledge retrieval and processing in academia. However, concerns about their misuse and erroneous outputs, such as hallucinations and fabrications, fuel distrust in LMs within academic communities. Consequently, there is a pressing need to deepen our understanding of how practitioners actually use and trust these models. There is a notable gap in quantitative evidence on the extent of LM usage, user trust in their outputs, and the issues to prioritize for real-world development. This study addresses these gaps by providing data and analysis of LM usage and trust. Specifically, we surveyed 125 individuals at a private school and secured 88 data points after pre-processing. Through both quantitative analysis and qualitative evidence, we found significant variation in trust levels, which are strongly related to usage time and frequency. Additionally, we discovered through a polling process that fact-checking is the most critical issue limiting usage. These findings yield several actionable insights: distrust can be overcome by providing exposure to the models, policies should be developed that prioritize fact-checking, and user trust can be enhanced by increasing engagement. By addressing these critical gaps, this research not only adds to the understanding of user experiences and trust in LMs but also informs the development of more effective LMs.
Citations: 0
Payments Use Cases and Design Options for Interoperability and Funds Locking across Digital Pounds and Commercial Bank Money
arXiv - CS - Computers and Society · Pub Date: 2024-09-13 · DOI: arxiv-2409.08653
Lee Braine, Shreepad Shukla, Piyush Agrawal, Shrirang Khedekar, Aishwarya Nair
Abstract: Central banks are actively exploring retail central bank digital currencies (CBDCs), with the Bank of England currently in the design phase for a potential UK retail CBDC, the digital pound. In a previous paper, we defined and explored the important concept of functional consistency (the principle that different forms of money have the same operational characteristics) and evaluated design options to support functional consistency across digital pounds and commercial bank money, based on a set of key capabilities. In this paper, we continue to analyse the design options for supporting functional consistency and, in order to perform a detailed analysis, we focus on three key capabilities: communication between digital pound ecosystem participants, funds locking, and interoperability across digital pounds and commercial bank money. We explore these key capabilities via three payments use cases: person-to-person push payment, merchant-initiated request to pay, and lock funds and pay on physical delivery. We then present and evaluate the suitability of design options to provide the specific capabilities for each use case and draw initial insights. We conclude that a financial market infrastructure (FMI) providing specific capabilities could simplify the experience of ecosystem participants, simplify the operating platforms for both the Bank of England and digital pound Payment Interface Providers (PIPs), and facilitate the creation of innovative services. We also identify potential next steps.
Citations: 0
A Grading Rubric for AI Safety Frameworks
arXiv - CS - Computers and Society · Pub Date: 2024-09-13 · DOI: arxiv-2409.08751
Jide Alaga, Jonas Schuett, Markus Anderljung
Abstract: Over the past year, artificial intelligence (AI) companies have been increasingly adopting AI safety frameworks. These frameworks outline how companies intend to keep the potential risks associated with developing and deploying frontier AI systems to an acceptable level. Major players like Anthropic, OpenAI, and Google DeepMind have already published their frameworks, while another 13 companies have signaled their intent to release similar frameworks by February 2025. Given their central role in AI companies' efforts to identify and address unacceptable risks from their systems, AI safety frameworks warrant significant scrutiny. To enable governments, academia, and civil society to pass judgment on these frameworks, this paper proposes a grading rubric. The rubric consists of seven evaluation criteria and 21 indicators that concretize the criteria. Each criterion can be graded on a scale from A (gold standard) to F (substandard). The paper also suggests three methods for applying the rubric: surveys, Delphi studies, and audits. The purpose of the grading rubric is to enable nuanced comparisons between frameworks, identify potential areas of improvement, and promote a race to the top in responsible AI development.
Citations: 0
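A minimal sketch of how per-criterion letter grades might be aggregated into a comparable overall score, assuming an even numeric mapping of the six letter grades (the paper does not prescribe one; the mapping and function name are assumptions):

```python
# Hypothetical even mapping of the A-F scale onto 0-5 points.
GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 0}

def rubric_score(criterion_grades):
    """Mean grade points across the seven evaluation criteria,
    allowing a rough numeric comparison between frameworks."""
    return sum(GRADE_POINTS[g] for g in criterion_grades) / len(criterion_grades)
```

A framework graded ["A", "B", "C", "A", "B", "A", "C"] on the seven criteria would score 29/7 ≈ 4.14 under this mapping, against a gold-standard maximum of 5.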
Affective Computing Has Changed: The Foundation Model Disruption
arXiv - CS - Computers and Society · Pub Date: 2024-09-13 · DOI: arxiv-2409.08907
Björn Schuller, Adria Mallol-Ragolta, Alejandro Peña Almansa, Iosif Tsangko, Mostafa M. Amin, Anastasia Semertzidou, Lukas Christ, Shahin Amiriparian
Abstract: The dawn of Foundation Models has, on the one hand, revolutionised a wide range of research problems and, on the other, democratised the access and use of AI-based tools by the general public. We even observe an incursion of these models into disciplines related to human psychology, such as the Affective Computing domain, suggesting their emerging affective capabilities. In this work, we aim to raise awareness of the power of Foundation Models in the field of Affective Computing by synthetically generating and analysing multimodal affective data, focusing on vision, linguistics, and speech (acoustics). We also discuss some fundamental problems, such as ethical issues and regulatory aspects, related to the use of Foundation Models in this research area.
Citations: 0
Mapping Technical Safety Research at AI Companies: A Literature Review and Incentives Analysis
arXiv - CS - Computers and Society · Pub Date: 2024-09-12 · DOI: arxiv-2409.07878
Oscar Delaney, Oliver Guest, Zoe Williams
Abstract: As artificial intelligence (AI) systems become more advanced, concerns about large-scale risks from misuse or accidents have grown. This report analyzes the technical research into safe AI development being conducted by three leading AI companies: Anthropic, Google DeepMind, and OpenAI. We define safe AI development as developing AI systems that are unlikely to pose large-scale misuse or accident risks. This encompasses a range of technical approaches aimed at ensuring AI systems behave as intended and do not cause unintended harm, even as they are made more capable and autonomous. We analyzed all papers published by the three companies from January 2022 to July 2024 that were relevant to safe AI development, and categorized the 61 included papers into eight safety approaches. Additionally, we noted three categories representing nascent approaches explored by academia and civil society, but not currently represented in any papers by the three companies. Our analysis reveals where corporate attention is concentrated and where potential gaps lie. Some AI research may stay unpublished for good reasons, such as to avoid informing adversaries about security techniques they would need to overcome to misuse AI systems. Therefore, we also considered the incentives that AI companies have to research each approach. In particular, we considered reputational effects, regulatory burdens, and whether the approaches could make AI systems more useful. We identified three categories where there are currently no or few papers and where we do not expect AI companies to become more incentivized to pursue this research in the future. These are multi-agent safety, model organisms of misalignment, and safety by design. Our findings indicate that these approaches may be slow to progress without funding or efforts from government, civil society, philanthropists, or academia.
Citations: 0
Detection and Classification of Twitter Users' Opinions on Drought Crises in Iran Using Machine Learning Techniques
arXiv - CS - Computers and Society · Pub Date: 2024-09-11 · DOI: arxiv-2409.07611
Somayeh Labafi, Leila Rabiei, Zeinab Rajabi
Abstract: The main objective of this research is to identify and classify the opinions of Persian-speaking Twitter users related to drought crises in Iran, and subsequently to develop a model for detecting these opinions on the platform. To achieve this, a model has been developed using machine learning and text mining methods to detect the opinions of Persian-speaking Twitter users regarding the drought issues in Iran. The statistical population for the research included 42,028 drought-related tweets posted over a one-year period. These tweets were extracted from Twitter using keywords related to the drought crises in Iran. Subsequently, a sample of 2,300 tweets was qualitatively analyzed, labeled, categorized, and examined. Next, a four-category classification of users' opinions regarding drought crises and Iranians' resilience to these crises was identified. Based on these four categories, a machine learning model based on logistic regression was trained to predict and detect the various opinions in Twitter posts. The developed model exhibits an accuracy of 66.09% and an F-score of 60%, indicating good performance for detecting Iranian Twitter users' opinions regarding drought crises. The ability to detect opinions regarding drought crises on platforms like Twitter using machine learning methods can intelligently represent the resilience level of Iranian society in the face of these crises, and inform policymakers in this area about changes in public opinion.
Citations: 0
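The pipeline this abstract describes — text features feeding a multi-class logistic-regression classifier over four opinion categories — can be sketched in miniature. The category names, toy data, and bag-of-words featurization below are invented for illustration; the paper's actual features and preprocessing are not specified here.

```python
import math

def featurize(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    vec = [0.0] * len(vocab)
    for tok in text.lower().split():
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    return vec

def train_softmax(texts, labels, epochs=100, lr=0.5):
    """Multinomial logistic regression trained with plain SGD."""
    vocab = {t: i for i, t in enumerate(
        sorted({w for s in texts for w in s.lower().split()}))}
    classes = sorted(set(labels))
    W = [[0.0] * len(vocab) for _ in classes]  # one weight row per class
    b = [0.0] * len(classes)
    X = [featurize(t, vocab) for t in texts]
    y = [classes.index(l) for l in labels]
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            scores = [sum(w * x for w, x in zip(W[c], xi)) + b[c]
                      for c in range(len(classes))]
            m = max(scores)
            exps = [math.exp(s - m) for s in scores]  # stable softmax
            Z = sum(exps)
            for c in range(len(classes)):
                g = exps[c] / Z - (1.0 if c == yi else 0.0)
                b[c] -= lr * g
                for j, xj in enumerate(xi):
                    if xj:
                        W[c][j] -= lr * g * xj
    return vocab, classes, W, b

def predict(text, vocab, classes, W, b):
    x = featurize(text, vocab)
    scores = [sum(w * v for w, v in zip(W[c], x)) + b[c]
              for c in range(len(classes))]
    return classes[scores.index(max(scores))]
```

A real system would of course use Persian tokenization and a far larger labeled sample, but the classifier structure is the same.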
Towards Fairer Health Recommendations: Finding Informative Unbiased Samples via Word Sense Disambiguation
arXiv - CS - Computers and Society · Pub Date: 2024-09-11 · DOI: arxiv-2409.07424
Gavin Butts, Pegah Emdad, Jethro Lee, Shannon Song, Chiman Salavati, Willmar Sosa Diaz, Shiri Dori-Hacohen, Fabricio Murai
Abstract: There have been growing concerns around high-stakes applications that rely on models trained with biased data, which consequently produce biased predictions, often harming the most vulnerable. In particular, biased medical data could cause health-related applications and recommender systems to create outputs that jeopardize patient care and widen disparities in health outcomes. A recent framework titled Fairness via AI posits that, instead of attempting to correct model biases, researchers must focus on their root causes by using AI to debias data. Inspired by this framework, we tackle bias detection in medical curricula using NLP models, including LLMs, and evaluate them on a gold standard dataset containing 4,105 excerpts annotated by medical experts for bias from a large corpus. We build on previous work by coauthors which augments the set of negative samples with non-annotated text containing social identifier terms. However, some of these terms, especially those related to race and ethnicity, can carry different meanings (e.g., "white matter of spinal cord"). To address this issue, we propose the use of Word Sense Disambiguation models to refine dataset quality by removing irrelevant sentences. We then evaluate fine-tuned variations of BERT models as well as GPT models with zero- and few-shot prompting. We found that LLMs, considered SOTA on many NLP tasks, are unsuitable for bias detection, while fine-tuned BERT models generally perform well across all evaluated metrics.
Citations: 0
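The filtering step can be approximated with a simplified Lesk-style disambiguator: pick the sense whose gloss shares the most words with the sentence, and keep a sentence as a negative sample only when the ambiguous term is used in its social sense. The glosses and helper names below are illustrative assumptions, not the WSD models evaluated in the paper.

```python
def lesk_sense(context_tokens, senses):
    """Pick the sense whose gloss shares the most words with the context
    (simplified Lesk gloss-overlap heuristic)."""
    ctx = set(context_tokens)
    best, best_overlap = None, -1
    for name, gloss in senses.items():
        overlap = len(ctx & set(gloss.split()))
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best

# Hypothetical mini sense inventory for the ambiguous term "white".
SENSES_WHITE = {
    "race": "person european descent racial social group identity",
    "anatomy": "nerve tissue brain spinal cord matter",
}

def is_relevant_negative(sentence):
    """Keep a sentence as a negative sample only if 'white' appears and is
    used in its social (race) sense rather than an anatomical sense."""
    toks = sentence.lower().split()
    return "white" in toks and lesk_sense(toks, SENSES_WHITE) == "race"
```

Under this sketch, "lesions in the white matter of spinal cord" is filtered out, while a sentence using "white" as a racial identifier is retained.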
Legal Fact Prediction: Task Definition and Dataset Construction
arXiv - CS - Computers and Society · Pub Date: 2024-09-11 · DOI: arxiv-2409.07055
Junkai Liu, Yujie Tong, Hui Huang, Shuyuan Zheng, Muyun Yang, Peicheng Wu, Makoto Onizuka, Chuan Xiao
Abstract: Legal facts refer to the facts that can be proven by acknowledged evidence in a trial; they form the basis for the determination of court judgments. This paper introduces a novel NLP task: legal fact prediction, which aims to predict the legal facts based on a list of evidence. The predicted facts can instruct the parties and their lawyers involved in a trial to strengthen their submissions and optimize their strategies during the trial. Moreover, since real legal facts are difficult to obtain before the final judgment, the predicted facts also serve as an important basis for legal judgment prediction. We construct a benchmark dataset, LFPLoan, consisting of evidence lists and ground-truth legal facts for real civil loan cases. Our experiments on this dataset show that this task is non-trivial and requires considerable further research effort.
Citations: 0