arXiv - CS - Computers and Society: Latest Publications

Advancing Towards a Marine Digital Twin Platform: Modeling the Mar Menor Coastal Lagoon Ecosystem in the South Western Mediterranean
arXiv - CS - Computers and Society Pub Date: 2024-09-16 DOI: arXiv:2409.10134
Yu Ye, Aurora González-Vidal, Alejandro Cisterna-García, Angel Pérez-Ruzafa, Miguel A. Zamora Izquierdo, Antonio F. Skarmeta
{"title":"Advancing Towards a Marine Digital Twin Platform: Modeling the Mar Menor Coastal Lagoon Ecosystem in the South Western Mediterranean","authors":"Yu Ye, Aurora González-Vidal, Alejandro Cisterna-García, Angel Pérez-Ruzafa, Miguel A. Zamora Izquierdo, Antonio F. Skarmeta","doi":"arxiv-2409.10134","DOIUrl":"https://doi.org/arxiv-2409.10134","url":null,"abstract":"Coastal marine ecosystems face mounting pressures from anthropogenic\u0000activities and climate change, necessitating advanced monitoring and modeling\u0000approaches for effective management. This paper pioneers the development of a\u0000Marine Digital Twin Platform aimed at modeling the Mar Menor Coastal Lagoon\u0000Ecosystem in the Region of Murcia. The platform leverages Artificial\u0000Intelligence to emulate complex hydrological and ecological models,\u0000facilitating the simulation of what-if scenarios to predict ecosystem responses\u0000to various stressors. We integrate diverse datasets from public sources to\u0000construct a comprehensive digital representation of the lagoon's dynamics. The\u0000platform's modular design enables real-time stakeholder engagement and informed\u0000decision-making in marine management. Our work contributes to the ongoing\u0000discourse on advancing marine science through innovative digital twin\u0000technologies.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
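The emulation idea can be made concrete with a small sketch: sample an expensive simulator offline, fit a fast surrogate, then query the surrogate for what-if scenarios. This is a generic illustration under invented assumptions; run_lagoon_simulator, the driver variables, and the chlorophyll-a proxy are placeholders, not the platform's actual models or data.

```python
# Minimal surrogate-modeling sketch: a fast emulator trained on samples
# from a slow ecological simulator answers what-if queries cheaply.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def run_lagoon_simulator(nutrient_load, temperature, inflow):
    """Hypothetical stand-in for an expensive hydrological/ecological model.
    Returns a mock chlorophyll-a proxy."""
    return 0.8 * nutrient_load + 0.05 * temperature**2 - 0.3 * inflow

# Build a training set by sampling the simulator offline.
X = rng.uniform([0.0, 15.0, 0.0], [10.0, 30.0, 5.0], size=(500, 3))
y = np.array([run_lagoon_simulator(*x) for x in X])

emulator = GradientBoostingRegressor().fit(X, y)

# What-if scenario: high nutrient load at summer temperatures.
scenario = np.array([[8.0, 28.0, 1.5]])
print("Predicted chlorophyll-a proxy:", emulator.predict(scenario)[0])
```

The same pattern applies to any costly model: sample it once, fit the emulator, and answer stakeholder scenario queries interactively.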
Instigating Cooperation among LLM Agents Using Adaptive Information Modulation
arXiv - CS - Computers and Society Pub Date: 2024-09-16 DOI: arXiv:2409.10372
Qiliang Chen, Sepehr Ilami, Nunzio Lore, Babak Heydari
{"title":"Instigating Cooperation among LLM Agents Using Adaptive Information Modulation","authors":"Qiliang ChenSepehr, AlirezaSepehr, Ilami, Nunzio Lore, Babak Heydari","doi":"arxiv-2409.10372","DOIUrl":"https://doi.org/arxiv-2409.10372","url":null,"abstract":"This paper introduces a novel framework combining LLM agents as proxies for\u0000human strategic behavior with reinforcement learning (RL) to engage these\u0000agents in evolving strategic interactions within team environments. Our\u0000approach extends traditional agent-based simulations by using strategic LLM\u0000agents (SLA) and introducing dynamic and adaptive governance through a\u0000pro-social promoting RL agent (PPA) that modulates information access across\u0000agents in a network, optimizing social welfare and promoting pro-social\u0000behavior. Through validation in iterative games, including the prisoner\u0000dilemma, we demonstrate that SLA agents exhibit nuanced strategic adaptations.\u0000The PPA agent effectively learns to adjust information transparency, resulting\u0000in enhanced cooperation rates. This framework offers significant insights into\u0000AI-mediated social dynamics, contributing to the deployment of AI in real-world\u0000team settings.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"194 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
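A toy sketch of the governance loop, not the paper's SLA/PPA implementation: an epsilon-greedy "pro-social" agent learns which level of information transparency maximizes social welfare in a mock iterated prisoner's dilemma. The payoff table and the agents' response to transparency are invented for illustration.

```python
# Toy governance-by-information-modulation loop: a bandit over
# transparency levels, rewarded with round-level social welfare.
import random

TRANSPARENCY = [0.0, 0.5, 1.0]          # fraction of opponent history revealed
q = {t: 0.0 for t in TRANSPARENCY}      # value estimate per transparency level
n = {t: 0 for t in TRANSPARENCY}

def play_round(t):
    """Mock strategic agents: more visibility raises cooperation odds."""
    p_coop = 0.3 + 0.6 * t
    a = random.random() < p_coop
    b = random.random() < p_coop
    welfare = {(True, True): 6, (True, False): 3,
               (False, True): 3, (False, False): 2}
    return welfare[(a, b)]               # total payoff this round

for _ in range(2000):
    # epsilon-greedy choice of how much information to reveal
    t = random.choice(TRANSPARENCY) if random.random() < 0.1 \
        else max(q, key=q.get)
    r = play_round(t)
    n[t] += 1
    q[t] += (r - q[t]) / n[t]            # incremental mean update

print("learned welfare per transparency level:", q)
```

In the paper the players are LLM agents and the governance agent is a full RL policy; the bandit above only shows the shape of the feedback loop.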
Bias Begets Bias: The Impact of Biased Embeddings on Diffusion Models
arXiv - CS - Computers and Society Pub Date: 2024-09-15 DOI: arXiv:2409.09569
Sahil Kuchlous, Marvin Li, Jeffrey G. Wang
{"title":"Bias Begets Bias: The Impact of Biased Embeddings on Diffusion Models","authors":"Sahil Kuchlous, Marvin Li, Jeffrey G. Wang","doi":"arxiv-2409.09569","DOIUrl":"https://doi.org/arxiv-2409.09569","url":null,"abstract":"With the growing adoption of Text-to-Image (TTI) systems, the social biases\u0000of these models have come under increased scrutiny. Herein we conduct a\u0000systematic investigation of one such source of bias for diffusion models:\u0000embedding spaces. First, because traditional classifier-based fairness\u0000definitions require true labels not present in generative modeling, we propose\u0000statistical group fairness criteria based on a model's internal representation\u0000of the world. Using these definitions, we demonstrate theoretically and\u0000empirically that an unbiased text embedding space for input prompts is a\u0000necessary condition for representationally balanced diffusion models, meaning\u0000the distribution of generated images satisfy diversity requirements with\u0000respect to protected attributes. Next, we investigate the impact of biased\u0000embeddings on evaluating the alignment between generated images and prompts, a\u0000process which is commonly used to assess diffusion models. We find that biased\u0000multimodal embeddings like CLIP can result in lower alignment scores for\u0000representationally balanced TTI models, thus rewarding unfair behavior.\u0000Finally, we develop a theoretical framework through which biases in alignment\u0000evaluation can be studied and propose bias mitigation methods. By specifically\u0000adapting the perspective of embedding spaces, we establish new fairness\u0000conditions for diffusion model development and evaluation.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"118 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
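A numeric sketch of the two quantities in play: (1) a statistical group-balance check over a protected attribute inferred in embedding space, and (2) a CLIP-style alignment score as cosine similarity between prompt and image embeddings. The embeddings below are random mock vectors, not real CLIP outputs, so the numbers only illustrate the bookkeeping.

```python
# Group balance and embedding-based alignment scoring with mock vectors.
import numpy as np

rng = np.random.default_rng(1)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Mock embeddings: one prompt and 100 generated images, each image
# carrying a binary protected-attribute label inferred in embedding space.
prompt_emb = rng.normal(size=512)
image_embs = rng.normal(size=(100, 512))
attr_labels = rng.integers(0, 2, size=100)

# (1) Representational balance: group frequencies vs. a 50/50 target.
freqs = np.bincount(attr_labels, minlength=2) / len(attr_labels)
print("group frequencies:", freqs, "| imbalance:", abs(freqs[0] - 0.5))

# (2) Alignment scores per group: a biased embedding space can score one
# group systematically lower even when images are equally faithful.
scores = np.array([cosine(prompt_emb, e) for e in image_embs])
for g in (0, 1):
    print(f"mean alignment, group {g}: {scores[attr_labels == g].mean():.4f}")
```

The paper's point is that if the second measurement is itself biased, optimizing it penalizes exactly the balanced models the first measurement rewards.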
Using Synthetic Data to Mitigate Unfairness and Preserve Privacy through Single-Shot Federated Learning
arXiv - CS - Computers and Society Pub Date: 2024-09-14 DOI: arXiv:2409.09532
Chia-Yuan Wu, Frank E. Curtis, Daniel P. Robinson
{"title":"Using Synthetic Data to Mitigate Unfairness and Preserve Privacy through Single-Shot Federated Learning","authors":"Chia-Yuan Wu, Frank E. Curtis, Daniel P. Robinson","doi":"arxiv-2409.09532","DOIUrl":"https://doi.org/arxiv-2409.09532","url":null,"abstract":"To address unfairness issues in federated learning (FL), contemporary\u0000approaches typically use frequent model parameter updates and transmissions\u0000between the clients and server. In such a process, client-specific information\u0000(e.g., local dataset size or data-related fairness metrics) must be sent to the\u0000server to compute, e.g., aggregation weights. All of this results in high\u0000transmission costs and the potential leakage of client information. As an\u0000alternative, we propose a strategy that promotes fair predictions across\u0000clients without the need to pass information between the clients and server\u0000iteratively and prevents client data leakage. For each client, we first use\u0000their local dataset to obtain a synthetic dataset by solving a bilevel\u0000optimization problem that addresses unfairness concerns during the learning\u0000process. We then pass each client's synthetic dataset to the server, the\u0000collection of which is used to train the server model using conventional\u0000machine learning techniques (that do not take fairness metrics into account).\u0000Thus, we eliminate the need to handle fairness-specific aggregation weights\u0000while preserving client privacy. Our approach requires only a single\u0000communication between the clients and the server, thus making it\u0000computationally cost-effective, able to maintain privacy, and able to ensuring\u0000fairness. We present empirical evidence to demonstrate the advantages of our\u0000approach. The results illustrate that our method effectively uses synthetic\u0000data as a means to mitigate unfairness and preserve client privacy.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
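A structural sketch of the single-shot pipeline: each client uploads one small synthetic dataset, and the server trains a conventional model on the union. The paper derives its synthetic data from a fairness-aware bilevel optimization; as a simple stand-in, the sketch samples per-class Gaussians fitted to each client's local data, so only the communication pattern matches the paper.

```python
# Single-shot federated learning on client-generated synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_synthetic(X, y, per_class=20):
    """Stand-in for the paper's bilevel-optimized synthesis: sample
    per-class Gaussians fitted to the local data."""
    Xs, ys = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        mu, sigma = Xc.mean(axis=0), Xc.std(axis=0) + 1e-6
        Xs.append(rng.normal(mu, sigma, size=(per_class, X.shape[1])))
        ys.append(np.full(per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

# Mock local datasets for three clients with shifted feature distributions.
clients = [(rng.normal(i, 1.0, size=(200, 5)), rng.integers(0, 2, 200))
           for i in range(3)]

# The single communication round: clients upload only synthetic data.
synthetic = [make_synthetic(X, y) for X, y in clients]
X_server = np.vstack([X for X, _ in synthetic])
y_server = np.concatenate([y for _, y in synthetic])

server_model = LogisticRegression(max_iter=1000).fit(X_server, y_server)
print("server model trained on", len(X_server), "synthetic points")
```

Note that no raw records, dataset sizes, or fairness metrics leave the clients; everything the server sees is synthetic.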
LabellessFace: Fair Metric Learning for Face Recognition without Attribute Labels
arXiv - CS - Computers and Society Pub Date: 2024-09-14 DOI: arXiv:2409.09274
Tetsushi Ohki, Yuya Sato, Masakatsu Nishigaki, Koichi Ito
{"title":"LabellessFace: Fair Metric Learning for Face Recognition without Attribute Labels","authors":"Tetsushi Ohki, Yuya Sato, Masakatsu Nishigaki, Koichi Ito","doi":"arxiv-2409.09274","DOIUrl":"https://doi.org/arxiv-2409.09274","url":null,"abstract":"Demographic bias is one of the major challenges for face recognition systems.\u0000The majority of existing studies on demographic biases are heavily dependent on\u0000specific demographic groups or demographic classifier, making it difficult to\u0000address performance for unrecognised groups. This paper introduces\u0000``LabellessFace'', a novel framework that improves demographic bias in face\u0000recognition without requiring demographic group labeling typically required for\u0000fairness considerations. We propose a novel fairness enhancement metric called\u0000the class favoritism level, which assesses the extent of favoritism towards\u0000specific classes across the dataset. Leveraging this metric, we introduce the\u0000fair class margin penalty, an extension of existing margin-based metric\u0000learning. This method dynamically adjusts learning parameters based on class\u0000favoritism levels, promoting fairness across all attributes. By treating each\u0000class as an individual in facial recognition systems, we facilitate learning\u0000that minimizes biases in authentication accuracy among individuals.\u0000Comprehensive experiments have demonstrated that our proposed method is\u0000effective for enhancing fairness while maintaining authentication accuracy.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"213 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
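A schematic sketch of the mechanism: estimate a per-class favoritism level from how strongly each class already scores against its own prototype, then give favored classes a larger angular margin (harder positives) in an ArcFace-style softmax. The scale value and the mapping from favoritism to margin are illustrative guesses, not the paper's exact formulation.

```python
# ArcFace-style loss with a per-class margin driven by class favoritism.
import numpy as np

rng = np.random.default_rng(3)
n_classes, dim, scale = 10, 64, 16.0

# Random, L2-normalized class prototypes (stand-ins for learned weights).
W = rng.normal(size=(n_classes, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def fair_margin_loss(emb, label, margins):
    """Margin-based softmax loss with a class-specific margin penalty."""
    emb = emb / np.linalg.norm(emb)
    cos = (W @ emb).copy()                          # cosine logits
    theta = np.arccos(np.clip(cos[label], -1.0, 1.0))
    cos[label] = np.cos(theta + margins[label])     # penalize the true class
    logits = scale * cos
    return -(logits[label] - np.log(np.exp(logits).sum()))

# Favoritism level: here, each class's mean cosine similarity to its own
# prototype over a batch; favored classes receive a larger margin.
batch = rng.normal(size=(256, dim))
batch /= np.linalg.norm(batch, axis=1, keepdims=True)
labels = rng.integers(0, n_classes, size=256)
fav = np.array([(batch[labels == c] @ W[c]).mean() if np.any(labels == c)
                else 0.0 for c in range(n_classes)])
margins = 0.3 + 0.2 * (fav - fav.min()) / (np.ptp(fav) + 1e-9)

print("per-class margins:", np.round(margins, 3))
print("sample loss:", fair_margin_loss(batch[0], int(labels[0]), margins))
```

Because the margin is computed per class rather than per demographic group, no attribute labels are needed, which is the framework's central selling point.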
Quantitative Insights into Language Model Usage and Trust in Academia: An Empirical Study
arXiv - CS - Computers and Society Pub Date: 2024-09-13 DOI: arXiv:2409.09186
Minseok Jung, Aurora Zhang, Junho Lee, Paul Pu Liang
{"title":"Quantitative Insights into Language Model Usage and Trust in Academia: An Empirical Study","authors":"Minseok Jung, Aurora Zhang, Junho Lee, Paul Pu Liang","doi":"arxiv-2409.09186","DOIUrl":"https://doi.org/arxiv-2409.09186","url":null,"abstract":"Language models (LMs) are revolutionizing knowledge retrieval and processing\u0000in academia. However, concerns regarding their misuse and erroneous outputs,\u0000such as hallucinations and fabrications, are reasons for distrust in LMs within\u0000academic communities. Consequently, there is a pressing need to deepen the\u0000understanding of how actual practitioners use and trust these models. There is\u0000a notable gap in quantitative evidence regarding the extent of LM usage, user\u0000trust in their outputs, and issues to prioritize for real-world development.\u0000This study addresses these gaps by providing data and analysis of LM usage and\u0000trust. Specifically, our study surveyed 125 individuals at a private school and\u0000secured 88 data points after pre-processing. Through both quantitative analysis\u0000and qualitative evidence, we found a significant variation in trust levels,\u0000which are strongly related to usage time and frequency. Additionally, we\u0000discover through a polling process that fact-checking is the most critical\u0000issue limiting usage. These findings inform several actionable insights:\u0000distrust can be overcome by providing exposure to the models, policies should\u0000be developed that prioritize fact-checking, and user trust can be enhanced by\u0000increasing engagement. By addressing these critical gaps, this research not\u0000only adds to the understanding of user experiences and trust in LMs but also\u0000informs the development of more effective LMs.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
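The reported usage-trust relationship is the kind of association a rank correlation captures. A minimal sketch of that analysis follows, with an invented toy data frame and hypothetical column names standing in for the study's 88 cleaned responses.

```python
# Rank correlation between self-reported usage and trust (mock data).
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "usage_hours_per_week": [0, 2, 5, 10, 1, 7, 3, 14, 0, 6],
    "trust_score":          [1, 2, 3, 5, 2, 4, 3, 5, 1, 4],  # 1-5 Likert
})

rho, p = spearmanr(df["usage_hours_per_week"], df["trust_score"])
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```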
Payments Use Cases and Design Options for Interoperability and Funds Locking across Digital Pounds and Commercial Bank Money
arXiv - CS - Computers and Society Pub Date: 2024-09-13 DOI: arXiv:2409.08653
Lee Braine, Shreepad Shukla, Piyush Agrawal, Shrirang Khedekar, Aishwarya Nair
{"title":"Payments Use Cases and Design Options for Interoperability and Funds Locking across Digital Pounds and Commercial Bank Money","authors":"Lee Braine, Shreepad Shukla, Piyush Agrawal, Shrirang Khedekar, Aishwarya Nair","doi":"arxiv-2409.08653","DOIUrl":"https://doi.org/arxiv-2409.08653","url":null,"abstract":"Central banks are actively exploring retail central bank digital currencies\u0000(CBDCs), with the Bank of England currently in the design phase for a potential\u0000UK retail CBDC, the digital pound. In a previous paper, we defined and explored\u0000the important concept of functional consistency (which is the principle that\u0000different forms of money have the same operational characteristics) and\u0000evaluated design options to support functional consistency across digital\u0000pounds and commercial bank money, based on a set of key capabilities. In this\u0000paper, we continue to analyse the design options for supporting functional\u0000consistency and, in order to perform a detailed analysis, we focus on three key\u0000capabilities: communication between digital pound ecosystem participants, funds\u0000locking, and interoperability across digital pounds and commercial bank money.\u0000We explore these key capabilities via three payments use cases:\u0000person-to-person push payment, merchant-initiated request to pay, and lock\u0000funds and pay on physical delivery. We then present and evaluate the\u0000suitability of design options to provide the specific capabilities for each use\u0000case and draw initial insights. We conclude that a financial market\u0000infrastructure (FMI) providing specific capabilities could simplify the\u0000experience of ecosystem participants, simplify the operating platforms for both\u0000the Bank of England and digital pound Payment Interface Providers (PIPs), and\u0000facilitate the creation of innovative services. We also identify potential next\u0000steps.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
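The "lock funds and pay on physical delivery" use case reduces to a small state machine: funds are earmarked at lock time and released to the merchant only on a delivery confirmation, or returned on cancellation. The toy wallet below illustrates that flow; it is an invented sketch, not the Bank of England's or any PIP's actual design.

```python
# Toy funds-locking state machine for pay-on-delivery.
from dataclasses import dataclass, field

@dataclass
class Wallet:
    balance: int                          # minor units, e.g. pence
    locked: dict = field(default_factory=dict)

    def lock(self, lock_id: str, amount: int):
        """Earmark funds so they cannot be spent elsewhere."""
        if self.balance < amount:
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.locked[lock_id] = amount

    def release_to(self, lock_id: str, payee: "Wallet"):
        """Settle the locked amount to the payee (e.g., on delivery)."""
        payee.balance += self.locked.pop(lock_id)

    def cancel(self, lock_id: str):
        """Return the locked amount to the payer (e.g., order cancelled)."""
        self.balance += self.locked.pop(lock_id)

buyer, merchant = Wallet(10_000), Wallet(0)
buyer.lock("order-42", 2_500)           # funds earmarked at checkout
# ... courier confirms physical delivery ...
buyer.release_to("order-42", merchant)  # settlement on delivery
print(buyer.balance, merchant.balance)  # 7500 2500
```

The interoperability question in the paper is essentially whether such locks behave identically whether the wallet holds digital pounds or commercial bank money.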
A Grading Rubric for AI Safety Frameworks
arXiv - CS - Computers and Society Pub Date: 2024-09-13 DOI: arXiv:2409.08751
Jide Alaga, Jonas Schuett, Markus Anderljung
{"title":"A Grading Rubric for AI Safety Frameworks","authors":"Jide Alaga, Jonas Schuett, Markus Anderljung","doi":"arxiv-2409.08751","DOIUrl":"https://doi.org/arxiv-2409.08751","url":null,"abstract":"Over the past year, artificial intelligence (AI) companies have been\u0000increasingly adopting AI safety frameworks. These frameworks outline how\u0000companies intend to keep the potential risks associated with developing and\u0000deploying frontier AI systems to an acceptable level. Major players like\u0000Anthropic, OpenAI, and Google DeepMind have already published their frameworks,\u0000while another 13 companies have signaled their intent to release similar\u0000frameworks by February 2025. Given their central role in AI companies' efforts\u0000to identify and address unacceptable risks from their systems, AI safety\u0000frameworks warrant significant scrutiny. To enable governments, academia, and\u0000civil society to pass judgment on these frameworks, this paper proposes a\u0000grading rubric. The rubric consists of seven evaluation criteria and 21\u0000indicators that concretize the criteria. Each criterion can be graded on a\u0000scale from A (gold standard) to F (substandard). The paper also suggests three\u0000methods for applying the rubric: surveys, Delphi studies, and audits. The\u0000purpose of the grading rubric is to enable nuanced comparisons between\u0000frameworks, identify potential areas of improvement, and promote a race to the\u0000top in responsible AI development.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
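A minimal sketch of how such a rubric could be represented and aggregated: criteria holding indicator grades on an A-F scale, averaged into a per-criterion score. The criterion and indicator names below are invented placeholders; the paper defines its own seven criteria and 21 indicators.

```python
# Representing and aggregating an A-F rubric (placeholder names).
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "E": 0.5, "F": 0.0}

rubric = {
    "risk_identification": {"covers_misuse": "B", "covers_accidents": "C"},
    "risk_thresholds":     {"thresholds_defined": "A",
                            "thresholds_justified": "D"},
    # ... remaining criteria and indicators elided ...
}

def criterion_grade(indicators: dict) -> float:
    """Average the indicator grades within one criterion."""
    return sum(GRADE_POINTS[g] for g in indicators.values()) / len(indicators)

for criterion, indicators in rubric.items():
    print(f"{criterion}: {criterion_grade(indicators):.2f} / 4.0")
```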
Affective Computing Has Changed: The Foundation Model Disruption
arXiv - CS - Computers and Society Pub Date: 2024-09-13 DOI: arXiv:2409.08907
Björn Schuller, Adria Mallol-Ragolta, Alejandro Peña Almansa, Iosif Tsangko, Mostafa M. Amin, Anastasia Semertzidou, Lukas Christ, Shahin Amiriparian
{"title":"Affective Computing Has Changed: The Foundation Model Disruption","authors":"Björn Schuller, Adria Mallol-Ragolta, Alejandro Peña Almansa, Iosif Tsangko, Mostafa M. Amin, Anastasia Semertzidou, Lukas Christ, Shahin Amiriparian","doi":"arxiv-2409.08907","DOIUrl":"https://doi.org/arxiv-2409.08907","url":null,"abstract":"The dawn of Foundation Models has on the one hand revolutionised a wide range\u0000of research problems, and, on the other hand, democratised the access and use\u0000of AI-based tools by the general public. We even observe an incursion of these\u0000models into disciplines related to human psychology, such as the Affective\u0000Computing domain, suggesting their affective, emerging capabilities. In this\u0000work, we aim to raise awareness of the power of Foundation Models in the field\u0000of Affective Computing by synthetically generating and analysing multimodal\u0000affective data, focusing on vision, linguistics, and speech (acoustics). We\u0000also discuss some fundamental problems, such as ethical issues and regulatory\u0000aspects, related to the use of Foundation Models in this research area.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
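The disruption the authors describe is visible even in a few lines: an off-the-shelf pretrained model labels affective text with no task-specific training. The snippet below covers only the text slice of the multimodal story, and the default checkpoint it downloads is whatever the library currently ships, an assumption rather than the paper's setup.

```python
# Zero-shot(-style) affect analysis with a pretrained foundation model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default checkpoint
print(classifier("I can't believe how well this worked, I'm thrilled!"))
print(classifier("This result is deeply disappointing."))
```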
Mapping Technical Safety Research at AI Companies: A literature review and incentives analysis
arXiv - CS - Computers and Society Pub Date: 2024-09-12 DOI: arXiv:2409.07878
Oscar Delaney, Oliver Guest, Zoe Williams
{"title":"Mapping Technical Safety Research at AI Companies: A literature review and incentives analysis","authors":"Oscar Delaney, Oliver Guest, Zoe Williams","doi":"arxiv-2409.07878","DOIUrl":"https://doi.org/arxiv-2409.07878","url":null,"abstract":"As artificial intelligence (AI) systems become more advanced, concerns about\u0000large-scale risks from misuse or accidents have grown. This report analyzes the\u0000technical research into safe AI development being conducted by three leading AI\u0000companies: Anthropic, Google DeepMind, and OpenAI. We define safe AI development as developing AI systems that are unlikely to\u0000pose large-scale misuse or accident risks. This encompasses a range of\u0000technical approaches aimed at ensuring AI systems behave as intended and do not\u0000cause unintended harm, even as they are made more capable and autonomous. We analyzed all papers published by the three companies from January 2022 to\u0000July 2024 that were relevant to safe AI development, and categorized the 61\u0000included papers into eight safety approaches. Additionally, we noted three\u0000categories representing nascent approaches explored by academia and civil\u0000society, but not currently represented in any papers by the three companies.\u0000Our analysis reveals where corporate attention is concentrated and where\u0000potential gaps lie. Some AI research may stay unpublished for good reasons, such as to not inform\u0000adversaries about security techniques they would need to overcome to misuse AI\u0000systems. Therefore, we also considered the incentives that AI companies have to\u0000research each approach. In particular, we considered reputational effects,\u0000regulatory burdens, and whether the approaches could make AI systems more\u0000useful. We identified three categories where there are currently no or few papers and\u0000where we do not expect AI companies to become more incentivized to pursue this\u0000research in the future. These are multi-agent safety, model organisms of\u0000misalignment, and safety by design. Our findings provide an indication that\u0000these approaches may be slow to progress without funding or efforts from\u0000government, civil society, philanthropists, or academia.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142184052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0