arXiv - CS - Operating Systems: Latest Publications

Configuration Validation with Large Language Models
arXiv - CS - Operating Systems Pub Date : 2023-10-15 DOI: arxiv-2310.09690
Xinyu Lian, Yinfang Chen, Runxiang Cheng, Jie Huang, Parth Thakkar, Tianyin Xu
Misconfigurations are a major cause of software failures. Existing configuration validation techniques rely on manually written rules or test cases, which are expensive to implement and maintain and hard to make comprehensive. Leveraging machine learning (ML) and natural language processing (NLP) for configuration validation is considered a promising direction, but it faces challenges such as the need for large-scale configuration data as well as system-specific features and models that are hard to generalize. Recent advances in Large Language Models (LLMs) show promise for addressing some of the long-standing limitations of ML/NLP-based configuration validation techniques. In this paper, we present an exploratory analysis of the feasibility and effectiveness of using LLMs such as GPT and Codex for configuration validation. Specifically, we take a first step toward empirically evaluating LLMs as configuration validators without additional fine-tuning or code generation. We develop a generic LLM-based validation framework, named Ciri, which integrates different LLMs. Ciri devises effective prompt engineering with few-shot learning based on both valid configuration and misconfiguration data. Ciri also validates and aggregates the outputs of LLMs to generate validation results, coping with the known hallucination and nondeterminism of LLMs. We evaluate the validation effectiveness of Ciri on five popular LLMs using configuration data from six mature, widely deployed open-source systems. Our analysis (1) confirms the potential of using LLMs for configuration validation, (2) explores the design space of LLM-based validators like Ciri, especially with respect to prompt engineering with few-shot learning, and (3) reveals open challenges, such as ineffectiveness in detecting certain types of misconfigurations and bias toward popular configuration parameters.
Citations: 0
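The few-shot prompting and output aggregation that the abstract describes can be sketched as follows. This is a minimal illustration, not Ciri's actual implementation: the prompt template, the `query_llm` callback, and the three-vote majority are all assumptions.

```python
# Sketch of few-shot configuration validation with output aggregation,
# in the spirit of Ciri (prompt wording and function names are
# illustrative assumptions, not the paper's actual code).

def build_prompt(shots, candidate):
    """Assemble a few-shot prompt from (config, verdict) examples."""
    lines = ["Decide whether each configuration value is valid or invalid.", ""]
    for cfg, verdict in shots:
        lines.append(f"Config: {cfg}")
        lines.append(f"Verdict: {verdict}")
        lines.append("")
    lines.append(f"Config: {candidate}")
    lines.append("Verdict:")
    return "\n".join(lines)

def validate(candidate, shots, query_llm, votes=3):
    """Query the (nondeterministic) LLM several times and majority-vote,
    discarding replies that are neither 'valid' nor 'invalid' to cope
    with hallucinated output."""
    prompt = build_prompt(shots, candidate)
    tally = {"valid": 0, "invalid": 0}
    for _ in range(votes):
        answer = query_llm(prompt).strip().lower()
        if answer in tally:
            tally[answer] += 1
    return max(tally, key=tally.get)
```

A real deployment would pass a `query_llm` that calls an actual model API; a stub suffices to show the aggregation behavior.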
Taking the Shortcut: Actively Incorporating the Virtual Memory Index of the OS to Hardware-Accelerate Database Indexing
arXiv - CS - Operating Systems Pub Date : 2023-10-13 DOI: arxiv-2310.09124
Felix Schuhknecht
Index structures often materialize one or more levels of explicit indirections (i.e., pointers) to allow quick traversal to the data of interest. Unfortunately, dereferencing a pointer to go from one level to the next is costly: in addition to following the address, it involves two address translations from virtual to physical memory under the hood. In the worst case, such an address translation is itself resolved by an index access, namely a lookup in the page table, a central hardware-accelerated index structure of the OS. If the page table is constantly queried anyway, this raises the question of whether we can actively incorporate it into our database indexes and make it work for us. Precisely, instead of materializing indirections in the form of pointers, we propose to express these indirections directly in the page table wherever possible. By introducing such shortcuts, we (a) effectively reduce the height of the traversal during lookups and (b) exploit the hardware-accelerated lookups in the page table. In this work, we analyze the strengths and considerations of this approach and showcase its effectiveness on extendible hashing, a real-world indexing scheme.
Citations: 0
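The case-study indexing scheme, extendible hashing, maps the low bits of a key's hash through a directory of 2^global_depth slots to buckets; an overflowing bucket splits, and the directory doubles only when the splitting bucket is at full depth. A minimal sketch follows, with ordinary Python references standing in for the directory-to-bucket indirection that the paper proposes to express via the page table:

```python
# Minimal extendible hashing: directory slots share Bucket objects;
# the directory->bucket indirection here uses plain references, whereas
# the paper realizes it through page-table shortcuts (not shown).

class Bucket:
    def __init__(self, local_depth, capacity=2):
        self.local_depth = local_depth
        self.capacity = capacity
        self.items = {}

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 1
        self.directory = [Bucket(1), Bucket(1)]

    def _slot(self, key):
        # Use the low global_depth bits of the hash as the directory index.
        return hash(key) & ((1 << self.global_depth) - 1)

    def get(self, key):
        return self.directory[self._slot(key)].items.get(key)

    def put(self, key, value):
        b = self.directory[self._slot(key)]
        if key in b.items or len(b.items) < b.capacity:
            b.items[key] = value
            return
        self._split(b)
        self.put(key, value)            # retry after the split

    def _split(self, b):
        if b.local_depth == self.global_depth:
            self.directory += self.directory   # double the directory
            self.global_depth += 1
        b.local_depth += 1
        sibling = Bucket(b.local_depth, b.capacity)
        # Slots pointing to b whose newly significant bit is 1 move to sibling.
        for i in range(len(self.directory)):
            if self.directory[i] is b and (i >> (b.local_depth - 1)) & 1:
                self.directory[i] = sibling
        for key in list(b.items):              # redistribute the items
            if self.directory[self._slot(key)] is sibling:
                sibling.items[key] = b.items.pop(key)
```

Note how a lookup is two dereferences (directory slot, then bucket); replacing the first with a page-table-resolved shortcut is exactly the kind of height reduction the abstract describes.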
Towards a debuggable kernel design
arXiv - CS - Operating Systems Pub Date : 2023-10-09 DOI: arxiv-2310.05399
Chandrika Parimoo, Ashish Gupta
This paper describes what it means for a kernel to be debuggable and proposes a kernel design with debuggability in mind. We evaluate the proposed design by comparing the number of cyclic-debugging iterations required for different classes of bugs in a vanilla monolithic kernel against a variant enhanced with our design rules for debuggability. We discuss the trade-offs involved in designing a debuggable kernel.
Citations: 0
Prompt-to-OS (P2OS): Revolutionizing Operating Systems and Human-Computer Interaction with Integrated AI Generative Models
arXiv - CS - Operating Systems Pub Date : 2023-10-07 DOI: arxiv-2310.04875
Gabriele Tolomei, Cesare Campagnano, Fabrizio Silvestri, Giovanni Trappolini
In this paper, we present a new paradigm for human-computer interaction that rethinks the traditional notion of an operating system. Within this framework, user requests issued to the machine are handled by an interconnected ecosystem of generative AI models that seamlessly integrate with, or even replace, traditional software applications. At the core of this paradigm shift are large generative models, such as language and diffusion models, which serve as the central interface between users and computers. This approach leverages the abilities of advanced language models, empowering users to engage in natural-language conversations with their computing devices. Users can articulate their intentions, tasks, and inquiries directly to the system, eliminating the need for explicit commands or complex navigation. The language model comprehends and interprets the user's prompts, generating and displaying contextual and meaningful responses that facilitate seamless and intuitive interactions. This paradigm shift not only streamlines user interactions but also opens up new possibilities for personalized experiences. Generative models can adapt to individual preferences, learning from user input and continuously improving their understanding and response generation. It also enables enhanced accessibility, as users can interact with the system using speech or text, accommodating diverse communication preferences. However, this visionary concept raises significant challenges, including privacy, security, trustworthiness, and the ethical use of generative models. Robust safeguards must be in place to protect user data and prevent potential misuse or manipulation of the language model. While the full realization of this paradigm is still far from being achieved, this paper serves as a starting point for envisioning its transformative potential.
Citations: 0
Victima: Drastically Increasing Address Translation Reach by Leveraging Underutilized Cache Resources
arXiv - CS - Operating Systems Pub Date : 2023-10-06 DOI: arxiv-2310.04158
Konstantinos Kanellopoulos, Hong Chul Nam, F. Nisa Bostanci, Rahul Bera, Mohammad Sadrosadati, Rakesh Kumar, Davide-Basilio Bartolini, Onur Mutlu
Address translation is a performance bottleneck in data-intensive workloads due to large datasets and irregular access patterns that lead to frequent high-latency page table walks (PTWs). PTWs can be reduced by using (i) large hardware TLBs or (ii) large software-managed TLBs. Unfortunately, both solutions have significant drawbacks: increased access latency, power, and area (for hardware TLBs), and costly memory accesses, the need for large contiguous memory blocks, and complex OS modifications (for software-managed TLBs). We present Victima, a new software-transparent mechanism that drastically increases the translation reach of the processor by leveraging the underutilized resources of the cache hierarchy. The key idea of Victima is to repurpose L2 cache blocks to store clusters of TLB entries, thereby providing an additional low-latency, high-capacity component that backs up the last-level TLB and thus reduces PTWs. Victima has two main components. First, a PTW cost predictor (PTW-CP) identifies costly-to-translate addresses based on the frequency and cost of the PTWs they lead to. Second, a TLB-aware cache replacement policy prioritizes keeping TLB entries in the cache hierarchy by considering (i) the translation pressure (e.g., the last-level TLB miss rate) and (ii) the reuse characteristics of the TLB entries. Our evaluation shows that in native (virtualized) execution environments, Victima improves average end-to-end application performance by 7.4% (28.7%) over the baseline four-level radix-tree-based page table design and by 6.2% (20.1%) over a state-of-the-art software-managed TLB, across 11 diverse data-intensive workloads. Victima (i) is effective in both native and virtualized environments, (ii) is completely transparent to application and system software, and (iii) incurs very small area and power overheads on a modern high-end CPU.
Citations: 0
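The TLB-aware replacement idea can be illustrated with a toy model of a single cache set: when last-level TLB pressure is high, blocks holding clusters of TLB entries are skipped when choosing an eviction victim. The miss-rate threshold and the all-or-nothing protection below are illustrative assumptions; the paper's actual policy also weighs the reuse characteristics of the TLB entries.

```python
# Toy model of one cache set under a TLB-aware replacement policy,
# in the spirit of Victima (threshold and scoring are assumptions).
from collections import OrderedDict

class TLBAwareSet:
    def __init__(self, ways, tlb_miss_threshold=0.05):
        self.ways = ways
        self.blocks = OrderedDict()            # tag -> is_tlb_block, LRU order
        self.tlb_miss_threshold = tlb_miss_threshold

    def victim(self, tlb_miss_rate):
        """Pick the oldest block, but skip TLB-entry blocks while
        translation pressure is high (falling back to plain LRU if
        every block in the set holds TLB entries)."""
        protect = tlb_miss_rate >= self.tlb_miss_threshold
        for tag, is_tlb in self.blocks.items():          # oldest first
            if not (protect and is_tlb):
                return tag
        return next(iter(self.blocks))

    def access(self, tag, is_tlb_block, tlb_miss_rate):
        """Touch a block; on a miss in a full set, evict and return the
        victim's tag, otherwise return None."""
        if tag in self.blocks:
            self.blocks.move_to_end(tag)                 # hit: refresh LRU
            return None
        evicted = None
        if len(self.blocks) >= self.ways:
            evicted = self.victim(tlb_miss_rate)
            del self.blocks[evicted]
        self.blocks[tag] = is_tlb_block
        return evicted
```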
Motivating Next-Generation OS Physical Memory Management for Terabyte-Scale NVMMs
arXiv - CS - Operating Systems Pub Date : 2023-10-05 DOI: arxiv-2310.03370
Shivank Garg, Aravinda Prasad, Debadatta Mishra, Sreenivas Subramoney
Software-managed, byte-addressable hybrid memory systems consisting of DRAM and NVMM offer great flexibility for designing efficient large-scale data processing applications. Operating systems (OSes) play an important role in enabling applications to realize the combined benefits of DRAM's low access latency and NVMM's large capacity and persistence. In this paper, we comprehensively analyze the performance of conventional OS physical memory management subsystems, which were designed around DRAM characteristics alone, in the context of modern hybrid byte-addressable memory systems. To study the impact of NVMM's high access latency and large capacity on physical memory management, we perform an extensive evaluation on Linux with Intel's Optane NVMM. We observe that core memory management functionalities such as page allocation are negatively impacted by high NVMM media latency, while functionalities such as conventional fragmentation management are rendered inadequate. We also demonstrate that certain traditional memory management functionalities are affected by neither aspect of modern NVMMs. We conclusively motivate the need to overhaul fundamental aspects of traditional OS physical memory management in order to fully exploit terabyte-scale NVMMs.
Citations: 0
Co-Optimizing Cache Partitioning and Multi-Core Task Scheduling: Exploit Cache Sensitivity or Not?
arXiv - CS - Operating Systems Pub Date : 2023-10-04 DOI: arxiv-2310.02959
Binqi Sun, Debayan Roy, Tomasz Kloda, Andrea Bastoni, Rodolfo Pellizzoni, Marco Caccamo
Cache partitioning techniques have been successfully adopted to mitigate interference among concurrently executing real-time tasks on multi-core processors. Given that the execution time of a cache-sensitive task strongly depends on the cache available to it, co-optimizing cache partitioning and task allocation improves the system's schedulability. In this paper, we propose a hybrid multi-layer design space exploration technique to solve this multi-resource management problem. We explore the interplay between cache partitioning and schedulability by systematically interleaving three optimization layers: (i) in the outer layer, we perform a breadth-first search combined with proactive pruning for cache partitioning; (ii) in the middle layer, we exploit a first-fit heuristic for allocating tasks to cores; and (iii) in the inner layer, we use the well-known recurrence relation for the schedulability analysis of non-preemptive fixed-priority (NP-FP) tasks in a uniprocessor setting. Although our focus is on NP-FP scheduling, we evaluate the flexibility of our framework in supporting different scheduling policies (NP-EDF, P-EDF) by plugging appropriate analysis methods into the inner layer. Experiments show that, compared to state-of-the-art techniques, the proposed framework can improve the real-time schedulability of NP-FP task sets by an average of 15.2%, with a maximum improvement of 233.6% (when tasks are highly cache-sensitive) and a minimum of 1.6% (when cache sensitivity is low). For such task sets, we found that clustering similar-period (or mutually compatible) tasks often leads to higher schedulability (on average 7.6%) than clustering by cache sensitivity. In our evaluation, the framework also achieves good results for preemptive and dynamic-priority scheduling policies.
Citations: 0
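The "well-known recurrence relation" of the inner layer is worst-case response-time analysis: iterate R = B + C_i + Σ over higher-priority tasks j of ⌈R/T_j⌉·C_j to a fixed point. The sketch below shows the basic fixed-priority recurrence with a pluggable blocking term; the full NP-FP analysis in the paper additionally models blocking from an already-started lower-priority job and busy-period refinements, which are omitted here.

```python
# Worst-case response-time analysis by fixed-point iteration
# (simplified fixed-priority form; the paper's NP-FP analysis adds
# non-preemptive blocking and busy-period refinements).
import math

def response_time(tasks, i, blocking=0):
    """Response time of task i. tasks: list of (C, T) pairs sorted by
    decreasing priority, with implicit deadlines D_i = T_i. Returns
    None if the task misses its deadline."""
    C_i, T_i = tasks[i]
    R = blocking + C_i
    while R <= T_i:
        interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        nxt = blocking + C_i + interference
        if nxt == R:                      # fixed point reached
            return R
        R = nxt
    return None                           # response time exceeds the period

def schedulable(tasks):
    return all(response_time(tasks, i) is not None for i in range(len(tasks)))
```

For example, for the set {(C=1, T=4), (C=2, T=6), (C=3, T=12)} the lowest-priority task converges to R = 10 ≤ 12, so the set is schedulable; the inner layer runs such a test for every candidate partition/allocation produced by the outer two layers.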
Persistent Memory File Systems: A Survey
arXiv - CS - Operating Systems Pub Date : 2023-10-04 DOI: arxiv-2310.02880
Wiebe van Breukelen, Animesh Trivedi
Persistent Memory (PM) is non-volatile, byte-addressable memory that offers read and write latencies an order of magnitude smaller than those of flash storage such as SSDs. This survey discusses how file systems address the most prominent challenges in the implementation of file systems for Persistent Memory. First, we discuss how the properties of Persistent Memory change file system design. Second, we discuss work that aims to optimize small-file I/O and the associated metadata resolution. Third, we address how existing Persistent Memory file systems achieve (meta)data persistence and consistency.
Citations: 0
Case Study: Securing Embedded Linux Using CHERI
arXiv - CS - Operating Systems Pub Date : 2023-10-02 DOI: arxiv-2310.00933
Hesham Almatary
The current embedded Linux variant lacks security, as it does not have or use MMU support. It also does not use MPUs, as they do not fit its software model because of their design drawbacks (i.e., coarse-grained protection with a fixed number of protected regions). We secure the existing embedded Linux version of the RISC-V port using CHERI. CHERI is a hardware-software capability-based system that leverages the ISA, toolchain, programming languages, operating systems, and applications in order to provide complete pointer and memory safety. We believe that CHERI could provide significant security guarantees for high-end dynamic embedded systems at lower cost than MMUs and MPUs by: 1) building the entire software stack in pure-capability CHERI C mode, which provides complete spatial memory safety at the kernel and user level; 2) isolating user programs as separate ELFs, each with its own CHERI-based capability table, which provides spatial memory safety similar to what an MMU offers (i.e., user programs cannot access each other's memory); 3) isolating user programs from the kernel, as the kernel has its own capability table separate from the users' and vice versa; and 4) compartmentalising kernel modules using CompartOS' linkage-based compartmentalisation. This opens a new security front that is not possible with current MMU-based Linux, where vulnerable or malicious kernel modules (e.g., device drivers) executing in kernel space could otherwise compromise or take down the entire system. These are the four main contributions of this paper, presenting novel CHERI-based mechanisms to secure embedded Linux.
Citations: 0
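The pointer-safety guarantees described above, bounds- and permission-checked dereferences plus monotonically shrinkable capabilities, can be modeled conceptually in a few lines. This is only an executable illustration of the semantics; real CHERI enforces them in the ISA with tagged hardware capabilities, and the names below are invented for the sketch.

```python
# Conceptual model of a CHERI capability: a pointer that carries base,
# length, and permissions, with every dereference checked and
# derivation restricted to be monotonic (a capability can only shrink).

class CapabilityFault(Exception):
    pass

class Capability:
    def __init__(self, memory, base, length, perms=frozenset({"load", "store"})):
        self.memory, self.base, self.length, self.perms = memory, base, length, perms

    def _check(self, offset, perm):
        if perm not in self.perms:
            raise CapabilityFault(f"missing {perm} permission")
        if not 0 <= offset < self.length:
            raise CapabilityFault(f"offset {offset} out of bounds")

    def load(self, offset):
        self._check(offset, "load")
        return self.memory[self.base + offset]

    def store(self, offset, value):
        self._check(offset, "store")
        self.memory[self.base + offset] = value

    def restrict(self, base_off, length, perms):
        """Derive a narrower capability; widening bounds or adding
        permissions is rejected, mirroring CHERI's monotonicity."""
        if base_off + length > self.length or not perms <= self.perms:
            raise CapabilityFault("cannot widen a capability")
        return Capability(self.memory, self.base + base_off, length, perms)
```

Isolation as in contributions 2) and 3) then amounts to handing each program only capabilities derived, via `restrict`-like monotonic operations, from its own memory regions.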
The First Principles of Big Memory Systems
arXiv - CS - Operating Systems Pub Date : 2023-09-30 DOI: arxiv-2310.00428
Yu Hua
In this paper, we comprehensively analyze the vertical and horizontal extensions of the existing memory hierarchy and examine the differences between memory and big memory. We present state-of-the-art studies of big memory systems, together with design methodologies and implementations. Persistence is the first principle of big memory systems; we further discuss full-stack and moving persistence.
Citations: 0