arXiv - CS - Operating Systems: Latest Publications

Skip TLB flushes for reused pages within mmap's
arXiv - CS - Operating Systems Pub Date : 2024-09-17 DOI: arxiv-2409.10946
Frederic Schimmelpfennig, André Brinkmann, Hossein Asadi, Reza Salkhordeh
Memory access efficiency is significantly enhanced by caching recent address translations in the CPUs' Translation Lookaside Buffers (TLBs). However, since the operating system is not aware of which core is using a particular mapping, it flushes TLB entries across all cores where the application runs whenever addresses are unmapped, ensuring security and consistency. These TLB flushes, known as TLB shootdowns, are costly and create a performance and scalability bottleneck. A key contributor to TLB shootdowns is memory-mapped I/O, particularly during mmap-munmap cycles and page cache evictions. Often, the same physical pages are reassigned to the same process post-eviction, presenting an opportunity for the operating system to reduce the frequency of TLB shootdowns. We demonstrate that by slightly extending the mmap function, TLB shootdowns for these "recycled pages" can be avoided.

We therefore introduce and implement the "fast page recycling" (FPR) feature within the mmap system call. FPR-mmaps maintain security by only triggering TLB shootdowns when a page exits its recycling cycle and is allocated to a different process. To ensure consistency when FPR-mmap pointers are used, we made minor adjustments to virtual memory management to avoid the ABA problem. Unlike previous methods to mitigate shootdown effects, our approach does not require any hardware modifications and operates transparently within the existing Linux virtual memory framework.

Our evaluations across a variety of CPU, memory, and storage setups, including persistent memory and Optane SSDs, demonstrate that FPR delivers notable performance gains, with improvements of up to 28% in real-world applications and 92% in micro-benchmarks. Additionally, we show that TLB shootdowns are a significant source of bottlenecks, previously misattributed to other components of the Linux kernel.
Citations: 0
Analysis of Synchronization Mechanisms in Operating Systems
arXiv - CS - Operating Systems Pub Date : 2024-09-17 DOI: arxiv-2409.11271
Oluwatoyin Kode, Temitope Oyemade
This research analyzed the performance and consistency of four synchronization mechanisms (reentrant locks, semaphores, synchronized methods, and synchronized blocks) across three operating systems: macOS, Windows, and Linux. Synchronization ensures that concurrent processes or threads access shared resources safely, and efficient synchronization is vital for maintaining system performance and reliability. The study aimed to identify the synchronization mechanism that best balances efficiency, measured by execution time, and consistency, assessed by variance and standard deviation, across platforms. The initial hypothesis proposed that mutex-based mechanisms, specifically synchronized methods and blocks, would be the most efficient due to their simplicity. However, empirical results showed that reentrant locks had the lowest average execution time (14.67 ms), making them the most efficient mechanism, but with the highest variability (standard deviation of 1.15). In contrast, synchronized methods, blocks, and semaphores exhibited higher average execution times (16.33 ms for methods and 16.67 ms for blocks) but greater consistency (variance of 0.33). The findings indicated that while reentrant locks were faster, they were more platform-dependent, whereas mutex-based mechanisms provided more predictable performance across all operating systems. The use of virtual machines for Windows and Linux was a limitation, potentially affecting the results.

Future research should include native testing and explore additional synchronization mechanisms and higher concurrency levels. These insights help developers and system designers optimize synchronization strategies for either performance or stability, depending on the application's requirements.
Citations: 0
eBPF-mm: Userspace-guided memory management in Linux with eBPF
arXiv - CS - Operating Systems Pub Date : 2024-09-17 DOI: arxiv-2409.11220
Konstantinos Mores, Stratos Psomadakis, Georgios Goumas
We leverage eBPF in order to implement custom policies in the Linux memory subsystem. Inspired by CBMM, we create a mechanism that provides the kernel with hints regarding the benefit of promoting a page to a specific size. We introduce a new hook point in the Linux page fault handling path for eBPF programs, providing them the necessary context to determine the page size to be used. We then develop a framework that allows users to define profiles for their applications and load them into the kernel. A profile consists of memory regions of interest and their expected benefit from being backed by 4KB, 64KB, and 2MB pages. In our evaluation, we profiled our workloads to identify hot memory regions using DAMON.
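The profile structure the abstract describes (regions of interest plus an expected benefit per page size) might look like the sketch below; the field names and selection logic are illustrative, since the abstract does not give the actual interface:

```c
// Hypothetical user-defined profile entry: a virtual address range and an
// expected benefit score for each candidate backing size. A hook in the
// page fault path could consult it to pick the promotion size.
#include <stdint.h>

struct region_profile {
    uint64_t start, end;                     // virtual address range
    int benefit_4k, benefit_64k, benefit_2m; // expected benefit per size
};

enum page_size { SZ_4K, SZ_64K, SZ_2M };

// Pick the page size with the highest expected benefit for the faulting
// address; fall back to base 4KB pages outside any profiled region.
enum page_size pick_size(const struct region_profile *p, int n, uint64_t addr) {
    for (int i = 0; i < n; i++) {
        if (addr >= p[i].start && addr < p[i].end) {
            if (p[i].benefit_2m >= p[i].benefit_64k &&
                p[i].benefit_2m >= p[i].benefit_4k)
                return SZ_2M;
            return p[i].benefit_64k >= p[i].benefit_4k ? SZ_64K : SZ_4K;
        }
    }
    return SZ_4K;
}
```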
Citations: 0
BULKHEAD: Secure, Scalable, and Efficient Kernel Compartmentalization with PKS
arXiv - CS - Operating Systems Pub Date : 2024-09-15 DOI: arxiv-2409.09606
Yinggang Guo, Zicheng Wang, Weiheng Bai, Qingkai Zeng, Kangjie Lu
The endless stream of vulnerabilities urgently calls for principled mitigation to confine the effect of exploitation. However, the monolithic architecture of commodity OS kernels, like the Linux kernel, allows an attacker to compromise the entire system by exploiting a vulnerability in any kernel component. Kernel compartmentalization is a promising approach that follows the least-privilege principle. However, existing mechanisms struggle with the trade-off among security, scalability, and performance, given the challenges stemming from mutual untrustworthiness among numerous and complex components.

In this paper, we present BULKHEAD, a secure, scalable, and efficient kernel compartmentalization technique that offers bi-directional isolation for unlimited compartments. It leverages Intel's new hardware feature PKS to isolate data and code into mutually untrusted compartments and benefits from its fast compartment switching. With untrust in mind, BULKHEAD introduces a lightweight in-kernel monitor that enforces multiple important security invariants, including data integrity, execute-only memory, and compartment interface integrity. In addition, it provides a locality-aware two-level scheme that scales to unlimited compartments. We implement a prototype system on Linux v6.1 to compartmentalize loadable kernel modules (LKMs). Extensive evaluation confirms the effectiveness of our approach. System-wide, BULKHEAD incurs an average performance overhead of 2.44% for real-world applications with 160 compartmentalized LKMs. Focusing on a specific compartment, ApacheBench tests on ipv6 show an overhead of less than 2%. Moreover, performance is almost unaffected by the number of compartments, which makes the approach highly scalable.
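PKS tags each supervisor page with one of 16 protection keys and holds two disable bits per key (access-disable, write-disable) in a per-CPU register, so a compartment switch is a single register write. The sketch below models that bit layout in plain C; the policy function (grant access only to the current compartment's key) is an invented illustration, not BULKHEAD's monitor:

```c
// Model of the PKS permission register: 2 bits per key, 16 keys.
// AD = access-disable, WD = write-disable.
#include <stdint.h>

#define AD(k) (1u << ((k) * 2))
#define WD(k) (1u << ((k) * 2 + 1))

// Build a register value granting full access only to compartment `self`,
// denying access to every other key (bi-directional isolation in miniature).
uint32_t pkrs_for(int self) {
    uint32_t v = 0;
    for (int k = 0; k < 16; k++)
        if (k != self)
            v |= AD(k);
    return v;
}

int can_read(uint32_t pkrs, int key)  { return !(pkrs & AD(key)); }
int can_write(uint32_t pkrs, int key) { return !(pkrs & (AD(key) | WD(key))); }
```

Switching compartments then amounts to loading a different precomputed value, which is why PKS-based switching is fast compared to page-table surgery.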
Citations: 0
Rethinking Programmed I/O for Fast Devices, Cheap Cores, and Coherent Interconnects
arXiv - CS - Operating Systems Pub Date : 2024-09-12 DOI: arxiv-2409.08141
Anastasiia Ruzhanskaia, Pengcheng Xu, David Cock, Timothy Roscoe
Conventional wisdom holds that an efficient interface between an OS running on a CPU and a high-bandwidth I/O device should be based on Direct Memory Access (DMA), descriptor rings, and interrupts: DMA offloads transfers from the CPU, descriptor rings provide buffering and queuing, and interrupts facilitate asynchronous interaction between cores and the device with a lightweight notification mechanism. In this paper we question this wisdom in the light of modern hardware and workloads, particularly in cloud servers. We argue that the assumptions that led to this model are obsolete, and that in many use cases programmed I/O, where the CPU explicitly transfers data and control information to and from a device via loads and stores, actually results in a more efficient system. We quantitatively demonstrate these advantages using three use cases: fine-grained RPC-style invocation of functions on an accelerator, offloading of operators in a streaming dataflow engine, and a network interface targeting serverless functions. Moreover, we show that while these advantages are significant over a modern PCIe peripheral bus, a truly cache-coherent interconnect offers significant additional efficiency gains.
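Programmed I/O, as contrasted with DMA above, is simply the CPU moving data itself with ordinary stores into a mapped device window. The sketch below simulates the device window with a plain buffer; on real hardware it would be an uncached or write-combined MMIO mapping:

```c
// Programmed I/O in a nutshell: no DMA engine, no descriptor ring, no
// interrupt. The CPU issues stores directly into the device window.
#include <stddef.h>
#include <stdint.h>

void pio_write(volatile uint64_t *dev_window, const uint64_t *src, size_t words) {
    for (size_t i = 0; i < words; i++)
        dev_window[i] = src[i];   // each store is part of the PIO transfer
    // On a cache-coherent interconnect the device can instead pull whole
    // cache lines, avoiding PCIe's posted-write path entirely.
}
```

The trade-off the paper quantifies: these loads and stores burn CPU cycles, but for small, latency-sensitive transfers on cheap cores they beat the fixed costs of descriptor-ring setup and interrupt delivery.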
Citations: 0
SafeBPF: Hardware-assisted Defense-in-depth for eBPF Kernel Extensions
arXiv - CS - Operating Systems Pub Date : 2024-09-11 DOI: arxiv-2409.07508
Soo Yee Lim, Tanya Prasad, Xueyuan Han, Thomas Pasquier
The eBPF framework enables execution of user-provided code in the Linux kernel. In the last few years, a large ecosystem of cloud services has leveraged eBPF to enhance container security, system observability, and network management. Meanwhile, incessant discoveries of memory safety vulnerabilities have left the systems community with no choice but to disallow unprivileged eBPF programs, which unfortunately limits eBPF use to only privileged users. To improve run-time safety of the framework, we introduce SafeBPF, a general design that isolates eBPF programs from the rest of the kernel to prevent memory safety vulnerabilities from being exploited. We present a pure software implementation using a Software-based Fault Isolation (SFI) approach and a hardware-assisted implementation that leverages ARM's Memory Tagging Extension (MTE). We show that SafeBPF incurs up to 4% overhead on macrobenchmarks while achieving desired security properties.
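The core idea behind SFI, the software approach named above, is to mask every data address into a power-of-two sandbox region before use, so even a corrupted pointer cannot escape the sandbox. This is a sketch of the general technique, not SafeBPF's actual instrumentation:

```c
// SFI address clamping: `size` must be a power of two; the mask forces any
// pointer value into [base, base + size), eliminating out-of-sandbox
// accesses regardless of how `ptr` was computed.
#include <stdint.h>

static inline uintptr_t sfi_clamp(uintptr_t base, uintptr_t size, uintptr_t ptr) {
    return base + (ptr & (size - 1));
}
```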
Citations: 0
The HitchHiker's Guide to High-Assurance System Observability Protection with Efficient Permission Switches
arXiv - CS - Operating Systems Pub Date : 2024-09-06 DOI: arxiv-2409.04484
Chuqi Zhang, Jun Zeng, Yiming Zhang, Adil Ahmad, Fengwei Zhang, Hai Jin, Zhenkai Liang
Protecting system observability records (logs) from compromised OSs has gained significant traction in recent times, with several noteworthy approaches proposed. Unfortunately, none of the proposed approaches achieve high performance with tiny log protection delays. They also leverage risky environments for protection (e.g., many use general-purpose hypervisors or TrustZone, which have large TCBs and attack surfaces). HitchHiker is an attempt to rectify this problem. The system is designed to ensure (a) in-memory protection of batched logs within a short and configurable real-time deadline by efficient hardware permission switching, and (b) an end-to-end high-assurance environment built upon hardware protection primitives with debloating strategies for secure log protection, persistence, and management. Security evaluations and validations show that HitchHiker reduces log protection delay by 93.3% to 99.3% compared to the state of the art, while reducing TCB by 9.4x to 26.9x. Performance evaluations show HitchHiker incurs a geometric mean of less than 6% overhead on diverse real-world programs, improving on the state-of-the-art approach by 61.9% to 77.5%.
Citations: 0
Head-First Memory Allocation on Best-Fit with Space-Fitting
arXiv - CS - Operating Systems Pub Date : 2024-09-05 DOI: arxiv-2409.03488
Adam Noto Hakarsa
Although best-fit is known to be slow, it excels at optimizing memory space utilization. Interestingly, by keeping the free memory region at the top of the memory, the process of memory allocation and deallocation becomes approximately 34.86% faster while also keeping external fragmentation at a minimum.
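The baseline policy the paper starts from, classic best-fit, scans all free blocks for the smallest one that satisfies the request. A minimal sketch over a fixed block table (the paper's head-first placement twist is not shown):

```c
// Classic best-fit search: O(n) scan for the smallest free block that
// fits. The slowness of this scan is what the paper's head-first layout
// is reported to mitigate.
#include <stddef.h>

struct blk { size_t off, len; int free; };

// Return the index of the smallest free block with len >= need, or -1.
int best_fit(const struct blk *b, int n, size_t need) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (b[i].free && b[i].len >= need &&
            (best < 0 || b[i].len < b[best].len))
            best = i;
    return best;
}
```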
Citations: 0
FlexBSO: Flexible Block Storage Offload for Datacenters
arXiv - CS - Operating Systems Pub Date : 2024-09-04 DOI: arxiv-2409.02381
Vojtech Aschenbrenner, John Shawger, Sadman Sakib
Efficient virtualization of CPU and memory is standardized and mature. Capabilities such as Intel VT-x [3] have been added by manufacturers for efficient hypervisor support. In contrast, virtualization of a block device and its presentation to the virtual machines on the host can be done in multiple ways. Indeed, hyperscalers develop in-house solutions to improve the performance and cost-efficiency of their storage solutions for datacenters. Unfortunately, these storage solutions are based on specialized hardware and software which are not publicly available. The traditional solution is to expose a virtual block device to the VM through a paravirtualized driver like virtio [2]. virtio provides significantly better performance than real block device driver emulation because of host OS and guest OS cooperation. The IO requests are then fulfilled by the host OS either with a local block device such as an SSD drive or with some form of disaggregated storage over the network like NVMe-oF or iSCSI.

There are three main problems with the traditional solution. 1) Cost: IO operations consume host CPU cycles due to host OS involvement, and these cycles do useless work from the application's point of view. 2) Inflexibility: any change to the virtualized storage stack requires host OS and/or guest OS cooperation and cannot be done silently in production. 3) Performance: IO operations cause recurring VM exits to transition from non-root mode to root mode on the host CPU, resulting in an excessive IO performance impact.

We propose FlexBSO, a hardware-assisted solution which solves all of the mentioned issues. Our prototype is based on the publicly available Bluefield-2 SmartNIC with NVIDIA SNAP support, hence it can be deployed without any obstacles.
Citations: 0
Foreactor: Exploiting Storage I/O Parallelism with Explicit Speculation
arXiv - CS - Operating Systems Pub Date : 2024-09-03 DOI: arxiv-2409.01580
Guanzhou Hu, Andrea Arpaci-Dusseau, Remzi Arpaci-Dusseau
We introduce explicit speculation, a variant of the I/O speculation technique in which I/O system calls can be parallelized under the guidance of explicit application code knowledge. We propose a formal abstraction, the foreaction graph, which describes the exact pattern of I/O system calls in an application function as well as any computation required to produce their argument values. I/O system calls can be issued ahead of time if the graph says it is safe and beneficial to do so. With explicit speculation, serial applications can exploit storage I/O parallelism without involving expensive prediction or checkpointing mechanisms.

Based on explicit speculation, we implement Foreactor, a library framework that allows application developers to concretize foreaction graphs and enable concurrent I/O with little or no modification to application source code. Experimental results show that Foreactor is able to improve the performance of both synthetic benchmarks and real applications by significant amounts (29%-50%).
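A foreaction graph, as described above, is a dependency graph over intended system calls: a call may be issued ahead of time once everything producing its arguments has completed. A tiny model of that check (field names and structure are illustrative, not Foreactor's API):

```c
// Minimal foreaction-graph model: each node is a planned syscall whose
// arguments may depend on earlier nodes. Pre-issue is safe only when all
// dependencies have completed.
#include <stdbool.h>

#define MAXDEP 4

struct node {
    const char *syscall;        // e.g. "pread" (label only, illustrative)
    int  deps[MAXDEP];          // indices of nodes producing our arguments
    int  ndeps;
    bool done;
};

bool can_preissue(const struct node *g, int i) {
    for (int d = 0; d < g[i].ndeps; d++)
        if (!g[g[i].deps[d]].done)
            return false;
    return true;
}
```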
Citations: 0