2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO): Latest Publications

r3d3: Optimized Query Compilation on GPUs
2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO) Pub Date: 2021-02-27 DOI: 10.1109/CGO51591.2021.9370323
Alexander Krolik, Clark Verbrugge, L. Hendren
{"title":"r3d3: Optimized Query Compilation on GPUs","authors":"Alexander Krolik, Clark Verbrugge, L. Hendren","doi":"10.1109/CGO51591.2021.9370323","DOIUrl":"https://doi.org/10.1109/CGO51591.2021.9370323","url":null,"abstract":"Query compilation is an effective approach to improve the performance of repeated database queries. GPU-based approaches have significant promise, but face difficulties in managing compilation time, data transfer costs, and in addressing a reasonably comprehensive range of SQL operations. In this work we describe a hybrid AoT/JIT approach to GPU-based query compilation. We use multiple optimizations to reduce execution, compile, and data transfer times, improving performance over both other GPU-based approaches and CPU-based query compilers as well. Our design addresses a wide range of SQL queries, sufficient to demonstrate the practicality of using GPUs for query optimization.","PeriodicalId":275062,"journal":{"name":"2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114831479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Loop Parallelization using Dynamic Commutativity Analysis
2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO) Pub Date: 2021-02-27 DOI: 10.1109/CGO51591.2021.9370319
Christos Vasiladiotis, Roberto Castañeda Lozano, M. Cole, Björn Franke
{"title":"Loop Parallelization using Dynamic Commutativity Analysis","authors":"Christos Vasiladiotis, Roberto Castañeda Lozano, M. Cole, Björn Franke","doi":"10.1109/CGO51591.2021.9370319","DOIUrl":"https://doi.org/10.1109/CGO51591.2021.9370319","url":null,"abstract":"Automatic parallelization has largely failed to keep its promise of extracting parallelism from sequential legacy code to maximize performance on multi-core systems outside the numerical domain. In this paper, we develop a novel dynamic commutativity analysis (DCA) for identifying parallelizable loops. Using commutativity instead of dependence tests, DCA avoids many of the overly strict data dependence constraints limiting existing parallelizing compilers. DCA extends the scope of automatic parallelization to uniformly include both regular array-based and irregular pointer-based codes. We have prototyped our novel parallelism detection analysis and evaluated it extensively against five state-of-the-art dependence-based techniques in two experimental settings. First, when applied to the NAS benchmarks which contain almost 1400 loops, DCA is able to identify as many parallel loops (over 1200) as the profile-guided dependence techniques and almost twice as many as all the static techniques combined. We then apply DCA to complex pointer-based loops, where it can successfully detect parallelism, while existing techniques fail to identify any. When combined with existing parallel code generation techniques, this results in an average speedup of 3.6 × (and up to 55x) across the NAS benchmarks on a 72-core host, and up to 36.9x for the pointer-based loops, demonstrating the effectiveness of DCA in identifying profitable parallelism across a wide range of loops.","PeriodicalId":275062,"journal":{"name":"2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134214007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Experience with Code-Size Optimization for Production iOS Mobile Applications
2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO) Pub Date: 2021-02-27 DOI: 10.1109/CGO51591.2021.9370306
Milind Chabbi, Jin Lin, R. Barik
{"title":"An Experience with Code-Size Optimization for Production iOS Mobile Applications","authors":"Milind Chabbi, Jin Lin, R. Barik","doi":"10.1109/CGO51591.2021.9370306","DOIUrl":"https://doi.org/10.1109/CGO51591.2021.9370306","url":null,"abstract":"Modern mobile application binaries are bulky for many reasons: software and its dependencies, fast-paced addition of new features, high-level language constructs, and statically linked platform libraries. Reduced application size is critical not only for the end-user experience but also for vendor's download size limitations. Moreover, download size restrictions may impact revenues for critical businesses. In this paper, we highlight some of the key reasons of code-size bloat in iOS mobile applications, specifically apps written using a mix of Swift and Objective-C. Our observation reveals that machine code sequences systematically repeat throughout the app's binary. We highlight source-code patterns and high-level language constructs that lead to an increase in the code size. We propose whole-program, fine-grained machine-code outlining as an effective optimization to constrain the code-size growth. We evaluate the effectiveness of our new optimization pipeline on the UberRider iOS app used by millions of customers daily. Our optimizations reduce the code size by 23%. The impact of our optimizations on the code size grows in magnitude over time as the code evolves. For a set of performance spans defined by the app developers, the optimizations do not statistically regress production performance. We applied the same optimizations to Uber's UberDriver and UberEats apps and gained 17% and 19% size savings, respectively.","PeriodicalId":275062,"journal":{"name":"2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133516133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Seamless Compiler Integration of Variable Precision Floating-Point Arithmetic
2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO) Pub Date: 2021-02-27 DOI: 10.1109/CGO51591.2021.9370331
T. Jost, Y. Durand, Christian Fabre, Albert Cohen, F. Pétrot
{"title":"Seamless Compiler Integration of Variable Precision Floating-Point Arithmetic","authors":"T. Jost, Y. Durand, Christian Fabre, Albert Cohen, F. Pétrot","doi":"10.1109/CGO51591.2021.9370331","DOIUrl":"https://doi.org/10.1109/CGO51591.2021.9370331","url":null,"abstract":"Floating-Point (FP) units in processors are generally limited to supporting a subset of formats defined by the IEEE 754 standard. As a result, high-efficiency languages and optimizing compilers for high-performance computing only support IEEE standard types and applications needing higher precision involve cumbersome memory management and calls to external libraries, resulting in code bloat and making the intent of the program unclear. We present an extension of the C type system that can represent generic FP operations and formats, supporting both static precision and dynamically variable precision. We design and implement a compilation flow bridging the abstraction gap between this type system and low-level FP instructions or software libraries. The effectiveness of our solution is demonstrated through an LLVM-based implementation, leveraging aggressive optimizations in LLVM including the Polly loop nest optimizer, which targets two backend code generators: one for the ISA of a variable precision FP arithmetic coprocessor, and one for the MPFR multi-precision floating-point library. Our optimizing compilation flow targeting MPFR outperforms the Boost programming interface for the MPFR library by a factor of 1.80 × and 1.67 × in sequential execution of the Poly Bench and RAJAPerf suites, respectively, and by a factor of 7.62 x on an 8-core (and 16-thread) machine for RAJAPerf in OpenMP.","PeriodicalId":275062,"journal":{"name":"2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)","volume":"45 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124430643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Thread-Aware Area-Efficient High-Level Synthesis Compiler for Embedded Devices
2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO) Pub Date: 2021-02-27 DOI: 10.1109/CGO51591.2021.9370341
Changsu Kim, Shinnung Jeong, Sungjun Cho, Yongwoo Lee, William J. Song, Youngsok Kim, Hanjun Kim
{"title":"Thread-Aware Area-Efficient High-Level Synthesis Compiler for Embedded Devices","authors":"Changsu Kim, Shinnung Jeong, Sungjun Cho, Yongwoo Lee, William J. Song, Youngsok Kim, Hanjun Kim","doi":"10.1109/CGO51591.2021.9370341","DOIUrl":"https://doi.org/10.1109/CGO51591.2021.9370341","url":null,"abstract":"In the embedded device market, custom hardware platforms such as an application specific integrated circuit (ASIC) and a field programmable gate array (FPGA) are attractive thanks to their high performance and power efficiency. However, its huge design costs make it challenging for manufacturers to timely launch new devices. High-level synthesis (HLS) helps significantly reduce the design costs by automating the translation of service algorithms into hardware logics; however, current HLS compilers do not fit well to embedded devices as they fail to produce area-efficient solutions while supporting concurrent events from diverse peripherals such as sensors, actuators and network modules. This paper proposes a new thread-aware HLS compiler named Duro that produces area-efficient embedded devices. Duro shares commonly-invoked functions and operators across different callers and threads with a new thread-aware area cost model, and thus effectively reduces the logic size. Moreover, Duro supports a variety of device peripherals by automatically integrating peripheral controllers and interfaces as peripheral drivers. The experiment results of six embedded devices with ten peripherals demonstrate that Duro reduces the area and energy dissipation of embedded devices by 28.5% and 25.3% compared with the designs generated by the state-of-the-art HLS compiler. This work also implements FPGA prototypes of the six devices using Duro, and the measurement results show 65.3% energy saving over Raspberry Pi Zero with slightly better computation performance.","PeriodicalId":275062,"journal":{"name":"2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132493184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GoBench: A Benchmark Suite of Real-World Go Concurrency Bugs
2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO) Pub Date: 2021-02-27 DOI: 10.1109/CGO51591.2021.9370317
Ting Yuan, Guangwei Li, Jie Lu, Chen Liu, Lian Li, Jingling Xue
{"title":"GoBench: A Benchmark Suite of Real-World Go Concurrency Bugs","authors":"Ting Yuan, Guangwei Li, Jie Lu, Chen Liu, Lian Li, Jingling Xue","doi":"10.1109/CGO51591.2021.9370317","DOIUrl":"https://doi.org/10.1109/CGO51591.2021.9370317","url":null,"abstract":"Go, a fast growing programming language, is often considered as “the programming language of the cloud”. The language provides a rich set of synchronization primitives, making it easy to write concurrent programs with great parallelism. However. the rich set of primitives also introduces many bugs. We build Gobench, the first benchmark suite for Go concurrency bugs. Currently, Gobench consists of 82 real bugs from 9 popular open source applications and 103 bug kernels. The bug kernels are carefully extracted and simplified from 67 out of these 82 bugs and 36 additional bugs reported in a recent study to preserve their bug-inducing complexities as much as possible. These bugs cover a variety of concurrency issues, both traditional and Go-specific. We believe Gobench will be instrumental in helping researchers understand concurrency bugs in Go and develop effective tools for their detection. We have therefore evaluated a range of representative concurrency error detection tools using Gobench. Our evaluation has revealed their limitations and provided insights for making further improvements.","PeriodicalId":275062,"journal":{"name":"2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)","volume":"258 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123287410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Memory-Safe Elimination of Side Channels
2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO) Pub Date: 2021-02-27 DOI: 10.1109/CGO51591.2021.9370305
Luigi Soares, F. Pereira
{"title":"Memory-Safe Elimination of Side Channels","authors":"Luigi Soares, F. Pereira","doi":"10.1109/CGO51591.2021.9370305","DOIUrl":"https://doi.org/10.1109/CGO51591.2021.9370305","url":null,"abstract":"A program is said to be isochronous if its running time does not depend on classified information. The programming languages literature contains much work that transforms programs to ensure isochronicity. The current state-of-the-art approach is a code transformation technique due to Wu et al., published in 2018. That technique has an important virtue: it ensures that the transformed program runs exactly the same set of operations, regardless of inputs. However, in this paper we demonstrate that it has also a shortcoming: it might add out-of-bounds memory accesses into programs that were originally memory sound. From this observation, we show how to deliver the same runtime guarantees that Wu et al. provide, in a memory-safe way. In addition to being safer, our LLVM-based implementation is more efficient than its original inspiration, achieving shorter repairing times, and producing code that is smaller and faster.","PeriodicalId":275062,"journal":{"name":"2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116319528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Vulkan Vision: Ray Tracing Workload Characterization using Automatic Graphics Instrumentation
2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO) Pub Date: 2021-02-27 DOI: 10.1109/CGO51591.2021.9370320
D. Pankratz, Tyler Nowicki, Ahmed Eltantawy, J. N. Amaral
{"title":"Vulkan Vision: Ray Tracing Workload Characterization using Automatic Graphics Instrumentation","authors":"D. Pankratz, Tyler Nowicki, Ahmed Eltantawy, J. N. Amaral","doi":"10.1109/CGO51591.2021.9370320","DOIUrl":"https://doi.org/10.1109/CGO51591.2021.9370320","url":null,"abstract":"While there are mature performance monitoring, profiling and instrumentation tools to help understanding the dynamic behaviour of general-purpose GPU applications, the abstract programming models of graphics applications have limited the development of such tools for graphics. This paper introduces Vulkan Vision (V- Vision), a framework for collecting detailed GPU execution data from Vulkan applications to guide hardware-informed improvements. A core contribution of V- Vision is providing out-of-the-box data collection for capturing complete dynamic warp and thread execution traces. V- Vision also provides analyses for the follow purposes: identifying and visualizing application hotspots to guide optimization, characterizing application behaviour and estimating the effect of architectural modifications. This paper demonstrates the potential for these analyses in applications that utilize the recent ray-tracing extension in Vulkan and describes new insights about the applications and the underlying hardware.","PeriodicalId":275062,"journal":{"name":"2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126594988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
HHVM Jump-Start: Boosting Both Warmup and Steady-State Performance at Scale
2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO) Pub Date: 2021-02-27 DOI: 10.1109/CGO51591.2021.9370314
Guilherme Ottoni, B. Liu
{"title":"HHVM Jump-Start: Boosting Both Warmup and Steady-State Performance at Scale","authors":"Guilherme Ottoni, B. Liu","doi":"10.1109/CGO51591.2021.9370314","DOIUrl":"https://doi.org/10.1109/CGO51591.2021.9370314","url":null,"abstract":"Just-In-Time (JIT) compilation is often employed in Virtual Machines (VMs) to translate their virtual-machine languages into real-machine code. This approach not only brings portability, but it also enables aggressive compiler optimizations based on runtime behavior observed via profiling. The downside of JIT compilation, compared to Ahead-Of-Time native compilation, is that the profiling and compilation overheads are incurred during execution. To mitigate these overheads, previous work have proposed sharing either profile data or final JIT compiled code across VM executions. Unfortunately, these techniques have drawbacks, including steady-state performance degradation and difficulty of use. To address these issues, this paper presents the Jump-Start mechanism implemented inside the Hip Hop Virtual Machine (HHVM). Jump-Start is a practical approach to share VM profile data at a large scale, being used to power one of the largest websites in the world. In this paper, we argue for HHVM's Jump-Start approach, describe it in detail, and present steady-state optimizations built on top of it. Running the Facebook website, we demonstrate that Jump-Start effectively solves the warmup problem in HHVM, reducing the server capacity loss during warmup by 54.9%, while also improving steady-state performance by 5.4%.","PeriodicalId":275062,"journal":{"name":"2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125827243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Unleashing the Low-Precision Computation Potential of Tensor Cores on GPUs
2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO) Pub Date: 2021-02-27 DOI: 10.1109/CGO51591.2021.9370335
Guangli Li, Jingling Xue, Lei Liu, Xueying Wang, Xiu Ma, Xiao-jun Dong, Jiansong Li, Xiaobing Feng
{"title":"Unleashing the Low-Precision Computation Potential of Tensor Cores on GPUs","authors":"Guangli Li, Jingling Xue, Lei Liu, Xueying Wang, Xiu Ma, Xiao-jun Dong, Jiansong Li, Xiaobing Feng","doi":"10.1109/CGO51591.2021.9370335","DOIUrl":"https://doi.org/10.1109/CGO51591.2021.9370335","url":null,"abstract":"Tensor-specialized hardware for supporting low-precision arithmetic has become an inevitable trend due to the ever-increasing demand on computational capability and energy efficiency in intelligent applications. The main challenge faced when accelerating a tensor program on tensor-specialized hardware is how to achieve the best performance possible in reduced precision by fully utilizing its computational resources while keeping the precision loss in a controlled manner. In this paper, we address this challenge by proposing QUANTENSOR, a new approach for accelerating general-purpose tensor programs by replacing its tensor computations with low-precision quantized tensor computations on NVIDIA Tensor Cores. The key novelty is a new residual-based precision refinement technique for controlling the quantization errors, allowing tradeoffs between performance and precision to be made. Evaluation with GEMM, deep neural networks, and linear algebra applications shows that QUANTENSOR can achieve remarkable performance improvements while reducing the precision loss incurred significantly at acceptable overheads.","PeriodicalId":275062,"journal":{"name":"2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128468220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6