2020 IEEE High Performance Extreme Computing Conference (HPEC): Latest Articles

SparTen: Leveraging Kokkos for On-node Parallelism in a Second-Order Method for Fitting Canonical Polyadic Tensor Models to Poisson Data
2020 IEEE High Performance Extreme Computing Conference (HPEC) Pub Date: 2020-09-22 DOI: 10.1109/HPEC43674.2020.9286251
K. Teranishi, Daniel M. Dunlavy, J. Myers, R. Barrett
Abstract: Canonical Polyadic tensor decomposition using alternating Poisson regression (CP-APR) is an effective analysis tool for large sparse count datasets. One of its variants, which uses projected damped Newton optimization for row subproblems (PDNR), offers quadratic convergence and is amenable to parallelization. Despite its potential effectiveness, PDNR performance on modern high performance computing (HPC) systems is not well understood. To remedy this, we have developed a parallel implementation of PDNR using Kokkos, a performance-portable parallel programming framework that supports efficient execution of a single code base on multiple HPC systems. We demonstrate that the performance of parallel PDNR can be poor if the load imbalance associated with the irregular distribution of nonzero entries in the tensor data is not addressed. Preliminary results using tensors from the FROSTT data set indicate that using multiple kernels to address this imbalance when solving the PDNR row subproblems in parallel can improve performance, with up to 80% speedup on CPUs and 10-fold speedup on NVIDIA GPUs.
Citations: 8
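The load-imbalance problem this abstract highlights, rows with wildly different nonzero counts, can be illustrated with a small partitioning sketch. The Python snippet below is an illustration only (not SparTen's actual kernel-selection logic); it applies the classic longest-processing-time heuristic to spread rows across workers by nonzero count, and the `nnz` values are invented:

```python
import heapq

def balance_rows(nnz_per_row, num_workers):
    """Longest-processing-time heuristic: hand out rows heaviest-first,
    always to the currently least-loaded worker."""
    heap = [(0, w) for w in range(num_workers)]  # (load, worker id)
    heapq.heapify(heap)
    assignment = {}
    for row in sorted(range(len(nnz_per_row)),
                      key=lambda r: -nnz_per_row[r]):
        load, w = heapq.heappop(heap)
        assignment[row] = w
        heapq.heappush(heap, (load + nnz_per_row[row], w))
    return assignment

nnz = [100, 1, 1, 1, 97, 2, 2, 1]  # irregular nonzeros per row (invented)
asg = balance_rows(nnz, 2)
loads = [sum(nnz[r] for r, w in asg.items() if w == i) for i in range(2)]
```

With a naive contiguous split, one worker would receive almost all the work; the heuristic keeps the two loads within one nonzero of each other.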
Projecting Performance for PIUMA using Down-Scaled Simulation
2020 IEEE High Performance Extreme Computing Conference (HPEC) Pub Date: 2020-09-22 DOI: 10.1109/HPEC43674.2020.9286184
Stijn Eyerman, W. Heirman, Y. Demir, Kristof Du Bois, I. Hur
Abstract: The Programmable Integrated Unified Memory Architecture (PIUMA) is Intel's novel processor architecture optimized for graph analysis, targeted at efficiently executing graph algorithms on very large graphs. Simulation is used to project its performance on various algorithms before the system is built. However, simulators are limited in the number of cores and threads they can simulate because of their low simulation speed, high resource usage, and poor scalability, so it is practically impossible to simulate PIUMA at full system scale. In this paper, we present down-scaled simulation, a technique for projecting the performance of a large-scale system from a small-scale simulation. We apply the technique to PIUMA, showing how to configure the down-scaled system so that it accurately reflects the characteristics of the full system. We evaluate down-scaled simulation on a set of graph applications, showing that it accurately tracks simulation results of small-scale simulations as well as the projections to large systems made by an analytical model.
Citations: 2
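The general idea of projecting full-system performance from down-scaled runs can be sketched with a toy Amdahl-style model: measure two small configurations, fit serial and parallel components, then extrapolate. This is our own illustrative reconstruction, not the paper's calibration procedure, and the runtimes below are made up:

```python
def fit_amdahl(p1, t1, p2, t2):
    """Fit T(p) = serial + parallel / p from two down-scaled measurements
    by solving the resulting 2x2 linear system."""
    parallel = (t1 - t2) / (1.0 / p1 - 1.0 / p2)
    serial = t1 - parallel / p1
    return serial, parallel

def project(serial, parallel, p):
    """Project runtime at a core count too large to simulate directly."""
    return serial + parallel / p

# Hypothetical runtimes from two small simulated configurations.
serial, parallel = fit_amdahl(8, 135.0, 16, 72.5)
full = project(serial, parallel, 1024)  # projection for the full system
```

A real methodology (as in the paper) must also scale memory, network, and input size coherently; this two-point fit only captures the compute-scaling component.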
A Framework for Task Mapping onto Heterogeneous Platforms
2020 IEEE High Performance Extreme Computing Conference (HPEC) Pub Date: 2020-09-22 DOI: 10.1109/HPEC43674.2020.9286211
Ta-Yang Wang, Ajitesh Srivastava, V. Prasanna
Abstract: While heterogeneous systems provide considerable opportunities for accelerating big data applications, the variation in processing capacities and communication latencies of different resources makes it challenging to map applications onto the platform effectively. To generate an optimized mapping of an input application on a variety of heterogeneous platforms, we design a flexible framework based on an annotated task interaction graph (ATIG) that 1) allows modeling of mixed CPU and GPU architectures, and 2) identifies an efficient task-to-hardware mapping of the input application, given the dependencies and communication costs between the tasks that constitute the application. The ATIG representation captures all the information necessary to execute the application, together with metadata such as performance models for estimating runtime on a target resource and communication latencies. Our framework solves the problem of mapping the tasks in the ATIG onto available resources using variations of greedy algorithms and LP relaxations with rounding. We show that our framework can achieve high speedup, allowing domain experts to efficiently compile a broad set of programs to parallel and heterogeneous hardware.
Citations: 4
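A minimal list-scheduling pass in the spirit of the greedy variant mentioned above: each task, taken in dependency order, goes to the resource that minimizes its estimated finish time, charging a communication cost whenever a predecessor ran elsewhere. The function, task names, and cost tables are hypothetical illustrations, not the paper's ATIG implementation:

```python
def greedy_map(tasks, resources, runtime, comm, deps):
    """Greedy mapping of topologically ordered tasks onto resources."""
    finish = {}                            # task -> (resource, finish time)
    free_at = {r: 0.0 for r in resources}  # when each resource frees up
    for t in tasks:
        best = None
        for r in resources:
            # Task can start once r is free and all inputs have arrived.
            ready = max([free_at[r]] + [
                finish[d][1] + (0.0 if finish[d][0] == r else comm[(d, t)])
                for d in deps.get(t, [])])
            cand = ready + runtime[(t, r)]
            if best is None or cand < best[1]:
                best = (r, cand)
        finish[t] = best
        free_at[best[0]] = best[1]
    return finish

# Hypothetical two-task pipeline: A suits the GPU, but the cost of moving
# A's output makes the CPU the better home for B despite its slower kernel.
runtime = {("A", "cpu"): 4.0, ("A", "gpu"): 1.0,
           ("B", "cpu"): 2.0, ("B", "gpu"): 5.0}
comm = {("A", "B"): 2.0}
schedule = greedy_map(["A", "B"], ["cpu", "gpu"], runtime, comm, {"B": ["A"]})
```

The per-(task, resource) runtime table plays the role of the ATIG's annotated performance models; an LP-relaxation variant would replace the inner argmin with a global optimization.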
Scalability of Streaming on Migrating Threads
2020 IEEE High Performance Extreme Computing Conference (HPEC) Pub Date: 2020-09-22 DOI: 10.1109/HPEC43674.2020.9286193
Brian A. Page, P. Kogge
Abstract: Applications where streams of data are passed through large data structures are becoming increasingly important. Unfortunately, such applications become horribly inefficient when implemented on conventional architectures, especially when attempts are made to scale up performance via some form of parallelism. This paper discusses the implementation of the Firehose streaming benchmark on a novel parallel architecture with greatly enhanced multi-threading characteristics that avoids the conventional inefficiencies. Results are promising, with both far better scaling and increased performance over previously reported implementations, on a prototype platform with considerably fewer intrinsic hardware computational resources.
Citations: 1
A Deep Q-Learning Approach for GPU Task Scheduling
2020 IEEE High Performance Extreme Computing Conference (HPEC) Pub Date: 2020-09-22 DOI: 10.1109/HPEC43674.2020.9286238
R. Luley, Qinru Qiu
Abstract: Efficient utilization of resources is critical to the performance and effectiveness of high performance computing systems. In a graphics processing unit (GPU) based system, one method for enabling higher utilization is concurrent kernel execution, which allows multiple independent kernels to execute simultaneously on the GPU. However, resource contention arising from the manner in which kernel tasks are scheduled may still lead to suboptimal task performance and utilization. In this work, we present a deep Q-learning approach that identifies an ordering for a given set of tasks which achieves near-optimal average task performance and high resource utilization. Our solution outperforms other similar approaches and has the additional benefit of being adaptable to dynamic task characteristics or GPU resource configurations.
Citations: 1
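The ordering problem can be miniaturized into a tabular Q-learning toy (the paper uses a deep Q-network; a lookup table stands in for it here). The task set, costs, and the back-to-back contention penalty are invented for illustration: two "memory-heavy" kernels scheduled consecutively incur extra cost, so the learned policy should interleave the light kernel between them:

```python
import random

random.seed(0)

TASKS = {"h1": True, "h2": True, "l1": False}  # True = memory-heavy kernel
PENALTY = 2.0  # contention cost when two heavy kernels run back-to-back

def step_cost(prev, task):
    contended = prev is not None and TASKS[prev] and TASKS[task]
    return 1.0 + (PENALTY if contended else 0.0)

Q = {}  # tabular stand-in for the deep Q-network: (state, action) -> cost

def q(s, a):
    return Q.get((s, a), 0.0)

for _ in range(2000):
    prev, remaining = None, frozenset(TASKS)
    while remaining:
        # Epsilon-greedy: mostly pick the action with lowest estimated cost.
        if random.random() < 0.2:
            a = random.choice(sorted(remaining))
        else:
            a = min(sorted(remaining), key=lambda t: q((prev, remaining), t))
        cost = step_cost(prev, a)
        nxt = (a, remaining - {a})
        future = min((q(nxt, t) for t in nxt[1]), default=0.0)
        s = (prev, remaining)
        Q[(s, a)] = q(s, a) + 0.5 * (cost + future - q(s, a))
        prev, remaining = nxt

# Greedy rollout under the learned table.
prev, remaining, order, total = None, frozenset(TASKS), [], 0.0
while remaining:
    a = min(sorted(remaining), key=lambda t: q((prev, remaining), t))
    total += step_cost(prev, a)
    order.append(a)
    prev, remaining = a, remaining - {a}
```

The state here is (last kernel, remaining set), which is exactly the information an ordering policy needs; the deep variant generalizes across task characteristics instead of memorizing each state.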
Packing Narrow-Width Operands to Improve Energy Efficiency of General-Purpose GPU Computing
2020 IEEE High Performance Extreme Computing Conference (HPEC) Pub Date: 2020-09-22 DOI: 10.1109/HPEC43674.2020.9286215
Xin Eric Wang, Wei Zhang
Abstract: In this paper, we study OWAR, an Operand-Width-Aware Register packing mechanism for GPU energy saving. To use the GPU register file (RF) efficiently, OWAR employs power gating to shut down unused register sub-arrays, reducing the dynamic and leakage energy consumption of the RF. Because packing narrow-width operands reduces the number of register accesses, dynamic energy dissipation is decreased further. Finally, with the RF usage optimized by register packing, OWAR allows GPUs to support more thread-level parallelism (TLP) by assigning additional thread blocks to streaming multiprocessors (SMs) for general-purpose GPU (GPGPU) applications that suffer from a shortage of register resources. The extra TLP opens opportunities for hiding more memory latency and thus reduces overall execution time, which can lower overall energy consumption. We evaluate OWAR using a set of representative GPU benchmarks. The experimental results show that, compared to a baseline without optimization, OWAR can reduce the GPGPU's total energy by up to 29.6%, and by 9.5% on average. In addition, OWAR achieves performance improvements of up to 1.97x, and 1.18x on average.
Citations: 1
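The basic packing trick, two 16-bit narrow-width operands sharing one 32-bit register slot, takes only a few lines to show. This sketches the general idea in software, not OWAR's hardware mechanism:

```python
MASK16 = 0xFFFF

def pack(lo, hi):
    """Store two 16-bit narrow-width operands in one 32-bit register slot,
    halving the register entries (and accesses) those values need."""
    assert 0 <= lo <= MASK16 and 0 <= hi <= MASK16
    return (hi << 16) | lo

def unpack(word):
    """Recover the two operands from the packed 32-bit word."""
    return word & MASK16, (word >> 16) & MASK16

word = pack(0x1234, 0xBEEF)
```

The hardware version must also detect at runtime which operands are actually narrow and route sub-word reads and writes, which is where the paper's energy accounting comes in.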
Bit-Error Aware Quantization for DCT-based Lossy Compression
2020 IEEE High Performance Extreme Computing Conference (HPEC) Pub Date: 2020-09-22 DOI: 10.1109/HPEC43674.2020.9286177
Jialing Zhang, Jiaxi Chen, Aekyeung Moon, Xiaoyan Zhuo, S. Son
Abstract: Scientific simulations run on high-performance computing (HPC) systems produce large amounts of data, causing an extreme I/O bottleneck and a huge storage burden. Applying compression techniques can mitigate such overheads by reducing the data size. Unlike traditional lossless compressors, error-controlled lossy compressors such as SZ, ZFP, and DCTZ, designed for scientists who demand not only high compression ratios but also a guarantee of a certain degree of precision, are coming into prominence. While the rate-distortion efficiency of recent lossy compressors, especially the DCT-based one, is promising due to its high-compression encoding, the overall coding architecture is still conservative, necessitating a quantization that strikes a balance between different encoding possibilities and varying rate-distortions. In this paper, we aim to improve the performance of a DCT-based compressor, DCTZ, by optimizing its quantization model and encoding mechanism. Specifically, we propose a bit-efficient quantizer based on the DCTZ framework, develop a unique ordering mechanism based on the quantization table, and extend the encoding index. We evaluate the performance of our optimized DCTZ in terms of rate-distortion using real-world HPC datasets. Our experimental evaluations demonstrate that, on average, our proposed approach can improve the compression ratio of the original DCTZ by 1.38x. Moreover, combined with the extended encoding mechanism, the optimized DCTZ shows competitive performance with the state-of-the-art lossy compressors SZ and ZFP.
Citations: 2
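A minimal sketch of the DCT-plus-quantization pipeline the abstract describes: block DCT, uniform quantization with a quantum chosen conservatively from the error bound, and reconstruction. This is not DCTZ's actual quantizer; the per-coefficient quantum below is just one simple choice that provably respects the bound:

```python
import math

def dct(x):
    """Orthonormal DCT-II of a block."""
    n = len(x)
    scale = lambda k: math.sqrt((1.0 if k == 0 else 2.0) / n)
    return [scale(k) * sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n)
                           for i in range(n))
            for k in range(n)]

def idct(X):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    n = len(X)
    scale = lambda k: math.sqrt((1.0 if k == 0 else 2.0) / n)
    return [sum(scale(k) * X[k] * math.cos(math.pi * (i + 0.5) * k / n)
                for k in range(n))
            for i in range(n)]

def compress_block(x, error_bound):
    # Each quantized coefficient is off by at most q/2, and the inverse
    # transform amplifies total error by at most n * sqrt(2/n), so this
    # quantum keeps every reconstructed sample within the bound.
    n = len(x)
    q = 2.0 * error_bound / (n * math.sqrt(2.0 / n))
    return [round(c / q) for c in dct(x)], q

def decompress_block(codes, q):
    return idct([c * q for c in codes])

block = [math.sin(0.1 * i) for i in range(16)]
codes, q = compress_block(block, 1e-3)
rec = decompress_block(codes, q)
```

The integer codes are what an entropy coder would then compress; DCTZ's bit-efficient quantizer and coefficient ordering are refinements of exactly this stage.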
Total Ionizing Dose Radiation Testing of NVIDIA Jetson Nano GPUs
2020 IEEE High Performance Extreme Computing Conference (HPEC) Pub Date: 2020-09-22 DOI: 10.1109/HPEC43674.2020.9286222
Windy S. Slater, Nayana P. Tiwari, Tyler M. Lovelly, J. Mee
Abstract: On-board electronics for small satellites can achieve high performance and power efficiency by using state-of-the-art commercial processors such as graphics processing units (GPUs). However, because commercial GPUs are not designed to operate in a space environment, they must be evaluated to determine their tolerance to radiation effects, including Total Ionizing Dose (TID). In this research, TID radiation testing is performed on NVIDIA Jetson Nano GPUs using the U.S. Air Force Research Laboratory's Cobalt-60 panoramic irradiator. Preliminary results suggest operation beyond 20 krad(Si), which is sufficient radiation tolerance for short-duration small satellite missions.
Citations: 17
Implementing Sparse Linear Algebra Kernels on the Lucata Pathfinder-A Computer
2020 IEEE High Performance Extreme Computing Conference (HPEC) Pub Date: 2020-09-22 DOI: 10.1109/HPEC43674.2020.9286207
Géraud Krawezik, Shannon K. Kuntz, P. Kogge
Abstract: We present the implementation of two sparse linear algebra kernels on a migratory memory-side processing architecture. The first is Sparse Matrix-Vector (SpMV) multiplication, and the second is the Symmetric Gauss-Seidel (SymGS) method. Both were chosen because they account for the largest share of the run time of the HPCG benchmark. We introduce the system used for the experiments, as well as its programming model and the key aspects of getting the most performance from it. We describe the data distribution used to allow an efficient parallelization of the algorithms, and their actual implementations. We then present hardware results and simulator traces to explain their behavior. We show almost linear strong scaling with the code, and discuss future work and improvements.
Citations: 2
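For readers unfamiliar with the two kernels, here are textbook sequential versions of CSR SpMV and a symmetric Gauss-Seidel sweep (forward then backward pass). This is reference code, not the Pathfinder-A implementation, and the 2x2 system is an invented example:

```python
def spmv(indptr, indices, data, x):
    """y = A @ x for a matrix stored in CSR form."""
    y = []
    for row in range(len(indptr) - 1):
        y.append(sum(data[j] * x[indices[j]]
                     for j in range(indptr[row], indptr[row + 1])))
    return y

def symgs_sweep(indptr, indices, data, b, x):
    """One symmetric Gauss-Seidel sweep: forward then backward, in place."""
    n = len(b)
    for order in (range(n), range(n - 1, -1, -1)):
        for i in order:
            diag, acc = 0.0, b[i]
            for j in range(indptr[i], indptr[i + 1]):
                col = indices[j]
                if col == i:
                    diag = data[j]
                else:
                    acc -= data[j] * x[col]
            x[i] = acc / diag
    return x

# Invented 2x2 SPD system: [[4, 1], [1, 3]] x = [1, 2].
indptr, indices, data = [0, 2, 4], [0, 1, 0, 1], [4.0, 1.0, 1.0, 3.0]
b, x = [1.0, 2.0], [0.0, 0.0]
for _ in range(25):
    symgs_sweep(indptr, indices, data, b, x)
```

The sequential dependency inside each sweep (each `x[i]` uses already-updated neighbors) is what makes SymGS much harder to parallelize than SpMV, and why migratory-thread hardware is an interesting target for it.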
Approximate Inverse Chain Preconditioner: Iteration Count Case Study for Spectral Support Solvers
2020 IEEE High Performance Extreme Computing Conference (HPEC) Pub Date: 2020-09-22 DOI: 10.1109/HPEC43674.2020.9286201
Harper Langston, Pierre-David Létourneau, Julia Wei, Larry Weintraub, M. Harris, R. Lethin, E. Papenhausen, Meifeng Lin
Abstract: As the growing availability of computational power slows, there has been an increasing reliance on algorithmic advances. However, faster algorithms alone will not necessarily bridge the gap in allowing computational scientists to study problems at the edge of scientific discovery in the next several decades. Often, it is necessary to simplify or precondition solvers to accelerate the study of the large systems of linear equations common in a number of scientific fields. Preconditioning a problem to increase efficiency is often seen as the best approach; yet preconditioners that are fast, smart, and efficient do not always exist. Following the progress of [1], we present a new preconditioner for symmetric diagonally dominant (SDD) systems of linear equations. Such systems are common in certain PDEs, network science, and supervised learning, among others. Based on spectral support graph theory, this new preconditioner builds on the work of [2], computing and applying a V-cycle chain of approximate inverse matrices. The preconditioner is both algebraic in nature and hierarchically constrained depending on the condition number of the system to be solved. Because it generates an Approximate Inverse Chain of matrices, we refer to it as the AIC preconditioner. We further accelerate the AIC preconditioner by using precomputations to simplify setup and multiplications in the context of an iterative Krylov-subspace solver. While these iterative solvers can greatly reduce solution time, the number of iterations can quickly grow large in the absence of good preconditioners. Initial results for the AIC preconditioner have shown a very large reduction in iteration counts for SDD systems compared to standard preconditioners such as Incomplete Cholesky (ICC) and Multigrid (MG). We further show a significant reduction in iteration counts against the more advanced Combinatorial Multigrid (CMG) preconditioner. We have also developed no-fill sparsification techniques to ensure that the computational cost of applying the AIC preconditioner does not grow prohibitively large as the depth of the V-cycle grows for systems with larger condition numbers. Our numerical results show that these sparsifiers maintain the sparsity structure of our system while also yielding significant reductions in iteration counts.
Footnotes: (1) The research in this document was performed in connection with contract/instrument DARPA HR0011-12-C-0123 with the U.S. Air Force Research Laboratory and DARPA. The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Distribution Statement "A" (Approved for Public Release, Distribution Unlimited). The information in this report is proprietary information of Reservoir Labs, Inc. (2) Further support from the Department of Energy under DOE STTR Phase I/II Projects DE-FOA-00000760/DE-FOA-000101.
Citations: 0
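The iteration-count effect a preconditioner has on a Krylov solver can be demonstrated with plain conjugate gradient. Jacobi (diagonal) scaling stands in here for the far more sophisticated AIC preconditioner, and the SDD test matrix is invented; the point is only that iteration counts drop when the preconditioner flattens the spectrum:

```python
def cg(A, b, M_inv=None, tol=1e-10, max_iter=500):
    """(Optionally preconditioned) conjugate gradient on a dense SPD matrix.
    M_inv, if given, is the diagonal of a Jacobi preconditioner.
    Returns the solution and the number of iterations taken."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    precond = ((lambda v: [m * vi for m, vi in zip(M_inv, v)])
               if M_inv else (lambda v: v))
    x, r = [0.0] * n, list(b)
    z = precond(r)
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for it in range(1, max_iter + 1):
        Ap = matvec(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            return x, it
        z = precond(r)
        rz_next = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_next / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_next
    return x, max_iter

# Invented SDD test matrix with a strongly varying diagonal.
n = 30
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = float((i + 1) ** 2)
    if i + 1 < n:
        A[i][i + 1] = A[i + 1][i] = -1.0
b = [1.0] * n
x_plain, it_plain = cg(A, b)
x_jacobi, it_jacobi = cg(A, b, M_inv=[1.0 / A[i][i] for i in range(n)])
```

Jacobi only helps when the ill-conditioning lives on the diagonal; the AIC approach targets the much harder case where it comes from the graph structure of the SDD system.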