Latest Papers from the 2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)

Layercake: Efficient Inference Serving with Cloud and Mobile Resources
Samuel S. Ogden, Tian Guo
DOI: 10.1109/CCGrid57682.2023.00027 | Published: May 2023
Abstract: Many mobile applications now integrate deep learning models into their core functionality. These functionalities have diverse latency requirements while demanding high-accuracy results. Currently, mobile applications statically decide to use either in-cloud inference, relying on a fast and consistent network, or on-device execution, relying on sufficient local resources. However, neither mobile networks nor computation resources deliver consistent performance in practice. Consequently, when inference execution decisions are not made dynamically, mobile inference often experiences variable performance or struggles to meet performance goals. In this paper, we introduce Layercake, a deep-learning inference framework that dynamically selects the best model and location for executing inferences. Layercake accomplishes this by tracking model state and availability, both locally and remotely, as well as the network bandwidth, allowing for accurate estimates of model response time. By doing so, Layercake achieves latency targets in up to 96.4% of cases, an improvement of 16.7% over similar systems, while decreasing the cost of cloud-based resources by over 68.33% compared to in-cloud inference.
Citations: 0
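Layercake's core decision, picking a (model, location) pair from tracked model state and measured bandwidth, can be pictured with a short Python sketch. The cost model and field names below are illustrative assumptions, not the authors' implementation.

```python
def estimate_latency(model, location, bandwidth_mbps):
    """Estimated response time = transfer (remote only) + model load
    (only if the model is cold at that location) + compute time."""
    transfer = model["input_mb"] * 8 / bandwidth_mbps if location == "cloud" else 0.0
    load = 0.0 if model["loaded"][location] else model["load_s"][location]
    return transfer + load + model["compute_s"][location]

def pick_placement(models, bandwidth_mbps, target_s):
    """Among options that meet the latency target, pick the most accurate
    model; if none qualify, fall back to the overall fastest option."""
    options = [
        (estimate_latency(m, loc, bandwidth_mbps), m, loc)
        for m in models for loc in ("device", "cloud")
    ]
    feasible = [(m["accuracy"], -t, m["name"], loc)
                for t, m, loc in options if t <= target_s]
    if feasible:
        _, _, name, loc = max(feasible)
        return name, loc
    t, m, loc = min(options, key=lambda o: o[0])
    return m["name"], loc

models = [
    {"name": "mobilenet", "accuracy": 0.71, "input_mb": 0.6,
     "loaded": {"device": True, "cloud": True},
     "load_s": {"device": 1.2, "cloud": 0.8},
     "compute_s": {"device": 0.05, "cloud": 0.01}},
]
print(pick_placement(models, bandwidth_mbps=20.0, target_s=0.1))  # ('mobilenet', 'device')
```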
Mixed Precision Based Parallel Optimization of Tensor Mathematical Operations on a New-generation Sunway Processor
Shuwei Fan, Yao Liu, Juliang Su, Xianyou Wu, Qiong Jiang
DOI: 10.1109/CCGrid57682.2023.00062 | Published: May 2023
Abstract: As an important part of high-performance computing (HPC) applications, tensor mathematical operations have a wide and significant impact on application performance. However, owing to the unique heterogeneous architecture and software environment of the new-generation Sunway processors, fully utilizing the processor's computing capacity for tensor mathematical operations is challenging. Existing research has not fully considered the computing characteristics of tensor mathematical operations or the hardware features of the new-generation Sunway processor. In this paper, we propose an optimization method for tensor mathematical operations on the new-generation Sunway processor. First, we propose an optimization method for elementary functions that implements high-performance vector elementary functions with variable precision. Then, we propose a mixed-precision optimization method that computes expressions with variable precision according to users' precision requirements. Finally, we propose a multi-level parallel optimization method that realizes asynchronous parallelism between the master core and the slave cores. Experimental results show that, compared with the native implementation, the optimized tensor mathematical operations achieve an average speedup of 112.19× on 64 cores, which exceeds the theoretical speedup.
Citations: 0
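The variable-precision idea, evaluating an expression in the cheapest floating-point type that still meets the user's tolerance, can be sketched in NumPy. The tolerance-to-type thresholds are illustrative assumptions, and this is host-side Python rather than Sunway slave-core code.

```python
import numpy as np

def dtype_for(tol):
    """Map a user-requested tolerance to the cheapest floating-point
    type that can plausibly meet it (thresholds are illustrative)."""
    if tol >= 1e-3:
        return np.float16
    if tol >= 1e-6:
        return np.float32
    return np.float64

def tensor_expr(x, y, tol):
    """Evaluate an elementwise expression at the requested precision;
    exp and sqrt stand in for the paper's vector elementary functions."""
    dt = dtype_for(tol)
    return np.exp(x.astype(dt)) * np.sqrt(np.abs(y.astype(dt)))

a, b = np.random.rand(1 << 20), np.random.rand(1 << 20)
coarse = tensor_expr(a, b, tol=1e-3)    # runs in float16
exact = tensor_expr(a, b, tol=1e-12)    # runs in float64
print(np.abs(coarse.astype(np.float64) - exact).max())
```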
KalpaVriksh: Efficient and Cost-effective GUI Application Hosting using Singleton Snapshots
Sumaiya Shaikh, Saurabh Kumar, Debadatta Mishra
DOI: 10.1109/CCGrid57682.2023.00026 | Published: May 2023
Abstract: Hosting popular GUI applications in different virtual machines (VMs) in a cloud can provide strong intra-application isolation and enhance the security of end-user devices. In this context, micro-VMs are a very good fit, with each application hosted in its own cloud-hosted micro-VM. However, one challenge for the cloud service provider is to launch an application quickly when a client requests it. Techniques like VM snapshots can improve application launch time, as shown in prior work. In this paper, we argue that GUI applications differ from snapshot-optimized cloud services like FaaS: GUI applications are stateful and require specialized techniques for snapshot management. To manage application snapshots in a memory-efficient manner, the proposed KalpaVriksh framework maintains a single snapshot from which it launches multiple GUI applications for different end users. Furthermore, the unified snapshot framework does not impact application launch time, thanks to intelligent snapshot-creation procedures. Experimental analysis shows that KalpaVriksh's snapshot techniques, apart from being memory-efficient, reach the farthest feasible snapshot-capture point (i.e., the first external communication) during application execution 4.9× faster than a normal application launch.
Citations: 0
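The singleton-snapshot idea, one captured image shared across many per-user launches, can be expressed as a plain-Python analogy. Real micro-VM snapshotting involves memory and device state that this toy registry does not model; the class and field names are hypothetical.

```python
import copy

class SingletonSnapshots:
    """One snapshot per application, cloned for every user launch."""
    def __init__(self):
        self._snapshots = {}

    def capture(self, app, state):
        # Capture once, as late as feasible, i.e., just before the app's
        # first external communication (the paper's capture point).
        self._snapshots.setdefault(app, state)

    def launch(self, app, user):
        instance = copy.deepcopy(self._snapshots[app])  # restore the shared snapshot
        instance["user"] = user                         # then personalize per end user
        return instance

reg = SingletonSnapshots()
reg.capture("editor", {"windows": 1, "heap_pages": 4096})
vm_a = reg.launch("editor", "alice")
vm_b = reg.launch("editor", "bob")   # second launch reuses the same snapshot
```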
HyQ: Hybrid I/O Queue Architecture for NVMe over Fabrics to Enable High-Performance Hardware Offloading
Yiquan Chen, Jinlong Chen, Yijing Wang, Yi Chen, Zhengxu Jin, Jiexiong Xu, Guoju Fang, Wenhai Lin, Chengkun Wei, Wenzhi Chen
DOI: 10.1109/CCGrid57682.2023.00012 | Published: May 2023
Abstract: NVMe over Fabrics (NVMe-oF) is widely used as a remote storage protocol in cloud computing. The existing NVMe-oF software stack consumes substantial CPU resources. Emerging devices such as SmartNICs and DPUs support hardware offloading of NVMe-oF to free up these valuable CPU cores. However, offloading capacity is inherently compromised by the limited hardware resources of these devices. Additionally, our thorough evaluations found that hardware-offloaded NVMe-oF inevitably suffers severe performance degradation under complex application I/O patterns. Achieving high performance while fully utilizing NVMe-oF offloading is therefore challenging. In this paper, we propose HyQ, a novel hybrid I/O queue architecture for NVMe-oF that achieves high performance while retaining the advantages of hardware offloading. HyQ lets hardware-offloaded and software (non-offloaded) queues coexist, enabling dynamic dispatch of I/O requests to the appropriate processing queue according to user-defined I/O scheduling policies. Additionally, HyQ provides a request-scheduling framework that supports customized schedulers for selecting the appropriate queue for each I/O request. In our evaluation, HyQ achieves up to 1.91× higher IOPS and 8.36× higher bandwidth than the original hardware offloading scheme.
Citations: 0
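A policy-driven dispatcher over coexisting hardware and software queues fits in a few lines of Python. The size-based policy below is an invented placeholder, since the paper leaves scheduling policies user-defined.

```python
from collections import deque

class HybridQueues:
    """Coexisting hardware-offloaded and software I/O queues; a pluggable
    policy decides, per request, which queue processes it."""
    def __init__(self, policy):
        self.queues = {"hw": deque(), "sw": deque()}
        self.policy = policy            # callable: request -> "hw" or "sw"

    def submit(self, req):
        target = self.policy(req)
        self.queues[target].append(req)
        return target

# Placeholder policy: keep small requests on the CPU-frugal hardware path,
# and steer large requests to the software queue, which tolerates complex
# patterns better according to the paper's measurements.
def size_policy(req):
    return "hw" if req["size_kb"] <= 128 else "sw"

hyq = HybridQueues(size_policy)
print(hyq.submit({"op": "read", "size_kb": 4}))      # -> hw
print(hyq.submit({"op": "write", "size_kb": 1024}))  # -> sw
```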
A Deep Learning Pipeline Parallel Optimization Method
Tiantian Lv, Lu Wu, Zhigang Zhao, Chunxiao Wang, Chuantao Li
DOI: 10.1109/CCGrid57682.2023.00031 | Published: May 2023
Abstract: In recent years, with the continuous development of artificial intelligence, deep learning algorithms have become increasingly complex and the scale of model training keeps growing. The artificial intelligence platform in our computing-network operating system project likewise involves large-scale model training. However, as datasets and models grow, traditional single-GPU training becomes very slow and accuracy converges slowly, which no longer meets computational demands. This motivated pipeline-parallel systems such as GPipe and PipeDream. In this paper, we propose an efficient pipeline-parallel training optimization method in which multiple computing nodes process small batches of data in parallel in a pipelined manner. Our work makes two main contributions. First, we design a weight-buffer strategy that limits the number of weight versions generated and preserves model accuracy, together with a tensor-compression mechanism that improves the transmission rate. Second, we propose a prefix-sum partition algorithm that balances the pipeline's stage partitioning and conserves the memory of computing resources. Compared with several popular pipeline-parallel frameworks, the proposed method achieves about 2× training acceleration and saves about 30%-40% of memory usage.
Citations: 0
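The paper names a prefix-sum partition algorithm for balanced stage assignment. A generic version of that idea, splitting layers so each stage's total cost lands near total/num_stages by binary-searching the prefix-sum array, looks like the sketch below; this is the standard technique, not necessarily the authors' exact algorithm.

```python
import bisect

def prefix_sum_partition(layer_costs, num_stages):
    """Split layers into contiguous pipeline stages whose costs are each
    close to total/num_stages, finding every cut with a binary search on
    the prefix-sum array."""
    prefix = [0]
    for c in layer_costs:
        prefix.append(prefix[-1] + c)
    total, cuts, start = prefix[-1], [], 0
    for s in range(1, num_stages):
        target = total * s / num_stages
        i = bisect.bisect_left(prefix, target, lo=start + 1)
        # Snap to whichever neighboring boundary is closer to the target.
        if i > start + 1 and target - prefix[i - 1] < prefix[i] - target:
            i -= 1
        # Leave at least one layer for each remaining stage.
        i = min(i, len(layer_costs) - (num_stages - s))
        cuts.append(i)
        start = i
    bounds = [0] + cuts + [len(layer_costs)]
    return [list(range(bounds[k], bounds[k + 1])) for k in range(num_stages)]

# 8 layers with uneven costs split into 3 stages:
print(prefix_sum_partition([4, 1, 1, 6, 2, 2, 3, 5], 3))
# -> [[0, 1, 2], [3, 4, 5], [6, 7]] with stage costs 6, 10, 8
```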
EMPI: Enhanced Message Passing Interface in Modern C++
Majid Salimi Beni, Luigi Crisci, Biagio Cosenza
DOI: 10.1109/CCGrid57682.2023.00023 | Published: May 2023
Abstract: Message Passing Interface (MPI) is a well-known standard for programming distributed and HPC systems. While the community has continuously improved MPI to address the requirements of next-generation architectures and applications, its interface has not substantially evolved. In fact, MPI only provides an interface to C and Fortran and does not support recent features of modern C++. Moreover, MPI programs are error-prone and subject to various syntactic and semantic errors. This paper introduces EMPI, an Enhanced Message Passing Interface based on modern C++, which maps directly to the OpenMPI implementation and exploits modern C++ for safe and efficient distributed programming. EMPI proposes novel C++ RAII-based semantics and constant specialization to prevent error-prone code patterns such as parameter mismatch, and to reduce the overhead of handling multiple objects and the per-invocation time. Consequently, EMPI programs are safer: six out of nine well-known MPI error patterns cannot occur when EMPI semantics are used correctly. Experimental results on five microbenchmarks and two applications on a large-scale cluster using up to 1024 processes show that EMPI's performance is very close to native MPI and considerably faster than the MPL C++ interface.
Citations: 0
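EMPI's safety argument rests on scoping resources so that leaked handles and mismatched parameters cannot be written. Python's context managers give a rough analogy to the C++ RAII semantics; this is an analogy only, and the hypothetical names below are not EMPI's API.

```python
from contextlib import contextmanager

@contextmanager
def communicator(size):
    """Rough analogy to an RAII-scoped communicator: the handle exists
    only inside the `with` block and is always released, even on error."""
    comm = {"size": size, "open": True}   # stand-in for a real MPI handle
    try:
        yield comm
    finally:
        comm["open"] = False              # deterministic cleanup, like a destructor

def send(comm, payload, expected_type):
    # Checking the payload against the declared type up front mirrors how
    # a typed interface rejects parameter mismatches at the call site.
    if not comm["open"]:
        raise RuntimeError("communicator used outside its scope")
    if not isinstance(payload, expected_type):
        raise TypeError("parameter mismatch caught before the transfer")
    return len(payload)

with communicator(4) as comm:
    send(comm, [1.0, 2.0, 3.0], list)
# comm is closed here; using it now raises instead of corrupting state.
```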
hsSpMV: A Heterogeneous and SPM-aggregated SpMV for the SW26010-Pro Many-core Processor
J. Pan, Lei Xiao, Min Tian, Li Wang, Chaochao Yang, Renjiang Chen, Zenghui Ren, Anjun Liu, Guanghui Zhu
DOI: 10.1109/CCGrid57682.2023.00016 | Published: May 2023
Abstract: Sparse matrix-vector multiplication (SpMV) is a critical performance bottleneck for numerical simulation and artificial intelligence training. The new-generation Sunway supercomputer is China's advanced exascale supercomputer, and its SW26010-Pro many-core processor is a competitive candidate thanks to its attractive computational power for both numerical simulation and AI training. In this paper, we propose a heterogeneous and SPM-aggregated SpMV kernel designed specifically for the SW26010-Pro many-core processor. To fully exploit the processor's computational power and balance the load of each core group (CG) during computation, we employ an asynchronous computation workflow and propose an SPM-aggregated strategy and a vector-adaptive mapping algorithm. In addition, we propose a two-level data-partition scheme to balance the computational load. To improve memory-access efficiency, we access memory directly via the DMA controller rather than through discrete memory accesses. With these optimizations we achieve a 77.16× speedup over the original implementation. Our experimental results show that hsSpMV yields average speedups of up to 3.82× over the SpMV kernel of the state-of-the-art Sunway math library xMath2.0.
Citations: 0
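A reference CSR SpMV plus a nonzero-balanced row partition illustrates the load-balancing idea behind a two-level scheme: split by nonzeros first, then compute per block. This plain-NumPy sketch deliberately ignores the SW26010-Pro's SPM and DMA specifics.

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """Reference y = A @ x for a matrix in CSR format."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        lo, hi = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

def balance_rows(indptr, num_cores):
    """First-level partition: contiguous row blocks chosen so each core
    gets roughly the same number of nonzeros, not the same number of rows."""
    nnz = indptr[-1]
    cuts = [int(np.searchsorted(indptr, nnz * c / num_cores))
            for c in range(num_cores + 1)]
    return list(zip(cuts[:-1], cuts[1:]))

# 4x4 example: [[2,0,0,1],[0,3,0,0],[0,0,0,0],[4,0,5,6]]
indptr = np.array([0, 2, 3, 3, 6])
indices = np.array([0, 3, 1, 0, 2, 3])
data = np.array([2.0, 1.0, 3.0, 4.0, 5.0, 6.0])
print(csr_spmv(indptr, indices, data, np.ones(4)))  # [ 3.  3.  0. 15.]
print(balance_rows(indptr, 2))                      # [(0, 2), (2, 4)]: 3 nnz each
```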
A Cloud-Fog Architecture for Video Analytics on Large Scale Camera Networks Using Semantic Scene Analysis
Kunal Jain, Kishan Sairam Adapa, Kunwar Grover, R. Sarvadevabhatla, Suresh Purini
DOI: 10.1109/CCGrid57682.2023.00054 | Published: May 2023
Abstract: This paper proposes a scalable distributed video-analytics framework that can process thousands of video streams from sources such as CCTV cameras using semantic scene analysis. The main idea is to deploy deep learning pipelines on fog nodes and generate semantic scene description records (SDRs) for the video feeds of the associated CCTV cameras. These SDRs, rather than video frames, are transmitted to the cloud, saving network bandwidth. Using the SDRs stored in the cloud database, we can answer many complex queries and perform rich video analytics at extremely low latency, with no need to scan and process the video streams again per query. The software architecture on the fog nodes allows new deep learning pipelines to be integrated dynamically into the existing system, thereby supporting novel analytics and queries. We demonstrate the effectiveness of the system with a novel distributed algorithm for real-time vehicle pursuit. The proposed algorithm issues multiple spatio-temporal queries adaptively to reduce query-processing time and is robust to inaccuracies in the deployed deep learning pipelines and to camera failures.
Citations: 0
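The SDR-first design can be pictured with a record and a query over stored records. The schema and field names below are illustrative assumptions, not the paper's exact format.

```python
# A semantic scene description record (SDR) for one analyzed video segment;
# the fog node ships such compact records to the cloud instead of frames.
sdr = {
    "camera_id": "cam-042",
    "t_start": 1683000000.0,
    "t_end": 1683000005.0,
    "objects": [
        {"label": "car", "color": "red", "track_id": 7,
         "bbox": [120, 40, 260, 180]},
    ],
}

def find(sdrs, label, color, t0, t1):
    """Answer a spatio-temporal query from stored SDRs alone; no video
    stream is re-decoded or re-processed per query."""
    return [
        (rec["camera_id"], obj["track_id"])
        for rec in sdrs if t0 <= rec["t_start"] <= t1
        for obj in rec["objects"]
        if obj["label"] == label and obj["color"] == color
    ]

print(find([sdr], "car", "red", 1683000000, 1683000100))  # [('cam-042', 7)]
```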
An Empirical Study of Container Image Configurations and Their Impact on Start Times
Martin Straesser, A. Bauer, Robert Leppich, N. Herbst, K. Chard, I. Foster, Samuel Kounev
DOI: 10.1109/CCGrid57682.2023.00019 | Published: May 2023
Abstract: A core selling point of application containers is their fast start times compared to other virtualization approaches like virtual machines. Predictable and fast container start times are crucial for improving and guaranteeing the performance of containerized cloud, serverless, and edge applications. While previous work has investigated container starts, there remains a lack of understanding of how start times may vary across container configurations. We address this shortcoming by presenting and analyzing a dataset of approximately 200,000 open-source Docker Hub images featuring different image configurations (e.g., image size and exposed ports). Leveraging this dataset, we investigate the start times of containers in two environments and identify the most influential features. Our experiments show that container start times can vary between hundreds of milliseconds and tens of seconds in the same environment. Moreover, we conclude that no single dominant configuration feature determines a container's start time, and hardware and software parameters must be considered together for an accurate assessment.
Citations: 1
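Start-time measurements in the spirit of the study can be approximated with a small probe. This sketch times a full `docker run` of a trivial command, a coarser signal than the paper's instrumentation, and pulls the image first so samples capture container start rather than download.

```python
import subprocess
import time

def measure_start_time(image, runs=5):
    """Time `docker run` of a trivial command end to end; the image is
    pulled beforehand so the samples exclude registry download time."""
    subprocess.run(["docker", "pull", image], check=True, capture_output=True)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(["docker", "run", "--rm", image, "true"],
                       check=True, capture_output=True)
        samples.append(time.perf_counter() - t0)
    return min(samples), sum(samples) / len(samples)

best, avg = measure_start_time("alpine:3.18")
print(f"best {best:.3f}s  avg {avg:.3f}s")
```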
Scheduling DNN Inferencing on Edge and Cloud for Personalized UAV Fleets
Suman Raj, Harshil Gupta, Yogesh L. Simmhan
DOI: 10.1109/CCGrid57682.2023.00063 | Published: May 2023
Abstract: Drone fleets with onboard cameras, coupled with DNN inferencing models, can support diverse applications, from infrastructure monitoring to package delivery. Here, we propose using one or more “buddy” drones to help Visually Impaired People (VIPs) lead an active lifestyle. Video-inferencing tasks from such drones are used to navigate the drone and alert the VIP to threats, and hence have strict execution deadlines. Tasks can execute either on an accelerated edge, such as an Nvidia Jetson linked to the drone, or on a cloud INFerencing-as-a-Service (INFaaS). However, making this decision is challenging given the latency and cost trade-offs and the network variability of outdoor environments. We propose a deadline-driven heuristic to schedule a stream of diverse DNN inferencing tasks executing over video segments generated by multiple drones linked to an edge, with the option to execute on the cloud. We use strategies such as task dropping, work stealing and migration, and dynamic adaptation to cloud variability to fully utilize the captive edge with intelligent offloading to the cloud, maximizing utility and the number of tasks completed. We evaluate our strategies using a setup that emulates a fleet of more than 50 drones under city conditions supporting more than 25 VIPs, with real DNN models executing on drone video streams using Jetson Nano edges and AWS Lambda cloud functions. Our strategy achieves a task completion rate of up to 91%, up to 2.5× higher utility than the baselines, and 68% higher utility under network variability.
Citations: 2
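A stripped-down deadline-driven heuristic with task dropping is sketched below. The single edge slot and fixed latencies are simplifying assumptions; the paper's full scheduler additionally does work stealing, migration, and adaptation to cloud variability.

```python
import heapq

def dispatch(tasks, now, edge_busy_until, edge_exec_s, cloud_rtt_s):
    """One round of earliest-deadline-first dispatch: prefer the captive
    edge, offload to the cloud when the edge cannot meet the deadline,
    and drop tasks that no placement can finish in time."""
    queue = [(t["deadline"], t["id"]) for t in tasks]
    heapq.heapify(queue)
    plan = []
    while queue:
        deadline, tid = heapq.heappop(queue)
        edge_done = max(now, edge_busy_until) + edge_exec_s
        cloud_done = now + cloud_rtt_s
        if edge_done <= deadline:
            plan.append((tid, "edge"))
            edge_busy_until = edge_done
        elif cloud_done <= deadline:
            plan.append((tid, "cloud"))
        else:
            plan.append((tid, "drop"))   # dropping hopeless tasks preserves utility
    return plan

tasks = [{"id": "detect-1", "deadline": 1.0},
         {"id": "track-2", "deadline": 0.3},
         {"id": "ocr-3", "deadline": 2.0}]
print(dispatch(tasks, now=0.0, edge_busy_until=0.0,
               edge_exec_s=0.4, cloud_rtt_s=0.9))
# -> [('track-2', 'drop'), ('detect-1', 'edge'), ('ocr-3', 'edge')]
```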