2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid): Latest Publications

Balancing Computation and Communication in Distributed Sparse Matrix-Vector Multiplication
Hongli Mi, Xiangrui Yu, Xiaosong Yu, Shuangyuan Wu, Weifeng Liu
{"title":"Balancing Computation and Communication in Distributed Sparse Matrix-Vector Multiplication","authors":"Hongli Mi, Xiangrui Yu, Xiaosong Yu, Shuangyuan Wu, Weifeng Liu","doi":"10.1109/CCGrid57682.2023.00056","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00056","url":null,"abstract":"Sparse Matrix-Vector Multiplication (SpMV) is a fundamental operation in a number of scientific and engineering problems. When the sparse matrices processed are large enough, distributed memory systems should be used to accelerate SpMV. At present, the optimization techniques for distributed SpMV mainly focus on reordering through graph or hypergraph partitioning. However, although the reordering could reduce the amount of communications in general, there are still load balancing challenges in computations and communications on distributed platforms that are not well addressed. In this paper, we propose two strategies to optimize SpMV on distributed clusters: (1) resizing the number of row blocks on the nodes for balancing the amount of computations, and (2) adjusting the column number of the diagonal blocks for balancing tasks and reducing communications among compute nodes. The experimental results show that compared with the classic distributed SpMV implementation and its variant reordered with graph partitioning, our algorithm achieves on average 77.20x and 5.18x (up to 460.52x and 27.50x) speedups, respectively. Also, our method bring on average 19.56x (up to 48.49x) speedup over a recently proposed hybrid distributed SpMV algorithm. In addition, our algorithm achieves obviously better scalability over these existing distributed SpMV methods.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115314511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
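To make the first strategy concrete, here is a minimal sketch (not the paper's implementation) of assigning contiguous row blocks to compute nodes so that each block carries a roughly equal number of nonzeros; the function name, the CSR row-pointer interface, and the toy matrix are illustrative assumptions.

```python
# Sketch: split CSR rows into contiguous blocks with ~equal nonzeros per node,
# balancing per-node SpMV work. Illustrative only, not the paper's algorithm.
import numpy as np

def balance_row_blocks(row_ptr, num_nodes):
    """row_ptr: CSR row-pointer array of length n_rows + 1.
    Returns a list of (row_begin, row_end) pairs, one per node."""
    nnz_total = row_ptr[-1]
    target = nnz_total / num_nodes          # ideal nonzeros per node
    blocks, start = [], 0
    for node in range(1, num_nodes):
        # first row index whose cumulative nnz reaches node * target
        cut = int(np.searchsorted(row_ptr, node * target))
        cut = max(cut, start + 1)           # keep every block non-empty
        blocks.append((start, cut))
        start = cut
    blocks.append((start, len(row_ptr) - 1))
    return blocks

# Example: a 6-row matrix with skewed nonzero counts per row.
row_nnz = np.array([50, 1, 1, 1, 1, 50])
row_ptr = np.concatenate(([0], np.cumsum(row_nnz)))
print(balance_row_blocks(row_ptr, 2))       # -> [(0, 3), (3, 6)], a 52/52 nnz split
```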
HDFL: A Heterogeneity and Client Dropout-Aware Federated Learning Framework
Syed Zawad, A. Anwar, Yi Zhou, N. Baracaldo, Feng Yan
{"title":"HDFL: A Heterogeneity and Client Dropout-Aware Federated Learning Framework","authors":"Syed Zawad, A. Anwar, Yi Zhou, N. Baracaldo, Feng Yan","doi":"10.1109/CCGrid57682.2023.00037","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00037","url":null,"abstract":"Cross-device Federated Learning (FL) enables training machine learning (ML) models on private data that is heterogeneously distributed over many IoT end devices without violating privacy requirements. Clients typically vary significantly in data quality, hardware resources and stability, which results in challenges such as increased training times, higher resource costs, sub-par model performance and biased training. Existing works tend to address each of these challenges in isolation, but overlook how they might impact each other holistically. We perform a first of its kind characterization study that empirically demonstrates how these properties interact with each other to impact important performance metrics such as model error, fairness, resource cost and training time. We then propose a method called HDFL based on our observations, which is the first framework to our knowledge that comprehensively considers the multiple aforementioned important challenges of practical FL systems. We implement HDFL on a real distributed system and evaluate it on multiple benchmark datasets which show that HDFL achieves better Pareto frontier compared to both the state-of-the-practice and state-of-the-art systems with up to 4-10% better model accuracy, 33% improved good-intent fairness, 63% lower cost, and 17% faster training time.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115548817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
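HDFL's actual selection and scheduling logic is not given in the abstract; the following is a purely hypothetical illustration of the kind of joint reasoning it argues for, scoring clients on data quality, device speed, and dropout risk together rather than in isolation. All names and weights are invented for the sketch.

```python
# Hypothetical client scoring for FL rounds; not HDFL's algorithm.
import random

def score_client(c, w_quality=0.5, w_speed=0.3, w_stability=0.2):
    # Higher is better: quality and speed in [0, 1], dropout_prob in [0, 1].
    return (w_quality * c["quality"]
            + w_speed * c["speed"]
            + w_stability * (1.0 - c["dropout_prob"]))

def select_clients(clients, k):
    return sorted(clients, key=score_client, reverse=True)[:k]

random.seed(0)
clients = [{"id": i,
            "quality": random.random(),
            "speed": random.random(),
            "dropout_prob": random.random()} for i in range(20)]
print([c["id"] for c in select_clients(clients, 5)])
```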
PrivFlow: Secure and Privacy Preserving Serverless Workflows on Cloud
Surabhi Garg, Meena Singh Dilip Thakur, R. A, L. Maddali, Vigneswaran Ramachandran
{"title":"PrivFlow: Secure and Privacy Preserving Serverless Workflows on Cloud","authors":"Surabhi Garg, Meena Singh Dilip Thakur, R. A, L. Maddali, Vigneswaran Ramachandran","doi":"10.1109/CCGrid57682.2023.00049","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00049","url":null,"abstract":"The recent advancement of serverless computing in the widespread deployment of applications prompts the need to protect serverless workflows against cloud vulnerabilities and threats. We propose PrivFlow, a workflow-centric, privacy preserving framework to protect the information flow in serverless computing applications in semi-honest (S-PrivFlow) and malicious (M-PrivFlow) adversarial settings. An Authenticated Data Structure is used to store the valid workflows encoded in the proposed format. The validation of workflows is performed in a privacy preserving manner that leaks no sensitive information to any unauthorized user. We focus on the two most prevalent attacks on the serverless cloud platforms, namely the Denial-of-Wallet and Wrong Function Invocation attacks. We demonstrate that PrivFlow mitigates both of these attacks. Further, we evaluate PrivFlow on the popular benchmark application- Hello Retail, and a customized scaled application. Though the comparison with the state-of-the-art approaches in terms of the runtime performance shows a latency of 1.6 times for S-PrivFlow and 8 times for M-PrivFlow, the PrivFlow provides high security and privacy. PrivFlow acts as a wrapper to the application resulting in no change to the source code.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124456276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
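The abstract does not specify which authenticated data structure PrivFlow uses; as one way such a structure can support validation without revealing the whole set of valid workflows, the sketch below stores hashed workflow encodings in a Merkle tree and verifies membership with a short authentication path. Function names and the example workflows are illustrative.

```python
# Sketch of an authenticated data structure: Merkle-tree membership proofs
# over workflow encodings. Illustrative only, not PrivFlow's construction.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_merkle(leaves):
    """Return the list of tree levels; level 0 holds the hashed leaves."""
    level = [h(l.encode()) for l in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                              # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Collect sibling hashes from leaf to root for the leaf at `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))      # (sibling hash, sibling-is-left)
        index //= 2
    return path

def verify(root, leaf, path):
    node = h(leaf.encode())
    for sibling, sibling_is_left in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

workflows = ["checkout->pay->ship", "browse->cart->checkout", "login->profile"]
levels = build_merkle(workflows)
root = levels[-1][0]
proof = prove(levels, 1)
print(verify(root, "browse->cart->checkout", proof))    # True: valid workflow
print(verify(root, "pay->ship->refund", proof))         # False: not in the set
```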
Overcoming Noisy Labels in Federated Learning Through Local Self-Guiding
Daokuan Bai, Shanshan Wang, Wenyue Wang, Hua Wang, Chuan Zhao, Peng Yuan, Zhenxiang Chen
{"title":"Overcoming Noisy Labels in Federated Learning Through Local Self-Guiding","authors":"Daokuan Bai, Shanshan Wang, Wenyue Wang, Hua Wang, Chuan Zhao, Peng Yuan, Zhenxiang Chen","doi":"10.1109/CCGrid57682.2023.00042","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00042","url":null,"abstract":"Federated Learning (FL) is a privacy-preserving machine learning paradigm that enables clients such as Internet of Things (IoT) devices, and smartphones, to train a high-performance global model jointly. However, in real-world FL deployments, carefully human-annotated labels are expensive and time-consuming. So the presence of incorrect labels (noisy labels) in the local training data of the clients is inevitable, which will cause the performance degradation of the global model. To tackle this problem, we propose a simple but effective method Local Self-Guiding (LSG) to let clients guide themselves during training in the presence of noisy labels. Specifically, LSG keeps the model from memorizing noisy labels by enhancing the confidence of model predictions. Meanwhile, it utilizes the knowledge from local historical models which haven't fit noisy patterns to extract potential ground truth labels of samples. To keep the knowledge without storing models, LSG records the exponential moving average (EMA) of model output logits at different local training epochs as self-ensemble logits on clients' devices, which will lead to negligible computation and storage overhead. Then logit-based knowledge distillation is conducted to guide the local training. Experiments on MNIST, Fashion-MNIST, CIFAR-10, ImageNet-100 with multiple noise levels, and an unbalanced noisy dataset, Clothing1M, demonstrate the resistance of LSG to noisy labels. The code of LSG is available at https://github.com/DaokuanBai/LSG-Main","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128090683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
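The EMA self-ensemble and the logit-based distillation step can be sketched with NumPy as follows; the decay, temperature, and toy data are assumptions, not LSG's actual hyperparameters.

```python
# Sketch: EMA of per-sample logits as a self-ensemble "teacher", plus a
# temperature-scaled KL distillation loss toward it. Illustrative settings.
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def update_ema_logits(ema_logits, current_logits, decay=0.9):
    """Self-ensemble: EMA of model logits recorded across local epochs."""
    if ema_logits is None:
        return current_logits.copy()
    return decay * ema_logits + (1.0 - decay) * current_logits

def distillation_loss(student_logits, ema_logits, T=2.0):
    """KL divergence between the EMA 'teacher' and the current model."""
    p_teacher = softmax(ema_logits, T)
    p_student = softmax(student_logits, T)
    return float(np.mean(np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                                             - np.log(p_student + 1e-12)), axis=1)))

# Toy usage: 4 samples, 3 classes, two local epochs.
rng = np.random.default_rng(0)
ema = None
for epoch in range(2):
    logits = rng.normal(size=(4, 3))
    ema = update_ema_logits(ema, logits)
print(distillation_loss(rng.normal(size=(4, 3)), ema))
```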
WiDual: User Identified Gesture Recognition Using Commercial WiFi
Miaoling Dai, Chenhong Cao, Tong Liu, Meijia Su, Yufeng Li, Jiangtao Li
{"title":"WiDual: User Identified Gesture Recognition Using Commercial WiFi","authors":"Miaoling Dai, Chenhong Cao, Tong Liu, Meijia Su, Yufeng Li, Jiangtao Li","doi":"10.1109/CCGrid57682.2023.00068","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00068","url":null,"abstract":"WiFi-based human gesture recognition has recently enjoyed increasing popularity in the Internet of Things (IoT) scenarios. Simultaneously recognizing user identities and user gestures is of great importance for enhancing the system security and user quality of experience (QoE). State-of-the-art approaches that perform dual tasks suffer from increased latency or degraded accuracy in cross-domain scenarios. In this paper, we present WiDual, a dual-task system that achieves accurate cross-domain gesture recognition and user identification based on WiFi in a real-time manner. The basic idea of WiDual is to use the attention mechanism to adaptively explore cross-domain features worthy of attention for dual tasks. WiDual employs a CSI (Channel Statement Information) visualization method that transfers WiFi signals to images for further feature extraction and model training. In this way, WiDual mitigates the possible loss of useful information and excessive delays caused by extracting handcrafted features directly from the WiFi signal. Furthermore, WiDual utilizes a collaboration module to combine gesture features and user identity features to enhance the performance of dual-task recognition. We implement WiDual and evaluate its performance extensively on a public dataset including 6 gestures and 6 users performed across domains. Results show that WiDual outperforms state-of-the-art approaches, with 26% and 8% improvements on the accuracy of cross-domain user identification and gesture recognition respectively.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"234 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131460157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
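As a rough illustration of CSI visualization, the sketch below min-max normalizes a window of CSI amplitudes (time by subcarriers) into an 8-bit grayscale image; WiDual's actual rendering is not described in the abstract, so the shapes and scaling choices are assumptions.

```python
# Sketch: turn a window of complex CSI samples into a normalized grayscale
# "image" suitable for an image model. Illustrative only.
import numpy as np

def csi_to_image(csi_window):
    """csi_window: complex array of shape (time_steps, subcarriers)."""
    amplitude = np.abs(csi_window)
    lo, hi = amplitude.min(), amplitude.max()
    scaled = (amplitude - lo) / (hi - lo + 1e-9)       # min-max normalize to [0, 1]
    return (scaled * 255).astype(np.uint8)             # 8-bit grayscale image

# Toy CSI window: 128 time steps x 30 subcarriers of random complex samples.
rng = np.random.default_rng(1)
csi = rng.normal(size=(128, 30)) + 1j * rng.normal(size=(128, 30))
img = csi_to_image(csi)
print(img.shape, img.dtype, img.min(), img.max())
```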
ScaMP: Scalable Meta-Parallelism for Deep Learning Search
Quentin G. Anthony, Lang Xu, A. Shafi, H. Subramoni, Dhabaleswar K. Panda
{"title":"ScaMP: Scalable Meta-Parallelism for Deep Learning Search","authors":"Quentin G. Anthony, Lang Xu, A. Shafi, H. Subramoni, Dhabaleswar K. Panda","doi":"10.1109/CCGrid57682.2023.00044","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00044","url":null,"abstract":"Deep Learning (DL) models are growing exponentially and require increasingly powerful High Performance Computing (HPC) systems to train them. Achieving state-of-the-art results requires carefully tuning the DL model architecture and training settings, which is a time-consuming process commonly relegated to distributed search frameworks and trial-and-error. However, search frameworks don't provide a flexible parallelism scheme within and among the chosen DL framework for modern out-of-core DL models. In this paper, we propose Scalable Meta-Parallelism for Deep Learning Search (ScaMP): a distributed Hyperparameter Optimization (HPO) and Neural Architecture Search (NAS) framework that supports out-of-core models with flexible parallelism schemes. SCaMP is integrated into the modern DL ecosystem, and enables both efficient parallel training of concurrent candidate architectures and aggregate device memory saturation via a powerful load balancing engine. SCaMP estimates the memory requirements of each candidate architecture and automatically applies the appropriate model-parallel degree and maximum batch size supported for the given candidate. Further, HPO and NAS with SCaMP are highly customizable via flexible configuration options. We evaluate the benefits of our designs on synthetic training benchmarks and in training a state-of-the-art vision transformer model. We select transformers as a candidate DL model type and demonstrate a 29% improvement in end-to-end HPO time on 32 V100 GPUs on the Lassen and ThetaGPU HPC systems. Further, we demonstrate a reduction in the proportion of NAS time spent in communication from 28% to 15%. Finally, we thoroughly verify the correctness of SCaMP by training a state-of-the-art SwinIR model.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128531352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
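The memory-driven planning step can be illustrated with a rough sketch that estimates per-GPU model memory for a candidate architecture, then picks the smallest model-parallel degree and a maximum batch size that fit; the byte counts and the activation-memory model below are assumptions, not ScaMP's estimator.

```python
# Sketch: choose a model-parallel degree and max batch size from a coarse
# memory estimate. Byte counts (fp16 weights/grads + fp32 Adam states) and the
# per-sample activation cost are illustrative assumptions.
def plan_candidate(num_params, gpu_mem_gb, act_bytes_per_sample, max_degree=16):
    bytes_per_param = 2 + 2 + 12            # fp16 weights + fp16 grads + fp32 Adam states
    gpu_mem = gpu_mem_gb * 1024**3
    for degree in (1, 2, 4, 8, 16):
        if degree > max_degree:
            break
        model_mem = num_params * bytes_per_param / degree
        free_mem = gpu_mem - model_mem
        if free_mem <= 0:
            continue                        # model state alone does not fit at this degree
        max_batch = int(free_mem // act_bytes_per_sample)
        if max_batch >= 1:
            return degree, max_batch
    raise ValueError("candidate does not fit even at the largest parallel degree")

# 1.3B-parameter candidate, 16 GB GPUs, ~200 MB of activations per sample (assumed).
print(plan_candidate(1.3e9, 16, 200 * 1024**2))     # -> (2, 32)
```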
CUDAsap: Statically-Determined Execution Statistics as Alternative to Execution-Based Profiling
Yannick Emonds, Lorenz Braun, H. Fröning
{"title":"CUDAsap: Statically-Determined Execution Statistics as Alternative to Execution-Based Profiling","authors":"Yannick Emonds, Lorenz Braun, H. Fröning","doi":"10.1109/CCGrid57682.2023.00021","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00021","url":null,"abstract":"Today a variety of different GPU types exists, raising questions regarding high-level tasks such as provisioning and scheduling. To predict execution time on different GPU types accurately, we propose a method to obtain execution statistics based on compile-time static code analysis, in which the control flow graph for the code's basic blocks is determined. This graph is represented as an adjacency matrix and used in a system of linear equations to calculate the basic block execution frequencies. Kernel execution itself is not necessary for this analysis. We analyze the proposed method for five different benchmark suites, showing that 76 out of 79 evaluated kernels can be analyzed with an average error of 0.4 %, primarily due to different LLVM versions, with an average prediction time of 203.96 ms. Furthermore, repetitive kernels make memoization effective, and the underlying analysis is largely independent of problem size.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128566983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
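The flow-equation formulation can be illustrated as follows: treat the control flow graph as a matrix of edge probabilities and solve a linear system for the block execution frequencies, with no kernel execution involved. The exact system CUDAsap constructs (for example, how statically known loop trip counts enter it) may differ from this sketch.

```python
# Sketch: basic-block execution frequencies from flow conservation over the CFG.
import numpy as np

def block_frequencies(P, entry):
    """P[i, j]: probability of branching from block i to block j.
    entry[j]: how often block j is entered from outside (1 for the entry block).
    Solves f = entry + P^T f, i.e. (I - P^T) f = entry."""
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - P.T, entry)

# 3-block CFG: B0 -> B1; B1 loops on itself 9 out of 10 times, else exits to B2.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.9, 0.1],
              [0.0, 0.0, 0.0]])
entry = np.array([1.0, 0.0, 0.0])
print(block_frequencies(P, entry))   # [ 1. 10.  1.] -> B1 executes 10 times per launch
```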
COUNSEL: Cloud Resource Configuration Management using Deep Reinforcement Learning
Adithya Hegde, Sameer G. Kulkarni, Abhinandan S. Prasad
{"title":"COUNSEL: Cloud Resource Configuration Management using Deep Reinforcement Learning","authors":"Adithya Hegde, Sameer G. Kulkarni, Abhinandan S. Prasad","doi":"10.1109/CCGrid57682.2023.00035","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00035","url":null,"abstract":"Internet Clouds are essentially service factories that offer various networked services through different service models, viz., Infrastructure, Platform, Software, and Functions as a Service. Meeting the desired service level objectives (SLOs) while ensuring efficient resource utilization requires significant efforts to provision the associated cloud resources correctly and on time. Therefore, one of the critical issues for any cloud service provider is resource configuration management. On one end, i.e., from the cloud operator's perspective, resource management affects overall resource utilization and efficiency. In contrast, from the cloud user/customer perspective, resource configuration affects the performance, cost, and offered SLOs. However, the state-of-the-art solutions for finding the configurations are limited to a single component or handle static workloads. Further, these solutions are computationally expensive and introduce profiling overhead, limiting scalability. Therefore, we propose COUNSEL, a deep reinforcement learning-based framework to handle the dynamic workloads and efficiently manage the configurations of an arbitrary multi-component service. We evaluate COUNSEL with three initial policies: over-provisioning, under-provisioning, and expert provisioning. In all the cases, COUNSEL eliminates the profiling overhead and achieves the average reward between 20 - 60% without violating the SLOs and budget constraints. Moreover, the inference time of COUNSEL has a constant time complexity.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131038569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
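As a much-simplified, hypothetical stand-in for the agent described above, the sketch below uses tabular Q-learning (not deep RL) to pick a resource configuration that is rewarded for meeting a latency SLO at low cost; the environment, reward shape, and configurations are all invented for the illustration.

```python
# Sketch: a bandit-style Q-learning loop over discrete resource configurations.
# Toy latency and cost models; not COUNSEL's agent, state space, or reward.
import random

CONFIGS = [(1, 1), (2, 2), (4, 4), (8, 8)]          # (cpu cores, memory GB)
SLO_MS = 100.0

def simulate_latency(cpu, mem, load):
    return 400.0 * load / (cpu + 0.5 * mem)          # toy latency model

def reward(cpu, mem, load):
    latency = simulate_latency(cpu, mem, load)
    cost = cpu + mem                                 # toy cost model
    return (1.0 if latency <= SLO_MS else -1.0) - 0.02 * cost

Q = [0.0] * len(CONFIGS)
alpha, eps = 0.1, 0.2
random.seed(0)
for step in range(2000):
    load = random.uniform(0.5, 1.5)
    a = random.randrange(len(CONFIGS)) if random.random() < eps else max(
        range(len(CONFIGS)), key=lambda i: Q[i])
    Q[a] += alpha * (reward(*CONFIGS[a], load) - Q[a])   # single-step value update
print(max(range(len(CONFIGS)), key=lambda i: Q[i]), Q)   # cheapest config meeting the SLO
```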
RoUD: Scalable RDMA over UD in Lossy Data Center Networks
Zhiqiang He, Yuxin Chen, Bei Hua
{"title":"RoUD: Scalable RDMA over UD in Lossy Data Center Networks","authors":"Zhiqiang He, Yuxin Chen, Bei Hua","doi":"10.1109/CCGrid57682.2023.00014","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00014","url":null,"abstract":"Remote direct memory access (RDMA) has been widely deployed in data centers due to the lower latency and higher throughput of the kernel TCP/IP stack. However, RDMA still faces a scalability problem including connection scalability and network scalability issues. In this paper, we present RoUD, a userspace network stack that leverages the unreliable datagram (UD) transport mode of RDMA to improve connection scalability. RoUD also eliminates the dependency on PFC in data center networks, thereby enhancing network scalability. RoUdimplements three performance optimizations in the userspace network stack and introduces two types of flow control to avoid packet loss on the host from happening on the host for high performance. We built a prototype of RoUD based on the standard InfiniBand Verbs library. The evaluation results on a testbed with 100 Gbps RNICs show that in the case of large-scale connections its throughput is 1.4× better than the widely used reliable connection (RC) transport.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130593897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
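The abstract does not detail RoUD's two flow-control mechanisms; as a general illustration of how host-side packet loss can be avoided over an unreliable transport, the sketch below shows receiver-credit flow control, where the sender never has more messages outstanding than the receiver has advertised buffers for. The class and message format are invented for the sketch.

```python
# Sketch: receiver-credit flow control to avoid dropping messages on the host.
from collections import deque

class CreditSender:
    def __init__(self, initial_credits):
        self.credits = initial_credits      # receive buffers the peer has posted
        self.pending = deque()              # messages waiting for credit

    def send(self, msg, wire):
        if self.credits > 0:
            self.credits -= 1
            wire.append(msg)                # would be a UD send in practice
        else:
            self.pending.append(msg)        # back-pressure instead of dropping

    def on_credit_return(self, n, wire):
        self.credits += n                   # receiver reposted n buffers
        while self.credits > 0 and self.pending:
            self.send(self.pending.popleft(), wire)

wire = []
tx = CreditSender(initial_credits=2)
for i in range(5):
    tx.send(f"msg{i}", wire)
print(wire, list(tx.pending))               # 2 sent, 3 held back
tx.on_credit_return(3, wire)
print(wire, list(tx.pending))               # all 5 delivered, none dropped
```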
Congestion Minimization using Fog-deployed DRL-Agent Feedback enabled Traffic Light Cooperative Framework
Anuj Sachan, Nisha Singh Chauhan, Neetesh Kumar
{"title":"Congestion Minimization using Fog-deployed DRL-Agent Feedback enabled Traffic Light Cooperative Framework","authors":"Anuj Sachan, Nisha Singh Chauhan, Neetesh Kumar","doi":"10.1109/CCGrid57682.2023.00058","DOIUrl":"https://doi.org/10.1109/CCGrid57682.2023.00058","url":null,"abstract":"Congestion at signalized intersections can be alleviated by improving traffic signal control system's performance. In this context, Deep Reinforcement Learning (DRL) methods are increasingly gaining attention towards collaborative traffic signal control in vehicular networks for improving the traffic-flow. However, the existing collaborative methods lack in accounting the influence of neighbouring intersections traffic while working at a particular junction as built on the top of traditional client-server architecture. To address this, a Fog integrated DRL-based Smart Traffic Light Controller (STLC) cooperative framework is proposed via TCP/IP based communication among Fog node, Road Side Cameras (RSCs) and STLCs at the edge. The significant contributions of this work are: (1) A Fog node integrated DRL agent is proposed to minimize average waiting time and queue length, at the intersection, by generating Cycle Phase Duration (CPD) for the STLC via an appropriate coordination among neighboring intersections; (2) Utilizing the Fog node generated CPD as the feedback, a max-pressure based algorithm is proposed, for the STLC at the edge to improve the congestion at the intersection; (3) The performance of the proposed framework is analyzed on Indian cities OpenStreetMap utilizing the Simulation of Urban MObility (SUMO) simulator by varying arrival rate of the vehicles. The results demonstrate the effectiveness of the method over same line state-of-the-art methods.","PeriodicalId":363806,"journal":{"name":"2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126786386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
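The max-pressure component can be illustrated with the classic formulation: each phase's pressure is the sum, over the movements it allows, of upstream minus downstream queue length, and the highest-pressure phase is served next. The paper's CPD feedback from the fog node is not modeled in this sketch, and the lane and phase names are illustrative.

```python
# Sketch: classic max-pressure phase selection at a single intersection.
def phase_pressure(phase, queues):
    """phase: list of (upstream_lane, downstream_lane) movements it allows."""
    return sum(queues[up] - queues[down] for up, down in phase)

def max_pressure_phase(phases, queues):
    return max(range(len(phases)), key=lambda i: phase_pressure(phases[i], queues))

# Toy intersection: queue length per lane, two phases (north-south vs east-west).
queues = {"N_in": 12, "S_in": 9, "E_in": 4, "W_in": 2,
          "N_out": 1, "S_out": 0, "E_out": 3, "W_out": 5}
phases = [
    [("N_in", "S_out"), ("S_in", "N_out")],   # north-south green
    [("E_in", "W_out"), ("W_in", "E_out")],   # east-west green
]
print(max_pressure_phase(phases, queues))      # 0 -> serve north-south first
```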