2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing Workshops (CCGridW): Latest Articles

Competitor Attack Model for Privacy-Preserving Deep Learning
Dongdong Zhao, Songsong Liao, Huanhuan Li, Jianwen Xiang
DOI: 10.1109/CCGridW59191.2023.00034 · Published: 2023-05-01
Abstract: Since deep learning models usually handle large amounts of data, the ensuing problems of privacy leakage have attracted increasing attention. Although various privacy-preserving deep learning (PPDL) methods have been proposed, they may still risk privacy leakage in some cases. To better investigate the security of existing PPDL methods, we establish a new attack model that may be exploited by competitors of the owners of private data. Specifically, we assume that the competitor holds data from the same domain as the other party's private data, applies the same PPDL method to obtain perturbed data, and then trains a model to invert the perturbation. Data-perturbation-based PPDL methods are selected in four scenarios and their security against the proposed competitor attack model (CAM) is investigated. Experimental results on three public datasets (MNIST, CIFAR10, and LFW) demonstrate that the selected methods tend to be vulnerable to CAM: on average, the recognition accuracy for images reconstructed by CAM is about 10% lower than for the original images, the PSNR exceeds 15 dB, and the outline of the image and other information are visible to the naked eye.
Citations: 0
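The PSNR figure quoted in this abstract can be reproduced with the standard definition. The sketch below is generic (the paper's evaluation code is not shown here) and assumes 8-bit images with a peak value of 255; the noise model standing in for CAM's reconstruction error is purely illustrative.

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a "reconstruction" that deviates slightly per pixel (crude
# stand-in for a CAM output; not the paper's attack).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(28, 28)).astype(np.float64)
noisy = np.clip(img + rng.normal(0, 20, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 1))
```

A PSNR above 15 dB, as reported for CAM, corresponds to a mean squared error small enough that coarse image structure survives.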
Controlling Air Pollution in Data Centers using Green Data Centers
Sweta Dey, Sujata Pal
DOI: 10.1109/CCGridW59191.2023.00043 · Published: 2023-05-01
Abstract: Growing air pollution has become a global threat to the environment, and controlling it is challenging and costly. This paper therefore proposes an air pollution control measure based on green metrics (GMs): we minimize carbon emissions from traditional data centers (DCs) by designing "Green Data Centers" (GDCs). GDCs are a control mechanism comprising a set of green protocols, designed to minimize emissions (e.g., CO and CO2) from traditional DCs. The GDC design also addresses energy consumption, cost-effectiveness, efficient network infrastructures, load-scheduling algorithms, and the number of devices used, such as switches, ports, and line cards. GDCs are constructed with particular attention to idle servers, because idle servers consume massive amounts of energy compared to computing servers. This paper also presents a taxonomy of existing research on GDCs in relation to DCs, covering areas such as cloud computing and cooling techniques. In addition, we discuss various green metrics, green computing, and networking proposals for GDCs.
Citations: 0
Web Services Relocation and Reallocation for Data Residency Compliance
Pankaj Sahu, S. Roy, M. Gharote, S. Lodha
DOI: 10.1109/CCGridW59191.2023.00033 · Published: 2023-05-01
Abstract: Compliance with data residency regulations is a huge challenge for most enterprises and cloud service providers. As new data regulations emerge, enterprises need to intermittently review their web service deployment decisions. To become compliant, enterprises must relocate and reallocate some non-compliant users and web services. Compliance can be achieved through minimal or through relatively large migrations; however, the choice has cost implications. In this paper, we propose an optimization model for the web services relocation and reallocation problem (referred to as WSLAP-Repair). The goal is to achieve minimal reallocations with low additional operational cost while meeting the latency requirements of the concerned web services. We propose a novel heuristic for solving the WSLAP-Repair problem and demonstrate our results on small problem instances using an open-source integer programming solver.
Citations: 0
Importance-driven In situ Analysis and Visualization
M. A. Wani, Preeti Malakar
DOI: 10.1109/CCGridW59191.2023.00073 · Published: 2023-05-01
Abstract: The advent of exascale has raised computing capacity to unprecedented scales, and scientific applications now generate massive amounts of data in a few seconds. However, improvements in memory, I/O, and network bandwidth have been sub-exponential, producing a growing gap between the rates at which data can be generated and consumed. Data is typically analyzed and visualized after the simulation completes; in situ processing instead analyzes and visualizes data as soon as it is generated, often bypassing the disk I/O bottleneck. Analyzing every time step increases the end-to-end simulation-analysis time, yet most works determine the frequency of analysis and visualization without examining the data content, which may cause critical time steps of the simulation to be omitted. We propose improving the simulation-analysis-visualization workflow time by considering the importance of the data: we monitor changes in the data of an ongoing simulation and transfer only the most significant time steps, reducing the data transfer time (by 68%), which is often the bottleneck for in situ analysis.
Citations: 0
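The idea of transferring only time steps whose data has changed significantly can be sketched as follows. The change metric (relative L2 difference against the last transferred step) and the threshold are illustrative choices; the paper's actual importance measure is not given in the abstract.

```python
import numpy as np

def significant_steps(steps, threshold=0.05):
    """Return indices of time steps that differ from the last *transferred*
    step by more than `threshold` (relative L2 change). Step 0 is always kept
    as the baseline."""
    keep = [0]
    last = steps[0]
    for i, s in enumerate(steps[1:], start=1):
        change = np.linalg.norm(s - last) / (np.linalg.norm(last) + 1e-12)
        if change > threshold:
            keep.append(i)   # transfer this step and make it the new baseline
            last = s
    return keep

# A nearly-unchanged step is skipped; a large jump is transferred.
steps = [np.ones(4), 1.01 * np.ones(4), 2.0 * np.ones(4)]
print(significant_steps(steps))
```

Comparing against the last transferred step (rather than the immediately preceding one) prevents slow drift from being silently discarded across many small steps.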
Improving Industry 4.0 Readiness: Monolith Application Refactoring using Graph Attention Networks
Tanisha Rathod, Christina Terese Joseph, J. P. Martin
DOI: 10.1109/CCGridW59191.2023.00046 · Published: 2023-05-01
Abstract: Industry 4.0 utilizes cyber-physical systems to bridge the technological gap in implementing smart manufacturing techniques, encompassing advanced technologies such as artificial intelligence, cloud and edge computing, and augmented reality. Machines need to work in harmony to achieve enhanced speed and productivity, and this harmony can be effectuated by synchronizing machines via APIs that modernize their legacy systems. In other words, the long-standing monolithic frameworks in factory environments must be refactored into microservices. Software systems can be naturally represented as graphs, with software entities and their dependencies portrayed as nodes and edges, respectively, so the refactoring task can be condensed into a graph-based clustering task. This work proposes a novel graph-attention-based network that detects outliers to delineate the top refactoring candidates and recommends clusters of microservices. Industrial microservice benchmarks were used to validate our model. Results show that our graph attention network improves on state-of-the-art performance compared to existing graph-representation-based refactoring techniques.
Citations: 0
Unique Prefix vs. Unique Mask for Minimizing SDN Flows with Transparent Edge Access
Josef Hammer, H. Hellwagner
DOI: 10.1109/CCGridW59191.2023.00085 · Published: 2023-05-01
Abstract: Multi-access Edge Computing (MEC) is a central piece of 5G telecommunication systems and is essential to satisfying the challenging low-latency demands of future applications. MEC provides a cloud computing platform at the edge of the radio access network that developers can utilize for their applications. Our previous publications argue that edge computing should be transparent to clients, and we introduced an efficient solution implementing such a transparent approach by leveraging Software-Defined Networking (SDN) and virtual IP+port addresses for registered edge services. In this work, we introduce the Unique Mask, a solution superior to the Unique Prefix presented in our previous work, which considerably reduces the number of required flows in the switches. Our evaluations show that both algorithms perform very well, with the Unique Mask capable of reducing the number of flows by up to 98%.
Citations: 0
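The flow-minimization idea rests on the fact that SDN switches can match on ternary (value, mask) patterns, so one masked rule can replace many exact-match flows. The sketch below illustrates that mechanism only; it is not the paper's Unique Prefix or Unique Mask algorithm, and the addresses are made up.

```python
def covering_match(addrs):
    """Compute a single ternary (value, mask) rule covering all given 32-bit
    addresses: the mask keeps only the bit positions on which every address
    agrees, so one rule replaces len(addrs) exact-match flows."""
    agree = ~0
    for a in addrs:
        agree &= ~(a ^ addrs[0])          # clear bits where addresses differ
    mask = agree & 0xFFFFFFFF
    return addrs[0] & mask, mask

def matches(addr, value, mask):
    """Ternary match as an SDN switch would apply it."""
    return (addr & mask) == value

# Three virtual service addresses in 10.0.0.0/30 collapse into one rule.
value, mask = covering_match([0x0A000001, 0x0A000002, 0x0A000003])
print(hex(value), hex(mask))
```

Note that such a rule may also match addresses outside the intended set, which is exactly why the assignment of virtual addresses (the subject of the paper) matters for keeping rules both few and precise.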
MUAR: Maximizing Utilization of Available Resources for Query Processing
Mayank Patel, Minal Bhise
DOI: 10.1109/CCGridW59191.2023.00040 · Published: 2023-05-01
Abstract: Processing large datasets requires significant hardware resources and energy. Researchers have observed that most database management systems cannot utilize available resources efficiently, increasing data-to-result time and application running costs. This research explores techniques that maximize the utilization of available resources to efficiently process large datasets on resource-limited systems. The work implemented single- and multiple-resource maximization techniques and observed improvements in total workload execution time (WET). Results showed that combining CPU and RAM maximization techniques can reduce WET by 61-81% compared to the WET observed with the default resource allocation configuration. This work proposes MUAR (Maximizing Utilization of Available Resources), a lightweight real-time resource allocation and task scheduling algorithm. It maximizes the utilization of available resources by considering their real-time availability and workload task complexity: the algorithm identifies complex multi-join queries and allocates the maximum available resources to them for faster execution. MUAR can estimate, with 15-20% error, the work memory value required to achieve the best query performance using data from only a single query run. A comparison of MUAR with machine-learning-based techniques such as PCC and AutoToken is also presented.
Citations: 0
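The core allocation idea described above can be sketched as a simple policy: score a query's complexity (here crudely, by join count) and hand complex queries most of the currently free work memory. All names, thresholds, and the 80% fraction below are hypothetical illustrations, not MUAR's actual rules.

```python
def join_count(sql: str) -> int:
    """Crude complexity proxy: number of JOIN keywords in the query text."""
    return sql.upper().count(" JOIN ")

def allocate_work_mem(sql: str, free_mem_mb: int,
                      default_mb: int = 64, complex_joins: int = 3) -> int:
    """Give complex multi-join queries most of the free memory; give simple
    queries only the default allocation (illustrative policy)."""
    if join_count(sql) >= complex_joins:
        return max(default_mb, int(free_mem_mb * 0.8))  # near-maximum allocation
    return min(default_mb, free_mem_mb)

q = "SELECT * FROM a JOIN b ON ... JOIN c ON ... JOIN d ON ..."
print(allocate_work_mem(q, free_mem_mb=4096))  # complex query gets ~80% of free RAM
```

In a real system the complexity score would come from the optimizer's plan (join order, estimated rows) rather than keyword counting, but the shape of the decision is the same.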
To Think Like a Vertex (or Not) for Distributed Training of Graph Neural Networks
Varad Kulkarni, Akarsh Chaturvedi, Pranjal Naman, Yogesh L. Simmhan
DOI: 10.1109/CCGridW59191.2023.00082 · Published: 2023-05-01
Abstract: Graph Neural Networks (GNNs) train neural networks that combine the topological properties of a graph with vertex and edge features to perform tasks such as node classification and link prediction. We propose a novel middleware that approaches GNN training from the perspective of a vertex-centric model (VCM) of distributed graph processing and overlays neural network training on it. Giraph Graph Neural Network (G2N2) uses a three-phase execution pattern, constructing a distributed computation graph per mini-batch and mapping the forward and backward passes of GNN training onto the VCM. We implement a prototype of G2N2 in Apache Giraph and report results from a preliminary evaluation using two real-world graphs on a commodity cluster.
Citations: 0
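The vertex-centric framing of a GNN layer can be sketched in a few lines: each vertex "sends" its feature vector along its out-edges, then every vertex aggregates incoming messages and applies a shared transformation. This is a generic GCN-style step written in VCM style for illustration, not G2N2's Giraph implementation.

```python
import numpy as np

def vcm_gnn_layer(adj, X, W):
    """One GNN layer, vertex-centric style. `adj` is an out-adjacency list,
    `X` the (n, d_in) feature matrix, `W` a shared (d_in, d_out) weight.
    Each vertex sends X[u] to its neighbours; vertices sum incoming messages
    and apply W followed by ReLU."""
    agg = np.zeros_like(X)
    for u, neighbours in enumerate(adj):   # "think like a vertex": local sends
        for v in neighbours:
            agg[v] += X[u]                 # message from u delivered to v
    return np.maximum(agg @ W, 0.0)        # combine phase + nonlinearity

# Tiny path graph 0 -> 1 -> {0, 2} with one-hot features.
adj = [[1], [0, 2], []]
out = vcm_gnn_layer(adj, np.eye(3), np.eye(3))
print(out)
```

Mapping the backward pass onto the same send/aggregate pattern (gradients flowing along reversed edges) is the part the middleware above automates across workers.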
Scalable, High-Quality Scheduling of Data Center Workloads
Meghana Thiyyakat, Subramaniam Kalambur, D. Sitaram
DOI: 10.1109/CCGridW59191.2023.00079 · Published: 2023-05-01
Abstract: Data center schedulers must make complex tradeoffs and optimizations to achieve their scalability and scheduling-quality goals. Scalability refers to the scheduler's ability to support both the increasing scale of workloads (in terms of arrival rate and resource demand) and the infrastructure these workloads require. Scheduling quality refers to the scheduler's ability to meet users' performance requirements (such as latency guarantees and placement constraints) without compromising the data center's resource utilization. We propose two solutions for achieving scalable, high-quality scheduling. The first is Megha, a decentralized, federated scheduling framework that uses an eventually-consistent global state to make fast, high-quality scheduling decisions for data centers with tens of thousands of nodes. The second is an intra-node scheduling technique called Niyama, which provides robust CPU bandwidth isolation for latency-sensitive tasks, protecting them from interference from co-located tasks. Both frameworks have been evaluated using workloads generated from publicly available cluster traces, and the results show significant improvements over existing state-of-the-art solutions.
Citations: 0
ScaMP: Scalable Meta-Parallelism for Deep Learning Search
Quentin G. Anthony, Lang Xu, A. Shafi, H. Subramoni, Dhabaleswar K. Panda
DOI: 10.1109/CCGridW59191.2023.00080 · Published: 2023-05-01
Abstract: In this paper, we propose ScaMP (Scalable Meta-Parallelism for Deep Learning Search): a distributed Hyperparameter Optimization (HPO) and Neural Architecture Search (NAS) framework that supports out-of-core models with flexible parallelism schemes. ScaMP is integrated into the modern DL ecosystem and enables both efficient parallel training of concurrent candidate architectures and aggregate device memory saturation via a powerful load balancing engine. ScaMP estimates the memory requirements of each candidate architecture and automatically applies the appropriate model-parallel degree and maximum supported batch size for the given candidate. We evaluate the benefits of our designs on synthetic training benchmarks and in training a state-of-the-art vision transformer model. We select transformers as a candidate DL model type and demonstrate a 29% improvement in end-to-end HPO time on 32 V100 GPUs on the Lassen and ThetaGPU HPC systems. Further, we demonstrate a reduction in the proportion of NAS time spent in communication from 28% to 15%. Finally, we thoroughly verify the correctness of ScaMP by training a state-of-the-art SwinIR model.
Citations: 0
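The kind of memory estimate a framework like this makes can be sketched with the usual mixed-precision Adam accounting: roughly 16 bytes per parameter (fp16 weights 2 + fp16 gradients 2 + fp32 master copy 4 + fp32 momentum 4 + fp32 variance 4). This is a generic back-of-the-envelope sketch, not ScaMP's actual estimator, and the headroom fraction is an assumption.

```python
def training_mem_gb(n_params: float, bytes_per_param: int = 16) -> float:
    """Optimizer/parameter state for mixed-precision Adam: fp16 params (2) +
    fp16 grads (2) + fp32 master, momentum, variance (4 + 4 + 4) = 16 B/param."""
    return n_params * bytes_per_param / 1024**3

def model_parallel_degree(n_params: float, gpu_mem_gb: float = 32.0,
                          activation_headroom: float = 0.5) -> int:
    """Smallest power-of-two GPU count whose pooled memory fits the training
    state, keeping a fraction of each GPU free for activations (assumed 50%)."""
    usable = gpu_mem_gb * (1.0 - activation_headroom)
    degree = 1
    while training_mem_gb(n_params) / degree > usable:
        degree *= 2
    return degree

print(model_parallel_degree(1.3e9))  # 1.3B-param candidate on 32 GB V100s
```

An estimate like this lets the search framework pick a model-parallel degree per candidate before launching it, so large candidates shard across GPUs while small ones train unsharded.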