2023 IEEE International Conference on Edge Computing and Communications (EDGE): Latest Publications

Fault Tolerant Horizontal Computation Offloading
2023 IEEE International Conference on Edge Computing and Communications (EDGE) Pub Date : 2023-05-24 DOI: 10.1109/EDGE60047.2023.00036
Alexander Droob, Daniel Morratz, Frederik Langkilde Jakobsen, Jacob Carstensen, Magnus Mathiesen, Rune Bohnstedt, M. Albano, Sergio Moreschini, D. Taibi
{"title":"Fault Tolerant Horizontal Computation Offloading","authors":"Alexander Droob, Daniel Morratz, Frederik Langkilde Jakobsen, Jacob Carstensen, Magnus Mathiesen, Rune Bohnstedt, M. Albano, Sergio Moreschini, D. Taibi","doi":"10.1109/EDGE60047.2023.00036","DOIUrl":"https://doi.org/10.1109/EDGE60047.2023.00036","url":null,"abstract":"The broad development and usage of edge devices has highlighted the importance of creating resilient and computationally advanced edge-to-cloud continuum environments. When working with edge devices these desiderata are usually achieved through replication and offloading. This paper reports on the design and implementation of a fault-tolerant service that enables the offloading of jobs from devices with limited computational power. We propose a solution that allows users to upload jobs through a web service, which will be executed on edge nodes within the system. The solution is designed to be fault tolerant and scalable, with no single point of failure as well as the ability to accommodate growth, if the service is expanded. The use of Docker checkpointing on the worker machines ensures that jobs can be resumed in the event of a fault. We provide a mathematical approach to optimize the number of checkpoints that are created along a computation, given that we can forecast the time needed to execute a job. We present experiments that indicate in which scenarios checkpointing benefits job execution. Our experiments shows the benefits of using checkpointing and restore when the completion jobs’ time rises compared with the forecast fault rate.","PeriodicalId":369407,"journal":{"name":"2023 IEEE International Conference on Edge Computing and Communications (EDGE)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121933497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
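The abstract's checkpoint-placement optimization is not spelled out above. As a point of reference only, the sketch below applies the classical Young approximation for the optimal checkpoint interval, which addresses the same trade-off between checkpoint overhead and recomputation after a fault; the function names and parameter values are illustrative assumptions, not the authors' formulation.

```python
import math

def optimal_checkpoint_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Young's approximation: the interval that balances checkpoint overhead
    against the expected re-computation lost to a failure."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def checkpoint_count(job_runtime_s: float, checkpoint_cost_s: float, mtbf_s: float) -> int:
    """Number of checkpoints to place along a job whose runtime we can forecast.
    Checkpointing only pays off when the job is long relative to the MTBF."""
    interval = optimal_checkpoint_interval(checkpoint_cost_s, mtbf_s)
    return max(0, math.ceil(job_runtime_s / interval) - 1)

# Example: a forecast 2-hour job, 30 s per Docker checkpoint, one fault per day on average.
print(checkpoint_count(7200, 30, 86400))  # -> 3 checkpoints, roughly 38 min apart
```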
LightESD: Fully-Automated and Lightweight Anomaly Detection Framework for Edge Computing
2023 IEEE International Conference on Edge Computing and Communications (EDGE) Pub Date : 2023-05-20 DOI: 10.1109/EDGE60047.2023.00032
Ronit Das, Tie Luo
{"title":"LightESD: Fully-Automated and Lightweight Anomaly Detection Framework for Edge Computing","authors":"Ronit Das, Tie Luo","doi":"10.1109/EDGE60047.2023.00032","DOIUrl":"https://doi.org/10.1109/EDGE60047.2023.00032","url":null,"abstract":"Anomaly detection is widely used in a broad range of domains from cybersecurity to manufacturing, finance, and so on. Deep learning based anomaly detection has recently drawn much attention because of its superior capability of recognizing complex data patterns and identifying outliers accurately. However, deep learning models are typically iteratively optimized in a central server with input data gathered from edge devices, and such data transfer between edge devices and the central server impose substantial overhead on the network and incur additional latency and energy consumption. To overcome this problem, we propose a fully-automated, lightweight, statistical learning based anomaly detection framework called LightESD. It is an on-device learning method without the need for data transfer between edge and server, and is extremely lightweight that most low-end edge devices can easily afford with negligible delay, CPU/memory utilization, and power consumption. Yet, it achieves highly competitive detection accuracy. Another salient feature is that it can auto-adapt to probably any dataset without manually setting or configuring model parameters or hyperparameters, which is a drawback of most existing methods. We focus on time series data due to its pervasiveness in edge applications such as IoT. Our evaluation demonstrates that LightESD outperforms other SOTA methods on detection accuracy, efficiency, and resource consumption. Additionally, its fully automated feature gives it another competitive advantage in terms of practical usability and generalizability.","PeriodicalId":369407,"journal":{"name":"2023 IEEE International Conference on Edge Computing and Communications (EDGE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130777299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
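LightESD's name references the (generalized) Extreme Studentized Deviate test. The paper's actual pipeline is not reproduced here; the sketch below is the textbook generalized ESD test (Rosner, 1983) applied to a 1-D series, to illustrate the statistical core such a framework can build on. SciPy availability and the `max_anomalies` cap are assumptions of this sketch.

```python
import numpy as np
from scipy import stats

def generalized_esd(x, max_anomalies, alpha=0.05):
    """Textbook generalized ESD test (Rosner, 1983): return the indices of
    detected outliers in a roughly normal 1-D series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mask = np.ones(n, dtype=bool)          # points still in the sample
    candidates, num_outliers = [], 0
    for i in range(1, max_anomalies + 1):
        vals = x[mask]
        dev = np.abs(vals - vals.mean()) / vals.std(ddof=1)
        j = np.flatnonzero(mask)[np.argmax(dev)]   # most extreme remaining point
        R = dev.max()
        candidates.append(j)
        mask[j] = False
        # critical value lambda_i from the t distribution
        m = n - i
        t = stats.t.ppf(1 - alpha / (2 * (m + 1)), m - 1)
        lam = m * t / np.sqrt((m - 1 + t**2) * (m + 1))
        if R > lam:
            num_outliers = i               # largest i with R_i > lambda_i wins
    return candidates[:num_outliers]

# Example: a noisy signal with two injected spikes.
rng = np.random.default_rng(0)
series = rng.normal(0, 1, 200)
series[[50, 120]] = [9.0, -8.0]
print(generalized_esd(series, max_anomalies=5))  # detects the two spikes
```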
AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing
2023 IEEE International Conference on Edge Computing and Communications (EDGE) Pub Date : 2023-05-17 DOI: 10.1109/EDGE60047.2023.00045
Hadjer Benmeziane, C. Lammie, I. Boybat, M. Rasch, M. L. Gallo, H. Tsai, R. Muralidhar, S. Niar, Hamza Ouarnoughi, V. Narayanan, A. Sebastian, Kaoutar El Maghraoui
{"title":"AnalogNAS: A Neural Network Design Framework for Accurate Inference with Analog In-Memory Computing","authors":"Hadjer Benmeziane, C. Lammie, I. Boybat, M. Rasch, M. L. Gallo, H. Tsai, R. Muralidhar, S. Niar, Hamza Ouarnoughi, V. Narayanan, A. Sebastian, Kaoutar El Maghraoui","doi":"10.1109/EDGE60047.2023.00045","DOIUrl":"https://doi.org/10.1109/EDGE60047.2023.00045","url":null,"abstract":"The advancement of Deep Learning (DL) is driven by efficient Deep Neural Network (DNN) design and new hardware accelerators. Current DNN design is primarily tailored for general-purpose use and deployment on commercially viable platforms. Inference at the edge requires low latency, compact and power-efficient models, and must be cost-effective. Digital processors based on typical von Neumann architectures are not conducive to edge AI given the large amounts of required data movement in and out of memory. Conversely, analog/mixed-signal in-memory computing hardware accelerators can easily transcend the memory wall of von Neuman architectures when accelerating inference workloads. They offer increased area-and power efficiency, which are paramount in edge resource-constrained environments. In this paper, we propose AnalogNAS, a framework for automated DNN design targeting deployment on analog In-Memory Computing (IMC) inference accelerators. We conduct extensive hardware simulations to demonstrate the performance of AnalogNAS on State-Of-The-Art (SOTA) models in terms of accuracy and deployment efficiency on various Tiny Machine Learning (TinyML) tasks. We also present experimental results that show AnalogNAS models achieving higher accuracy than SOTA models when implemented on a 64-core IMC chip based on Phase Change Memory (PCM). The AnalogNAS search code is released1","PeriodicalId":369407,"journal":{"name":"2023 IEEE International Conference on Edge Computing and Communications (EDGE)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128983900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
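AnalogNAS's released search code is not reproduced here. As a hedged illustration of the search style that hardware-aware NAS frameworks commonly use, the toy loop below runs a regularized-evolution search over a made-up architecture encoding; the search space, the `evaluate_on_imc_simulator` objective, and all constants are placeholders of this sketch, not the AnalogNAS API.

```python
import random

# Toy search space: (depth, width multiplier, kernel size). A placeholder
# encoding, not AnalogNAS's actual space.
SPACE = {"depth": [8, 12, 16, 20], "width": [0.5, 0.75, 1.0], "kernel": [3, 5, 7]}

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(arch):
    child = dict(arch)
    k = random.choice(list(SPACE))
    child[k] = random.choice(SPACE[k])
    return child

def evaluate_on_imc_simulator(arch):
    """Stand-in for a hardware-aware objective: in AnalogNAS this role is
    played by accuracy estimated under analog noise and drift. Here it is
    a made-up smooth function so the example runs end to end."""
    return arch["depth"] * arch["width"] - 0.1 * arch["kernel"] ** 2

def evolutionary_search(iters=200, population=16):
    pop = [sample() for _ in range(population)]
    for _ in range(iters):
        parent = max(random.sample(pop, 4), key=evaluate_on_imc_simulator)
        pop.append(mutate(parent))
        pop.pop(0)  # age-based removal, as in regularized evolution
    return max(pop, key=evaluate_on_imc_simulator)

print(evolutionary_search())
```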
Transfer-Once-For-All: AI Model Optimization for Edge
2023 IEEE International Conference on Edge Computing and Communications (EDGE) Pub Date : 2023-03-27 DOI: 10.1109/EDGE60047.2023.00017
Achintya Kundu, L. Wynter, Rhui Dih Lee, L. A. Bathen
{"title":"Transfer-Once-For-All: AI Model Optimization for Edge","authors":"Achintya Kundu, L. Wynter, Rhui Dih Lee, L. A. Bathen","doi":"10.1109/EDGE60047.2023.00017","DOIUrl":"https://doi.org/10.1109/EDGE60047.2023.00017","url":null,"abstract":"Weight-sharing neural architecture search aims to optimize a configurable neural network model (supernet) for a variety of deployment scenarios across many devices with different resource constraints. Existing approaches use evolutionary search to extract models of different sizes from a supernet trained on a very large data set, and then fine-tune the extracted models on the typically small, real-world data set of interest. The computational cost of training thus grows linearly with the number of different model deployment scenarios. Hence, we propose Transfer-Once-For-All (TOFA) for supernet-style training on small data sets with constant computational training cost over any number of edge deployment scenarios. Given a task, TOFA obtains custom neural networks, both the topology and the weights, optimized for any number of edge deployment scenarios. To overcome the challenges arising from small data, TOFA utilizes a unified semi-supervised training loss to simultaneously train all subnets within the supernet, coupled with on-the-fly architecture selection at deployment time.","PeriodicalId":369407,"journal":{"name":"2023 IEEE International Conference on Edge Computing and Communications (EDGE)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116932840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
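TOFA's unified semi-supervised loss is named but not specified in the abstract. The sketch below, in PyTorch, shows one plausible shape for such a loss: cross-entropy on the small labeled set plus a distillation-style consistency term on unlabeled data, with a larger subnet acting as teacher. The weighting, temperature, and teacher choice are assumptions, not the published TOFA formulation.

```python
import torch
import torch.nn.functional as F

def unified_semisupervised_loss(subnet_logits_lab, labels,
                                subnet_logits_unlab, teacher_logits_unlab,
                                alpha=0.5, temperature=2.0):
    """Sketch of a unified loss for training subnets of a supernet:
    supervised cross-entropy on the small labeled set, plus a soft
    consistency (distillation) term on unlabeled data. The alpha weight
    and teacher (e.g., the largest subnet) are assumptions."""
    sup = F.cross_entropy(subnet_logits_lab, labels)
    t = temperature
    consistency = F.kl_div(
        F.log_softmax(subnet_logits_unlab / t, dim=-1),
        F.softmax(teacher_logits_unlab.detach() / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    return sup + alpha * consistency

# Example with random tensors standing in for subnet/teacher outputs.
lab, unlab, classes = 8, 32, 10
loss = unified_semisupervised_loss(
    torch.randn(lab, classes), torch.randint(0, classes, (lab,)),
    torch.randn(unlab, classes), torch.randn(unlab, classes))
print(loss.item())
```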
Context-Aware Task Handling in Resource-Constrained Robots with Virtualization
2023 IEEE International Conference on Edge Computing and Communications (EDGE) Pub Date : 2021-04-09 DOI: 10.1109/EDGE60047.2023.00047
Ramyad Hadidi, Nima Shoghi Ghaleshahi, Bahar Asgari, Hyesoon Kim
{"title":"Context-Aware Task Handling in Resource-Constrained Robots with Virtualization","authors":"Ramyad Hadidi, Nima Shoghi Ghaleshahi, Bahar Asgari, Hyesoon Kim","doi":"10.1109/EDGE60047.2023.00047","DOIUrl":"https://doi.org/10.1109/EDGE60047.2023.00047","url":null,"abstract":"Intelligent mobile robots are critical in several scenarios. However, as their computational resources are limited, mobile robots struggle to handle several tasks concurrently while guaranteeing real timeliness. To address this challenge and improve the real-timeliness of critical tasks under resource constraints, we propose a fast context-aware task handling technique. To effectively handle tasks in real-time, our proposed context-aware technique comprises three main ingredients: (i) a dynamic time-sharing mechanism, coupled with (ii) an event-driven task scheduling using reactive programming paradigm to mindfully use the limited resources; and, (iii) a lightweight virtualized execution to easily integrate functionalities and their dependencies. We showcase our technique on a Raspberry-Pi-based robot with a variety of tasks such as Simultaneous localization and mapping (SLAM), sign detection, and speech recognition with a 42% speedup in total execution time compared to the common Linux scheduler.","PeriodicalId":369407,"journal":{"name":"2023 IEEE International Conference on Edge Computing and Communications (EDGE)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128695397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
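The ingredient list above maps naturally onto an event-driven, priority-based run loop. The sketch below is a minimal stand-in for ingredient (ii) only; the priorities, task names, and time budget are invented, and the paper's dynamic time-sharing and containerized execution are not modeled.

```python
import heapq
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass(order=True)
class Task:
    priority: int                                   # lower value = more critical
    submitted: float = field(compare=False)
    run: Callable[[], None] = field(compare=False)

class EventDrivenScheduler:
    """Toy event-driven, priority-based scheduler: critical tasks (e.g., a
    SLAM update) are dequeued before background tasks (e.g., speech
    recognition). A sketch of the idea only."""
    def __init__(self):
        self._queue = []

    def submit(self, priority, fn):
        heapq.heappush(self._queue, Task(priority, time.time(), fn))

    def run_pending(self, budget_s):
        # Drain the queue in priority order within a fixed time budget.
        deadline = time.time() + budget_s
        while self._queue and time.time() < deadline:
            heapq.heappop(self._queue).run()

sched = EventDrivenScheduler()
sched.submit(5, lambda: print("speech recognition"))  # background
sched.submit(0, lambda: print("SLAM update"))         # critical, runs first
sched.run_pending(budget_s=0.1)
```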
Reducing Inference Latency with Concurrent Architectures for Image Recognition at Edge
2023 IEEE International Conference on Edge Computing and Communications (EDGE) Pub Date : 2020-11-13 DOI: 10.1109/EDGE60047.2023.00046
Ramyad Hadidi, Jiashen Cao, M. Ryoo, Hyesoon Kim
{"title":"Reducing Inference Latency with Concurrent Architectures for Image Recognition at Edge","authors":"Ramyad Hadidi, Jiashen Cao, M. Ryoo, Hyesoon Kim","doi":"10.1109/EDGE60047.2023.00046","DOIUrl":"https://doi.org/10.1109/EDGE60047.2023.00046","url":null,"abstract":"Satisfying the high computation demand of modern deep learning architectures is challenging for achieving low inference latency. The current approaches in decreasing latency only increase parallelism within a layer. This is because architectures typically capture a single-chain dependency pattern that prevents efficient distribution with a higher concurrency (i.e., simultaneous execution of one inference among devices). Such single-chain dependencies are so widespread that even implicitly biases recent neural architecture search (NAS) studies. In this visionary paper, we draw attention to an entirely new space of NAS that relaxes the single-chain dependency to provide higher concurrency and distribution opportunities. To quantitatively compare these architectures, we propose a score that encapsulates crucial metrics such as communication, concurrency, and load balancing. Additionally, we propose a new generator and transformation block that consistently deliver superior architectures compared to current state-of-the-art methods. Finally, our preliminary results show that these new architectures reduce the inference latency and deserve more attention.","PeriodicalId":369407,"journal":{"name":"2023 IEEE International Conference on Edge Computing and Communications (EDGE)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126250702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
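The abstract names the score's ingredients (communication, concurrency, load balancing) without giving a formula. The sketch below is a hypothetical weighted composite over those three quantities, useful only to show how candidate architectures could be ranked; the weights and normalizations are invented for illustration and are not the paper's score.

```python
def concurrency_score(comm_bytes, parallel_branches, branch_flops,
                      w_comm=1.0, w_par=1.0, w_bal=1.0):
    """Hypothetical composite score for a distributed architecture:
    reward concurrent branches, reward balanced branch workloads, and
    penalize inter-device communication. Not the paper's formula."""
    balance = min(branch_flops) / max(branch_flops)   # 1.0 = perfectly balanced
    return (w_par * parallel_branches
            + w_bal * balance
            - w_comm * comm_bytes / 1e6)              # MB of activation traffic

# Compare a single-chain design against a 4-branch design (toy numbers).
print(concurrency_score(0.5e6, 1, [8e9]))                      # -> 1.5
print(concurrency_score(2.0e6, 4, [2e9, 2e9, 1.5e9, 2.5e9]))   # -> 2.6
```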