{"title":"Guest Editorial IEEE Transactions on Emerging Topics in Computing Special Section on Advances in Emerging Privacy-Preserving Computing","authors":"Jinguang Han;Patrick Schaumont;Willy Susilo","doi":"10.1109/TETC.2024.3374568","DOIUrl":"https://doi.org/10.1109/TETC.2024.3374568","url":null,"abstract":"Machine learning and cloud computing have dramatically increased the utility of data. These technologies facilitate our life and provide smart and intelligent services. Notably, machine learning algorithms need to learn from massive training data to improve accuracy. Hence, data is the core component of machine learning and plays an important role. Cloud computing is a new computing model that provides on-demand services, such as data storage, computing power, and infrastructure. Data owners are allowed to outsource their data to cloud servers, but will lose direct control of their data. The rising trend in data breach shows that privacy and security have been major issues in machine learning and cloud computing.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 1","pages":"266-268"},"PeriodicalIF":5.9,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10474207","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140164101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IEEE Transactions on Emerging Topics in Computing Information for Authors","authors":"","doi":"10.1109/TETC.2024.3377773","DOIUrl":"https://doi.org/10.1109/TETC.2024.3377773","url":null,"abstract":"","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 1","pages":"C2-C2"},"PeriodicalIF":5.9,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10474198","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140161123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Design Framework for Hardware-Efficient Logarithmic Floating-Point Multipliers","authors":"Tingting Zhang;Zijing Niu;Jie Han","doi":"10.1109/TETC.2024.3365650","DOIUrl":"10.1109/TETC.2024.3365650","url":null,"abstract":"The symbiotic use of logarithmic approximation in floating-point (FP) multiplication can significantly reduce the hardware complexity of a multiplier. However, it is difficult for a limited number of logarithmic FP multipliers (LFPMs) to fit in a specific error-tolerant application, such as neural networks (NNs) and digital signal processing, due to their unique error characteristics. This article proposes a design framework for generating LFPMs. We consider two FP representation formats with different ranges of mantissas, the IEEE 754 Standard FP Format and the Nearest Power of Two FP Format. For both logarithm and anti-logarithm computation, the applicable regions of inputs are first evenly divided into several intervals, and then approximation methods with negative or positive errors are developed for each sub-region. By using piece-wise functions, different configurations of approximation methods throughout applicable regions are created, leading to LFPMs with various trade-offs between accuracy and hardware cost. The variety of error characteristics of LFPMs is discussed and the generic hardware implementation is illustrated. As case studies, two LFPM designs are presented and evaluated in applications of JPEG compression and NNs. They do not only increase the classification accuracy, but also achieve smaller PDPs compared to the exact FP multiplier, while being more accurate than a recent logarithmic FP design.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 4","pages":"991-1001"},"PeriodicalIF":5.1,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139947917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MiniFloats on RISC-V Cores: ISA Extensions With Mixed-Precision Short Dot Products","authors":"Luca Bertaccini;Gianna Paulin;Matheus Cavalcante;Tim Fischer;Stefan Mach;Luca Benini","doi":"10.1109/TETC.2024.3365354","DOIUrl":"10.1109/TETC.2024.3365354","url":null,"abstract":"Low-precision floating-point (FP) formats have recently been intensely investigated in the context of machine learning inference and training applications. While 16-bit formats are already widely used, 8-bit FP data types have lately emerged as a viable option for neural network training when employed in a mixed-precision scenario and combined with rounding methods increasing the precision in compound additions, such as stochastic rounding. So far, hardware implementations supporting FP8 are mostly implemented within domain-specific accelerators. We propose two RISC-V instruction set architecture (ISA) extensions, enhancing respectively scalar and vector general-purpose cores with low and mixed-precision capabilities. The extensions support two 8-bit and two 16-bit FP formats and are based on dot-product instructions accumulating at higher precision. We develop a hardware unit supporting mixed-precision dot products and stochastic rounding and integrate it into an open-source floating-point unit (FPU). Finally, we integrate the enhanced FPU into a cluster of scalar cores, as well as a cluster of vector cores, and implement them in a 12 nm FinFET technology. The former achieves 575 GFLOPS/W on FP8-to-FP16 matrix multiplications at 0.8 V, 1.26 GHz; the latter reaches 860 GFLOPS/W at 0.8 V, 1.08 GHz, 1.93x higher efficiency than computing on FP16-to-FP32.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 4","pages":"1040-1055"},"PeriodicalIF":5.1,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139948120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive Task Migration in Multiplex Networked Industrial Chains","authors":"Kai Di;Fulin Chen;Yuanshuang Jiang;Pan Li;Tianyi Liu;Yichuan Jiang","doi":"10.1109/TETC.2024.3364703","DOIUrl":"10.1109/TETC.2024.3364703","url":null,"abstract":"In recent years, the cooperation structures of industrial chains have evolved into multiplex networks, in which product agents are connected through various types of links. Due to the constraints of the multi-coupled interaction structure of the multiplex networked industrial chains, the load imbalances generated by the industrial production processes will cascade in and between different network layers, thus affecting the load balance of the whole system. The challenges that arise when attempting such load balancing among multiplex networked industrial chains are twofold: 1) The multiplex networked interaction structure adds new constraints to traditional multiagent task migration problems, which increases the solution space dimension, and 2) The cascaded load imbalances require tasks to be migrated adaptively, which complicates the solution space structure, and it is proven \u0000<inline-formula><tex-math>$mathcal {NP}$</tex-math></inline-formula>\u0000-hard to achieve such load balancing. Then, a hierarchical cascade-triggered task migration algorithm is designed, where key agents are selected to cooperate with each other in a hierarchical control form to achieve load balancing between network layers, and appropriate agents are cascade-triggered to migrate tasks adaptively to achieve load balancing in network layers. Finally, the algorithm is extensively evaluated in experiments, concluding that it can significantly increase the resulting utility and task completion proportion, while efficiently reducing the task completion cost. In particular, the algorithm does not appear to be statistically different in the resulting optimization objectives from the optimal result computed by the CPLEX solver, but it may consume less runtime.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 4","pages":"1025-1039"},"PeriodicalIF":5.1,"publicationDate":"2024-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139947910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Engravings, Secrets, and Interpretability of Neural Networks","authors":"Nathaniel Hobbs;Periklis A. Papakonstantinou;Jaideep Vaidya","doi":"10.1109/TETC.2024.3358759","DOIUrl":"10.1109/TETC.2024.3358759","url":null,"abstract":"This work proposes a definition and examines the problem of undetectably engraving special input/output information into a Neural Network (NN). Investigation of this problem is significant given the ubiquity of neural networks and society's reliance on their proper training and use. We systematically study this question and provide (1) definitions of security for secret engravings, (2) machine learning methods for the construction of an engraved network, (3) a threat model that is instantiated with state-of-the-art interpretability methods to devise distinguishers/attackers. In this work, there are two kinds of algorithms. First, the constructions of engravings through machine learning training methods. Second, the distinguishers associated with the threat model. The weakest of our engraved NN constructions are insecure and can be broken by our distinguishers, whereas other, more systematic engravings are resilient to each of our distinguishing attacks on three prototypical image classification datasets. Our threat model is of independent interest, as it provides a concrete quantification/benchmark for the “goodness” of interpretability methods.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 4","pages":"1093-1104"},"PeriodicalIF":5.1,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139947921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Personalized Privacy-Preserving Framework for Cross-Silo Federated Learning","authors":"Van-Tuan Tran;Huy-Hieu Pham;Kok-Seng Wong","doi":"10.1109/TETC.2024.3356068","DOIUrl":"10.1109/TETC.2024.3356068","url":null,"abstract":"Federated learning (FL) is recently surging as a promising decentralized deep learning (DL) framework that enables DL-based approaches trained collaboratively across clients without sharing private data. However, in the context of the central party being active and dishonest, the data of individual clients might be perfectly reconstructed, leading to the high possibility of sensitive information being leaked. Moreover, FL also suffers from the nonindependent and identically distributed (non-IID) data among clients, resulting in the degradation in the inference performance on local clients’ data. In this paper, we propose a novel framework, namely Personalized Privacy-Preserving Federated Learning (PPPFL), with a concentration on cross-silo FL to overcome these challenges. Specifically, we introduce a stabilized variant of the Model-Agnostic Meta-Learning (MAML) algorithm to collaboratively train a global initialization from clients’ synthetic data generated by Differential Private Generative Adversarial Networks (DP-GANs). After reaching convergence, the global initialization will be locally adapted by the clients to their private data. Through extensive experiments, we empirically show that our proposed framework outperforms multiple FL baselines on different datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 4","pages":"1014-1024"},"PeriodicalIF":5.1,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139956627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised Domain Adaptation via Contrastive Adversarial Domain Mixup: A Case Study on COVID-19","authors":"Huimin Zeng;Zhenrui Yue;Lanyu Shang;Yang Zhang;Dong Wang","doi":"10.1109/TETC.2024.3354419","DOIUrl":"10.1109/TETC.2024.3354419","url":null,"abstract":"Training large deep learning (DL) models with high performance for natural language downstream tasks usually requires rich-labeled data. However, in a real-world application of COVID-19 information service (e.g., misinformation detection, question answering), a fundamental challenge is the lack of the labeled COVID data to enable supervised end-to-end training of the models for different downstream tasks, especially at the early stage of the pandemic. To address this challenge, we propose an unsupervised domain adaptation framework using contrastive learning and adversarial domain mixup to transfer the knowledge from an existing source data domain to the target COVID-19 data domain. In particular, to bridge the gap between the source domain and the target domain, our method reduces a radial basis function (RBF) based discrepancy between these two domains. Moreover, we leverage the power of domain adversarial examples to establish an intermediate domain mixup, where the latent representations of the input text from both domains could be mixed during the training process. In this paper, we focus on two prevailing downstream tasks in mining COVID-19 text data: COVID-19 misinformation detection and COVID-19 news question answering. Extensive domain adaptation experiments on multiple real-world datasets suggest that our method can effectively adapt misinformation detection and question answering systems to the unseen COVID-19 target domain with significant improvements compared to the state-of-the-art baselines.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 4","pages":"1105-1116"},"PeriodicalIF":5.1,"publicationDate":"2024-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10415352","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139947901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hardware-Aware DNN Compression via Diverse Pruning and Mixed-Precision Quantization","authors":"Konstantinos Balaskas;Andreas Karatzas;Christos Sad;Kostas Siozios;Iraklis Anagnostopoulos;Georgios Zervakis;Jörg Henkel","doi":"10.1109/TETC.2023.3346944","DOIUrl":"10.1109/TETC.2023.3346944","url":null,"abstract":"Deep Neural Networks (DNNs) have shown significant advantages in a wide variety of domains. However, DNNs are becoming computationally intensive and energy hungry at an exponential pace, while at the same time, there is a vast demand for running sophisticated DNN-based services on resource constrained embedded devices. In this paper, we target energy-efficient inference on embedded DNN accelerators. To that end, we propose an automated framework to compress DNNs in a hardware-aware manner by jointly employing pruning and quantization. We explore, for the first time, per-layer fine- and coarse-grained pruning, in the same DNN architecture, in addition to low bit-width mixed-precision quantization for weights and activations. Reinforcement Learning (RL) is used to explore the associated design space and identify the pruning-quantization configuration so that the energy consumption is minimized whilst the prediction accuracy loss is retained at acceptable levels. Using our novel composite RL agent we are able to extract energy-efficient solutions without requiring retraining and/or fine tuning. Our extensive experimental evaluation over widely used DNNs and the CIFAR-10/100 and ImageNet datasets demonstrates that our framework achieves 39% average energy reduction for 1.7% average accuracy loss and outperforms significantly the state-of-the-art approaches.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 4","pages":"1079-1092"},"PeriodicalIF":5.1,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139163154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining Trust Graphs and Keystroke Dynamics to Counter Fake Identities in Social Networks","authors":"Francesco Buccafurri;Gianluca Lax;Denis Migdal;Lorenzo Musarella;Christophe Rosenberger","doi":"10.1109/TETC.2023.3346691","DOIUrl":"10.1109/TETC.2023.3346691","url":null,"abstract":"Fake identity in social networks is a phenomenon that is strongly increasing, and it is used for discovering personal information, identity theft, influencing people, spreading fake news, fraud, and so on. In this article, we face this problem by introducing the concept of certified social profiles and by propagating this property through a collaborative approach that exploits keystroke-dynamic-recognition techniques to identify illegal access to certified profiles. We propose a decentralized approach to compute the trust level of a social profile, and we show the robustness of the proposal by analyzing the security of the trust mechanism through experimental validation.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"12 4","pages":"1066-1078"},"PeriodicalIF":5.1,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139956623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}