{"title":"FLCL: Feature-Level Contrastive Learning for Few-Shot Image Classification","authors":"Wenming Cao;Jiewen Zeng;Qifan Liu","doi":"10.1109/TETC.2025.3546366","DOIUrl":"https://doi.org/10.1109/TETC.2025.3546366","url":null,"abstract":"Few-shot classification is the task of recognizing unseen classes using a limited number of samples. In this paper, we propose a new contrastive learning method called Feature-Level Contrastive Learning (FLCL). FLCL conducts contrastive learning at the feature level and leverages the subtle relationships between positive and negative samples to achieve more effective classification. Additionally, we address two challenges of traditional contrastive learning methods: the need for a large number of negative samples and the difficulty of selecting high-quality negative samples. For feature learning, we design a Feature Enhancement Coding (FEC) module to analyze the interactions and correlations between nonlinear features, enhancing the quality of feature representations. In the metric stage, we propose a centered hypersphere projection metric that maps feature vectors onto the hypersphere, improving the comparison between the support and query sets. Experimental results on four few-shot classification benchmark datasets demonstrate that our method, while simple in design, outperforms previous methods and achieves state-of-the-art performance. 
A detailed ablation study further confirms the effectiveness of each component of our model.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"935-946"},"PeriodicalIF":5.4,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
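The centered hypersphere projection metric is only named in the abstract above; as a rough illustration of the general idea (the mean-centering rule, the function names, and the cosine comparison are assumptions for this sketch, not details from the paper), one can center feature vectors, normalize them onto the unit hypersphere, and then compare support and query sets by inner product:

```python
import numpy as np

def hypersphere_project(features, center=None):
    """Center feature vectors and project them onto the unit hypersphere.

    features: (n, d) array of feature vectors.
    center:   optional (d,) center; defaults to the feature mean.
    """
    if center is None:
        center = features.mean(axis=0)
    shifted = features - center
    norms = np.linalg.norm(shifted, axis=1, keepdims=True)
    return shifted / np.clip(norms, 1e-12, None)

def cosine_scores(support, query):
    """On the unit hypersphere the inner product is cosine similarity."""
    return query @ support.T

rng = np.random.default_rng(0)
support = hypersphere_project(rng.normal(size=(5, 8)))   # 5-way support
query = hypersphere_project(rng.normal(size=(3, 8)))     # 3 query samples
scores = cosine_scores(support, query)                   # shape (3, 5)
```

Projecting onto the sphere makes the metric depend only on direction, so support/query comparison is insensitive to feature magnitude.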
{"title":"Quantum Implementation and Analysis of SHA-2 and SHA-3","authors":"Kyungbae Jang;Sejin Lim;Yujin Oh;Hyunjun Kim;Anubhab Baksi;Sumanta Chakraborty;Hwajeong Seo","doi":"10.1109/TETC.2025.3546648","DOIUrl":"https://doi.org/10.1109/TETC.2025.3546648","url":null,"abstract":"Quantum computers have the potential to solve a number of hard problems that are believed to be almost impossible to solve by classical computers. This observation has sparked a surge of research applying quantum algorithms to cryptographic systems to evaluate their quantum resistance. In assessing the security strength of cryptographic algorithms against upcoming quantum threats, it is crucial to precisely estimate the quantum resource requirement (generally in terms of circuit depth and quantum bit count). The US National Institute of Standards and Technology (NIST) specified five quantum security levels so that the relative quantum strength of a given cipher can be compared to the standard ones. There has been some progress on the odd NIST-specified quantum security levels (i.e., 1, 3 and 5), following the work of Jaques et al. (Eurocrypt’20). However, for levels 2 and 4, which correspond to quantum collision-finding attacks on the SHA-2 and SHA-3 hash functions, the quantum attack complexities are arguably not well studied. This is where our article fits in. We present novel techniques for optimizing the quantum circuit implementations of the SHA-2 and SHA-3 algorithms in all the categories specified by NIST. After that, we evaluate the quantum circuits of the target cryptographic hash functions for quantum collision search. Finally, we define the quantum attack complexity for levels 2 and 4, and comment on the security strength of the extended level. 
We present new concepts to optimize the quantum circuits at the component level and the architecture level.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"919-934"},"PeriodicalIF":5.4,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
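NIST levels 2 and 4 are defined via collision search on SHA-256 and SHA-384, respectively. Purely as a back-of-the-envelope illustration (this is not the article's cost model), the idealized Brassard-Høyer-Tapp quantum collision search for an n-bit hash uses on the order of 2^(n/3) oracle queries:

```python
import math

def bht_log2_queries(n_bits):
    """Base-2 log of the idealized Brassard-Hoyer-Tapp query count,
    2**(n/3), for collision search on an n-bit hash. Concrete gate
    counts and circuit depth (the article's subject) are far larger
    than this bare query count suggests."""
    return n_bits / 3

level2 = bht_log2_queries(256)   # SHA-256 collision -> NIST level 2
level4 = bht_log2_queries(384)   # SHA-384 collision -> NIST level 4
```

This is why precise circuit-level resource estimates are needed: the query exponent alone (about 85.3 and exactly 128 here) ignores the cost of implementing each hash evaluation as a quantum circuit.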
{"title":"2024 Reviewers List*","authors":"","doi":"10.1109/TETC.2025.3530016","DOIUrl":"https://doi.org/10.1109/TETC.2025.3530016","url":null,"abstract":"We thank the following reviewers for the time and energy they have given to <italic>TETC</italic>:","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 1","pages":"276-278"},"PeriodicalIF":5.1,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10918565","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial Special Section on Emerging Edge AI for Human-in-the-Loop Cyber Physical Systems","authors":"Radu Marculescu;Jorge Sá Silva","doi":"10.1109/TETC.2024.3472428","DOIUrl":"https://doi.org/10.1109/TETC.2024.3472428","url":null,"abstract":"Edge Artificial Intelligence (AI) enables us to deploy distributed AI models, optimize computational and energy resources, minimize communication demands, and, most importantly, meet privacy requirements for Internet of Things (IoT) applications. Since data remains on the end-devices and only model parameters are shared with the server, it becomes possible to leverage the vast amount of data collected from smartphones and IoT devices without compromising the user's privacy. However, such Federated Learning (FL) solutions also have well-known limitations. In particular, as systems that account for human behaviour become increasingly vital, future technologies need to become attuned to human behaviours. Indeed, we are already witnessing unparalleled advancements in technology that empower our tools and devices with intelligence, sensory abilities, and communication features. At the same time, continued advances in the miniaturization of computational capabilities can enable us to go far beyond simple tagging and identification, towards integrating computational resources directly into these objects, thus making our tools “intelligent”. 
Yet, there is limited scientific work that considers humans as an integral part of these IoT-powered cyber-physical systems.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 1","pages":"3-4"},"PeriodicalIF":5.1,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10918564","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143570699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing Wet-Neuromorphic Computing Using Bacterial Gene Regulatory Neural Networks","authors":"Samitha Somathilaka;Sasitharan Balasubramaniam;Daniel P. Martins","doi":"10.1109/TETC.2025.3546119","DOIUrl":"https://doi.org/10.1109/TETC.2025.3546119","url":null,"abstract":"Biocomputing envisions the development of computing paradigms using biological systems, ranging from micron-level components to collections of cells, including organoids. This paradigm shift exploits hidden natural computing properties to develop miniaturized wet-computing devices that can be deployed in harsh environments and to explore designs of novel energy-efficient systems. In parallel, we witness the emergence of AI hardware, including neuromorphic processors, with the aim of improving computational capacity. This study brings together the concepts of biocomputing and neuromorphic systems by focusing on bacterial gene regulatory networks and their transformation into Gene Regulatory Neural Networks (GRNNs). We explore the intrinsic properties of gene regulation, map them to a gene-perceptron function, and propose an application-specific sub-GRNN search algorithm that maps the network structure to match a computing problem. Focusing on the model organism Escherichia coli, the base-GRNN is initially extracted and validated for accuracy. Subsequently, a comprehensive feasibility analysis of the derived GRNN confirms its computational prowess in classification and regression tasks. Furthermore, we discuss the possibility of performing a well-known digit classification task as a use case. 
Our analysis and simulation experiments show promising results in the offloading of computation tasks to GRNN in bacterial cells, advancing wet-neuromorphic computing using natural cells.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"902-918"},"PeriodicalIF":5.4,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
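The gene-perceptron mapping is not specified in this abstract; purely as an illustrative stand-in, a perceptron-like unit whose activation is a Hill function (the standard sigmoidal model of gene-regulation dose-response) might look like the following. The names, weights-as-regulatory-strengths reading, and parameter choices are assumptions, not the paper's construction:

```python
import numpy as np

def hill(x, k=1.0, n=2.0):
    """Hill function: the sigmoidal dose-response curve commonly used to
    model gene regulation (k = half-activation point, n = steepness)."""
    x = np.maximum(x, 0.0)          # expression drive cannot be negative
    return x**n / (k**n + x**n)

def gene_perceptron(inputs, weights, k=1.0, n=2.0):
    """Hypothetical 'gene-perceptron': regulator concentrations are
    combined by weights (regulatory strengths) and passed through a
    Hill nonlinearity instead of a classical sigmoid."""
    return hill(np.dot(weights, inputs), k=k, n=n)

# Two regulators at concentrations 0.5 and 1.0, strengths 1.0 and 0.5:
out = gene_perceptron(np.array([0.5, 1.0]), np.array([1.0, 0.5]))
```

The Hill exponent n plays the role a sigmoid's temperature plays in a classical perceptron: larger n makes the unit more switch-like.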
{"title":"Exploiting Entity Information for Robust Prediction Over Event Knowledge Graphs","authors":"Han Yu;Hongming Cai;Shengtung Tsai;Mengyao Li;Pan Hu;Jiaoyan Chen;Bingqing Shen","doi":"10.1109/TETC.2025.3534243","DOIUrl":"https://doi.org/10.1109/TETC.2025.3534243","url":null,"abstract":"Script event prediction is the task of predicting the subsequent event given a sequence of events that already took place. It benefits task planning and process scheduling for event-centric systems including enterprise systems, IoT systems, etc. Sequence-based and graph-based learning models have been applied to this task. However, when learning data is limited, especially in a multiple-participant-involved enterprise environment, the performance of such models falls short of expectations as they heavily rely on large-scale training data. To take full advantage of given data, in this article we propose a new type of knowledge graph (KG) that models not just events but also entities participating in the events, and we design a collaborative event prediction model exploiting such KGs. Our model identifies semantically similar vertices as collaborators to resolve unknown events, applies gated graph neural networks to extract event-wise sequential features, and exploits a heterogeneous attention network to cope with entity-wise influence in event sequences. To verify the effectiveness of our approach, we designed multiple-choice narrative cloze tasks with inadequate knowledge. 
Our experimental evaluation with three datasets generated from well-known corpora shows our method can successfully defend against such incompleteness of data and outperforms the state-of-the-art approaches for event prediction.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"890-901"},"PeriodicalIF":5.4,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Label-Efficient Deep Learning-Based Aging-Related Bug Prediction With Spiking Convolutional Neural Networks","authors":"Yunzhe Tian;Yike Li;Kang Chen;Zhenguo Zhang;Endong Tong;Jiqiang Liu;Fangyun Qin;Zheng Zheng;Wenjia Niu","doi":"10.1109/TETC.2025.3531051","DOIUrl":"https://doi.org/10.1109/TETC.2025.3531051","url":null,"abstract":"Recent advances in Deep Learning (DL) have enhanced Aging-Related Bug (ARB) prediction for mitigating software aging. However, DL-based ARB prediction models face a dual challenge: overcoming overfitting to enhance generalization and managing the high labeling costs associated with extensive data requirements. To address the first issue, we utilize the sparse and binary nature of spiking communication in Spiking Neural Networks (SNNs), which inherently provides brain-inspired regularization to effectively alleviate overfitting. Therefore, we propose a Spiking Convolutional Neural Network (SCNN)-based ARB prediction model along with a training framework that handles the model’s spatial-temporal dynamics and non-differentiable nature. To reduce labeling costs, we introduce a Bio-inspired and Diversity-aware Active Learning framework (BiDAL), which prioritizes highly informative and diverse samples, enabling more efficient usage of the limited labeling budget. This framework incorporates bio-inspired uncertainty to enhance informativeness measurement along with using a diversity-aware selection strategy based on clustering to prevent redundant labeling. Experiments on three ARB datasets show that ARB-SCNN effectively reduces overfitting, improving generalization performance by 6.65% over other DL-based classifiers. 
Additionally, BiDAL boosts label efficiency for ARB-SCNN training, outperforming four state-of-the-art active learning methods by 4.77% within limited labeling budgets.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 2","pages":"314-329"},"PeriodicalIF":5.1,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144323163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
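BiDAL's bio-inspired uncertainty measure and clustering strategy are not given in this abstract; the numpy-only sketch below shows only the generic uncertainty-plus-diversity pattern that such active learning belongs to. Entropy scoring and greedy farthest-point selection are stand-ins chosen here for illustration, not the paper's measures:

```python
import numpy as np

def entropy(probs):
    """Predictive entropy as an informativeness (uncertainty) score."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_batch(features, probs, budget, trade_off=1.0):
    """Greedy uncertainty + diversity selection (illustrative, not BiDAL).

    Repeatedly picks the sample maximizing
        entropy + trade_off * distance to nearest already-selected sample,
    so the batch is both informative and non-redundant."""
    unc = entropy(probs)
    selected = [int(np.argmax(unc))]          # seed with most uncertain
    while len(selected) < budget:
        d = np.min(
            np.linalg.norm(features[:, None] - features[selected], axis=2),
            axis=1,
        )
        score = unc + trade_off * d
        score[selected] = -np.inf             # never re-pick a sample
        selected.append(int(np.argmax(score)))
    return selected

rng = np.random.default_rng(1)
feats = rng.normal(size=(20, 4))              # unlabeled pool features
probs = rng.dirichlet(np.ones(3), size=20)    # model's class posteriors
batch = select_batch(feats, probs, budget=5)  # indices to send for labeling
```

The diversity term is what prevents the redundant labeling the abstract mentions: without it, the top-entropy samples often cluster in one confusing region.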
{"title":"Continual Test-Time Adaptation With Weighted Contrastive Learning and Pseudo-Label Correction","authors":"Shih-Chieh Chuang;Ching-Hu Lu","doi":"10.1109/TETC.2025.3528985","DOIUrl":"https://doi.org/10.1109/TETC.2025.3528985","url":null,"abstract":"Real-time adaptability is often required to maintain system accuracy in scenarios involving domain shifts caused by constantly changing environments. While continual test-time adaptation has been proposed to handle such scenarios, existing methods rely on high-accuracy pseudo-labels. Moreover, contrastive learning methods for continual test-time adaptation consider the aggregation of features from the same class while neglecting the problem of aggregating similar features within the same class. Therefore, we propose “Weighted Contrastive Learning” and apply it to both pre-training and continual test-time adaptation. To address the issue of catastrophic forgetting caused by continual adaptation, previous studies have employed source-domain knowledge to stochastically recover the target-domain model. However, significant domain shifts may cause the source-domain knowledge to behave as noise, thus impacting the model's adaptability. Therefore, we propose “Domain-aware Pseudo-label Correction” to mitigate catastrophic forgetting and error accumulation without accessing the original source-domain data while minimizing the impact on model adaptability. 
The thorough evaluations in our experiments have demonstrated the effectiveness of our proposed approach.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"866-877"},"PeriodicalIF":5.4,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Pervasive Edge Computing Model for Proactive Intelligent Data Migration","authors":"Georgios Boulougaris;Kostas Kolomvatsos","doi":"10.1109/TETC.2025.3528994","DOIUrl":"https://doi.org/10.1109/TETC.2025.3528994","url":null,"abstract":"Currently, the research community is paying great attention to the intelligent, context-aware management of data at the intersection of the Internet of Things (IoT) and Edge Computing (EC). In this article, we propose a strategy to be adopted by autonomous edge nodes when deciding which data should be migrated to specific locations of the infrastructure to support the desired processing requests. Our intention is to arm nodes with the ability to learn the access patterns of offloaded data-driven tasks and to predict which data should be migrated to the original ‘owners’ of those tasks. Naturally, these tasks are linked to the processing of data that are absent at the original hosting nodes, indicating the required data assets that need to be accessed directly. To identify these data intervals, we employ an ensemble scheme that combines a statistically oriented model and a machine learning scheme. Hence, we are able not only to detect the density of the requests but also to learn and infer the ‘strong’ data assets. 
The proposed approach is analyzed in detail by presenting the corresponding formulations being also evaluated and compared against baselines and models found in the respective literature.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"878-889"},"PeriodicalIF":5.4,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145057421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
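The ensemble of a statistically oriented model and a machine-learning scheme is described only at a high level in the abstract; a toy sketch of that general pattern (the histogram density, the tiny logistic model, and the equal-weight blend are illustrative assumptions, not the paper's formulation) could blend a request-density estimate with a learned per-interval score:

```python
import numpy as np

def density_score(requests, n_bins, n_items):
    """Statistical component: normalized histogram of past requests,
    one bin per data interval."""
    hist, _ = np.histogram(requests, bins=n_bins, range=(0, n_items))
    return hist / max(hist.sum(), 1)

def train_logistic(X, y, lr=0.1, steps=200):
    """ML component: tiny logistic regression fit by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def ensemble_score(requests, X, y, n_bins, n_items, alpha=0.5):
    """Blend the two scores; high-scoring intervals are the 'strong'
    data assets worth migrating proactively."""
    stat = density_score(requests, n_bins, n_items)
    ml = 1 / (1 + np.exp(-(X @ train_logistic(X, y))))
    return alpha * stat + (1 - alpha) * ml

rng = np.random.default_rng(2)
reqs = rng.integers(0, 100, size=200)    # simulated item-access requests
X = rng.normal(size=(10, 3))             # per-interval context features
y = (X[:, 0] > 0).astype(float)          # toy 'was accessed' labels
scores = ensemble_score(reqs, X, y, n_bins=10, n_items=100)
```

The statistical term reacts to observed request density while the learned term can generalize from interval features, which is the complementarity an ensemble of this kind is after.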
{"title":"Software-Defined Number Formats for High-Speed Belief Propagation","authors":"Amir Sabbagh Molahosseini;JunKyu Lee;Hans Vandierendonck","doi":"10.1109/TETC.2025.3528972","DOIUrl":"https://doi.org/10.1109/TETC.2025.3528972","url":null,"abstract":"This article presents the design and implementation of Software-Defined Floating-Point (SDF) number formats for high-speed implementation of the Belief Propagation (BP) algorithm. SDF formats are designed specifically to meet the numeric needs of the computation and are more compact representations of the data. They reduce memory footprint and memory bandwidth requirements without sacrificing accuracy, given that BP for loopy graphs inherently involves algorithmic errors. This article designs several SDF formats for sum-product BP applications through careful analysis of the computation. Our theoretical analysis leads to designs with 16-bit (half-precision) and 8-bit (mini-precision) widths. Moreover, we present a highly efficient software implementation of the proposed SDF formats, centered around conversion to hardware-supported single-precision arithmetic. Our solution demonstrates negligible conversion overhead on commercially available CPUs. For Ising grids with sizes from 100 × 100 to 500 × 500, the 16- and 8-bit SDF formats, along with our conversion module, produce accuracy equivalent to the double-precision floating-point format but with 2.86× speedups on average on an Intel Xeon processor. In particular, larger grid sizes yield higher speedups. 
For example, the proposed half-precision format with 3-bit exponent and 13-bit mantissa achieved the minimum and maximum speedups of 1.30× and 1.39× over single-precision, and 2.55× and 3.40× over double-precision, by increasing grid size from 100 × 100 to 500 × 500.","PeriodicalId":13156,"journal":{"name":"IEEE Transactions on Emerging Topics in Computing","volume":"13 3","pages":"853-865"},"PeriodicalIF":5.4,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145051083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
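The abstract gives bit widths (e.g., a 3-bit exponent with a 13-bit mantissa for the half-precision format) but not the packing or rounding rules. The toy codec below assumes an IEEE-style layout with an implicit leading one and no sign bit (plausible since sum-product BP messages are nonnegative, but still an assumption), just to make the memory-footprint idea concrete:

```python
import math

EXP_BITS, MAN_BITS = 3, 13          # widths from the abstract
BIAS = (1 << (EXP_BITS - 1)) - 1    # standard bias convention (assumed)

def encode(x):
    """Pack a nonnegative float into a 16-bit unsigned SDF-style word.
    Illustrative only; the paper's exact encoding may differ."""
    if x <= 0.0:
        return 0
    m, e = math.frexp(x)                        # x = m * 2**e, m in [0.5, 1)
    exp = max(0, min((1 << EXP_BITS) - 1, e + BIAS))
    man = int((m * 2 - 1) * (1 << MAN_BITS))    # drop the implicit leading 1
    return (exp << MAN_BITS) | man

def decode(w):
    """Unpack a 16-bit SDF-style word back to a Python float."""
    if w == 0:
        return 0.0
    exp = w >> MAN_BITS
    man = w & ((1 << MAN_BITS) - 1)
    m = (1 + man / (1 << MAN_BITS)) / 2         # restore the implicit 1
    return math.ldexp(m, exp - BIAS)

x = 0.37
assert abs(decode(encode(x)) - x) / x < 1e-3    # ~13-bit relative precision
```

A narrow exponent is viable here precisely because BP messages live in a small dynamic range; the conversion to and from hardware single precision is what keeps the arithmetic fast.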