{"title":"高性能三元组网络的自适应分离约束三元组损失(A-SCTL)","authors":"Ziheng Wang;Farzad Niknia;Shanshan Liu;Honglan Jiang;Siting Liu;Pedro Reviriego;Jun Zhou;Fabrizio Lombardi","doi":"10.1109/TNANO.2025.3552233","DOIUrl":null,"url":null,"abstract":"Triplet Networks (TNs) consist of three subchannels and are widely utilized in machine learning applications. The efficacy of TNs is highly dependent on the loss function employed during training. This paper proposes a novel loss function for TNs, referred to as the Adaptive Separately Constrained Triplet Loss (A-SCTL). The unique feature of A-SCTL is the separation of intra-class and inter-class constraints, strictly adhering to the objective of similarity-measuring networks. Its adaptive strategy leverages the dynamics between inter-class and intra-class terms to achieve a balanced convergence; without manually adjusting hyperparameters, it enhances flexibility and facilitates adaptation across various applications. Moreover, A-SCTL mitigates possible false solutions and offers insights into network behavior through the dependency of the two constraint terms. Performance metrics of the loss functions are evaluated in deep metric learning classification and face recognition tasks. Simulations illustrate the evolution of the two loss terms and the adaptive hyperparameter across training epochs; the results demonstrate that TNs utilizing A-SCTL outperform other existing loss functions in accuracy. Additionally, this paper details the hardware implementation of A-SCTL and evaluates its associated overhead. 
Results show that compared to other losses, the additional hardware overhead required for A-SCTL is negligible (0.008% energy per operation) when considering the entire TN system.","PeriodicalId":449,"journal":{"name":"IEEE Transactions on Nanotechnology","volume":"24 ","pages":"157-165"},"PeriodicalIF":2.1000,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adaptive Separately Constrained Triplet Loss (A-SCTL) for High-Performance Triplet Networks\",\"authors\":\"Ziheng Wang;Farzad Niknia;Shanshan Liu;Honglan Jiang;Siting Liu;Pedro Reviriego;Jun Zhou;Fabrizio Lombardi\",\"doi\":\"10.1109/TNANO.2025.3552233\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Triplet Networks (TNs) consist of three subchannels and are widely utilized in machine learning applications. The efficacy of TNs is highly dependent on the loss function employed during training. This paper proposes a novel loss function for TNs, referred to as the Adaptive Separately Constrained Triplet Loss (A-SCTL). The unique feature of A-SCTL is the separation of intra-class and inter-class constraints, strictly adhering to the objective of similarity-measuring networks. Its adaptive strategy leverages the dynamics between inter-class and intra-class terms to achieve a balanced convergence; without manually adjusting hyperparameters, it enhances flexibility and facilitates adaptation across various applications. Moreover, A-SCTL mitigates possible false solutions and offers insights into network behavior through the dependency of the two constraint terms. Performance metrics of the loss functions are evaluated in deep metric learning classification and face recognition tasks. 
Simulations illustrate the evolution of the two loss terms and the adaptive hyperparameter across training epochs; the results demonstrate that TNs utilizing A-SCTL outperform other existing loss functions in accuracy. Additionally, this paper details the hardware implementation of A-SCTL and evaluates its associated overhead. Results show that compared to other losses, the additional hardware overhead required for A-SCTL is negligible (0.008% energy per operation) when considering the entire TN system.\",\"PeriodicalId\":449,\"journal\":{\"name\":\"IEEE Transactions on Nanotechnology\",\"volume\":\"24 \",\"pages\":\"157-165\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2025-03-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Nanotechnology\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10930644/\",\"RegionNum\":4,\"RegionCategory\":\"工程技术\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Nanotechnology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10930644/","RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Adaptive Separately Constrained Triplet Loss (A-SCTL) for High-Performance Triplet Networks
Triplet Networks (TNs) consist of three subchannels and are widely utilized in machine learning applications. The efficacy of TNs is highly dependent on the loss function employed during training. This paper proposes a novel loss function for TNs, referred to as the Adaptive Separately Constrained Triplet Loss (A-SCTL). The unique feature of A-SCTL is the separation of intra-class and inter-class constraints, strictly adhering to the objective of similarity-measuring networks. Its adaptive strategy leverages the dynamics between inter-class and intra-class terms to achieve a balanced convergence; without manually adjusting hyperparameters, it enhances flexibility and facilitates adaptation across various applications. Moreover, A-SCTL mitigates possible false solutions and offers insights into network behavior through the dependency of the two constraint terms. Performance metrics of the loss functions are evaluated in deep metric learning classification and face recognition tasks. Simulations illustrate the evolution of the two loss terms and the adaptive hyperparameter across training epochs; the results demonstrate that TNs utilizing A-SCTL outperform other existing loss functions in accuracy. Additionally, this paper details the hardware implementation of A-SCTL and evaluates its associated overhead. Results show that compared to other losses, the additional hardware overhead required for A-SCTL is negligible (0.008% energy per operation) when considering the entire TN system.
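The abstract does not give the exact A-SCTL formulation, but the idea it describes can be sketched: keep the intra-class constraint (pull anchor-positive pairs together) and the inter-class constraint (push anchor-negative pairs apart) as two separate terms, and let an adaptive weight balance them instead of a hand-tuned hyperparameter. The margins, distance metric, and weighting rule below are hypothetical illustrative choices, not the paper's definition:

```python
import math

def separately_constrained_triplet_loss(anchor, positive, negative,
                                        m_intra=0.2, m_inter=1.0):
    """Illustrative sketch of a separately constrained triplet loss.

    NOTE: the margins m_intra / m_inter and the adaptive weighting
    below are assumptions for illustration; the paper's actual A-SCTL
    formula is not given in the abstract.
    """
    d_ap = math.dist(anchor, positive)   # intra-class distance
    d_an = math.dist(anchor, negative)   # inter-class distance

    # Separate constraints: same-class pairs should sit within m_intra,
    # different-class pairs should sit beyond m_inter.
    intra = max(0.0, d_ap - m_intra)
    inter = max(0.0, m_inter - d_an)

    # Adaptive balance (hypothetical): shift weight toward the term
    # that is currently harder to satisfy, so both converge together
    # without manually retuning a fixed trade-off hyperparameter.
    total = intra + inter
    alpha = inter / total if total > 0 else 0.5
    return alpha * intra + (1.0 - alpha) * inter
```

Because the two constraints are separate, a triplet that already satisfies both margins contributes zero loss, which matches the stated goal of strictly adhering to the similarity-measuring objective rather than only enforcing a relative gap between the two distances.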
Journal introduction:
The IEEE Transactions on Nanotechnology is devoted to the publication of manuscripts of archival value in the general area of nanotechnology, a field rapidly emerging as one of the fastest-growing and most promising technological developments for the next generation and beyond.