High-Confidence Computing: Latest Publications

Kubernetes application performance benchmarking on heterogeneous CPU architecture: An experimental review
IF 3.2
High-Confidence Computing Pub Date: 2024-12-18 DOI: 10.1016/j.hcc.2024.100276
Jannatun Noor, MD Badsha Faysal, MD Sheikh Amin, Bushra Tabassum, Tamim Raiyan Khan, Tanvir Rahman
Abstract: With the rapid advancement of cloud technologies, cloud services have contributed enormously to the application development life-cycle across the cloud community. In this context, Kubernetes has played a pivotal role as a cloud computing tool, enabling developers to adopt efficient and automated deployment strategies. Using Kubernetes as an orchestration tool and a cloud computing system as the manager of the infrastructure, developers can accelerate the development and deployment process. With cloud providers such as GCP, AWS, Azure, and Oracle offering Kubernetes services, the availability of both x86 and ARM platforms has become evident. However, while x86 currently dominates the market, ARM-based solutions have seen limited adoption, with only a few individuals actively working on ARM deployments. This study explores the efficiency and cost-effectiveness of implementing Kubernetes on different CPU platforms. By comparing the performance of x86 and ARM platforms, this research seeks to ascertain whether transitioning to ARM presents a more advantageous option for Kubernetes deployments. Through a comprehensive evaluation of scalability, cost, and overall performance, this study aims to shed light on the viability of leveraging ARM for Kubernetes deployments and to provide valuable insights.
High-Confidence Computing, 5(1), Article 100276.
Citations: 0
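As a rough illustration of the kind of cross-architecture comparison described above, here is a minimal, hypothetical Python sketch (not the authors' benchmark suite): the same CPU-bound workload can be run in containers on an x86 node pool and an ARM node pool, with each run tagged by the node's CPU architecture so the timings can be grouped and compared.

```python
# Minimal cross-architecture micro-benchmark sketch (hypothetical, not the
# authors' benchmark suite). Run the same container image on an x86_64 and
# an arm64 Kubernetes node pool and compare the reported timings.
import hashlib
import platform
import time


def cpu_workload(iterations: int = 200_000) -> str:
    """Repeatedly hash a buffer to create a CPU-bound workload."""
    digest = b"benchmark-seed"
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()


if __name__ == "__main__":
    start = time.perf_counter()
    cpu_workload()
    elapsed = time.perf_counter() - start
    # platform.machine() reports e.g. 'x86_64' or 'aarch64', which lets the
    # collected results be grouped by CPU architecture.
    print(f"arch={platform.machine()} elapsed={elapsed:.3f}s")
```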
Scale-aware Gaussian mixture loss for crowd localization transformers
IF 3.2
High-Confidence Computing Pub Date: 2024-12-10 DOI: 10.1016/j.hcc.2024.100296
Alabi Mehzabin Anisha, Sriram Chellappan
Abstract: A fundamental problem in crowd localization using computer vision techniques stems from intrinsic scale shifts. Scale shifts occur when the crowd density within an image is uneven and chaotic, a feature common in dense crowds. At locations nearer to the camera, crowd density is lower than at locations farther away. Consequently, there is a significant change in the number of pixels representing a person across locations in an image, depending on the camera’s position. Existing crowd localization methods do not effectively handle scale shifts, resulting in relatively poor performance on dense crowd images. In this paper, we explicitly address this challenge. Our method, called Gaussian Loss Transformers (GLT), directly incorporates scale variants in crowds by adapting loss functions to handle them in the end-to-end training pipeline. To inform the model about the scale variants within the crowd, we utilize a Gaussian mixture model (GMM) to pre-process the ground truths into non-overlapping clusters. This cluster information is utilized as a weighting factor when computing the localization loss for each cluster. Extensive experiments on state-of-the-art datasets and computer vision models reveal that our method improves localization performance on dense crowd images. We also analyze the effect of multiple parameters in our technique and report findings on their impact on crowd localization performance.
High-Confidence Computing, 5(3), Article 100296.
Citations: 0
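To make the clustering idea concrete, the following sketch (an illustrative approximation under simplified assumptions, not the paper's GLT pipeline; the synthetic data and weighting rule are hypothetical) groups ground-truth head points with scikit-learn's GaussianMixture and derives per-cluster weights for a localization loss.

```python
# Sketch of GMM-based grouping of ground-truth head points into scale
# clusters, with per-cluster weights for a localization loss.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic head coordinates (x, y) in a 1080p image; points lower in the
# image (larger y) stand in for people closer to the camera.
points = rng.uniform([0, 0], [1920, 1080], size=(500, 2))

# Cluster the annotations by image row, a rough proxy for scale.
gmm = GaussianMixture(n_components=3, random_state=0).fit(points[:, 1:2])
cluster_ids = gmm.predict(points[:, 1:2])

# Weight each cluster inversely to its size so sparse (typically large-scale)
# regions are not drowned out by dense ones.
counts = np.maximum(np.bincount(cluster_ids, minlength=3), 1).astype(float)
weights = counts.sum() / (counts * len(counts))

per_point_error = rng.random(len(points))  # placeholder localization errors
weighted_loss = float(np.mean(weights[cluster_ids] * per_point_error))
print(weighted_loss)
```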
Erratum to “Exploring Personalized Internet of Things (PIoT), social connectivity, and Artificial Social Intelligence (ASI): A survey” [High-Confidence Computing 4 (2024) 100242]
IF 3.2
High-Confidence Computing Pub Date: 2024-12-01 DOI: 10.1016/j.hcc.2024.100294
Bisma Gulzar, Shabir Ahmad Sofi, Sahil Sholla
High-Confidence Computing, 4(4), Article 100294.
Citations: 0
Connectivity maintenance against link uncertainty and heterogeneity in adversarial networks
IF 3.2
High-Confidence Computing Pub Date: 2024-11-28 DOI: 10.1016/j.hcc.2024.100293
Jianzhi Tang, Luoyi Fu, Lei Zhou, Xinbing Wang, Chenghu Zhou
Abstract: This paper delves into the challenge of maintaining connectivity in adversarial networks, focusing on the preservation of essential links to prevent the disintegration of network components under attack. Unlike previous approaches that assume a stable and homogeneous network topology, this study introduces a more realistic model that incorporates both link uncertainty and heterogeneity. Link uncertainty necessitates additional probing to confirm link existence, while heterogeneity reflects the varying resilience of links against attacks. We model the network as a random graph where each link is defined by its existence probability, probing cost, and resilience. The primary objective is to devise a defensive strategy that maximizes the expected size of the largest connected component at the end of an adversarial process while minimizing the probing cost, irrespective of the attack patterns employed. We begin by establishing the NP-hardness of the problem and then introduce an optimal defensive strategy based on dynamic programming. Due to the high computational cost of achieving optimality, we also develop two approximate strategies that offer efficient solutions within polynomial time. The first is a heuristic method that assesses link importance across three heterogeneous subnetworks, and the second is an adaptive minimax policy designed to minimize the defender’s potential worst-case loss, with guaranteed performance. Through extensive testing on both synthetic and real-world datasets across various attack scenarios, our strategies demonstrate significant advantages over existing methods.
High-Confidence Computing, 5(3), Article 100293.
Citations: 0
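A toy sketch of the underlying trade-off, probing cost versus the size of the largest confirmed component, is shown below using networkx; the greedy probing rule and all parameters are hypothetical and merely stand in for the paper's dynamic-programming and minimax strategies.

```python
# Toy greedy probing heuristic on a random graph with uncertain links.
import random

import networkx as nx

random.seed(1)
G = nx.gnm_random_graph(30, 60, seed=1)
for u, v in G.edges:
    G.edges[u, v]["p_exist"] = random.uniform(0.3, 1.0)   # link uncertainty
    G.edges[u, v]["probe_cost"] = random.uniform(0.1, 1.0)

budget = 15.0
confirmed = nx.Graph()
confirmed.add_nodes_from(G.nodes)

# Probe the most "promising" links first: high existence probability per
# unit probing cost.
ranked = sorted(G.edges(data=True),
                key=lambda e: e[2]["p_exist"] / e[2]["probe_cost"],
                reverse=True)
for u, v, data in ranked:
    if budget < data["probe_cost"]:
        continue
    budget -= data["probe_cost"]
    if random.random() < data["p_exist"]:   # probing reveals the link exists
        confirmed.add_edge(u, v)

largest = max(nx.connected_components(confirmed), key=len)
print(f"largest confirmed component: {len(largest)} nodes")
```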
Redactable Blockchain from Accountable Weight Threshold Chameleon Hash
IF 3.2
High-Confidence Computing Pub Date: 2024-11-08 DOI: 10.1016/j.hcc.2024.100281
Qiang Ma, Yanqi Zhao, Xiangyu Liu, Xiaoyi Yang, Min Xie, Yong Yu
Abstract: The redactable blockchain provides editability of blocks, which preserves the data immutability of blocks while allowing illegal content on the blockchain to be removed. However, existing redactable blockchains rely on trusted assumptions regarding a single editing authority. Ateniese et al. (EuroS&P 2017) and Li et al. (TIFS 2023) proposed solutions using threshold chameleon hash functions, but these lack accountability for malicious editing. This paper delves into this problem and proposes an accountable weight threshold blockchain editing scheme. Specifically, we first formalize the model of a redactable blockchain with accountability. Then, we introduce the novel concept of the Accountable Weight Threshold Chameleon Hash Function (AWTCH). This function collaboratively generates a chameleon hash trapdoor through a weight committee protocol, where only sets of committee members meeting the weight threshold can edit data. Additionally, it incorporates a tracer to identify and hold accountable any disputing editors, thus enabling supervision of editing rights. We propose a generic construction for AWTCH, then introduce an efficient construction of AWTCH and develop a redactable blockchain scheme by leveraging it. Finally, we demonstrate our scheme’s practicality: the editing efficiency of our scheme is twice that of Tian et al. (TIFS 2023) with the same number of editing blocks.
High-Confidence Computing, 5(3), Article 100281.
Citations: 0
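For readers unfamiliar with chameleon hashes, the toy Python sketch below shows the textbook discrete-log construction and its trapdoor-collision property, which is what makes block editing possible without changing hash values; it is not the paper's AWTCH construction, and the tiny parameters are illustrative only.

```python
# Textbook discrete-log chameleon hash sketch (toy parameters). Not secure
# at this size and not the paper's accountable weight-threshold variant.
p, q = 2039, 1019          # p = 2q + 1, both prime (toy-sized)
g = 4                      # generator of the order-q subgroup

x = 777                    # trapdoor held by the editor(s)
h = pow(g, x, p)           # public hash key


def chash(m: int, r: int) -> int:
    """Chameleon hash H(m, r) = g^m * h^r mod p."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p


def collide(m: int, r: int, m_new: int) -> int:
    """With the trapdoor x, find r_new so that H(m_new, r_new) == H(m, r)."""
    # m + x*r = m_new + x*r_new (mod q)  =>  r_new = r + (m - m_new)/x (mod q)
    return (r + (m - m_new) * pow(x, -1, q)) % q


m, r = 1234, 55
m_new = 4321                       # edited block content
r_new = collide(m, r, m_new)
assert chash(m, r) == chash(m_new, r_new)   # hash value unchanged after edit
print(chash(m, r), r_new)
```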
Three-dimensional dynamic gesture recognition method based on convolutional neural network
IF 3.2
High-Confidence Computing Pub Date: 2024-11-06 DOI: 10.1016/j.hcc.2024.100280
Ji Xi, Weiqi Zhang, Zhe Xu, Saide Zhu, Linlin Tang, Li Zhao
Abstract: With the rapid advancement of virtual reality, dynamic gesture recognition technology has become an indispensable and critical technique for users to achieve human–computer interaction in virtual environments. The recognition of dynamic gestures is a challenging task due to the high degrees of freedom involved, individual differences, and variations in gesture space. To solve the problem of the low recognition accuracy of existing networks, an improved dynamic gesture recognition algorithm based on the ResNeXt architecture is proposed. The algorithm employs three-dimensional convolution techniques to effectively capture the spatiotemporal features intrinsic to dynamic gestures. Additionally, to enhance the model’s focus and improve its accuracy in identifying dynamic gestures, a lightweight convolutional attention mechanism is introduced. This mechanism not only augments the model’s precision but also facilitates faster convergence during the training phase. To further optimize the performance of the model, a deep attention submodule is added to the convolutional attention mechanism module to strengthen the network’s capability in temporal feature extraction. Empirical evaluations on the EgoGesture and NvGesture datasets show that the accuracy of the proposed model in dynamic gesture recognition reaches 95.03% and 86.21%, respectively. When operating in RGB mode, the accuracy reaches 93.49% and 80.22%, respectively. These results underscore the effectiveness of the proposed algorithm in recognizing dynamic gestures with high accuracy, showcasing its potential for applications in advanced human–computer interaction systems.
High-Confidence Computing, 5(1), Article 100280.
Citations: 0
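As a minimal illustration of extracting spatiotemporal features with 3D convolutions, here is a toy PyTorch sketch; it is a hypothetical stand-in, not the paper's ResNeXt-based model with its convolutional attention mechanism, and the class name and shapes are made up for the example.

```python
# Minimal 3D-CNN sketch for clip-level gesture classification.
import torch
import torch.nn as nn


class TinyGestureNet(nn.Module):
    def __init__(self, num_classes: int = 25):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),   # joint (T, H, W) kernels
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),                  # halve T, H, W
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                      # global spatiotemporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        feats = self.features(x).flatten(1)
        return self.classifier(feats)


clip = torch.randn(2, 3, 16, 112, 112)   # two RGB clips of 16 frames each
logits = TinyGestureNet()(clip)
print(logits.shape)                      # torch.Size([2, 25])
```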
Learning-based cooperative content caching and sharing for multi-layer vehicular networks
IF 3.2
High-Confidence Computing Pub Date: 2024-11-05 DOI: 10.1016/j.hcc.2024.100277
Jun Shi, Yuanzhi Ni, Lin Cai, Zhuocheng Du
Abstract: Caching and sharing content files is critical and fundamental for various future vehicular applications. However, how to satisfy content demands in a timely manner with limited storage is an open issue owing to the high mobility of vehicles and the unpredictable distribution of dynamic requests. To better serve the requests from vehicles, a cache-enabled multi-layer architecture, consisting of a Micro Base Station (MBS) and several Small Base Stations (SBSs), is proposed in this paper. Considering that vehicles usually travel through the coverage of multiple SBSs in a short time period, a cooperative caching and sharing strategy is introduced, which can provide comprehensive and stable cache services to vehicles. In addition, since the content popularity profile is unknown, we model the content caching problem from a Multi-Armed Bandit (MAB) perspective to minimize the total delay while gradually estimating the popularity of content files. Reinforcement learning-based algorithms with a novel Q-value updating module are employed to update the cached files on different timescales for the MBS and SBSs, respectively. Simulation results show that the proposed algorithm outperforms benchmark algorithms under both static and varying content popularity. In the high-speed environment, the cooperation between SBSs effectively improves the cache hit rate and further improves service performance.
High-Confidence Computing, 5(2), Article 100277.
Citations: 0
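The following simplified Python sketch illustrates the general idea of estimating unknown content popularity online and caching the files with the highest estimates; the epsilon-greedy rule and all parameters are hypothetical and do not reproduce the paper's multi-timescale Q-value updating algorithm.

```python
# Sketch: learn which files to cache when popularity is unknown.
import random

random.seed(0)
NUM_FILES, CACHE_SIZE, EPSILON, ALPHA, STEPS = 50, 5, 0.1, 0.1, 5000

true_popularity = [random.random() for _ in range(NUM_FILES)]  # hidden from the cache
popularity_est = [0.0] * NUM_FILES   # online estimate of each file's request rate

hits = 0
for t in range(STEPS):
    # Pick the cache contents: exploit current estimates, occasionally explore.
    if random.random() < EPSILON:
        cache = set(random.sample(range(NUM_FILES), CACHE_SIZE))
    else:
        cache = set(sorted(range(NUM_FILES),
                           key=popularity_est.__getitem__, reverse=True)[:CACHE_SIZE])

    # A vehicle requests a file according to the hidden popularity profile.
    request = random.choices(range(NUM_FILES), weights=true_popularity, k=1)[0]
    if request in cache:
        hits += 1

    # Update every file's estimate from the observed request
    # (exponential moving average of the request indicator).
    for f in range(NUM_FILES):
        popularity_est[f] += ALPHA * ((1.0 if f == request else 0.0) - popularity_est[f])

print(f"cache hit rate: {hits / STEPS:.2f}")
```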
A study on an efficient OSS inspection scheme based on encrypted GML
IF 3.2
High-Confidence Computing Pub Date: 2024-11-05 DOI: 10.1016/j.hcc.2024.100279
Seok-Joon Jang, Im-Yeong Lee, Daehee Seo, Su-Hyun Kim
Abstract: The importance of Open Source Software (OSS) has increased in recent years. OSS is software that is jointly developed and maintained globally through open collaboration and knowledge sharing. OSS plays an important role, especially in the Information Technology (IT) field, by increasing the efficiency of software development and reducing costs. However, licensing issues, security issues, and other problems may arise when using OSS. Some services analyze source code and provide OSS-related data to solve these problems, a representative example being Blackduck. Blackduck inspects the entire source code within a project and provides the OSS information and related data included in the whole project. Therefore, there are problems such as inefficiency due to full inspection of the source code and difficulty in determining the exact location where OSS is identified. This paper proposes a scheme to intuitively analyze source code through Graph Modelling Language (GML) conversion to solve these problems. Additionally, encryption is applied to the GML to perform secure GML-based OSS inspection. The study explains the process of converting source code to GML and performing OSS inspection. Afterward, we compare the capacity and accuracy of text-based OSS inspection and GML-based OSS inspection. Signcryption is applied to perform safe and efficient GML-based OSS inspection.
High-Confidence Computing, 5(2), Article 100279.
Citations: 0
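As an illustration of representing source code as a GML graph, the sketch below converts a Python snippet's abstract syntax tree into GML with networkx; it is a simplified, hypothetical example and omits the encryption and signcryption steps that the proposed scheme applies.

```python
# Sketch: turn Python source into a GML graph of its abstract syntax tree.
import ast

import networkx as nx

source = """
def greet(name):
    return "hello " + name
"""

tree = ast.parse(source)
graph = nx.DiGraph()

# Walk the AST and add one node per syntax element, with edges parent -> child.
for node in ast.walk(tree):
    graph.add_node(id(node), label=type(node).__name__)
    for child in ast.iter_child_nodes(node):
        graph.add_edge(id(node), id(child))

gml_text = "\n".join(nx.generate_gml(graph))
print(gml_text[:200])   # GML header plus the first few node records
```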
IDL-LTSOJ: Research and implementation of an intelligent online judge system utilizing DNN for defect localization
IF 3.2
High-Confidence Computing Pub Date: 2024-11-01 DOI: 10.1016/j.hcc.2024.100268
Lihua Song, Ying Han, Yufei Guo, Chenying Cai
Abstract: The evolution of artificial intelligence has thrust Online Judge (OJ) systems into the forefront of research, particularly within programming education, with a focus on enhancing performance and efficiency. Addressing the shortcomings of current OJ systems, namely coarse defect-localization granularity and heavyweight task-scheduling architecture, this paper introduces an innovative Integrated Intelligent Defect Localization and Lightweight Task Scheduling Online Judge (IDL-LTSOJ) system. First, to achieve token-level fine-grained defect localization, a Deep Fine-Grained Defect Localization (Deep-FGDL) deep neural network model is developed. By integrating Bidirectional Long Short-Term Memory (BiLSTM) and Bidirectional Gated Recurrent Unit (BiGRU) layers, this model extracts fine-grained information from the abstract syntax tree (AST) of the code, enabling more accurate defect localization. Subsequently, we propose a lightweight task-scheduling architecture to tackle issues such as limited concurrency in task evaluation and high equipment costs. This architecture integrates a Kafka messaging system with an optimized task distribution strategy to enable concurrent execution of evaluation tasks, substantially enhancing evaluation efficiency. The experimental results demonstrate that the Deep-FGDL model improves Top-20 accuracy by 35.9% compared with traditional machine learning benchmark methods on fine-grained defect localization tasks. Moreover, the lightweight task-scheduling strategy reduces response time by nearly 6000 ms when handling a volume of 120 tasks, representing a significant improvement in evaluation efficiency over centralized evaluation methods.
High-Confidence Computing, 5(2), Article 100268.
Citations: 0
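A minimal PyTorch sketch of token-level defect scoring with a bidirectional LSTM is given below; it is a hypothetical stand-in for the Deep-FGDL model (which combines BiLSTM and BiGRU over AST-derived features), with made-up vocabulary size, class names, and random data.

```python
# Token-level defect-scoring sketch with a bidirectional LSTM.
import torch
import torch.nn as nn


class TokenDefectScorer(nn.Module):
    def __init__(self, vocab_size: int = 1000, embed_dim: int = 64, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # one suspiciousness score per token

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        out, _ = self.bilstm(self.embed(token_ids))     # (batch, seq, 2*hidden)
        return self.head(out).squeeze(-1)               # (batch, seq) logits


tokens = torch.randint(0, 1000, (4, 120))   # four token sequences of length 120
scores = TokenDefectScorer()(tokens)
ranked = scores[0].argsort(descending=True)[:20]   # Top-20 suspicious positions
print(scores.shape, ranked.shape)
```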
A novel deep high-level concept-mining jointing hashing model for unsupervised cross-modal retrieval
IF 3.2
High-Confidence Computing Pub Date: 2024-10-29 DOI: 10.1016/j.hcc.2024.100274
Chun-Ru Dong, Jun-Yan Zhang, Feng Zhang, Qiang Hua, Dachuan Xu
Abstract: Unsupervised cross-modal hashing has achieved great success in various information retrieval applications owing to its efficient storage usage and fast retrieval speed. Recent studies have primarily focused on training the hash-encoding networks by calculating a sample-based similarity matrix to improve retrieval performance. However, two issues remain to be solved: (1) the current sample-based similarity matrix only considers the similarity between image-text pairs, ignoring the different information densities of each modality, which may introduce additional noise and fail to mine the key information for retrieval; (2) most existing unsupervised cross-modal hashing methods only consider alignment between different modalities while ignoring consistency within each modality, resulting in semantic conflicts. To tackle these challenges, a novel Deep High-level Concept-mining Jointing Hashing (DHCJH) model for unsupervised cross-modal retrieval is proposed in this study. DHCJH is able to capture the essential high-level semantic information from the image modality and integrate it into the text modality to improve the accuracy of the guidance information. Additionally, a new hashing loss with a regularization term is introduced to avoid the problems of cross-modal semantic collision and false positive pairs. To validate the proposed method, extensive comparison experiments on benchmark datasets are conducted. Experimental findings reveal that DHCJH achieves superior performance in both accuracy and efficiency. The code of DHCJH is available on GitHub.
High-Confidence Computing, 5(2), Article 100274.
Citations: 0
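To illustrate the general shape of unsupervised cross-modal hashing, the following PyTorch sketch trains two small encoders so that paired image and text features receive similar relaxed hash codes, with a quantization term pushing codes toward ±1; it is a generic, hypothetical baseline, not the DHCJH model or its concept-mining and regularization design, and the feature dimensions are made up.

```python
# Minimal unsupervised cross-modal hashing sketch with paired features.
import torch
import torch.nn as nn
import torch.nn.functional as F

HASH_BITS = 32
img_encoder = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, HASH_BITS))
txt_encoder = nn.Sequential(nn.Linear(300, 256), nn.ReLU(), nn.Linear(256, HASH_BITS))
optimizer = torch.optim.Adam(
    list(img_encoder.parameters()) + list(txt_encoder.parameters()), lr=1e-3)

# Pretend pre-extracted features for 64 paired image/text samples.
img_feat, txt_feat = torch.randn(64, 512), torch.randn(64, 300)

for step in range(100):
    b_img = torch.tanh(img_encoder(img_feat))   # relaxed (continuous) hash codes
    b_txt = torch.tanh(txt_encoder(txt_feat))

    # Pull paired codes together; a quantization term pushes codes toward +/-1.
    align_loss = 1.0 - F.cosine_similarity(b_img, b_txt).mean()
    quant_loss = ((b_img.abs() - 1) ** 2).mean() + ((b_txt.abs() - 1) ** 2).mean()
    loss = align_loss + 0.1 * quant_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Binary codes used at retrieval time.
with torch.no_grad():
    img_codes = torch.sign(torch.tanh(img_encoder(img_feat)))
    txt_codes = torch.sign(torch.tanh(txt_encoder(txt_feat)))
print(img_codes.shape, float(loss))
```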