IEEE Transactions on Emerging Topics in Computational Intelligence: Latest Articles

Evolutionary Optimization for Proactive and Dynamic Computing Resource Allocation in Open Radio Access Network
IF 5.3 | CAS Tier 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-12-11 | DOI: 10.1109/TETCI.2024.3499997
Gan Ruan; Leandro L. Minku; Zhao Xu; Xin Yao
Abstract: In Open Radio Access Network (O-RAN), intelligent techniques are needed to automate computing resource allocation, so as to save computing resources, increase their utilization rate, and decrease network delay. However, the existing formulation of this problem as an optimization problem defines the capacity utility of resources inappropriately and tends to cause considerable delay. Moreover, the only algorithm previously proposed for this problem is a greedy search algorithm, which is not ideal because it can get stuck in local optima. To overcome these issues, a new formulation that better describes the problem is proposed. In addition, an evolutionary algorithm (EA) is designed to find a resource allocation scheme that proactively and dynamically deploys computing resources for processing upcoming traffic data. A multivariate long short-term memory model is used in the proposed EA to predict future traffic data for producing the deployment scheme. As a global search approach, the EA is less likely to get stuck in local optima than greedy search, leading to better solutions. Experimental studies carried out on real-world datasets and artificially generated datasets with different scenarios and properties demonstrate the significant superiority of the proposed EA over a baseline greedy algorithm under all parameter settings. Moreover, experimental studies with all of the aforementioned datasets compare the proposed EA against two variants under different parameter settings, demonstrating the impact of different algorithm choices.
Vol. 9, No. 1, pp. 1001-1018.
Citations: 0
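The abstract above describes a global evolutionary search over resource deployment schemes guided by predicted traffic. A minimal sketch of such a search is below; the fitness function (penalizing unmet load as a delay proxy, plus idle over-provisioning) and all parameters are illustrative assumptions, not the paper's actual objective.

```python
import random

def evolve_allocation(predicted_load, capacity, pop_size=30, generations=200, seed=0):
    """Minimal elitist EA sketch: search an integer allocation of up to
    `capacity` compute units per server to cover `predicted_load`.
    The fitness below is a hypothetical stand-in for the paper's objective."""
    rng = random.Random(seed)
    n = len(predicted_load)

    def fitness(alloc):
        # Penalize unmet load heavily (a delay proxy) and idle units lightly.
        unmet = sum(max(l - a, 0) for l, a in zip(predicted_load, alloc))
        idle = sum(max(a - l, 0) for l, a in zip(predicted_load, alloc))
        return 10 * unmet + idle

    def mutate(alloc):
        child = alloc[:]
        i = rng.randrange(n)
        child[i] = max(0, min(capacity, child[i] + rng.choice([-2, -1, 1, 2])))
        return child

    pop = [[rng.randint(0, capacity) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]  # truncation selection keeps the best half
        pop = parents + [mutate(rng.choice(parents)) for _ in range(pop_size - len(parents))]
    return min(pop, key=fitness)

best = evolve_allocation(predicted_load=[3, 7, 2, 5], capacity=10)
```

In the paper, `predicted_load` would come from the multivariate LSTM forecaster rather than being given directly.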
Efficient Multi-View Clustering via Essential Tensorized Bipartite Graph Learning
IF 5.3 | CAS Tier 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-12-09 | DOI: 10.1109/TETCI.2024.3502459
Wanrong Gu; Junlong Guo; Haiyan Wang; Guangyu Zhang; Bin Zhang; Jiazhou Chen; Hongmin Cai
Abstract: Multi-view spectral clustering has garnered significant attention for its capacity to integrate intrinsic feature information from multiple perspectives, resulting in improved performance. However, overlooking inter-view correlations leads to suboptimal outcomes. Furthermore, the conventional approach of constructing an $N \times N$ graph in multi-view clustering imposes a substantial time burden in large-scale scenarios. To address these challenges, this paper presents an efficient multi-view clustering approach via Essential Tensorized Bipartite Graph Learning (ETBGL). Specifically, ETBGL utilizes the low-rank tensor Schatten $p$-norm to capture inter-view similarity, effectively capturing the high-order correlation information embedded in multiple views. Simultaneously, by incorporating bipartite graph learning, ETBGL efficiently mitigates the computational demands and spatial complexity associated with tensor operations. Additionally, the $\ell_{2,1}$-norm of the tensor is introduced as a sparse penalty on the error term, with the aim of filtering out noise and preserving shared information, thus enhancing clustering robustness. The objective is solved by an efficient algorithm that is economical in time and exhibits good convergence. Comprehensive evaluations on diverse datasets demonstrate the exceptional performance of the proposed model.
Vol. 9, No. 4, pp. 2952-2964.
Citations: 0
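The Schatten $p$-norm central to this abstract is simply the $\ell_p$ norm of a matrix's singular values; the paper applies a tensor version via t-SVD, but the matrix case below is enough to illustrate the quantity being penalized.

```python
import numpy as np

def schatten_p_norm(M, p):
    """Schatten p-norm of a matrix: the l_p norm of its singular values.
    For p < 1 (as used in low-rank tensor relaxations) this becomes a
    quasi-norm that approximates rank more tightly; p = 1 recovers the
    nuclear norm and p = 2 the Frobenius norm."""
    sigma = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(sigma ** p) ** (1.0 / p))

A = np.diag([3.0, 4.0])             # singular values are 4 and 3
nuclear = schatten_p_norm(A, 1.0)   # 3 + 4 = 7 (nuclear norm)
frob = schatten_p_norm(A, 2.0)      # sqrt(9 + 16) = 5 (Frobenius norm)
```

Smaller values of $p$ push the penalty toward counting nonzero singular values, which is why the low-rank relaxation in the abstract favors it over the nuclear norm.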
Streamlined and Resource-Efficient Estimation of Epistemic Uncertainty in Deep Ensemble Classification Decision via Regression
IF 5.3 | CAS Tier 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-12-09 | DOI: 10.1109/TETCI.2024.3508846
Jordan F. Masakuna; Djeff K. Nkashama; Arian Soltani; Marc Frappier; Pierre M. Tardif; Froduald Kabanza
Abstract: Ensemble deep learning (EDL) has emerged as a leading tool for epistemic uncertainty quantification (UQ) in predictive modelling. This study focuses on the use of EDL composed of auto-encoders (AEs) for out-of-distribution (OoD) detection. EDL offers straightforward interpretability and valuable practical insights. Conventionally, employing multiple AEs in an ensemble requires retraining every model whenever substantial changes occur in the data, a process that can become computationally expensive, especially with large ensembles. To address this computational challenge, we introduce an innovative strategy that treats ensemble UQ as a regression problem. During initial training, once the uncertainty distribution is established, we map this distribution to one ensemble member. This ensures that during subsequent training and inference, only one ensemble member and the regression model are needed to predict uncertainties, eliminating the need to maintain the entire ensemble. This streamlined approach is particularly advantageous for systems with limited computational resources or situations that demand rapid decision-making, such as alert management in cybersecurity. Our evaluations on five benchmark OoD detection datasets demonstrate that the uncertainty estimates obtained with the proposed method can, in most cases, align with the uncertainty distribution learned by the ensemble, while significantly reducing computational resource requirements.
Vol. 9, No. 4, pp. 2940-2951.
Citations: 0
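The core move in this abstract, mapping the ensemble's uncertainty onto a single kept member via regression, can be sketched with simulated reconstruction errors; the data-generating process, the choice of member 0 as the keeper, and the 1-D linear regressor are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an AE ensemble: each "member" scores a sample with a
# reconstruction error; errors here are simulated around a shared per-sample
# difficulty, mimicking members that agree more on easy samples.
n_samples, n_members = 500, 10
signal = rng.uniform(0.1, 2.0, size=n_samples)
errors = signal[:, None] * (1 + 0.3 * rng.standard_normal((n_samples, n_members)))

uncertainty = errors.std(axis=1)   # ensemble disagreement = epistemic-UQ proxy
keeper = errors[:, 0]              # the single member we keep at inference

# Fit a least-squares regressor: uncertainty ~ a * keeper_error + b.
a, b = np.polyfit(keeper, uncertainty, deg=1)

# At inference only `keeper` and (a, b) are needed, not the full ensemble.
predicted = a * keeper + b
corr = np.corrcoef(predicted, uncertainty)[0, 1]
```

The point of the sketch is the interface, not the regressor: once the mapping is fitted, the other nine members never need to be stored or retrained.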
PASTA: Neural Architecture Search for Anomaly Detection in Multivariate Time Series
IF 5.3 | CAS Tier 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-12-09 | DOI: 10.1109/TETCI.2024.3508845
Patara Trirat; Jae-Gil Lee
Abstract: Time-series anomaly detection uncovers rare errors or intriguing events of interest that deviate significantly from normal patterns. To precisely detect anomalies, a detector needs to capture the intricate underlying temporal dynamics of a time series, often at multiple scales. A fixed-design neural network may therefore not be optimal for capturing such complex dynamics, as different time-series data require different learning processes to reflect their unique characteristics. This paper proposes a Prediction-based neural Architecture Search for Time series Anomaly detection framework, dubbed PASTA. Unlike previous work, besides searching for connections between operations, we design a novel search space for optimal connections in the temporal dimension among recurrent cells within and between layers, i.e., temporal connectivity, and encode them via multi-level configuration encoding networks. Experimental results on both real-world and synthetic benchmarks show that the architectures discovered by PASTA outperform the second-best state-of-the-art baseline by around 13.6% on average in the enhanced time-series-aware $F_1$ score, confirming that the design of temporal connectivity is critical for time-series anomaly detection.
Vol. 9, No. 4, pp. 2924-2939.
Citations: 0
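To make the "temporal connectivity search space" concrete: each candidate architecture can be encoded as a bitmask over possible recurrent connections, and a search procedure scores candidates. The random search and the proxy score below are purely hypothetical stand-ins (PASTA uses learned prediction-based search, not brute-force evaluation).

```python
import random

def random_search_nas(num_connections=8, budget=100, seed=1):
    """Toy random search over a temporal-connectivity search space: each
    architecture is a bitmask saying which recurrent connections across
    time steps/layers are enabled. The scoring function is a hypothetical
    stand-in for training and validating an anomaly detector."""
    rng = random.Random(seed)

    def proxy_score(mask):
        # Hypothetical proxy: reward moderate connectivity, mimicking
        # "not too sparse, not too dense" temporal wiring.
        return -abs(sum(mask) - num_connections // 2)

    best_mask, best_score = None, float("-inf")
    for _ in range(budget):
        mask = [rng.randint(0, 1) for _ in range(num_connections)]
        s = proxy_score(mask)
        if s > best_score:
            best_mask, best_score = mask, s
    return best_mask, best_score

mask, score = random_search_nas()
```

Replacing `proxy_score` with a trained performance predictor is exactly the step that makes prediction-based NAS tractable: the expensive train-and-evaluate loop is amortized into one model.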
Convex-Concave Programming: An Effective Alternative for Optimizing Shallow Neural Networks
IF 5.3 | CAS Tier 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-12-05 | DOI: 10.1109/TETCI.2024.3502463
Mohammad Askarizadeh; Alireza Morsali; Sadegh Tofigh; Kim Khoa Nguyen
Abstract: In this study, we address the challenges of non-convex optimization in neural networks (NNs) by formulating the training of multilayer perceptron (MLP) NNs as a difference of convex functions (DC) problem. Utilizing the basic convex-concave algorithm to solve our DC problems, we introduce two alternative optimization techniques, DC-GD and DC-OPT, for determining MLP parameters. By leveraging the non-uniqueness property of the convex components in DC functions, we generate strongly convex components for the DC NN cost function. This strong convexity enables our proposed algorithms, DC-GD and DC-OPT, to achieve an iteration complexity of $O\left(\log\left(\frac{1}{\varepsilon}\right)\right)$, surpassing other solvers such as stochastic gradient descent (SGD), which has an iteration complexity of $O\left(\frac{1}{\varepsilon}\right)$. This improvement raises the convergence rate from sublinear (SGD) to linear (ours) while maintaining comparable total computational cost. Furthermore, conventional NN optimizers like SGD, RMSprop, and Adam are highly sensitive to the learning rate, adding computational overhead for practitioners selecting an appropriate learning rate. In contrast, our DC-OPT algorithm is hyperparameter-free (i.e., it requires no learning rate), and our DC-GD algorithm is less sensitive to the learning rate, offering accuracy comparable to other solvers. Additionally, we extend our approach to a convolutional NN architecture, demonstrating its applicability to modern NNs. We evaluate the performance of the proposed algorithms against conventional optimizers such as SGD, RMSprop, and Adam across various test cases. The results suggest that our approach is a viable alternative for optimizing shallow MLP NNs.
Vol. 9, No. 4, pp. 2894-2907.
Citations: 0
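The basic convex-concave procedure the abstract builds on works by writing the objective as $f(x) = g(x) - h(x)$ with $g, h$ convex, linearizing $h$ at the current iterate, and minimizing the resulting convex surrogate. A one-dimensional worked example (not the paper's NN cost, just the mechanism):

```python
def ccp_minimize(x0, steps=60):
    """Convex-concave procedure sketch for f(x) = g(x) - h(x) with
    g(x) = x**4 (convex) and h(x) = x**2 (convex, so -h is concave).
    Each iteration linearizes h at x_k and minimizes the convex surrogate
    g(x) - h'(x_k)*x, which here has the closed form x = (x_k/2)**(1/3)."""
    x = x0
    for _ in range(steps):
        # argmin_x x**4 - 2*x_k*x  =>  4*x**3 = 2*x_k  =>  x = (x_k/2)**(1/3)
        x = (x / 2) ** (1 / 3) if x >= 0 else -((-x / 2) ** (1 / 3))
    return x

# f(x) = x**4 - x**2 has minimizers at x = +/- 1/sqrt(2); starting from
# x0 = 2.0 the iteration converges to the positive one.
x_star = ccp_minimize(x0=2.0)
```

Each surrogate upper-bounds $f$ and touches it at $x_k$, so the iterates monotonically decrease $f$, which is the basic guarantee the paper's DC-GD and DC-OPT variants refine with strong convexity.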
EarlyBirdFL: Leveraging Early Bird Ticket Networks for Enhanced Personalized Learning
IF 5.3 | CAS Tier 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-12-04 | DOI: 10.1109/TETCI.2024.3500009
Dongdong Li; Weiwei Lin; Wenying Duan; Bo Liu; Victor Chang
Abstract: Federated learning (FL) is revolutionizing mobile computing and IoT development by enhancing data privacy. However, restricted computational and communication resources and the statistical variability of data stored on devices present substantial obstacles to ongoing progress in FL. We introduce EarlyBirdFL, a novel FL framework that leverages an Early-Bird Ticket-inspired pruning and masking technique for efficient training and communication in federated settings. EarlyBirdFL enables each client to achieve fast local training by identifying efficient subnetworks early in the training process, communicating only these pruned networks between the server and the client. Unlike classical personalized FL, in which the client-side model learns differences, EarlyBirdFL allows each client to quickly identify these efficient subnetworks using a mask metric. Experimental results demonstrate that EarlyBirdFL outperforms traditional methods in computation time and accuracy, achieving a 1.53-4.98x speedup and 1.01-1.15x higher accuracy. Furthermore, EarlyBirdFL remains stable under parameter adjustments and performs well in different non-IID environments, matching or surpassing the performance of other methods. This approach combines early efficient subnetwork identification, pruning, masking, and personalized federated learning to address the unique challenges of FL.
Vol. 9, No. 4, pp. 2879-2893.
Citations: 0
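The early-bird ticket idea behind this abstract is that the magnitude-based pruning mask stabilizes early in training, so subnetworks can be identified before convergence. A minimal sketch of the mask-distance test (the 50% sparsity, drift magnitudes, and detection threshold are illustrative assumptions):

```python
import numpy as np

def prune_mask(weights, sparsity=0.5):
    """Magnitude pruning mask: keep the largest-magnitude (1 - sparsity)
    fraction of weights."""
    k = int(len(weights) * sparsity)
    threshold = np.sort(np.abs(weights))[k]
    return np.abs(weights) >= threshold

def mask_distance(m1, m2):
    """Normalized Hamming distance between two pruning masks; an early-bird
    ticket is declared once this stabilizes near zero across epochs."""
    return float(np.mean(m1 != m2))

rng = np.random.default_rng(0)
w_epoch1 = rng.standard_normal(1000)
w_epoch2 = w_epoch1 + 0.01 * rng.standard_normal(1000)  # small training drift
w_random = rng.standard_normal(1000)                    # unrelated weights

d_stable = mask_distance(prune_mask(w_epoch1), prune_mask(w_epoch2))
d_random = mask_distance(prune_mask(w_epoch1), prune_mask(w_random))
```

In the federated setting described above, once `d_stable` falls below a threshold the client freezes its mask and only the pruned subnetwork travels between client and server.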
Explainable Artificial Intelligence Approach for Demand-Side Management in a 1-Phase Multi-Type Consumer Base: Enhancing Efficiency and Transparency
IF 5.3 | CAS Tier 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-12-02 | DOI: 10.1109/TETCI.2024.3499326
Uttamarani Pati; Khyati D. Mistry
Abstract: Technological advancements have enabled electricity utilities to experiment with various artificial-intelligence approaches to minimize the challenges posed by end-user demand volatility. Although such techniques have made the system easier to operate, they have also made the internal process difficult to interpret, which makes it hard for the operator to resolve issues arising from faults in the model design. Designing demand response strategies that are simple to comprehend is therefore crucial, so that the consumer demand response model exhibits the much-needed system properties of transparency, trust, and objectivity. The fundamental goal of this research is to introduce an explainable artificial intelligence (XAI) demand response (DR) model based on machine learning (ML) that assures supply-demand equilibrium across the power system network. The proposed methodology combines an integrated load forecasting approach with a DR model based on Jaya optimization. Subsequently, the effectiveness of the DR program is illustrated in relation to the accuracy of the load forecasting model. The operation of this integrated ML-based technique is shown using an XAI-based model architecture. The proposed technique was modelled and tested in the MATLAB interface utilizing a database of energy usage from 24 end-users.
Vol. 9, No. 4, pp. 2869-2878.
Citations: 0
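The Jaya optimizer named in this abstract is a population method with no algorithm-specific hyperparameters: candidates move toward the current best solution and away from the worst. A minimal sketch on a toy objective (the squared-deviation cost is a hypothetical stand-in for the paper's demand-response objective):

```python
import numpy as np

def jaya(objective, bounds, pop_size=20, iters=200, seed=0):
    """Minimal Jaya optimizer sketch (Rao's update rule): each candidate
    moves toward the best and away from the worst member, with greedy
    acceptance. No learning rate or crossover/mutation rates are needed."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(iters):
        scores = np.array([objective(x) for x in pop])
        best, worst = pop[scores.argmin()], pop[scores.argmax()]
        r1, r2 = rng.random((2, pop_size, len(lo)))
        cand = np.clip(pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop)), lo, hi)
        improve = np.array([objective(x) for x in cand]) < scores
        pop[improve] = cand[improve]  # greedy: keep a candidate only if better
    scores = np.array([objective(x) for x in pop])
    return pop[scores.argmin()], float(scores.min())

# Toy stand-in for a demand-response cost: squared deviation of scheduled
# loads from a target profile (the paper's actual objective involves
# forecasted demand and consumer types).
target = np.array([1.0, 2.0, 1.5])
cost = lambda x: float(np.sum((x - target) ** 2))
x_best, f_best = jaya(cost, bounds=(np.zeros(3), np.full(3, 3.0)))
```

The absence of tunable algorithm parameters is presumably part of its appeal for an explainability-focused DR pipeline: there is less opaque configuration to justify to an operator.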
PETformer: Long-Term Time Series Forecasting via Placeholder-Enhanced Transformer
IF 5.3 | CAS Tier 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-12-02 | DOI: 10.1109/TETCI.2024.3502437
Shengsheng Lin; Weiwei Lin; Wentai Wu; Songbo Wang; Yongxiang Wang
Abstract: Recently, the superiority of the Transformer for long-term time series forecasting (LTSF) tasks has been challenged, particularly since recent work has shown that simple models can outperform numerous Transformer-based approaches. This evidence suggests that a notable gap remains in fully leveraging the potential of the Transformer in LTSF tasks. This study therefore investigates key issues in applying the Transformer to LTSF, encompassing temporal continuity, information density, and multi-channel relationships. We introduce the Placeholder-Enhanced Technique (PET) to improve the computational efficiency and predictive accuracy of the Transformer in LTSF tasks. Furthermore, we examine the impact of larger patch strategies and channel interaction strategies on Transformer performance, specifically Long Sub-sequence Division (LSD) and Multi-channel Separation and Interaction (MSI). These strategies collectively constitute a novel model termed PETformer. Extensive experiments demonstrate that PETformer achieves state-of-the-art performance on eight commonly used public LTSF datasets, surpassing all existing models. The insights and enhancement methodologies presented in this paper serve as valuable reference points and sources of inspiration for future research.
Vol. 9, No. 2, pp. 1189-1201.
Citations: 0
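A shape-level sketch of the placeholder idea in this abstract: the look-back window is cut into long sub-sequences (patches), each embedded to a token, and placeholder tokens standing in for future patches are appended so one encoder can attend over history and horizon jointly. The random embedding weights and all sizes below are stand-ins for learned parameters.

```python
import numpy as np

def build_petformer_tokens(history, patch_len, horizon, d_model, seed=0):
    """Token-layout sketch of a placeholder-enhanced Transformer input:
    history patches are linearly embedded, then learnable placeholder
    tokens for the future patches are appended. A Transformer encoder
    would attend over all tokens and the forecast would be read off the
    placeholder positions (weights here are random stand-ins)."""
    rng = np.random.default_rng(seed)
    n_hist = len(history) // patch_len          # Long Sub-sequence Division
    n_future = horizon // patch_len
    patches = history[: n_hist * patch_len].reshape(n_hist, patch_len)
    W = rng.standard_normal((patch_len, d_model))        # patch embedding
    hist_tokens = patches @ W                            # (n_hist, d_model)
    placeholders = np.tile(rng.standard_normal(d_model), (n_future, 1))
    return np.concatenate([hist_tokens, placeholders], axis=0)

# 96-step look-back, 48-step horizon, patch length 24 -> 4 + 2 = 6 tokens.
tokens = build_petformer_tokens(history=np.arange(96.0), patch_len=24,
                                horizon=48, d_model=16)
```

Because the forecast is read from token positions rather than generated autoregressively, a single encoder pass produces the whole horizon, which is where the efficiency claim comes from.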
Exploring the Horizons of Meta-Learning in Neural Networks: A Survey of the State-of-the-Art
IF 5.3 | CAS Tier 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-12-02 | DOI: 10.1109/TETCI.2024.3502355
Asit Barman; Swalpa Kumar Roy; Swagatam Das; Paramartha Dutta
Abstract: In the vast landscape of machine learning, meta-learning stands out as a challenging and dynamic area of exploration. While traditional machine learning models rely on standard algorithms to learn from data, meta-learning elevates this process by leveraging prior knowledge to adapt and improve learning, mimicking the adaptive nature of human learning. This paradigm offers promising avenues for addressing the limitations of conventional deep learning approaches, such as data and computational constraints, as well as issues related to generalization. In this comprehensive survey, we delve into the intricacies of meta-learning methodologies. Beginning with a foundational overview of meta-learning and its associated fields, we present a detailed methodology elucidating how meta-learning works. Recognizing the importance of rigorous evaluation, we also furnish a comprehensive summary of prevalent benchmark datasets and recent advancements in meta-learning techniques applied to them. Additionally, we explore meta-learning's diverse applications and achievements in domains such as reinforcement learning and few-shot learning. Lastly, we examine practical hurdles and potential research directions, providing insights for future endeavors in this burgeoning field.
Vol. 9, No. 1, pp. 27-42.
Citations: 0
2024 Index IEEE Transactions on Emerging Topics in Computational Intelligence Vol. 8
IF 5.3 | CAS Tier 3 | Computer Science
IEEE Transactions on Emerging Topics in Computational Intelligence. Pub Date: 2024-12-02 | DOI: 10.1109/TETCI.2024.3508953
Annual index; no abstract. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10772053
Vol. 8, No. 6, pp. 4261-4326.
Citations: 0