International Journal of Intelligent Systems — Latest Articles

Feature Transformation Reconstruction (FTR) Network for Unsupervised Anomaly Detection
IF 5.0 | CAS Q2 (Computer Science)
International Journal of Intelligent Systems, Pub Date: 2025-04-23, DOI: 10.1155/int/1780499
Linna Zhang, Lanyao Zhang, Qi Cao, Shichao Kan, Yigang Cen, Fugui Zhang, Yansen Huang
{"title":"Feature Transformation Reconstruction (FTR) Network for Unsupervised Anomaly Detection","authors":"Linna Zhang,&nbsp;Lanyao Zhang,&nbsp;Qi Cao,&nbsp;Shichao Kan,&nbsp;Yigang Cen,&nbsp;Fugui Zhang,&nbsp;Yansen Huang","doi":"10.1155/int/1780499","DOIUrl":"https://doi.org/10.1155/int/1780499","url":null,"abstract":"<div>\u0000 <p>The goal of the feature reconstruction network based on an autoencoder in the training phase is to force the network to reconstruct the input features well. The network tends to learn shortcuts of “identity mapping,” which leads to the network outputting abnormal features as they are in the inference phase. As such, the abnormal features based on reconstruction error cannot be distinguished from normal features, significantly limiting the detection performance of such methods. To address this issue, we propose a feature transformation reconstruction (FTR) network, which can avoid the identity mapping problem. Specifically, we use a normalizing flow model as a feature transformation (FT) network to transform input features into other forms. The training goal of the feature reconstruction (FR) network is no longer to reconstruct the input features but to reconstruct the transformed features, effectively avoiding the shortcut of learning the “identity map.” Furthermore, this paper proposes a masked convolutional attention (MCA) module, which randomly masks the input features in the training phase and reconstructs the input features in a self-supervised manner. In the testing phase, the MCA can effectively suppress the excessive reconstruction of abnormal features and further improve anomaly detection performance. FTR achieves the scores of the area under the receiver operating characteristic curve (AUROC) at 99.5% and 97.8% on the MVTec AD and BTAD datasets, respectively, outperforming other state-of-the-art methods. Moreover, FTR is faster than the existing methods, with a high speed of 137 frames per second (FPS) on a 3080ti GPU.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/1780499","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143861884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IntFedSV: A Novel Participants’ Contribution Evaluation Mechanism for Federated Learning
IF 5.0 | CAS Q2 (Computer Science)
International Journal of Intelligent Systems, Pub Date: 2025-04-22, DOI: 10.1155/int/3466867
Tianxu Cui, Ying Shi, Wenge Li, Rijia Ding, Qing Wang
{"title":"IntFedSV: A Novel Participants’ Contribution Evaluation Mechanism for Federated Learning","authors":"Tianxu Cui,&nbsp;Ying Shi,&nbsp;Wenge Li,&nbsp;Rijia Ding,&nbsp;Qing Wang","doi":"10.1155/int/3466867","DOIUrl":"https://doi.org/10.1155/int/3466867","url":null,"abstract":"<div>\u0000 <p>Federated learning (FL), which is a distributed privacy computing technology, has demonstrated strong capabilities in addressing potential privacy leakage for multisource data fusion and has been widely applied in various industries. Existing contribution evaluation mechanisms based on Shapley values uniquely allocate the total utility of a federation based on the marginal contributions of participants. However, in practical engineering applications, participants from different data sources typically exhibit significant differences and uncertainties in terms of their contributions to a federation, thus rendering it difficult to represent their contributions precisely. To evaluate the contribution of each participant to FL more effectively, we propose a novel interval federated Shapley value (IntFedSV) contribution evaluation mechanism. Second, to improve computational efficiency, we utilize a matrix semitensor product-based method to compute the IntFedSV. Finally, extensive experiments on four public datasets (MNIST, CIFAR10, AG_NEWS, and IMDB) demonstrate its potential in engineering applications. Our proposed mechanism can effectively evaluate the contribution levels of participants. Compared with the case of three advanced baseline methods, the minimum and maximum improvement rates of standard deviation for our proposed mechanism are 11.83% and 99.00%, respectively, thus demonstrating its greater stability and fault tolerance. This study contributes positively to promoting engineering applications of FL.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/3466867","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143857063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ethical Principles of Integrating ChatGPT Into IoT–Based Software Wearables: A Fuzzy-TOPSIS Ranking and Analysis Approach
IF 5.0 | CAS Q2 (Computer Science)
International Journal of Intelligent Systems, Pub Date: 2025-04-22, DOI: 10.1155/int/6660868
Maseeh Ullah Khan, Muhammad Farhat Ullah, Sabeeh Ullah Khan, Weiqiang Kong
{"title":"Ethical Principles of Integrating ChatGPT Into IoT–Based Software Wearables: A Fuzzy-TOPSIS Ranking and Analysis Approach","authors":"Maseeh Ullah Khan,&nbsp;Muhammad Farhat Ullah,&nbsp;Sabeeh Ullah Khan,&nbsp;Weiqiang Kong","doi":"10.1155/int/6660868","DOIUrl":"https://doi.org/10.1155/int/6660868","url":null,"abstract":"<div>\u0000 <p>The rapid development of the internet of things (IoT) prompts organizations and developers to seek innovative approaches for future IoT device development and research. Leveraging advanced artificial intelligence (AI) models such as ChatGPT holds promise in reshaping the conceptualization, development, and commercialization of IoT devices. Through real-world data utilization, AI enhances the effectiveness, adaptability, and intelligence of IoT devices and wearables, expediting their production process from ideation to deployment and customer assistance. However, integrating ChatGPT into IoT–based devices and wearables poses ethical concerns including data ownership, security, privacy, accessibility, bias, accountability, cost, design, quality, storage, model training, explainability, consistency, fairness, safety, transparency, trust, and generalizability. Addressing these ethical principles necessitates a comprehensive review of the literature to identify and classify relevant principles. The author identified 14 ethical principles from the literature using a systematic literature review (SLR) with a criteria of frequency ≥ 50% based on similarities. Four categories emerge based on the identified ethical principles, culminating in the application of Fuzzy-TOPSIS for analyzing, categorizing, ranking, and prioritizing these ethical principles. From the Fuzzy-TOPSIS technique results, the principle of data security and privacy is the highly ranked ethical principle for IoT–based software wearable devices with the ranking value of “0.925” as a consistency coefficient index. This method, well-established in computer science, effectively navigates fuzzy and uncertain decision-making scenarios. The pioneer outcomes of this study provide a taxonomy-based valuable insight for software manufacturers, facilitating the analysis, ranking, categorization, and prioritization of ethical principles amid the integration of ChatGPT in IoT–based devices and wearables’ research and development.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/6660868","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143861674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Facial Expression Recognition Method Based on Octonion Orthogonal Feature Extraction and Octonion Vision Transformer
IF 5.0 | CAS Q2 (Computer Science)
International Journal of Intelligent Systems, Pub Date: 2025-04-21, DOI: 10.1155/int/6388642
Yuan Tian, Hang Cai, Huang Yao, Di Chen
{"title":"Facial Expression Recognition Method Based on Octonion Orthogonal Feature Extraction and Octonion Vision Transformer","authors":"Yuan Tian,&nbsp;Hang Cai,&nbsp;Huang Yao,&nbsp;Di Chen","doi":"10.1155/int/6388642","DOIUrl":"https://doi.org/10.1155/int/6388642","url":null,"abstract":"<div>\u0000 <p>In the field of artificial intelligence, facial expression recognition (FER) in natural scenes is a challenging topic. In recent years, vision transformer (ViT) models have been applied to FER tasks. The direct use of the original ViT structure consumes a lot of computational resources and longer training time. To overcome these problems, we propose a FER method based on octonion orthogonal feature extraction and octonion ViT. First, to reduce feature redundancy, we propose an orthogonal feature decomposition method to map the extracted features onto seven orthogonal sub-features. Then, an octonion orthogonal representation method is introduced to correlate the orthogonal features, maintain the intrinsic dependencies between different orthogonal features, and enhance the model’s ability to extract features. Finally, an octonion ViT is presented, which reduces the number of parameters to one-eighth of ViT while improving the accuracy of FER. Experimental results on three commonly used facial expression datasets show that the proposed method outperforms several state-of-the-art models with a significant reduction in the number of parameters.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/6388642","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143852711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Efficient Integrated Radio Detection and Identification Deep Learning Architecture
IF 5.0 | CAS Q2 (Computer Science)
International Journal of Intelligent Systems, Pub Date: 2025-04-21, DOI: 10.1155/int/4477742
Zhiyong Luo, Yanru Wang, Xiti Wang
{"title":"An Efficient Integrated Radio Detection and Identification Deep Learning Architecture","authors":"Zhiyong Luo,&nbsp;Yanru Wang,&nbsp;Xiti Wang","doi":"10.1155/int/4477742","DOIUrl":"https://doi.org/10.1155/int/4477742","url":null,"abstract":"<div>\u0000 <p>The detection and identification of radio signals play a crucial role in cognitive radio, electronic reconnaissance, noncooperative communication, etc. Deep neural networks have emerged as a promising approach for electromagnetic signal detection and identification, outperforming traditional methods. Nevertheless, the present deep neural networks not only overlook the characteristics of electromagnetic signals but also treat these two tasks as independent components, similar to conventional methods. These issues limit overall performance and unnecessarily increase computational consumption. In this paper, we have designed a novel and universally applicable integrated radio detection and identification deep architecture and corresponding training method, which organically combines detection and identification networks. Furthermore, we extract signal features using only one-dimensional horizontal convolution based on the characteristics of the impact of wireless channels on time-domain signals. Experiments show that the proposed methods perform signal detection and identification more efficiently, which can not only reduce unnecessary computational consumption but also improve the accuracy and robustness of both detection and identification simultaneously. More specifically, the ability to distinguish different modulated signal categories tends to increase with the rise in SNRs, and the upper limit of detection accuracy can exceed 95% at SNRs above 0 dB. The proposed method can improve both signal detection and identification accuracy from 83.44% to 83.56% and from 61.27% to 62.32%, respectively.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/4477742","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143856695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Resilience Recovery Method for Complex Traffic Network Security Based on Trend Forecasting
IF 5.0 | CAS Q2 (Computer Science)
International Journal of Intelligent Systems, Pub Date: 2025-04-21, DOI: 10.1155/int/3715086
Sheng Hong, Tianyu Yue, Yang You, Zhengnan Lv, Xu Tang, Jing Hu, Hongwei Yin
{"title":"A Resilience Recovery Method for Complex Traffic Network Security Based on Trend Forecasting","authors":"Sheng Hong,&nbsp;Tianyu Yue,&nbsp;Yang You,&nbsp;Zhengnan Lv,&nbsp;Xu Tang,&nbsp;Jing Hu,&nbsp;Hongwei Yin","doi":"10.1155/int/3715086","DOIUrl":"https://doi.org/10.1155/int/3715086","url":null,"abstract":"<div>\u0000 <p>Due to the rapid development of information technology, a huge and complex traffic network has been established across various sectors including aviation, aerospace, vehicles, ships, electric power, and industry. However, because of the complexity and diversity of its structure, the complex traffic network is vulnerable to be attacked and faces serious security challenges. Therefore, this paper innovatively proposes a traffic network resilience recovery method based on resilience trend forecasting. In this paper, the risk value is introduced into the analysis of network fault propagation process, and the Susceptible, Infectious, Recovered, Dead-Risk (SIRD-R) fault propagation model is established. The resilience model of traffic network, which encompasses real-time resilience and overall resilience, is constructed through the integration of network resilience bearing capacity and resilience recovery capacity. Then, the resilience of complex traffic network is forecasted by using long short-term memory network, and the resilience recovery strategy of complex traffic network based on forecasting is proposed. Finally, the effectiveness and scalability of the proposed method are demonstrated through experimental analysis conducted on a diverse range of complex traffic networks, affirming its applicability in real-world scenarios.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/3715086","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143852712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fine-Grained Dance Style Classification Using an Optimized Hybrid Convolutional Neural Network Architecture for Video Processing Over Multimedia Networks
IF 5.0 | CAS Q2 (Computer Science)
International Journal of Intelligent Systems, Pub Date: 2025-04-21, DOI: 10.1155/int/6434673
Na Guo, Ahong Yang, Yan Wang, Elaheh Dastbaravardeh
{"title":"Fine-Grained Dance Style Classification Using an Optimized Hybrid Convolutional Neural Network Architecture for Video Processing Over Multimedia Networks","authors":"Na Guo,&nbsp;Ahong Yang,&nbsp;Yan Wang,&nbsp;Elaheh Dastbaravardeh","doi":"10.1155/int/6434673","DOIUrl":"https://doi.org/10.1155/int/6434673","url":null,"abstract":"<div>\u0000 <p>Dance style recognition through video analysis during university training can significantly benefit both instructors and novice dancers. Employing video analysis in training offers substantial advantages, including the potential to train future dancers using innovative technologies. Over time, intricate dance gestures can be honed, reducing the burden on instructors who would, otherwise, need to provide repetitive demonstrations. Recognizing dancers’ movements, evaluating and adjusting their gestures, and extracting cognitive functions for efficient evaluation and classification are pivotal aspects of our model. Deep learning currently stands as one of the most effective approaches for achieving these objectives, particularly with short video clips. However, limited research has focused on automated analysis of dance videos for training purposes and assisting instructors. In addition, assessing the quality and accuracy of performance video recordings presents a complex challenge, especially when judges cannot fully focus on the on-stage performance. This paper proposes an alternative to manual evaluation through a video-based approach for dance assessment. By utilizing short video clips, we conduct dance analysis employing techniques such as fine-grained dance style classification in video frames, convolutional neural networks (CNNs) with channel attention mechanisms (CAMs), and autoencoders (AEs). These methods enable accurate evaluation and data gathering, leading to precise conclusions. Furthermore, utilizing cloud space for real-time processing of video frames is essential for timely analysis of dance styles, enhancing the efficiency of information processing. Experimental results demonstrate the effectiveness of our evaluation method in terms of accuracy and F1-score calculation, with accuracy exceeding 97.24% and the F1-score reaching 97.30%. These findings corroborate the efficacy and precision of our approach in dance evaluation analysis.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/6434673","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143856991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MultiResFF-Net: Multilevel Residual Block-Based Lightweight Feature Fused Network With Attention for Gastrointestinal Disease Diagnosis
IF 5.0 | CAS Q2 (Computer Science)
International Journal of Intelligent Systems, Pub Date: 2025-04-15, DOI: 10.1155/int/1902285
Sohaib Asif, Yajun Ying, Tingting Qian, Jun Yao, Jinjie Qu, Vicky Yang Wang, Rongbiao Ying, Dong Xu
{"title":"MultiResFF-Net: Multilevel Residual Block-Based Lightweight Feature Fused Network With Attention for Gastrointestinal Disease Diagnosis","authors":"Sohaib Asif,&nbsp;Yajun Ying,&nbsp;Tingting Qian,&nbsp;Jun Yao,&nbsp;Jinjie Qu,&nbsp;Vicky Yang Wang,&nbsp;Rongbiao Ying,&nbsp;Dong Xu","doi":"10.1155/int/1902285","DOIUrl":"https://doi.org/10.1155/int/1902285","url":null,"abstract":"<div>\u0000 <p>Accurate detection of gastrointestinal (GI) diseases is crucial due to their high prevalence. Screening is often inefficient with existing methods, and the complexity of medical images challenges single-model approaches. Leveraging diverse model features can improve accuracy and simplify detection. In this study, we introduce a novel deep learning model tailored for the diagnosis of GI diseases through the analysis of endoscopy images. This innovative model, named MultiResFF-Net, employs a multilevel residual block-based feature fusion network. The key strategy involves the integration of features from truncated DenseNet121 and MobileNet architectures. This fusion not only optimizes the model’s diagnostic performance but also strategically minimizes complexity and computational demands, making MultiResFF-Net a valuable tool for efficient and accurate disease diagnosis in GI endoscopy images. A pivotal component enhancing the model’s performance is the introduction of the Modified MultiRes-Block (MMRes-Block) and the Convolutional Block Attention Module (CBAM). The MMRes-Block, a customized residual learning component, optimally handles fused features at the endpoint of both models, fostering richer feature sets without escalating parameters. Simultaneously, the CBAM ensures dynamic recalibration of feature maps, emphasizing relevant channels and spatial locations. This dual incorporation significantly reduces overfitting, augments precision, and refines the feature extraction process. Extensive evaluations on three diverse datasets—endoscopic images, GastroVision data, and histopathological images—demonstrate exceptional accuracy of 99.37%, 97.47%, and 99.80%, respectively. Notably, MultiResFF-Net achieves superior efficiency, requiring only 2.22 MFLOPS and 0.47 million parameters, outperforming state-of-the-art models in both accuracy and cost-effectiveness. These results establish MultiResFF-Net as a robust and practical diagnostic tool for GI disease detection.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/1902285","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143836091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fusion Network Model Based on Broad Learning System for Multidimensional Time-Series Forecasting
IF 5.0 | CAS Q2 (Computer Science)
International Journal of Intelligent Systems, Pub Date: 2025-04-12, DOI: 10.1155/int/1649220
Yuting Bai, Xinyi Xue, Xuebo Jin, Zhiyao Zhao, Yulei Zhang
{"title":"Fusion Network Model Based on Broad Learning System for Multidimensional Time-Series Forecasting","authors":"Yuting Bai,&nbsp;Xinyi Xue,&nbsp;Xuebo Jin,&nbsp;Zhiyao Zhao,&nbsp;Yulei Zhang","doi":"10.1155/int/1649220","DOIUrl":"https://doi.org/10.1155/int/1649220","url":null,"abstract":"<div>\u0000 <p>Multidimensional time-series prediction is significant in various fields, such as human production and life, weather forecasting, and artificial intelligence. However, a single model can only focus on specific features of time-series data, making it unable to consider both linear and nonlinear components simultaneously. In this study, we propose a fusion network that combines the advantages of deep and broad networks for multidimensional time-series prediction tasks. The complex multidimensional time-series data are divided into nonlinear and time-series data. Restricted Boltzmann machine and mapping functions are used for feature learning and generating mapping nodes at the mapping layer. The echo state network and gate recurrent unit are applied in the enhancement layer. The proposed model has been validated on PM2.5 and wind turbine power datasets, proving superior performance in multistep prediction tasks compared to the baseline models.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/1649220","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143824674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LDSGAN: Unsupervised Image-to-Image Translation With Long-Domain Search GAN for Generating High-Quality Anime Images
IF 5.0 | CAS Q2 (Computer Science)
International Journal of Intelligent Systems, Pub Date: 2025-04-12, DOI: 10.1155/int/4450460
Hao Wang, Chenbin Wang, Xin Cheng, Hao Wu, Jiawei Zhang, Jinwei Wang, Xiangyang Luo, Bin Ma
{"title":"LDSGAN: Unsupervised Image-to-Image Translation With Long-Domain Search GAN for Generating High-Quality Anime Images","authors":"Hao Wang,&nbsp;Chenbin Wang,&nbsp;Xin Cheng,&nbsp;Hao Wu,&nbsp;Jiawei Zhang,&nbsp;Jinwei Wang,&nbsp;Xiangyang Luo,&nbsp;Bin Ma","doi":"10.1155/int/4450460","DOIUrl":"https://doi.org/10.1155/int/4450460","url":null,"abstract":"<div>\u0000 <p>Image-to-image (<b>I2I</b>) translation has emerged as a valuable tool for privacy protection in the digital age, offering effective ways to safeguard portrait rights in cyberspace. In addition, I2I translation is applied in real-world tasks such as image synthesis, super-resolution, virtual fitting, and virtual live streaming. Traditional I2I translation models demonstrate strong performance when handling similar datasets. However, when the domain distance between two datasets is large, translation quality may degrade significantly due to notable differences in image shape and edges. To address this issue, we propose Long-Domain Search GAN (<b>LDSGAN</b>), an unsupervised I2I translation network that employs a GAN structure as its backbone, incorporating a novel Real-Time Routing Search (<b>RTRS</b>) module and Sketch Loss. Specifically, RTRS aids in expanding the search space within the target domain, aligning feature projection with images closest to the optimization target. Additionally, Sketch Loss retains human visual similarity during long-domain distance translation. Experimental results indicate that LDSGAN surpasses existing I2I translation models in both image quality and semantic similarity between input and generated images, as reflected by its mean FID and LPIPS scores of 31.509 and 0.581, respectively.</p>\u0000 </div>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":5.0,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/4450460","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143822113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0