Applied Intelligence: Latest Articles

An innovative deep learning-driven technique for restoration of lost high-density surface electromyography signals
IF 3.4 · CAS Zone 2 · Computer Science
Applied Intelligence Pub Date : 2025-04-09 DOI: 10.1007/s10489-025-06471-9
Juzheng Mao, Honghan Li, Yongkun Zhao
Abstract: High-density surface electromyography (HD-sEMG) plays a crucial role in medical diagnostics, prosthetic control, and human-machine interactions. Compared to traditional bipolar sEMG, HD-sEMG employs smaller electrode spacing and sizes. This configuration not only reduces the signal collection area but also increases sensitivity to individual variations in skin impedance. Additionally, smaller high-density electrodes are more susceptible to environmental electromagnetic interference, increasing the risk of signal loss and limiting the further development and application of HD-sEMG technology. To address this issue, this study introduces a novel deep learning-based technique specifically designed to restore lost HD-sEMG signals. Through an improved convolutional neural network (CNN), our method reconstructs HD-sEMG signals both efficiently and accurately. Experimental results demonstrate that the proposed CNN algorithm effectively reconstructs lost HD-sEMG signals with high fidelity: the average root mean square error (RMSE) across all participants was 0.108, the mean absolute error (MAE) was 0.070, and the coefficient of determination (R²) was 0.98. Furthermore, the model achieved an average structural similarity index measure (SSIM) of 0.96 and a peak signal-to-noise ratio (PSNR) of 29.13 dB, indicating high structural similarity and signal clarity in the reconstructed data. These findings highlight the robustness and effectiveness of our method, suggesting its potential for enhancing the reliability and utility of HD-sEMG signals in real applications.
Open access PDF: https://link.springer.com/content/pdf/10.1007/s10489-025-06471-9.pdf
Citations: 0
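The abstract above quantifies reconstruction quality with RMSE, MAE, R², SSIM, and PSNR. As a hedged illustration of how these five metrics can be computed for a reconstructed multichannel signal (this is not the authors' evaluation code; the array shapes, data-range choice, and synthetic data are assumptions):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def reconstruction_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Common reconstruction metrics for a 2D signal map (channels x samples).
    Illustrative only; ranges and shapes are assumptions, not the paper's setup."""
    err = y_pred - y_true
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    # Coefficient of determination over the flattened signal
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    # PSNR and SSIM need a data range; here we use the ground-truth dynamic range
    data_range = y_true.max() - y_true.min()
    psnr = 20 * np.log10(data_range / rmse)
    ssim_val = ssim(y_true, y_pred, data_range=data_range)
    return {"RMSE": rmse, "MAE": mae, "R2": r2, "SSIM": ssim_val, "PSNR_dB": psnr}

# Example with synthetic data standing in for an HD-sEMG grid
rng = np.random.default_rng(0)
clean = rng.standard_normal((64, 2000))            # 64-channel grid, 2000 samples
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
print(reconstruction_metrics(clean, noisy))
```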
Toward imperceptible and robust image watermarking against screen-shooting with dense blocks and CBAM
IF 3.4 · CAS Zone 2 · Computer Science
Applied Intelligence Pub Date : 2025-04-09 DOI: 10.1007/s10489-025-06496-0
Jiamin Wang, Xiaobing Kang, Wei Li, Jing Geng, Yalin Miao, Yajun Chen
Abstract: In cross-media information communication, it is essential to embed watermarks imperceptibly while also robustly resisting screen-shooting attacks. However, existing robust watermarking methods often struggle to achieve both objectives simultaneously. Therefore, this paper proposes a novel end-to-end screen-shooting-resistant image watermarking method based on dense blocks and the convolutional block attention module (CBAM) attention mechanism. In the watermark embedding phase, an encoder that integrates dense connections and CBAM is employed. This approach effectively extracts features from the cover image, enhancing the visual quality of watermarked images while ensuring a certain level of robustness. The noise layer, simulated by differentiable functions, covers not only the moiré patterns, illumination changes, and perspective distortions that significantly affect the screen-shooting process, but also commonly present Gaussian noise. During the watermark extraction phase, a gradient mask guides the encoder in generating watermarked images that facilitate more effective decoding, enabling accurate extraction of the watermark. Ultimately, robustness is improved by jointly training the encoder, the introduced noise layer, and the decoder. Experimental results demonstrate that the proposed method not only achieves excellent visual quality, with a PSNR of 36.04 dB for the watermarked images, but also maintains a watermark extraction rate exceeding 95% under various shooting conditions (including different distances, angles, and devices). Notably, the extraction rate reaches 100% at shooting distances of 20 cm and 30 cm, showcasing strong robustness.
Citations: 0
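CBAM, the attention mechanism named in the title and abstract, applies channel attention followed by spatial attention to a feature map. The PyTorch sketch below is a minimal, generic CBAM block for orientation only; the reduction ratio, kernel size, and tensor shapes are assumptions, and it is not the paper's encoder:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM: channel attention (shared MLP over avg/max pooling), then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from global average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise mean and max maps
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)))
        return x * sa

feat = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```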
PSCNet: Long sequence time-series forecasting for photovoltaic power via period selection and cross-variable attention
IF 3.4 · CAS Zone 2 · Computer Science
Applied Intelligence Pub Date : 2025-04-09 DOI: 10.1007/s10489-025-06526-x
Hao Tan, Jinghui Qin, Zizheng Li, Weiyan Wu
Abstract: With the continuous expansion of photovoltaic installation capacity, accurate prediction of photovoltaic power generation is crucial for balancing electricity supply and demand, optimizing energy storage systems, and improving energy efficiency. With the help of deep learning, the stability and reliability of photovoltaic power prediction have improved significantly. However, existing methods primarily focus on temporal dependencies and often fall short in capturing the multivariate correlations between variables. In this paper, we propose PSCNet, a novel long-sequence time-series forecasting network for photovoltaic power based on period selection and cross-variable attention. Specifically, we first propose the Top-K periodicity selection module (TPSM) to identify the Top-K principal periods and decouple overlapped multi-periodic patterns, enabling the model to attend to periodic changes across different scales simultaneously. We then design a time-variate cascade perceptron to capture both temporal change patterns and variate change patterns in the time series. It contains two modules: a Time-mixing MLP (TM-MLP) and a Cross-variable Attention Module (CvAM). The former captures long- and short-term variations in the time series, while the latter integrates effective information from the auxiliary variates that affect photovoltaic power forecasting to enhance the feature representation for better power prediction. Extensive experiments on the DKASC Alice Springs dataset demonstrate that our model outperforms existing state-of-the-art photovoltaic power forecasting methods on three commonly used metrics: Mean Absolute Error (MAE), Mean Squared Error (MSE), and Mean Absolute Percentage Error (MAPE).
Citations: 0
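Top-K period selection of the kind described for TPSM is often realized by ranking FFT amplitudes and converting the dominant frequencies into periods. The sketch below illustrates that general mechanism on a single 1-D series; it is an assumption about the common approach, not the paper's exact module:

```python
import numpy as np

def topk_periods(series: np.ndarray, k: int = 3) -> list:
    """Pick the k dominant periods of a 1-D series from its FFT amplitude spectrum.
    A generic sketch of Top-K period selection, not the paper's exact TPSM."""
    n = len(series)
    amp = np.abs(np.fft.rfft(series - series.mean()))
    amp[0] = 0.0                              # ignore the DC component
    top_freqs = np.argsort(amp)[::-1][:k]     # indices of the k largest amplitudes
    return [n // f for f in top_freqs if f > 0]

t = np.arange(24 * 7 * 4)                     # four weeks of hourly samples
x = np.sin(2 * np.pi * t / 24) + 0.5 * np.sin(2 * np.pi * t / (24 * 7))
print(topk_periods(x, k=2))                   # expect periods close to 24 and 168
```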
2D-Variation convolution-based generative adversarial network for unsupervised time series anomaly detection: a MSTL enhanced data preprocessing approach
IF 3.4 · CAS Zone 2 · Computer Science
Applied Intelligence Pub Date : 2025-04-09 DOI: 10.1007/s10489-025-06469-3
Qingdong Wang, Lei Zou, Weibo Liu
Abstract: Time series anomaly detection (TSAD) is a critical task in various research fields such as quantitative trading, cyber attack detection, and semiconductor outlier detection. As a binary classification task, the performance of TSAD is significantly influenced by the data imbalance problem, where the datasets heavily skew towards the normal class due to the extreme scarcity of abnormal data. Furthermore, the limited availability of anomaly data makes it challenging to perform manual labeling, which leads to the development of unsupervised anomaly detection approaches. In this paper, we propose a novel generative adversarial network (GAN) with Multiple-Seasonal-Trend decomposition using Loess (MSTL) data preprocessing algorithm for unsupervised anomaly detection on time series data. With the MSTL data preprocessing algorithm, the network architecture is simplified, thereby alleviating computational burden. A 2D-variation convolution-based method is integrated into the GAN to enhance feature extraction and generalization capabilities. To avoid the model collapse problem caused by data deficiency, multiple generators are employed, and a joint loss function is designed to improve the robustness of the training process. Experiments on several benchmark datasets from various domains demonstrate the efficacy and superiority of our approach compared to existing competitive approaches.
Citations: 0
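MSTL is available in statsmodels (statsmodels.tsa.seasonal.MSTL, version 0.13 or later). Below is a minimal, hedged preprocessing example with two assumed seasonal periods; how the paper feeds the decomposition into the GAN is not reproduced here:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import MSTL

# Synthetic hourly series with daily and weekly seasonality plus noise
rng = np.random.default_rng(42)
t = np.arange(24 * 7 * 8)                       # eight weeks of hourly samples
y = (10
     + 2.0 * np.sin(2 * np.pi * t / 24)
     + 1.0 * np.sin(2 * np.pi * t / (24 * 7))
     + 0.3 * rng.standard_normal(t.size))
series = pd.Series(y)

# Decompose into trend, one seasonal component per period, and a residual
result = MSTL(series, periods=(24, 24 * 7)).fit()
print(result.seasonal.shape)    # one seasonal column per period
print(result.resid.describe())  # a detector could operate on this residual
```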
EnsembleSleepNet: a novel ensemble deep learning model based on transformers and attention mechanisms using multimodal data for sleep stages classification
IF 3.4 · CAS Zone 2 · Computer Science
Applied Intelligence Pub Date : 2025-04-09 DOI: 10.1007/s10489-025-06484-4
Sahar Hassanzadeh Mostafaei, Jafar Tanha, Amir Sharafkhaneh
Abstract: Classifying sleep stages from biological signals is an important and challenging task in sleep medicine. Combining deep learning networks with transformers and attention mechanisms is a powerful approach for achieving high classification performance. Multimodal learning, which integrates various types of input data, can significantly enhance the classification performance of these networks. However, many existing studies either rely on single-modal data or design a single model to handle different signals and modalities without considering the unique characteristics of each data type, which often fails to capture optimal features. To address this limitation, we propose an ensemble model for sleep stage classification that leverages multimodal data, including raw signals, spectrograms, and handcrafted features. We use the Sleep Heart Health Study (SHHS) dataset, selecting multiple signals from polysomnography recordings. Our approach develops three specialized sub-models with different layers and components, each designed for the unique characteristics of specific data types and signals, and integrates them into a unified ensemble deep learning framework. The proposed EnsembleSleepNet achieved performance comparable to existing methods, with an accuracy of 0.897, a Cohen's kappa (κ) of 0.852, and a macro F1 score (MF1) of 0.831. Additionally, ablation studies revealed the impact of the selected signals and components in the developed model.
Citations: 0
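Accuracy, Cohen's kappa, and macro F1 (MF1), the metrics reported above, are available in scikit-learn. The example below scores a simple soft-voting fusion of three sub-model outputs; the fusion rule, class count, and synthetic data are illustrative assumptions, not EnsembleSleepNet's actual combination:

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

# Per-model class-probability outputs for 5 sleep stages (W, N1, N2, N3, REM)
rng = np.random.default_rng(0)
n_epochs, n_stages = 1000, 5
probs_raw = rng.dirichlet(np.ones(n_stages), size=n_epochs)    # raw-signal sub-model
probs_spec = rng.dirichlet(np.ones(n_stages), size=n_epochs)   # spectrogram sub-model
probs_feat = rng.dirichlet(np.ones(n_stages), size=n_epochs)   # handcrafted-feature sub-model
y_true = rng.integers(0, n_stages, size=n_epochs)

# Soft-voting fusion (illustrative; the paper's ensemble may weight or learn this)
fused = (probs_raw + probs_spec + probs_feat) / 3.0
y_pred = fused.argmax(axis=1)

print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa:   ", cohen_kappa_score(y_true, y_pred))
print("MF1:     ", f1_score(y_true, y_pred, average="macro"))
```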
Optimal transport-based fusion of two-stream convolutional networks for action recognition
IF 3.4 · CAS Zone 2 · Computer Science
Applied Intelligence Pub Date : 2025-04-09 DOI: 10.1007/s10489-025-06518-x
Sravani Yenduri, Madhavi Gudavalli, Gayathri C
Abstract: Understanding human actions in a given video requires both spatial and temporal cues. Several deep learning approaches have been explored to extract effective spatio-temporal features. In particular, two-stream networks have shown prominent performance because optical flow estimation captures motion information efficiently. Here, spatial and temporal paths with RGB and optical flow inputs, respectively, are trained independently and fused at the softmax layer for the classification of actions. However, conventional two-stream networks exhibit sub-optimal performance for two main reasons: (i) a lack of interaction between the streams and (ii) disregard for the different distributions of RGB and optical flow during fusion. To overcome these limitations, we propose an optimal transport-based fusion of the two-stream networks for action recognition that facilitates the alignment of the two streams' distributions. First, feature maps from the last CNN layers are extracted to preserve the pixel-level correspondence between the streams. Next, we calculate the optimal transport matrix between the feature maps of the spatial and temporal streams to map features from one distribution to the other. Finally, the transformed features are fused to classify the actions. The effectiveness of the proposed approach is demonstrated on widely used action recognition datasets, namely UCF-101, HMDB-51, SSV2, and Kinetics-400.
Citations: 0
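Optimal transport aligns the two streams by solving for a transport plan between their feature-map distributions. The NumPy sketch below uses entropically regularized OT (Sinkhorn iterations) followed by a barycentric mapping and simple averaging; the cost choice, regularization, and fusion-by-averaging are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def sinkhorn_plan(cost: np.ndarray, reg: float = 0.05, n_iter: int = 200) -> np.ndarray:
    """Entropic OT plan between two uniform discrete distributions (Sinkhorn iterations)."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / reg)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy "feature maps": 49 spatial locations x 128 channels for each stream
rng = np.random.default_rng(1)
spatial = rng.standard_normal((49, 128))
temporal = rng.standard_normal((49, 128)) + 0.5

# Squared Euclidean cost between locations of the two streams, rescaled to [0, 1]
cost = ((spatial[:, None, :] - temporal[None, :, :]) ** 2).sum(-1)
cost /= cost.max()
plan = sinkhorn_plan(cost)

# Barycentric mapping: express temporal features on the spatial stream's support, then fuse
temporal_aligned = (plan / plan.sum(axis=1, keepdims=True)) @ temporal
fused = 0.5 * (spatial + temporal_aligned)   # simple averaging fusion for illustration
print(fused.shape)                           # (49, 128)
```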
Efficient seizure detection by lightweight Informer combined with fusion of time–frequency–spatial features
IF 3.4 · CAS Zone 2 · Computer Science
Applied Intelligence Pub Date : 2025-04-09 DOI: 10.1007/s10489-025-06521-2
Xiangwen Zhong, Guijuan Jia, Haozhou Cui, Haotian Li, Chuanyu Li, Guoyang Liu, Yi Li, Weidong Zhou
Abstract: Automatic seizure detection based on electroencephalogram (EEG) signals is essential for monitoring and diagnosing epilepsy, as well as reducing the workload of neurologists who visually inspect long-term EEGs. In this work, a novel framework for automatic seizure detection is proposed by integrating the Stockwell transform (S-transform) with a lightweight Informer model. The S-transform is firstly used to convert EEG signals into multi-level time–frequency features. Subsequently, an Informer encoder is deployed to capture spatial and long-term dependencies of these EEG time–frequency features and perform classification for seizure detection. Both the segment-based evaluation and event-based evaluation were conducted on the CHB-MIT EEG database and the QH-SDU database in patient-specific scenarios. Due to the efficient multi-resolution time–frequency analysis capability of the S-transform and the Informer's ability to measure spatio-temporal correlation with lower time complexity and memory usage, the proposed method achieved state-of-the-art outcomes over the two EEG databases. The experimental results substantiate the model's ability to generalize across different databases and potential for clinical application.
Citations: 0
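The Stockwell transform can be computed voice-by-voice in the frequency domain: shift the signal's FFT, multiply by a Gaussian window whose width scales with frequency, and inverse-transform (the classical Stockwell 1996 algorithm). The sketch below follows that textbook formulation as an illustration of the time–frequency features involved; it is not the authors' implementation and omits any multi-level band selection:

```python
import numpy as np

def stockwell_transform(x: np.ndarray) -> np.ndarray:
    """Discrete S-transform via the frequency-domain algorithm.
    Returns shape (n//2 + 1, n): rows are frequency voices, columns are time."""
    n = len(x)
    X = np.fft.fft(x)
    S = np.zeros((n // 2 + 1, n), dtype=complex)
    S[0] = np.mean(x)                       # zero-frequency voice: the signal mean
    m = np.fft.fftfreq(n) * n               # integer frequency offsets, wrapped order
    for f in range(1, n // 2 + 1):
        gauss = np.exp(-2 * np.pi ** 2 * m ** 2 / f ** 2)   # Gaussian window at voice f
        S[f] = np.fft.ifft(np.roll(X, -f) * gauss)
    return S

fs = 256
t = np.arange(fs * 2) / fs                  # 2 s of synthetic "EEG"
sig = np.sin(2 * np.pi * 10 * t) + (t > 1) * np.sin(2 * np.pi * 25 * t)
tf = np.abs(stockwell_transform(sig))
print(tf.shape)                             # (257, 512)
```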
Upgraded decision making in continuous domains for autonomous vehicles in high complexity scenarios using escalated DDPG
IF 3.4 · CAS Zone 2 · Computer Science
Applied Intelligence Pub Date : 2025-04-08 DOI: 10.1007/s10489-025-06505-2
Khouloud Zouaidia, Med Saber Rais, Lamine Bougueroua
Abstract: Autonomous vehicles (AVs) have gained attention for their safety enhancements and comfortable travel. Ongoing research targets improvements in AV technology, addressing challenges like road uncertainties, weather changes, and continuous state-actions. In this paper, we propose “Escalated DDPG,” an extension of the Deep Deterministic Policy Gradient (DDPG) algorithm, designed mainly for autonomous vehicle (AV) decision-making. Our novel approach tackles key challenges encountered with DDPG, including instability, slow convergence, and the growing complexity of AV environments. By upgrading action selection and learning policies based on consecutive actions and states, Escalated DDPG enhances convergence speed while maintaining a balanced exploration-exploitation trade-off. We conduct experiments in a gym environment, comparing the performance of our method with traditional DDPG. Results illustrate the superior accuracy and adaptability of Escalated DDPG in handling decision-making tasks involving continuous action and state spaces, even in complex scenarios. The findings in this paper contribute to advancing AV technology, enhancing their decision-making capabilities, and enabling more efficient and reliable autonomous driving systems.
Citations: 0
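In standard DDPG, a deterministic actor maps states to continuous actions, and exploration is added as clipped noise on top of the policy output. The PyTorch snippet below shows only that generic action-selection step (network sizes, Gaussian noise, and bounds are assumptions); it does not implement the escalated variant's upgraded selection and learning policies:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Deterministic policy: state -> action in [-max_action, max_action]."""
    def __init__(self, state_dim: int, action_dim: int, max_action: float = 1.0):
        super().__init__()
        self.max_action = max_action
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.max_action * self.net(state)

def select_action(actor: Actor, state, noise_std: float = 0.1):
    """Exploratory action: policy output plus Gaussian noise, clipped to the action bounds."""
    with torch.no_grad():
        a = actor(torch.as_tensor(state, dtype=torch.float32).unsqueeze(0)).squeeze(0)
    a = a + noise_std * torch.randn_like(a)
    return a.clamp(-actor.max_action, actor.max_action).numpy()

actor = Actor(state_dim=8, action_dim=2)    # e.g., steering and throttle
print(select_action(actor, [0.0] * 8))
```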
Tensorized graph-guided view recovery for incomplete multi-view clustering
IF 3.4 · CAS Zone 2 · Computer Science
Applied Intelligence Pub Date : 2025-04-08 DOI: 10.1007/s10489-025-06515-0
Li Zheng, Guanghui Yan, Chunyang Tang, Tianfeng Yan
Abstract: Multi-view clustering (MVC) methods have demonstrated remarkable success when all samples are available across multiple views, by leveraging consistency and complementary information. However, real-world multi-view data often suffer from incompleteness, where some samples are missing in one or more views. This incompleteness makes MVC challenging, as it becomes difficult to uncover consistency and complementary relationships among the view data. As a result, incomplete multi-view clustering (IMVC) has emerged to address the limitations posed by missing data. An intuitive approach to this issue is view recovery: effectively leveraging consistency information from multiple views to impute missing data. However, the quality of view recovery depends heavily on the learned consistency information, making it crucial to learn high-quality consistency representations. To address this challenge, we propose Tensorized Graph-Guided View Recovery (TGGVR), which integrates view recovery and tensorized graph learning within a unified framework. The tensorized graph learning estimates a similarity graph for each view by exploiting consistency and complementary information through tensorized learning. In addition, high-quality neighborhood structures are exploited to obtain a more accurate consensus graph. This consensus graph then guides more accurate recovery of missing data, establishing a cyclical procedure in which tensorized graph learning and data imputation mutually reinforce each other. Experimental results demonstrate that the proposed method outperforms several state-of-the-art approaches on the challenging IMVC task. Notably, it outperforms representative competing methods by more than 5% and 10% on the BBC and Caltech datasets, respectively.
Citations: 0
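One generic way a consensus similarity graph can guide view recovery is to impute a sample's missing view as a similarity-weighted average of the samples observed in that view. The NumPy sketch below shows only that idea with a hand-made toy graph; it is not TGGVR's tensorized formulation:

```python
import numpy as np

def graph_guided_impute(X: np.ndarray, observed: np.ndarray, S: np.ndarray) -> np.ndarray:
    """Impute missing samples of one view as similarity-weighted averages of observed samples.
    X: (n, d) view features (rows of missing samples may hold placeholders).
    observed: (n,) boolean mask, True where the sample is present in this view.
    S: (n, n) nonnegative consensus similarity graph shared across views."""
    X_filled = X.copy()
    W = S[:, observed]                                      # similarities to observed samples
    W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12) # row-normalize the weights
    X_filled[~observed] = (W @ X[observed])[~observed]
    return X_filled

rng = np.random.default_rng(0)
n, d = 6, 4
X = rng.standard_normal((n, d))
observed = np.array([True, True, False, True, False, True])
# Toy consensus graph built from a shared latent coordinate (purely illustrative)
z = rng.standard_normal((n, 1))
S = np.exp(-(z - z.T) ** 2)
print(graph_guided_impute(X, observed, S)[~observed])
```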
Recommendation system for frequent item sets using multi-objective chaotic optimization with convolutional BiLSTM model
IF 3.4 · CAS Zone 2 · Computer Science
Applied Intelligence Pub Date : 2025-04-08 DOI: 10.1007/s10489-025-06432-2
Sudha D, M. Krishnamurthy
Abstract: A recommendation system offers a creative way to handle the limitations of e-commerce services by using item and user details. It is used to ascertain users' preferences in order to suggest products they are likely to purchase and to identify frequently used items in the data. Recommender models are commonly designed with collaborative filtering techniques, but these have limitations. To overcome these drawbacks, a novel technique is proposed to find the frequent items in a given dataset. This paper uses two types of data: product image data and user rating matrix data. Initially, image characteristics are retrieved using a residual dense network (RDN) to extract relevant features from the images. The extracted features are then fed into Multi-Objective Chaotic Horse Herd Optimization (MO-CHHO) to find common item sets among many items, with support, confidence, lift, and conviction as the multi-objective functions. The text data are classified using a Convolutional BiLSTM (CBiL) model based on significant sentiment features, such as all-caps, hashtags, emoticons, negation, elongated units, bag-of-units, punctuation, and numerical values, to identify whether an item is common. Finally, a correlation-based fusion is performed to find the final frequent item sets from the two data sources. The evaluation shows that the proposed method achieved an accuracy of 98%, a precision of 99%, a sensitivity of 98.3%, a specificity of 99.5%, and an F1 score of 98.7% on the Amazon product review dataset.
Citations: 0
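Support, confidence, lift, and conviction, the four objectives named above, have standard definitions over transaction data. The short example below computes them for one rule A ⇒ B on a toy transaction list; it illustrates the objectives themselves, not the MO-CHHO search:

```python
transactions = [
    {"phone", "case", "charger"},
    {"phone", "case"},
    {"laptop", "mouse"},
    {"phone", "charger"},
    {"case", "charger"},
]

def rule_metrics(antecedent: set, consequent: set, transactions: list) -> dict:
    """Standard association-rule measures for the rule antecedent => consequent."""
    n = len(transactions)
    supp_a = sum(antecedent <= t for t in transactions) / n
    supp_b = sum(consequent <= t for t in transactions) / n
    supp_ab = sum((antecedent | consequent) <= t for t in transactions) / n
    confidence = supp_ab / supp_a
    lift = confidence / supp_b
    conviction = (1 - supp_b) / (1 - confidence) if confidence < 1 else float("inf")
    return {"support": supp_ab, "confidence": confidence,
            "lift": lift, "conviction": conviction}

print(rule_metrics({"phone"}, {"case"}, transactions))
```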