Applied Intelligence: Latest Articles

A filter-wrapper model for high-dimensional feature selection based on evolutionary computation
IF 3.4, CAS Tier 2, Computer Science
Applied Intelligence Pub Date : 2025-03-26 DOI: 10.1007/s10489-025-06474-6
Pei Hu, Jiulong Zhu
{"title":"A filter-wrapper model for high-dimensional feature selection based on evolutionary computation","authors":"Pei Hu,&nbsp;Jiulong Zhu","doi":"10.1007/s10489-025-06474-6","DOIUrl":"10.1007/s10489-025-06474-6","url":null,"abstract":"<div><p>In machine learning, feature selection plays an important role in improving prediction accuracy and reducing time complexity. This paper proposes a filter-wrapper model to obtain a feature subset from high-dimensional data in a short time. Firstly, features are ranked by information gain and Fisher Score. Secondly, the feature search is realized by binary evolutionary computation based on wrapper. To avoid wasting a lot of searches on low-ranked features, an adaptive feature selection strategy is adopted to guide population search and position update. Finally, a learning strategy is proposed, in which learners study from exemplars and complete position update, and the exemplars are constituted by optimal solutions to balance exploration and exploitation. To demonstrate the effectiveness and efficiency of the proposed model, three binary evolutionary computations, including particle swarm optimization, grey wolf optimizer, and fish migration optimization, are applied to the model, and they present excellent performance in high-dimensional data sets.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
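The filter stage above ranks features by information gain and Fisher score before the wrapper search begins. Below is a minimal sketch of the Fisher score criterion (between-class separation over within-class variance) on toy data; the data and helper function are illustrative, not from the paper:

```python
from collections import defaultdict

def fisher_score(feature_values, labels):
    """Fisher score: between-class scatter divided by within-class scatter."""
    n = len(feature_values)
    overall_mean = sum(feature_values) / n
    groups = defaultdict(list)
    for x, y in zip(feature_values, labels):
        groups[y].append(x)
    numer = denom = 0.0
    for vals in groups.values():
        m = sum(vals) / len(vals)
        var = sum((v - m) ** 2 for v in vals) / len(vals)
        numer += len(vals) * (m - overall_mean) ** 2
        denom += len(vals) * var
    return numer / denom if denom > 0 else float("inf")

# Toy data: feature 0 separates the classes well, feature 1 does not.
X = [[1.0, 5.0], [1.2, 4.0], [0.9, 6.0], [5.0, 5.5], [5.2, 4.5], [4.8, 5.0]]
y = [0, 0, 0, 1, 1, 1]
scores = [fisher_score([row[j] for row in X], y) for j in range(2)]
ranking = sorted(range(2), key=lambda j: -scores[j])  # high score first
```

Information gain would be applied analogously, e.g. as mutual information between a discretized feature and the class labels.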
LPPSLF: a lightweight privacy-preserving split learning framework for smart surveillance systems
IF 3.4, CAS Tier 2, Computer Science
Applied Intelligence Pub Date : 2025-03-26 DOI: 10.1007/s10489-025-06489-z
Liang Wang, Hao Chen, Lina Zuo, Haibo Liu
{"title":"LPPSLF: a lightweight privacy-preserving split learning framework for smart surveillance systems","authors":"Liang Wang,&nbsp;Hao Chen,&nbsp;Lina Zuo,&nbsp;Haibo Liu","doi":"10.1007/s10489-025-06489-z","DOIUrl":"10.1007/s10489-025-06489-z","url":null,"abstract":"<div><p>In smart surveillance systems, cameras often have limited computational capacity, which necessitates the offloading of captured images or videos to cloud servers for analysis, raising significant privacy concerns. To address these challenges, we propose a lightweight privacy-preserving split learning framework tailored for smart surveillance systems. In this framework, an upper model is deployed on resource-constrained cameras to extract intermediate features from image segments, which are then transmitted to a lower model on the cloud for further analysis and training. This approach reduces the likelihood of sensitive data exposure by avoiding the transmission of raw images or videos. Furthermore, our framework incorporates adversarial training to defend against reconstruction attacks, preventing adversaries from deducing private information from the intermediate features. Compared to traditional split learning methods, the proposed solution significantly reduces client-side memory usage and computation time, making it well-suited for deployment on low-resource devices. Experimental results on CIFAR10, CIFAR100, and SVHN datasets demonstrate the effectiveness of our framework, with reductions in the server-side decoder’s reconstruction classification accuracy to 12.18%, 2.18%, and 13.09%, respectively. 
These results validate the framework’s ability to enhance privacy while maintaining computational efficiency.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
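The split described above can be sketched as two halves of a single forward pass, with only the compressed intermediate features crossing the network boundary. The layer sizes and random weights below are purely illustrative:

```python
import random

random.seed(0)

def linear(x, W, b):
    """Dense layer: y = Wx + b, implemented over plain lists."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

# Client-side "upper model": compresses a raw input into a short feature vector.
W_client = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(3)]
b_client = [0.0] * 3

# Server-side "lower model": maps intermediate features to class scores.
W_server = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
b_server = [0.0] * 2

raw_input = [random.random() for _ in range(8)]  # never leaves the camera
smashed = linear(raw_input, W_client, b_client)  # only this is transmitted
scores = linear(smashed, W_server, b_server)     # cloud finishes the pass
```

The privacy gain comes from transmitting `smashed` instead of `raw_input`; the paper's adversarial training additionally penalizes how much of the input a decoder can reconstruct from it.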
Spatial-temporal context-aware network for 3D-Craft generation
IF 3.4, CAS Tier 2, Computer Science
Applied Intelligence Pub Date : 2025-03-26 DOI: 10.1007/s10489-025-06468-4
Ruyi Ji, Qunbo Wang, Boying Wang, Hangu Zhang, Wentao Zhang, Lin Dai, Yanni Wang
{"title":"Spatial-temporal context-aware network for 3D-Craft generation","authors":"Ruyi Ji,&nbsp;Qunbo Wang,&nbsp;Boying Wang,&nbsp;Hangu Zhang,&nbsp;Wentao Zhang,&nbsp;Lin Dai,&nbsp;Yanni Wang","doi":"10.1007/s10489-025-06468-4","DOIUrl":"10.1007/s10489-025-06468-4","url":null,"abstract":"<div><p>The generative modeling of 3D objects in the real world is an interesting but challenging task commonly constrained by process and order. Most existing methods focus on spatial relations to address this issue, neglecting the rich information between temporal sequences. To close this gap, we deliver a spatial-temporal context-aware network to explore the prediction of ordered actions for 3D object construction. Specifically, our approach is mainly formed by two modules, i.e., the spatial-context module and the temporal-context module. The spatial-context module is designed to learn the physical constraints in 3D object construction, such as spatial constraints and gravity. Meanwhile, the temporal-context module integrates the temporal context of action orders in history on the fly toward more accurate predictions. After that, the features of such two modules are merged to finalize the perdition of the following action’s position and block type. The entire model is optimized by the stochastic gradient descent optimization (SGD) method in an end-to-end manner. Extensive experiments conducted on the <i>3D-Craft</i> dataset demonstrate that the proposed method surpasses the state-of-the-art methods with a large margin, i.e., improving <span>(4.5%)</span> absolute ACC@1, <span>(3.3%)</span> absolute ACC@5, and <span>(4.1%)</span> absolute ACC@10. 
Moreover, the comprehensive ablation studies and insightful analysis further validate the effectiveness of the proposed method.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
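The ACC@k metrics reported above are top-k accuracies: a prediction counts as correct if the ground-truth action is among the k highest-scoring candidates. A minimal sketch with toy scores:

```python
def acc_at_k(score_lists, targets, k):
    """Fraction of samples whose target is among the k highest-scoring classes."""
    hits = 0
    for scores, t in zip(score_lists, targets):
        topk = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
        hits += t in topk
    return hits / len(targets)

# Three samples, three candidate classes each (scores are illustrative).
preds = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]]
truth = [1, 2, 0]
```

ACC@k is monotonically non-decreasing in k, which is why the paper reports the triple ACC@1/ACC@5/ACC@10.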
Deep attribute graph clustering based on bisymmetric network information fusion and mutual influence
IF 3.4, CAS Tier 2, Computer Science
Applied Intelligence Pub Date : 2025-03-26 DOI: 10.1007/s10489-025-06295-7
Shuqiu Tan, Lei Zhang, Yahui Liu, Jianxun Zhang
{"title":"Deep attribute graph clustering based on bisymmetric network information fusion and mutual influence","authors":"Shuqiu Tan,&nbsp;Lei Zhang,&nbsp;Yahui Liu,&nbsp;Jianxun Zhang","doi":"10.1007/s10489-025-06295-7","DOIUrl":"10.1007/s10489-025-06295-7","url":null,"abstract":"<div><p>Deep attribute graph clustering has always been a challenging task and an important research topic for real-world data. In recent years, there has been a growing trend in using multi-network information fusion for deep attributed graph clustering. However, existing methods in deep attributed graph clustering have not effectively integrated representations learned from multiple networks and failed to construct a joint loss function that could impact the overall network model, resulting in poor clustering results. To address the aforementioned issues, we proposed AGC-BNIFI, an attribute graph clustering method based on dual symmetric network information fusion and mutual influence. The network of this method consists of a symmetric graph autoencoder and an autoencoder. The two different encoders are combined to improve the attribute learning ability. 
First, a symmetric graph autoencoder with a symmetric structure is proposed to capture complex linear and adapt to complex graph structure relationships and propagate heterogeneous information of joint embedding and structural features, and can reconstruct the attribute matrix and adjacency matrix; secondly, a layer-by-layer adaptive dynamic fusion module is designed to adaptively fuse the representations learned by each layer of the two encoders, and then learn a better joint representation for clustering tasks; finally, a multi-distribution self-supervision module with soft clustering assignments obtained from different networks that learn from each other and influence each other is proposed, which integrates representation learning and clustering tasks into an end-to-end framework, and jointly optimizes representation learning and clustering tasks by designing a joint loss function. Extensive experimental results on four graph datasets demonstrate the superiority of AGC-BNIFI over state-of-the-art methods. On the Coauthor-Physics dataset, compared to MBN, AGC-BNIFI achieved improvements of 2.6%, 1.1%, 4.3%, and 6.3% in four clustering metrics, respectively.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10489-025-06295-7.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143707042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
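The layer-by-layer adaptive fusion described above amounts to a learned convex combination of the two encoders' representations at each layer. In the sketch below the mixing logits are fixed rather than learned, and all names and values are illustrative:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse(h_graph, h_attr, logits):
    """Convex combination of two encoders' layer representations."""
    w = softmax(logits)
    return [w[0] * a + w[1] * b for a, b in zip(h_graph, h_attr)]

h_graph = [1.0, 0.0, 2.0]  # representation from the graph autoencoder layer
h_attr = [0.0, 4.0, 2.0]   # representation from the plain autoencoder layer
fused = fuse(h_graph, h_attr, [0.0, 0.0])  # equal logits -> equal weights
```

Making the logits trainable per layer lets the network decide, layer by layer, how much structural versus attribute information to keep.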
Multi-scale spatiotemporal normality learning for unsupervised video anomaly detection
IF 3.4, CAS Tier 2, Computer Science
Applied Intelligence Pub Date : 2025-03-26 DOI: 10.1007/s10489-025-06485-3
Caitian Liu, Linxiao Gong, Xiong Chen
{"title":"Multi-scale spatiotemporal normality learning for unsupervised video anomaly detection","authors":"Caitian Liu,&nbsp;Linxiao Gong,&nbsp;Xiong Chen","doi":"10.1007/s10489-025-06485-3","DOIUrl":"10.1007/s10489-025-06485-3","url":null,"abstract":"<div><p>Video anomaly detection aims to automatically identify abnormal spatiotemporal patterns in surveillance videos. While unsupervised methods avoid the high cost of collecting abnormal data by learning from regular events, they often struggle to effectively model the inherent multiscale nature of video data. To address this challenge, we propose Multi-Scale Spatiotemporal Normality Learning (MS<span>(^2)</span>NL), a unified framework that systematically processes and integrates multiscale features across both spatial and temporal dimensions. Our framework employs an attention-enhanced stepwise fusion module to aggregate spatial features at different resolutions, enabling comprehensive modeling of appearance patterns from local textures to global structures. For temporal information processing, we design a dynamic aggregation module based on one-dimensional dilated convolutions that effectively captures motion dependencies across multi-scale feature maps while maintaining computational efficiency. These multiscale features are processed through dual decoders: a temporal decoder that learns motion normality through RGB-to-optical-flow mapping, and a spatial decoder that models appearance normality via future frame prediction, with multiscale prototype features stored in an external memory network. This sophisticated handling of multiscale information enables MS<span>(^2)</span>NL to capture subtle spatial deviations while maintaining sensitivity to temporal anomalies. 
Extensive experiments on benchmark datasets demonstrate the effectiveness of our approach, achieving state-of-the-art frame-level AUROCs of 98.3%, 91.5%, and 74.9% on the UCSD Ped2, CUHK Avenue, and ShanghaiTech datasets, respectively.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10489-025-06485-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143707130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
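The temporal module above relies on one-dimensional dilated convolutions, which insert gaps between kernel taps so the temporal receptive field widens without extra parameters. A minimal "valid"-padding sketch on a toy signal (the kernel and signal are illustrative):

```python
def dilated_conv1d(x, kernel, dilation):
    """1-D 'valid' convolution with gaps of `dilation` between kernel taps."""
    span = (len(kernel) - 1) * dilation
    return [
        sum(k * x[i + j * dilation] for j, k in enumerate(kernel))
        for i in range(len(x) - span)
    ]

signal = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
# Kernel [-1, 1] with dilation 2 compares samples two steps apart: x[i+2] - x[i]
out = dilated_conv1d(signal, [-1.0, 1.0], dilation=2)
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially while each layer stays cheap, which matches the efficiency claim in the abstract.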
Interpretable multi-agent reinforcement learning via multi-head variational autoencoders
IF 3.4, CAS Tier 2, Computer Science
Applied Intelligence Pub Date : 2025-03-26 DOI: 10.1007/s10489-025-06473-7
Peizhang Li, Qing Fei, Zhen Chen
{"title":"Interpretable multi-agent reinforcement learning via multi-head variational autoencoders","authors":"Peizhang Li,&nbsp;Qing Fei,&nbsp;Zhen Chen","doi":"10.1007/s10489-025-06473-7","DOIUrl":"10.1007/s10489-025-06473-7","url":null,"abstract":"<div><p>Multi-agent deep reinforcement learning (RL) is increasingly proficient at making collective decisions in complex systems. However, the black-box nature of DRL decision networks often renders agent behaviors difficult to interpret, thereby undermining human trust. Although several reinforcement learning explanation methods have been proposed, most mainly identify factors influencing decisions without elucidating the underlying causal mechanisms based on physical models. Moreover, these methods do not address the generalizability of interpretability within multi-agent system settings. To overcome these challenges, we propose a multi-agent RL network based on multi-head variational autoencoders (MVAE), which generates decisions with interpretable physical semantics for unmanned systems. The MVAE directly encodes multiple types of semantically meaningful features with physical interpretations from the latent space and generates decisions by integrating these semantics according to physical models. Furthermore, considering the different latent variable distributions in continuous and discrete action scenarios, we design two distinct MVAE models based on Gaussian and Dirichlet distributions, respectively, and design training frameworks using deterministic policy gradient networks and proximal policy optimization networks in a multi-agent environment. Additionally, we develop a visualization method to intuitively convey interpretability in both continuous and discrete action scenarios. 
Simulation experiments comparing our method with existing baselines demonstrate that our approach achieves superior decision-making performance under interpretability conditions, and further validate its performance in large-scale scenarios.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
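For the Gaussian-latent variant described above, a VAE samples its latent code via the reparameterization trick, z = mu + sigma * eps with eps ~ N(0, 1), so gradients can flow through the sampling step. A minimal sketch (dimensions and values are illustrative, not the paper's):

```python
import math
import random

random.seed(42)

def sample_latent(mu, log_var):
    """Reparameterization: z = mu + exp(0.5 * log_var) * eps, eps ~ N(0, 1)."""
    return [
        m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
        for m, lv in zip(mu, log_var)
    ]

mu, log_var = [0.0, 2.0], [0.0, 0.0]  # sigma = 1 in both dimensions
zs = [sample_latent(mu, log_var) for _ in range(5000)]
mean0 = sum(z[0] for z in zs) / len(zs)
mean1 = sum(z[1] for z in zs) / len(zs)
```

For the discrete-action variant the paper swaps this Gaussian latent for a Dirichlet distribution, whose samples live on the probability simplex.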
Effective selection of public IoT services by learning uncertain environmental factors using fingerprint attention
IF 3.4, CAS Tier 2, Computer Science
Applied Intelligence Pub Date : 2025-03-26 DOI: 10.1007/s10489-025-06472-8
KyeongDeok Baek, In-Young Ko
{"title":"Effective selection of public IoT services by learning uncertain environmental factors using fingerprint attention","authors":"KyeongDeok Baek,&nbsp;In-Young Ko","doi":"10.1007/s10489-025-06472-8","DOIUrl":"10.1007/s10489-025-06472-8","url":null,"abstract":"<div><p>The scope of the Internet of Things (IoT) environment has been expanding from private to public spaces, where selecting the most appropriate service by predicting the service quality has become a timely problem. However, IoT services can be physically affected by (1) uncertain environmental factors such as obstacles and (2) interference among services in the same environment while interacting with users. Using the traditional modeling-based approach, analyzing the influence of such factors on the service quality requires modeling efforts and lacks generalizability. In this study, we propose <i>Learning Physical Environment factors based on the Attention mechanism to Select Services for UsERs (PLEASSURE)</i>, a novel framework that selects IoT services by learning the uncertain influence and predicting the long-term quality from the users’ feedback without additional modeling. Furthermore, we propose <i>fingerprint attention</i> that extends the attention mechanism to capture the physical interference among services. We evaluate PLEASSURE by simulating various IoT environments with mobile users and IoT services. 
The results show that PLEASSURE outperforms the baseline algorithms in rewards consisting of users’ feedback on satisfaction and interference.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10489-025-06472-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
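The fingerprint attention above builds on standard scaled dot-product attention, which weights candidate embeddings by their similarity to a query; the fingerprint extension for inter-service interference is not reproduced here. A minimal sketch with toy vectors:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention over candidate embeddings."""
    d = len(query)
    logits = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    weights = [e / s for e in exps]
    context = [
        sum(w * v[i] for w, v in zip(weights, values))
        for i in range(len(values[0]))
    ]
    return context, weights

# One user query attending over three candidate services (vectors illustrative).
query = [1.0, 0.0]
keys = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]]
values = [[10.0], [0.0], [5.0]]
context, weights = attention(query, keys, values)
```

The candidate whose key best matches the query receives the largest weight, so its value dominates the aggregated context.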
WMFusion: a W-shaped dual encoder and single decoder network for multimodal medical image fusion
IF 3.4, CAS Tier 2, Computer Science
Applied Intelligence Pub Date : 2025-03-26 DOI: 10.1007/s10489-025-06477-3
Yu Shao, Lei Yu, Haozhe Tang
{"title":"WMFusion: a W-shaped dual encoder and single decoder network for multimodal medical image fusion","authors":"Yu Shao,&nbsp;Lei Yu,&nbsp;Haozhe Tang","doi":"10.1007/s10489-025-06477-3","DOIUrl":"10.1007/s10489-025-06477-3","url":null,"abstract":"<div><p>The current deep learning-based multimodal medical image fusion algorithms usually use a single feature extractor to extract features from images of different modalities. However, these approaches tend to overlook the distinctive features of different modality medical images, resulting in feature loss. In addition, applying complex network structures to low-level image-processing tasks would waste computational power. Therefore, we innovatively design an end-to-end multimodal fusion network with a dual encoder and single decoder structure, which resembles the letter ‘W’, and we have termed WMFusion. Specifically, we first develop a multi-scale context dynamic feature extractor (MCDFE) that employs context-gated convolution to extract multiscale features from different modalities effectively. Subsequently, we propose a local-global feature fusion module (LGFM) for fusing features of different scales, and we design a cross-modality bidirectional interaction structure in the local branch. Finally, feature redundancy is suppressed and the fusion image is reconstructed by a spatial channel reconstruction module (SCRM) with a spatial and channel reconstruction unit. 
A large number of experimental results demonstrate that our proposed WMFusion method is superior to some state-of-the-art algorithms in terms of both subjective and objective evaluation metrics, and has satisfactory computation efficiency.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IAMTrack: interframe appearance and modality tokens propagation with temporal modeling for RGBT tracking
IF 3.4, CAS Tier 2, Computer Science
Applied Intelligence Pub Date : 2025-03-26 DOI: 10.1007/s10489-025-06438-w
Huiwei Shi, Xiaodong Mu, Hao He, Chengliang Zhong, Bo Zhang, Peng Zhao
{"title":"IAMTrack: interframe appearance and modality tokens propagation with temporal modeling for RGBT tracking","authors":"Huiwei Shi,&nbsp;Xiaodong Mu,&nbsp;Hao He,&nbsp;Chengliang Zhong,&nbsp;Bo Zhang,&nbsp;Peng Zhao","doi":"10.1007/s10489-025-06438-w","DOIUrl":"10.1007/s10489-025-06438-w","url":null,"abstract":"<div><p>RGBT tracking has emerged as a robust solution for various applications, including surveillance, autonomous driving, and robotics, owing to its resilience in challenging environments. However, existing RGBT tracking approaches often overlook target appearance changes, location shifts, and the dynamic significance of modality features, limiting long-term tracking accuracy. To address these limitations, we propose IAMTrack, a novel transformer-based framework that achieves sequential tracking by propagating modality and appearance tokens across frames. The method compresses the discriminative features of each modality into modality tokens to transmit modality quality and target location information in real time, allowing the model to focus more on features with high modality quality and features with high target probability, while suppressing noise and redundant information. It also compresses the appearance features of objects similar in appearance across frames into appearance tokens to convey changes in appearance. To further enhance the token learning capability, we design a temporal generalized relation modelling approach that guides future predictions based on past information. The experimental results show that IAMTrack outperforms existing methods in various RGBT tracking scenarios, especially in UAV tracking tasks. 
Compared with those of previous methods, the MPRs and MSRs of the VTUAV short-term and long-term subdatasets are improved by <span>(1.7%/2.1%)</span> and <span>(2.5%/2.2%)</span>, respectively.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10489-025-06438-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143707040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Probabilistic linguistic hesitant fuzzy multi-attribute decision making for rural revitalization project selection of China
IF 3.4, CAS Tier 2, Computer Science
Applied Intelligence Pub Date : 2025-03-25 DOI: 10.1007/s10489-025-06305-8
Jiu-Ying Dong, Si-Hang Gong, Shu-Ping Wan
{"title":"Probabilistic linguistic hesitant fuzzy multi-attribute decision making for rural revitalization project selection of China","authors":"Jiu-Ying Dong,&nbsp;Si-Hang Gong,&nbsp;Shu-Ping Wan","doi":"10.1007/s10489-025-06305-8","DOIUrl":"10.1007/s10489-025-06305-8","url":null,"abstract":"<div><p>Rural revitalization strategy has pointed out the right direction for solving Chinese \"three rural\" problems. Selecting the most suitable rural revitalization project can be regarded as a multi-attribute decision making (MADM) problem. This paper utilizes the probabilistic linguistic (PL) hesitant fuzzy sets (PLHFSs) to characterize the uncertain information of evaluating rural revitalization projects. PLHFS introduces the characteristics of linguistic hesitant fuzzy set (LHFS) into probabilistic linguistic term set (PLTS), which can represent the membership degrees of linguistic terms (LTs) and the associated probabilities to the set, simultaneously. The normalized and ordered PLHFS is proposed. Some new operation laws for PLHFSs are defined by using Archimedean T-norm and T-conorm (ATT) functions. By employing the Maclaurin symmetric mean (MSM) operator and power average (PA) operator, this paper develops a probabilistic linguistic hesitant fuzzy Archimedean power Maclaurin symmetric mean (PLHFAPMSM) operator and a probabilistic linguistic hesitant fuzzy Archimedean power weighted Maclaurin symmetric mean (PLHFAPWMSM) operator. Some desirable properties of the PLHFAPMSM and PLHFAPWMSM operators are discussed deeply. For MADM with PLHFSs, the individual attribute weight vector for each alternative is derived by data envelopment analysis (DEA). Further, the comprehensive attribute weight vector is determined by a linear goal programming model. Thereby, using the PLHFAPWMSM operator, a new method for MADM with PLHFSs is proposed. 
Finally, a practical example of rural revitalization project selection is analyzed to illustrate the effectiveness and feasibility of the proposed method.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143698485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
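The aggregation core of the operators above is the classical Maclaurin symmetric mean, MSM^(k)(a_1, ..., a_n) = ((1/C(n, k)) * sum over all k-subsets of their products)^(1/k). A minimal sketch on crisp numbers (the paper applies it to probabilistic linguistic hesitant fuzzy elements via the ATT-based operations, which are not reproduced here):

```python
import math
from itertools import combinations

def msm(values, k):
    """Maclaurin symmetric mean of order k over crisp numbers."""
    n = len(values)
    total = sum(math.prod(c) for c in combinations(values, k))
    return (total / math.comb(n, k)) ** (1.0 / k)

vals = [0.2, 0.5, 0.8]
```

Two sanity checks: k = 1 recovers the arithmetic mean, and k = n recovers the geometric mean, so k interpolates how strongly the interrelationship among attributes is captured.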