{"title":"IPv6 Routing Protocol for Low-Power and Lossy Networks Security Vulnerabilities and Mitigation Techniques: A Survey","authors":"Aviram Zilberman, Amit Dvir, Ariel Stulman","doi":"10.1145/3732776","DOIUrl":"https://doi.org/10.1145/3732776","url":null,"abstract":"The proliferation of the Internet of Things (IoT) has reshaped the way we interact with technology, propelling the Routing Protocol for Low-Power and Lossy Networks (RPL) into a critical role as a communication framework. Amid this transformative landscape, security vulnerabilities within RPL-based IoT networks emerge as a substantial concern. This survey delves into these vulnerabilities, offering insights into their intricacies, potential consequences, and robust mitigation strategies. Commencing with a foundational understanding of IoT networks and their real-world applications, the survey sets the stage for comprehending the significance of Routing Protocol for Low-Power and Lossy Networks (RPL). It unravels the unique characteristics of RPL networks, their Destination-Oriented Directed Acyclic Graph (DODAG) topologies, and their pivotal role in enabling seamless device communication. The survey then delves into the heart of RPL security vulnerabilities. It navigates through diverse attack vectors, such as rank attacks and version number attacks. Each vulnerability is scrutinized, unraveling its technical mechanisms and implications for network stability. Transitioning from vulnerabilities to resilience, the survey offers a panoramic view of mitigation strategies. It dissects the nuances of intrusion detection systems (IDS), exploring trust models, location-based approaches, and hybrid systems. Signature-based, anomaly-based, and specification-based detection mechanisms are evaluated for their potential to mitigate threats within RPL networks. As standards shape the IoT landscape, the survey underscores the pivotal role of RPL within this framework. It emphasizes the necessity of secure standards in mitigating vulnerabilities across interconnected IoT devices.","PeriodicalId":50926,"journal":{"name":"ACM Computing Surveys","volume":"41 1","pages":""},"PeriodicalIF":16.6,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143875759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Provably Secure and Efficient One-to-Many Authentication and Key Agreement Protocol for Resource-Asymmetric Smart Environments","authors":"Long Li, Chingfang Hsu, Jianqun Cui, Man Ho Au, Lein Harn, Quanrun Li","doi":"10.1109/jiot.2025.3564512","DOIUrl":"https://doi.org/10.1109/jiot.2025.3564512","url":null,"abstract":"","PeriodicalId":54347,"journal":{"name":"IEEE Internet of Things Journal","volume":"33 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143876108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal intent recognition based on text-guided cross-modal attention","authors":"Zhengyi Li, Junjie Peng, Xuanchao Lin, Zesu Cai","doi":"10.1007/s10489-025-06583-2","DOIUrl":"10.1007/s10489-025-06583-2","url":null,"abstract":"<div><p>In natural language understanding, intent recognition stands out as a crucial task that has drawn significant attention. While previous research focuses on intent recognition using task-specific unimodal data, real-world scenarios often involve human intents expressed through various ways, including speech, tone of voice, facial expressions, and actions. This prompts research into integrating multimodal information to more accurately identify human intent. However, existing intent recognition studies often fuse textual and non-textual modalities without considering their quality gap. The gap in feature quality across different modalities hinders the improvement of the model’s performance. To address this challenge, we propose a multimodal intent recognition model to enhance non-textual modality features. Specifically, we enrich the semantics of non-textual modalities by replacing redundant information through text-guided cross-modal attention. Additionally, we introduce a text-centric adaptive fusion gating mechanism to capitalize on the primary role of text modality in intent recognition. Extensive experiments on two multimodal task datasets show that our proposed model performs better in all metrics than state-of-the-art multimodal models. The results demonstrate that our model efficiently enhances non-textual modality features and fuses multimodal information, showing promising potential for intent recognition.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143871333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HLNet: high-level attention mechanism U-Net + + for brain tumor segmentation in MRI","authors":"Wenyang Yang, Zhiming Li, Chao Du, Steven Kwok Keung Chow","doi":"10.1007/s10489-025-06568-1","DOIUrl":"10.1007/s10489-025-06568-1","url":null,"abstract":"<div><p>The high-level attention mechanism enhances object detection by focusing on important features and details, making it a potential tool for tumor segmentation. However, its effectiveness and efficiency in this context remain uncertain. This study aims to investigate the efficiency, feasibility and effectiveness of integrating a high-level attention mechanism into the U-Net and U-Net + + model for improving tumor segmentation. Experiments were conducted using U-Net and U-Net + + models augmented with high-level attention mechanisms to compare their performance. The proposed model incorporated high-level attention mechanisms in the encoder, decoder, and skip connections. Model training and validation were performed using T1, FLAIR, T2, and T1ce MR images from the BraTS2018 and BraTS2019 datasets. To further evaluate the model's effectiveness, testing was conducted on the UPenn-GBM dataset provided by the Center for Biomedical Image Computing and Analysis at the University of Pennsylvania. The segmentation accuracy of the high-level attention U-Net + + was evaluated using the DICE score, achieving values of 88.68 (ET), 89.71 (TC), and 91.50 (WT) on the BraTS2019 dataset and 90.93 (ET), 92.79 (TC), and 93.77 (WT) on the UPEEN-GBM dataset. The results demonstrate that U-Net + + integrated with the high-level attention mechanism achieves higher accuracy in brain tumor segmentation compared to baseline models. Experiments conducted on comparable and challenging datasets highlight the superior performance of the proposed approach. Furthermore, the proposed model exhibits promising potential for generalization to other datasets or use cases, making it a viable tool for broader medical imaging applications.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"55 7","pages":""},"PeriodicalIF":3.4,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10489-025-06568-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143871330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LLMorpheus: Mutation Testing using Large Language Models","authors":"Frank Tip, Jonathan Bell, Max Schäfer","doi":"10.1109/tse.2025.3562025","DOIUrl":"https://doi.org/10.1109/tse.2025.3562025","url":null,"abstract":"","PeriodicalId":13324,"journal":{"name":"IEEE Transactions on Software Engineering","volume":"50 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143876139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emergence Model of Perception with Global-contour Precedence Based on Gestalt Theory and Primary Visual Cortex","authors":"Jingmeng Li, Hui Wei","doi":"10.1109/tip.2025.3562054","DOIUrl":"https://doi.org/10.1109/tip.2025.3562054","url":null,"abstract":"","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"15 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143876137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Distillation Hashing for Palmprint and Finger Vein Retrieval","authors":"Chenlong Liu, Lu Yang, Wen Zhou, Yuan Li, Fanchang Hao","doi":"10.1049/bme2/9017371","DOIUrl":"https://doi.org/10.1049/bme2/9017371","url":null,"abstract":"<div>\u0000 <p>With the increasing application of biometric recognition technology in daily life, the number of registered users is rapidly growing, making fast retrieval techniques increasingly important for biometric recognition. However, existing biometric recognition models are often overly complex, making them difficult to deploy on resource-constrained terminal devices. Inspired by knowledge distillation (KD) for model simplification and deep hashing for fast image retrieval, we propose a new model that achieves lightweight palmprint and finger vein retrieval. This model integrates hash distillation loss, classification distillation loss, and supervised loss from labels within a KD framework. And it improves the retrieval and recognition performance of the lightweight model through the network design. Experimental results demonstrate that this method promotes the performance of the student model on multiple palmprint and finger vein datasets, with retrieval precision and recognition accuracy surpassing several existing advanced hashing methods.</p>\u0000 </div>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2025 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/bme2/9017371","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143871822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hypertension and total-order forward decomposition optimizations","authors":"Maurício Cecílio Magnaguagno, Felipe Meneguzzi, Lavindra de Silva","doi":"10.1007/s10458-025-09705-9","DOIUrl":"10.1007/s10458-025-09705-9","url":null,"abstract":"<div><p>Hierarchical Task Network (HTN) planners generate plans using a decomposition process with extra domain knowledge to guide search towards a planning task. Domain experts develop such domain knowledge through recipes of how to decompose higher level tasks, specifying which tasks can be decomposed and under what conditions. In most realistic domains, such recipes contain recursions, i.e., tasks that can be decomposed into other tasks that contain the original task. Such domains require that either the domain expert tailor such domain knowledge to the specific HTN planning algorithm, or an algorithm that can search efficiently using such domain knowledge. By leveraging a three-stage compiler design we can easily support more language descriptions and preprocessing optimizations that when chained can greatly improve runtime efficiency in such domains. In this paper we evaluate such optimizations with the HyperTensioN HTN planner, winner of the HTN IPC 2020 total-order track.</p></div>","PeriodicalId":55586,"journal":{"name":"Autonomous Agents and Multi-Agent Systems","volume":"39 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10458-025-09705-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143871287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-Driven Output Synchronization of Heterogeneous Multi-Agent Systems under False Data Injection Attacks","authors":"Cheng Fei, Jun Shen, Hongling Qiu, Xiaoqi Song, Yamin Wang","doi":"10.1049/cth2.70027","DOIUrl":"https://doi.org/10.1049/cth2.70027","url":null,"abstract":"<p>This paper investigates strategies for achieving optimal output synchronization of heterogeneous multi-agent systems in the presence of false data injection attacks. We formulate a performance index with an infinite time horizon using a zero-sum game framework, treating control input and false data injection attack input as two opposing players. Specifically, the control input's objective is to minimize the performance index, while the false data injection attack input aims to maximize it. Adhering to the optimality principle, we derive the optimal control policy, contingent upon the solution to a related algebraic Riccati equation. Moreover, we propose sufficient conditions that ensure the existence of a solution to the algebraic Riccati equation. Additionally, we have devised a data-driven reinforcement learning algorithm to seek the solution, and its convergence is assured. Furthermore, it has been demonstrated that the solution to this game corresponds to a Nash equilibrium point. Finally, the validity of the proposed methodology is substantiated through simulation results.</p>","PeriodicalId":50382,"journal":{"name":"IET Control Theory and Applications","volume":"19 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cth2.70027","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143871564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ExtRep: a GUI test repair method for mobile applications based on test-extension","authors":"Yonghao Long, Yuanyuan Chen, Chu Zeng, Xiangping Chen, Xing Chen, Xiaocong Zhou, Jingru Yang, Gang Huang, Zibin Zheng","doi":"10.1007/s10515-025-00513-9","DOIUrl":"10.1007/s10515-025-00513-9","url":null,"abstract":"<div><p>GUI testing ensures the software quality and user experience in the ever-changing mobile application development. Using test scripts is one of the main GUI testing manner, but it might be obsolete when the GUI changes with the app’s evolution. Current studies often rely on textual or visual similarity to perform test repair, but may be less effective when the interacted event sequence changes dramatically. In the interaction design, practitioners often provide multiple entry points to access the same function to gain higher openness and flexibility, which indicates that there may be multiple routes for reference in test repair. To evaluate the feasibility, we first conducted an exploratory study on 37 tests from 18 apps. The result showed that over 81% tests could be represented with alternative event paths, and using the extended paths could help enhance the test replay rate. Based on this finding, we propose a test-<b>ext</b>ension-based test <b>rep</b>air algorithm named <i>ExtRep</i>. The method first uses test-extension to find alternative paths with similar test objectives based on feature coverage, and then finds repaired result with the help of sequence transduction probability proposed in NLP area. Experiments conducted on 40 popular applications demonstrate that <i>ExtRep</i> can achieve a success rate of 73.68% in repairing 97 tests, which significantly outperforms current approaches <span>Water</span>, <span>Meter</span>, and <span>Guider</span>. Moreover, the test-extension approach displays immense potential for optimizing test repairs. A tool that implements the <i>ExtRep</i> is available for practical use and future research.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"32 2","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143875443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}