{"title":"A Lightweight Bidirectional Secure Authentication Protocol for Mobile Edge Computing in 6G Networks","authors":"Yuling Bai","doi":"10.1002/itl2.70001","DOIUrl":"https://doi.org/10.1002/itl2.70001","url":null,"abstract":"<div>\u0000 \u0000 <p>With the rapid development of 6G networks, the era of interconnectivity is fully upon us, accompanied by an explosive increase in the number of mobile smart devices. Solutions centered on cloud computing often fail to meet the real-time computing requirements of mobile smart devices and place significant demands on energy consumption and bandwidth. To address these challenges, edge computing has become a necessity. By deploying mobile edge computing systems close to the sources of information, it is possible to meet the demands for real-time services, data analysis optimization, and intelligent applications on mobile devices. Focusing on the issue of identity secure authentication between mobile devices and mobile edge computing servers in mobile edge computing scenarios, this paper proposes a lightweight bidirectional secure authentication protocol using elliptic curve cryptography (ECC). Constructed with elliptic curve encryption and hash functions, the protocol achieves features such as bidirectional secure authentication, anonymity protection for mobile users, and untraceability of mobile users, while also resisting various threats such as replay attacks and man-in-the-middle attacks. Through performance analysis and comparison with similar protocols, the proposed protocol demonstrates a low overall computational cost, making it particularly suitable for mobile smart devices with limited computing resources. Furthermore, the protocol was implemented using tools such as Visual Studio, and its performance was analyzed to illustrate its practical applicability and reliability.</p>\u0000 </div>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 2","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143496936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging Cloud and IoT-Based NLP in Virtual Reality for Sustainable Industry Innovation: A Pathway to Achieving SDG 9 (Industry, Innovation, and Infrastructure)","authors":"Gyana Prakash Bhuyan, Bibhu Kalyan Mishra, Gopinath Palai","doi":"10.1002/itl2.659","DOIUrl":"https://doi.org/10.1002/itl2.659","url":null,"abstract":"<div>\u0000 \u0000 <p>This paper explores the integration of cloud computing, IoT, Natural Language Processing (NLP), and virtual reality (VR) as a pathway to achieving Sustainable Development Goal (SDG) 9, which focuses on fostering industry innovation, building resilient infrastructure, and promoting sustainable industrialization. The convergence of these technologies provides a powerful toolkit for driving the next wave of industrial digital transformation. Cloud platforms such as AWS, Azure, IBM Watson, and Google Cloud offer scalability, flexibility, and efficiency, enabling industries to adopt advanced technologies like NLP and VR without significant upfront capital investment. By leveraging IoT for real-time monitoring and data collection, industries can optimize processes, reduce energy consumption, and enhance predictive maintenance, contributing to sustainability. NLP enables natural and efficient human-machine interaction, automating key processes and improving decision-making. The use of VR allows for immersive simulations that enhance worker training, product design, and infrastructure management, reducing waste and increasing safety. This combination of technologies fosters innovation, inclusivity, and sustainability in industrial operations. The paper highlights practical applications of these technologies, including smart manufacturing, predictive maintenance, and infrastructure monitoring, while emphasizing the role of cloud platforms in democratizing access to advanced tools for industries worldwide. Ultimately, this approach supports the achievement of SDG 9 by enabling more sustainable, resilient, and inclusive industrial practices.</p>\u0000 </div>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 2","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143496925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing medical visual question answering: Evaluating the impact of enhanced images, augmented training data, and model selection","authors":"Ali Jaber Almalki","doi":"10.1002/itl2.588","DOIUrl":"https://doi.org/10.1002/itl2.588","url":null,"abstract":"<p>Visual question answering (VQA) has an interesting application in clinical decision support and enables clinicians to extract information from medical images through natural language queries. However, the limited nature of the datasets makes it particularly difficult to develop effective VQA models for the medical profession. The aim of this study was to overcome these obstacles by formally testing methods for data enhancement and model optimization. Specifically, we merged two medical VQA datasets, applied image preprocessing techniques, examined several state-of-the-art model architectures, and extensively trained the best-performing model on the enhanced data. The results showed that training the VGG16-LSTM model on sharper images than the merged dataset resulted in a significant performance improvement due to extending the training time to 200, with F1 scores of the training set 0.9674.</p>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 2","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143496934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Security Enhancement in 5G Networks by Identifying Attacks Using Optimized Cosine Convolutional Neural Network","authors":"Premalatha Santhanamari, Vijayakumar Kathirgamam, Lakshmisridevi Subramanian, Thamaraikannan Panneerselvam, Rathish Chirakkal Radhakrishnan","doi":"10.1002/itl2.70003","DOIUrl":"https://doi.org/10.1002/itl2.70003","url":null,"abstract":"<div>\u0000 \u0000 <p>The exponential growth of 5G networks has introduced advanced capabilities but also heightened susceptibility to sophisticated cyberattacks. To address this, a robust and optimized security framework is proposed, leveraging a Cosine Convolutional Neural Network (CCNN) for attack detection. By emphasizing angular correlations in data, the CCNN improves feature extraction by substituting cosine similarity-based adjustments for conventional convolution processes. To maximize the CCNN's performance, the Exponential Distribution Optimizer (EDO) is employed optimize CCNN. The optimal configuration of CCNN is achieved using EDO's probabilistic search mechanism, which is inspired by exponential distribution and helps to maintain a balanced exploration-exploitation strategy. This integrated approach significantly improves detection accuracy, robustness, and scalability while maintaining low computational overhead. Comprehensive evaluations demonstrate the model's efficacy in identifying diverse attack patterns in 5G networks, outperforming conventional methods. The proposed framework establishes a new benchmark for secure, intelligent 5G infrastructures, contributing to the advancement of cybersecurity in next-generation networks. The introduced approach attains higher accuracy of 99%.</p>\u0000 </div>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 2","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143496937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trust-Based Reliability Scheme for Secure Data Sharing With Internet of Vehicles Networks","authors":"Arpit Jain, Ashok Kumar, Mahadev, Jitendra Kumar Chaudhary, Saurabh Singh","doi":"10.1002/itl2.70000","DOIUrl":"https://doi.org/10.1002/itl2.70000","url":null,"abstract":"<div>\u0000 \u0000 <p>With the Internet of Things (IoT) increasingly integrated into vehicles, drivers and passengers can access information anywhere, anytime. As the number of connected vehicles increases, new requirements for vehicular networks arise, including securing, robust, and scalable communication between vehicles and pedestrians. It encompasses the communication between vehicles and infrastructure, as well as the communication between vehicles and pedestrians. A real-time exchange of road condition information can be achieved through this method. An IoV network rogue node detection scheme is presented here using a combined trust model. Using a trust-based security algorithm, vehicles are assessed for trustworthiness and rogue vehicles are identified. Communication and data reliability are used to calculate direct Trust while neighboring vehicles' cooperation is used to calculate recommendation trust. According to simulation results, the proposed model is highly accurate and maintains a low detection delay while protecting evaluation data privacy. Trust updates are more efficient using this model, and malicious vehicles are detected more quickly than traditional schemes.</p>\u0000 </div>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 2","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143496935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Edge Computing Enables Assessment of Student Community Building: An Emotion Recognition Method Based on TinyML","authors":"Shuo Liu","doi":"10.1002/itl2.645","DOIUrl":"https://doi.org/10.1002/itl2.645","url":null,"abstract":"<div>\u0000 \u0000 <p>Deep network-based video sentiment analysis is crucial for online evaluation tasks. However, these deep models are difficult to run on intelligent edge devices with limited computing resources. In addition, video data are susceptible to lighting interference, distortion, and background noise, which severely limits the performance of facial expression recognition. To relieve these issues, we develop an effective multi-scale semantic fusion tiny machine learning (TinyML) model based on a spatiotemporal graph convolutional network (ST-GCN) which enables robust expression recognition from facial landmark sequences. Specifically, we construct regional-connected graph data based on facial landmarks which are collected from cameras on different mobile devices. In existing spatiotemporal graph convolutional networks, we leverage the multi-scale semantic fusion mechanism to mine the hierarchical structure of facial landmarks. The experimental results on CK+ and online student community assessment sentiment analysis (OSCASA) dataset confirm that our approach yields comparable results.</p>\u0000 </div>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 2","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143481531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI-Driven Big Data Analytics for Mobile Healthcare","authors":"Dr. B. D. Parameshachari, Dr. Danilo Pelusi, Dr. Bhargavi Goswami","doi":"10.1002/itl2.624","DOIUrl":"https://doi.org/10.1002/itl2.624","url":null,"abstract":"<div>\u0000 \u0000 <p>This special issue highlights the key insights into the emerging trends and challenges of artificial intelligence (AI) driven big data analytics in mobile healthcare. The selected twenty-six articles, which include two letters, explore innovative applications and advancements in AI-driven healthcare technologies. The contributions of selected articles are divided into four main themes: Medical technology in healthcare communication and AI-driven healthcare, Fitness and Sports in Health Technology, Mental Health based on Behavior Analysis, and Healthcare Data Privacy with Blockchain. These articles collectively advance our understanding of AI applications in healthcare and reveal significant advancements in diagnostic accuracy, patient monitoring, data security and predictive analytics. Additionally, this review discusses the research findings and implications that will lead to groundbreaking insights in real-world applications, which could provide a clear path for learning a current healthcare technology in mobile health with AI.</p>\u0000 </div>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"7 6","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143187042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Objective Task Scheduling in Cloud IoT Environments Using Differential Evaluation for Big Data Processing","authors":"Souvik Pal, Raghvendra Kumar, Ahmed Hussein Alkhayyat","doi":"10.1002/itl2.598","DOIUrl":"https://doi.org/10.1002/itl2.598","url":null,"abstract":"<div>\u0000 \u0000 <p>In Cloud IoT environments, efficient task scheduling is critical for optimizing resource utilization, reducing latency, and enhancing overall system performance. However, current methods struggle to balance the diverse demands of such environments. This paper presents the Multi-Objective Task Scheduling in Cloud IoT Environments (MOTS-CIoTE) framework, which controls K-means clustering and the Differential Evolutionary (DE) algorithm to optimize resource allocation and minimize task completion time. Our approach achieves significant improvements across key metrics: The system has a total throughput of 827 (tasks per second), a latency of 14.13 ms, a resource contention of 0.33%, an energy efficiency of 0.48 (Joules per task), and a cost-effectiveness of 3.58. The results highlight the considerable potential of MOTS-CIoTE in tackling the intricacies of Cloud IoT environments and augmenting their efficacy, hence facilitating more efficient resource allocation and boosting user satisfaction.</p>\u0000 </div>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 4","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144206949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning Model-Driven Channel Estimation and Equalization for Underwater Acoustic OFDM Receivers","authors":"Xuerong Cui, Chuang Zhang, Juan Li, Bin Jiang, Shibao Li, Jianhang Liu","doi":"10.1002/itl2.619","DOIUrl":"https://doi.org/10.1002/itl2.619","url":null,"abstract":"<div>\u0000 \u0000 <p>The performance of traditional channel estimation algorithms is seriously degraded by the complex and variable underwater acoustic (UWA) environment. In this article, we proposed a model-driven UWA communication receiver that is based on orthogonal frequency division multiplexing (OFDM), and the model mainly consists of two parts: channel estimation and equalization. The channel estimation module uses a convolutional neural network (CNN) to extract UWA channel state features from the pre-estimated channel frequency domain response (CFR), and then implements the equalization of the receiver based on a long short-term memory (LSTM). The equalization module uses discrete Fourier transform (DFT) to denoise the LS channel estimation and use it to construct the pre-equalized symbols, meanwhile, the obtained pre-equalized symbols, the received signals, and the CFR are used as inputs, which effectively improves the generalization capability of the receiver. Experiments show that the proposed model has more accurate recovery accuracy compared with the traditional algorithm and deep learning (DL) based receiver, especially when the channel environment is mismatched, the model shows better robustness.</p>\u0000 </div>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 5","pages":""},"PeriodicalIF":0.9,"publicationDate":"2025-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144635450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Formulating network revenue optimization for provisioning tactile internet traffic","authors":"Deni Lumbantoruan, Yantong Wang, Andrej Mihailovic, Hamid Aghvami","doi":"10.1002/itl2.584","DOIUrl":"https://doi.org/10.1002/itl2.584","url":null,"abstract":"<p>Tactile traffic, crucial for applications like remote surgery, imposes strict requirements on delay, jitter, and packet loss. For ensuring guaranteed transport of tactile sessions, the provision of a dedicated tactile traffic class in the access network is necessary. Unlike high-priority traffic classes that can tolerate occasional service lapses, tactile traffic cannot compromise on quality. To support tactile traffic in realistic-sized internet protocol (IP) access networks, the simultaneous provisioning of tactile traffic between all source-destination pairs is addressed. Tactile traffic sessions that we consider have a modest data rate requirement and sessions size. Therefore, network operators need to find the right balance between tactile and best-effort traffic to increase revenue. This optimization problem is formulated as a network layer multi-criterion design problem, aiming to maximize net revenue from tactile traffic while optimizing a similar measure for best-effort traffic. Linear optimization models prove useful in accommodating and facilitating tactile internet traffic in an IP network, allowing determination of the proportion of tactile and best-effort traffic that leads to maximum network revenue.</p>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 4","pages":""},"PeriodicalIF":0.9,"publicationDate":"2024-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144140820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}