{"title":"Decoy VNF for Enhanced Security in Fog Computing","authors":"Sara Sutton, N. Siasi","doi":"10.1109/gcaiot53516.2021.9692994","DOIUrl":"https://doi.org/10.1109/gcaiot53516.2021.9692994","url":null,"abstract":"Fog computing extends cloud resources to the edge of the network, thus enabling network providers to support real-time applications at low latencies. These applications further demand high security against malicious attacks that target distributed fog servers. One effective defense mechanism here against cyber attacks is the use of honeypots. The latter acts as a potential target for attackers by diverting malicious traffic away from the servers that are dedicated to legitimate users. However, one main limitation of honeypots is the lack of real traffic and network activities. Therefore, it is important to implement a solution that simulates the behavior of the real system to lure attackers without the risk of being exposed. Hence this paper proposes a practical approach to generate network traffic by introducing decoy virtual network functions (VNF) embedded on fog servers, which make the network traffic on honeypots resemble a legitimate, vulnerable fog system to attract cyber attackers. The use of virtualization allows for robust scalability and modification of network functions based on incoming attacks, without the need for dedicated hardware. Moreover, deep learning is leveraged here to build fingerprints for each real VNF, which is subsequently used to support its decoy counterpart against active probes. 
The proposed framework is evaluated based on CPU utilization, memory usage, disk input/output access, and network latency.","PeriodicalId":169247,"journal":{"name":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116572788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Deep Reinforcement Learning Approach for Improving Age of Information in Mission-Critical IoT","authors":"Hossam M. Farag, M. Gidlund, Č. Stefanović","doi":"10.1109/gcaiot53516.2021.9692982","DOIUrl":"https://doi.org/10.1109/gcaiot53516.2021.9692982","url":null,"abstract":"The emerging mission-critical Internet of Things (IoT) play a vital role in remote healthcare, haptic interaction, and industrial automation, where timely delivery of status updates is crucial. The Age of Information (AoI) is an effective metric to capture and evaluate information freshness at the destination. A system design based solely on the optimization of the average AoI might not be adequate to capture the requirements of mission-critical applications, since averaging eliminates the effects of extreme events. In this paper, we introduce a Deep Reinforcement Learning (DRL)-based algorithm to improve AoI in mission-critical IoT applications. The objective is to minimize an AoI-based metric consisting of the weighted sum of the average AoI and the probability of exceeding an AoI threshold. We utilize the actor-critic method to train the algorithm to achieve optimized scheduling policy to solve the formulated problem. 
The performance of our proposed method is evaluated in a simulated setup and the results show a significant improvement in terms of the average AoI and the AoI violation probability compared to related work.","PeriodicalId":169247,"journal":{"name":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115956637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contactless Vital Signs Monitoring by mmWave Efficient Modulatorless Tracking Radar","authors":"W. Ahmad, Jeng-Hau Lu, Batuhan Sutbas, H. Ng, D. Kissinger","doi":"10.1109/gcaiot53516.2021.9691509","DOIUrl":"https://doi.org/10.1109/gcaiot53516.2021.9691509","url":null,"abstract":"This paper proposes a millimeter-wave (mmWave) radar sensor architecture for contactless vital signs detection and monitoring at the industrial, scientific, medical (ISM) 60 GHz band. Such fast remote touchless monitoring is extremely important during pandemic seasons such as COVID-19. The architecture utilizes a leaky wave antenna to synthesize a reconfigurable radar beam whose direction is steered in the space without additional modulator circuits. The modulatorless architecture enables monitoring the vital signs of multiple patients at different locations by measuring the Doppler shifts from their movements. Furthermore, it also offers building power and cost effective sensor components by eliminating the modulator circuitry. The system considerations of the proposed architecture are discussed and the Doppler radar technique for vital signs detection is reviewed. A laboratory experiment of measuring the Doppler shift due to a vibrating target using a prototype of the proposed sensor is successfully conducted. 
The application of the proposed sensor can be extended to remotely scan and control running machines in industrial environments.","PeriodicalId":169247,"journal":{"name":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130971003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perspective on efficiency enhancements in processing streaming data in industrial IoT networks","authors":"Julia Rosenberger, Michael Bühren, Dieter Schramm","doi":"10.1109/gcaiot53516.2021.9693073","DOIUrl":"https://doi.org/10.1109/gcaiot53516.2021.9693073","url":null,"abstract":"Both data compression and anomaly detection are very deeply studied areas for the last decades and gain significance for the Internet of Things (IoT), especially industrial IoT (IIoT). Due to the advantages like fewer latency and security aspects, edge computing is often preferred to cloud solutions. While the amount of data as well as the demand for edge data processing increases, resources like bandwidth, computational performance, memory and, in case of Wireless Sensor Networks (WSN), also energy are still limited. This leads primarily to a trade-off between maximum data reduction, information extraction and minimal computational effort. Often, both data compression and anomaly detection are required. This paper demonstrates additional benefits if already one is implemented. Although in many cases the algorithms for both are based on the same models, there are almost no studies on their combined use. In this work, a perspective on the efficiency of combined model usage with only different interpretations for anomaly detection and data compression is proposed. Concrete examples for selected models and the detection of different kinds of anomalies are given. 
Finally, an outlook on the planned future work is given.","PeriodicalId":169247,"journal":{"name":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124763718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crowd Quantification with Flow Direction Estimation: a Low-Cost IoT-enabled Solution","authors":"Ricardo Santil, Bruno Gomes, Sara Paiva, S. I. Lopes","doi":"10.1109/gcaiot53516.2021.9692929","DOIUrl":"https://doi.org/10.1109/gcaiot53516.2021.9692929","url":null,"abstract":"Monitoring crowds in public environments is of great value for understanding human routines and managing crowd routes in indoor or outdoor environments. This type of information is crucial to improve the business strategy of an organization, and can be achieved by performing crowd quantification and flow direction estimation to generate information that can be later used by a business intelligence/analytic layer to improve sales of a specific service or targeting a new specific product. In this paper, we propose the design of an IoT Crowd sensor composed of an array of ultrasonic ping sensors that is responsible for detecting movement in specific directions. The proposed device has a built-in algorithm that is optimized to quantify and detect the human flow direction in indoor spaces such as hallways. Results have shown an average accuracy above 86% in the five scenarios evaluated when using an array with three elements.","PeriodicalId":169247,"journal":{"name":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126412900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial Intelligence Pathologist: The use of Artificial Intelligence in Digital Healthcare","authors":"Asmaa Ben Ali Kaddour, N. Abdulaziz","doi":"10.1109/gcaiot53516.2021.9693090","DOIUrl":"https://doi.org/10.1109/gcaiot53516.2021.9693090","url":null,"abstract":"Artificial intelligence is bringing revolutionary changes to so many industries, by introducing them to a new era, full of technological advancements. The healthcare industry has been one of the most beneficial to this change, by merging digital transformation and healthcare, to form digital healthcare. Thereby introducing digital pathology, which implements image processing algorithms to help pathologists analyze and examine a diagnosis faster and more efficiently. It not only reduces the long hours pathologists used to take in laboratory analysis but also reduces human error. Therefore, healthcare digitalization has allowed the integration of computer vision into the medical field, with the use of Artificial intelligence techniques such as deep learning and machine learning algorithms. However, past research work has been limited to using AI models to diagnosis one specific disease at a time. Whereas this research work aims to develop an AI model that will automatically perform pathological analysis, to determine the diagnosis for multiple diseases from a medical image, then provide the medical report, while securing the patient’s data, and assisting them with any questions they might have regarding the diagnosis. This research applies deep learning and machine learning algorithms for image classification via CNN architectures and feature extraction via Morphological properties. The model achieved great outcomes, with high accuracy and good F1-score results of 90.47% and 0.8332 respectively. 
The resultant model diagnoses 12 medical disorders, covering a total of 29 diagnostic cases, making it the only one of its kind in digitized healthcare applications.","PeriodicalId":169247,"journal":{"name":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124836827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine Learning for Edge-Aware Resource Orchestration for IoT Applications","authors":"Manar Jammal, M. Abusharkh","doi":"10.1109/gcaiot53516.2021.9692940","DOIUrl":"https://doi.org/10.1109/gcaiot53516.2021.9692940","url":null,"abstract":"The market is experiencing a huge avalanche on the Internet-of-Thing (IoT) devices count and their data traffic from wearable personal devices to smart enterprise applications. This sensory data is playing a crucial role in our day-to-day life and enterprise’s products and business decision-making process. Although this data provides promising business insights and can enhance applications’ performance, it comes with different challenges including connectivity, dynamic resource demands, privacy, and others. Therefore, the infrastructure of the IoT applications must be well-orchestrated and accompanied with intelligence in mind to scale, self-organize, and handle the huge data volume and transmission. The intelligent platform is expected to self-explore workloads and autonomously allocate computing resources at runtime to assist the IoT system in achieving its best intrinsic value. Hence, this paper introduces a novel platform consisting of various machine learning (ML) techniques and optimization model to forecast the IoT applications’ behavior and best deploy such applications dynamically on the edge to meet and enhance the overall end-to-end application performance. The comparative analysis has shown that the Random Forest model has promising results for resource forecasting. 
Also, the proposed deployment optimization model shows the importance of providing a tradeoff between computing, delay/transmission rate, and computational offloading constraints.","PeriodicalId":169247,"journal":{"name":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115284509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Papers By Title","authors":"","doi":"10.1109/gcaiot53516.2021.9693005","DOIUrl":"https://doi.org/10.1109/gcaiot53516.2021.9693005","url":null,"abstract":"","PeriodicalId":169247,"journal":{"name":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129591983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A weakly-supervised deep domain adaptation method for multi-modal sensor data","authors":"R. Mihailescu","doi":"10.1109/gcaiot53516.2021.9693050","DOIUrl":"https://doi.org/10.1109/gcaiot53516.2021.9693050","url":null,"abstract":"Nearly every real-world deployment of machine learning models suffers from some form of shift in data distributions in relation to the data encountered in production. This aspect is particularly pronounced when dealing with streaming data or in dynamic settings (e.g. changes in data sources, behaviour and the environment). As a result, the performance of the models degrades during deployment. In order to account for these contextual changes, domain adaptation techniques have been designed for scenarios where the aim is to learn a model from a source data distribution, which can perform well on a different, but related target data distribution.In this paper we introduce a variational autoencoder-based multi-modal approach for the task of domain adaptation, that can be trained on a large amount of labelled data from the source domain, coupled with a comparably small amount of labelled data from the target domain. 
We demonstrate our approach in the context of human activity recognition using various IoT sensing modalities and report superior results when benchmarking against the effective mSDA method for domain adaptation.","PeriodicalId":169247,"journal":{"name":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","volume":"214 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113969317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Science Centers as an Essential Tool for AI Pre-College Education in Developing Countries","authors":"A. Elsayed","doi":"10.1109/gcaiot53516.2021.9692949","DOIUrl":"https://doi.org/10.1109/gcaiot53516.2021.9692949","url":null,"abstract":"Despite cultural and social variation between different communities, science centers provide motivating and enriching environments for learning. Immediate impact can be exciting, but the long-term effects on learning and motivation are more significant, especially while including AI related programs. Due to many problems that are facing formal educational systems, especially in developing countries, informal education is expected to complement the formal system, mainly with technological fields, at an affordable price. The long-term impact is to get students involved in science as a lifelong learning process outside the classroom, and expected to result in creating the critical mass of youth who are interested in S&T. Moreover, changing the culture of entrepreneurship and youth to be scientific oriented and educational focused is crucial in upgrading the quality of the regional potential HR.","PeriodicalId":169247,"journal":{"name":"2021 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128969653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}