{"title":"Clustering-based Optimal Resource Allocation Strategy in Title Insurance Underwriting","authors":"Abhijit Guha, M. Obaidat, Debabrata Samanta, SK Hafizul Islam","doi":"10.1109/cits55221.2022.9832993","DOIUrl":"https://doi.org/10.1109/cits55221.2022.9832993","url":null,"abstract":"Production of insurance policies in all types of Insurance requires a thorough examination of the entity against which the Insurance is to be issued. In health insurance, it is the past medical data of the individuals. Vehicle insurance needs the examination of the vehicle and the owner’s data. Likewise, in Title Insurance, it is the historical data of the property which needs scrutiny before the policy issuance. Underwriters perform the job of examining the property records. The scrutiny of the property records requires a high degree of the domain and legal expertise, and title insurance underwriters are often associated with legal professions. They do the final round of validation of the examination process. There are examination teams that take care of the initial set of regular examination tasks associated with each title insurance order. Some human experts assign the orders to the team associates. Not all the orders are of the same complexity in terms of examination. The allocation of the tasks happens based on the gut feeling of the supervisor, considering their experience with the team members. Our research creates clusters of the orders based on specific parameters associated with the orders. It builds a cost model of the past associates working on orders belonging to different clusters. Based on this cost matrix, we have built an optimal task allocation model that assigns the orders to the associates with the promise of optimal cost using a Linear programming solution used frequently in operations research.","PeriodicalId":136239,"journal":{"name":"2022 International Conference on Computer, Information and Telecommunication Systems (CITS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125911651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tracking container network connections in a Digital Data Marketplace with P4","authors":"Sara Shakeri, L. Veen, P. Grosso","doi":"10.1109/cits55221.2022.9832915","DOIUrl":"https://doi.org/10.1109/cits55221.2022.9832915","url":null,"abstract":"There are multiple organizations interested in sharing their data, and they can only do this if a secure platform for data sharing is available which can execute sharing requests under specific agreements and policies. Digital Data Marketplaces (DDMs) aim to provide such an infrastructure. For building a DDM infrastructure, we use containers to provide the required isolation between different sharing requests. However, one important challenge in a containerized DDM infrastructure is providing the ability to monitor the behavior of containers that are involved in the sharing transactions. In addition, the monitoring information in the network layer should be reported in a way that can be interpreted by the upper layers of DDM for further analysis. In this paper, we design a containerized DDM using P4. In our design, the flow traffic between containers is associated with the shared data in a DDM and can be understood by the upper layers. We present different scenarios to demonstrate how our setup can assist in tracking the behavior of containers and providing better performance and security.","PeriodicalId":136239,"journal":{"name":"2022 International Conference on Computer, Information and Telecommunication Systems (CITS)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114833494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Energy Harvesting WSNs with Adaptive Modulation: Inter-delivery-aware Scheduling Algorithms","authors":"Chaima Zouine, Amina Hentati, J. Frigon","doi":"10.1109/cits55221.2022.9832980","DOIUrl":"https://doi.org/10.1109/cits55221.2022.9832980","url":null,"abstract":"In this paper, we deal with the regularity of status updates in a monitoring system. Specifically, we consider a system consisting of independent energy harvesting nodes with adaptive modulation capabilities that transmit status updates to a non energy harvesting sink over a fading channel. Due to the randomness of the energy arrival and the channel time variations, a node may have difficulties maintaining regular status updates. Hence, the objective of this work is to design scheduling algorithms that minimize the number of violations of inter-delivery time over a finite time horizon. An inter-delivery violation event occurs when the time duration between two consecutive status updates exceeds a given time limit. We focus on online modulation and power adaptation policies where the transmitting sensor node adjusts the M-ary modulation level and transmission power based on both the channel state and the battery level. Specifically, we propose both deterministic and randomized algorithms to efficiently solve the considered scheduling problem for the onenode system. Deterministic solutions were extended to the multi-node system. The numerical results show that the proposed algorithms realize significant gain in terms of violations events compared to the benchmark fixed modulation solutions.","PeriodicalId":136239,"journal":{"name":"2022 International Conference on Computer, Information and Telecommunication Systems (CITS)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133777283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intrusion Detection System using Aggregation of Machine Learning Algorithms","authors":"K. Arivarasan, M. Obaidat","doi":"10.1109/cits55221.2022.9832982","DOIUrl":"https://doi.org/10.1109/cits55221.2022.9832982","url":null,"abstract":"With the advancement of internet technologies comes the need for systems that can ensure the security of a network. An intrusion Detection System (IDS) can detect and sometimes take action against malicious network traffic. There are different types of IDS. For example, based on the detection method, it can be Signature-based IDS or Anomaly-based IDS or Hybrid IDS. In this work, multiple models are trained using various machine learning algorithms on the NSL-KDD dataset to build an efficient anomaly-based IDS that can detect malicious traffic with utmost accuracy. Supervised Learning algorithms like Logistic Regression, Decision Tree, K-Nearest Neighbour (KNN), XGBoost, Random Forest and Multilayer Perceptron (MLP) are used. At last, the Hard Voting technique is employed to increase efficiency.","PeriodicalId":136239,"journal":{"name":"2022 International Conference on Computer, Information and Telecommunication Systems (CITS)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131387184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CITS 2022 Cover Page","authors":"","doi":"10.1109/cits55221.2022.9832992","DOIUrl":"https://doi.org/10.1109/cits55221.2022.9832992","url":null,"abstract":"","PeriodicalId":136239,"journal":{"name":"2022 International Conference on Computer, Information and Telecommunication Systems (CITS)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123998809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Security-Aware Orchestration of Linear Workflows on Distributed Resources","authors":"Georgios L. Stavrinides, H. Karatza","doi":"10.1109/cits55221.2022.9832986","DOIUrl":"https://doi.org/10.1109/cits55221.2022.9832986","url":null,"abstract":"In hybrid and multi-tier distributed architectures, where data may have different security requirements and typically require processing in a pipeline fashion, resource allocation has become particularly challenging. In such environments, it is crucial to use security-aware and effective resource allocation techniques, in order to ensure the secure processing of the workload and achieve a satisfactory Quality of Service (QoS). Towards this direction, in this paper we examine the performance of security-aware resource allocation strategies for linear workflow (LW) jobs in an environment of distributed resources. Only a subset of the resources is considered secure and thus suitable for processing high risk LW jobs. Low risk LW jobs may be executed on either secure or non-secure resources. Two commonly used routing techniques are adapted in order to incorporate security awareness. Their performance is evaluated through simulation. Several scenarios are investigated, with different subset sizes of the secure resources, as well as different probabilities for a LW job to be considered high risk. The simulation results provide useful insights into how the percentage of high risk LW jobs affects the performance in each of the examined cases of secure resources.","PeriodicalId":136239,"journal":{"name":"2022 International Conference on Computer, Information and Telecommunication Systems (CITS)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127751904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Efficiently Partitioning a Topic in Apache Kafka","authors":"Theofanis P. Raptis, A. Passarella","doi":"10.1109/CITS55221.2022.9832981","DOIUrl":"https://doi.org/10.1109/CITS55221.2022.9832981","url":null,"abstract":"Apache Kafka addresses the general problem of delivering extreme high volume event data to diverse consumers via a publish-subscribe messaging system. It uses partitions to scale a topic across many brokers for producers to write data in parallel, and also to facilitate parallel reading of consumers. Even though Apache Kafka provides some out of the box optimizations, it does not strictly define how each topic shall be efficiently distributed into partitions. The well-formulated fine-tuning that is needed in order to improve an Apache Kafka cluster performance is still an open research problem. In this paper, we first model the Apache Kafka topic partitioning process for a given topic. Then, given the set of brokers, constraints and application requirements on throughput, OS load, replication latency and unavailability, we formulate the optimization problem of finding how many partitions are needed and show that it is computationally intractable, being an integer program. Furthermore, we propose two simple, yet efficient heuristics to solve the problem: the first tries to minimize and the second to maximize the number of brokers used in the cluster. Finally, we evaluate its performance via largescale simulations, considering as benchmarks some Apache Kafka cluster configuration recommendations provided by Microsoft and Confluent. We demonstrate that, unlike the recommendations, the proposed heuristics respect the hard constraints on replication latency and perform better w.r.t. unavailability time and OS load, using the system resources in a more prudent way.","PeriodicalId":136239,"journal":{"name":"2022 International Conference on Computer, Information and Telecommunication Systems (CITS)","volume":"347 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115893603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}