{"title":"Partitioned Based Image Segmentation","authors":"S. Shrivastava, Ajay Kumar","doi":"10.1109/CICT48419.2019.9066268","DOIUrl":"https://doi.org/10.1109/CICT48419.2019.9066268","url":null,"abstract":"The aim of this paper is to provide a comprehensive survey of image segmentation methods that use clustering techniques. In image processing, segmentation plays an important role in the detection and matching of objects. Image segmentation uses a number of techniques to obtain correctly segmented images. Clustering is one such method used for the segmentation of images: it is the process by which the pixels of an image are divided into different partitions on the basis of a similarity criterion. This paper presents recent developments in image segmentation using clustering-based approaches such as K-Means and Fuzzy C-Means.","PeriodicalId":234540,"journal":{"name":"2019 IEEE Conference on Information and Communication Technology","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126392054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification of Arrhythmia using Time-domain Features and Support Vector Machine","authors":"M. Dhaka, Poras Khetarpal","doi":"10.1109/CICT48419.2019.9066181","DOIUrl":"https://doi.org/10.1109/CICT48419.2019.9066181","url":null,"abstract":"Cardiac arrhythmia is a heart condition in which the heart does not beat in a regular way. It is one of those diseases that are easy to diagnose: a doctor can detect arrhythmia just by looking at the patient's Electrocardiogram (ECG), because it has many visual clues which a doctor is trained to identify. All these visual clues are time-domain features. Hence, in this paper, an algorithm is presented which uses only time-domain features to classify between normal sinus rhythm and arrhythmia using a Support Vector Machine (SVM). The paper also compares the classification results when frequency-domain features are used along with the time-domain features. The frequency-domain features increase the computational complexity of the algorithm and make it harder to create a portable and reliable hardware device for the real-time detection and classification of arrhythmia. The proposed algorithm can be incorporated in a portable, lightweight and robust device which can detect arrhythmia in real time. The accuracy of the algorithm is 99.36% on the MIT-BIH arrhythmia database, which is an improvement over other algorithms.","PeriodicalId":234540,"journal":{"name":"2019 IEEE Conference on Information and Communication Technology","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127484929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Certificate-less Public Key Encryption For Secure e-Healthcare Systems","authors":"Mayank K. Aditia, Srikar Paida, Fahiem Altaf, Soumyadev Maity","doi":"10.1109/CICT48419.2019.9066190","DOIUrl":"https://doi.org/10.1109/CICT48419.2019.9066190","url":null,"abstract":"Any private data shared over a public network is supposed to be secured to prevent access by unauthorized users. E-healthcare systems hold the health status of patients, which is one kind of data that needs to be secured. With the development of e-healthcare systems, the number of users has grown considerably, which in turn increases the need for security. To prevent illicit activities (such as data being accessed by unauthorized users), we propose a secure data sharing scheme which uses certificate-less public key encryption and signatures to provide confidentiality along with privacy of the health data. This efficient and secure data transfer scheme protects patients' health data, providing the required privacy and preventing unauthorized users from accessing the data.","PeriodicalId":234540,"journal":{"name":"2019 IEEE Conference on Information and Communication Technology","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125331282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Defect Classification for Silk Fabric Based on Four DFT Sector Features","authors":"Shweta Loonkar, D. Mishra","doi":"10.1109/CICT48419.2019.9066106","DOIUrl":"https://doi.org/10.1109/CICT48419.2019.9066106","url":null,"abstract":"We cannot imagine a world without textiles and the textile industry, which plays a vital role in today's business world. Quality inspection, reliability, durability and fabric with fewer defects are important factors for good apparel organizations, and fabric defect classification holds an inimitable position in the demand for worthy products. In this paper we classify fabric defects for silk material based on its structural failures. We use the DFT sectorization process on TILDA textile images to extract features in order to classify the defects. The Feature Vector Database (FVDB) is generated by means of four DFT sectors and is used as input to WEKA for defect classification under two test options, i.e., 10-fold cross validation and the full training set. It has been observed that the classification rate for silk cloth declines under 10-fold cross validation compared to the full training set. All classification algorithms are analyzed based on their accuracy and Kappa statistics. It has been observed that Random Forest is the most efficient algorithm for defect classification of silk fabric due to its high classification rate.","PeriodicalId":234540,"journal":{"name":"2019 IEEE Conference on Information and Communication Technology","volume":"640 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116687681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance analysis of MIMO-NOMA-Based Indoor Visible Light Communication in Single Reflection Environment","authors":"A. Mishra, A. Trivedi","doi":"10.1109/CICT48419.2019.9066201","DOIUrl":"https://doi.org/10.1109/CICT48419.2019.9066201","url":null,"abstract":"This paper examines the performance of multiple-input-multiple-output (MIMO) non-orthogonal multiple access (NOMA) based indoor visible light communication (VLC) in a single-reflection environment. The VLC system is equipped with a transmitter of two light-emitting diodes (LEDs) and a multi-user receiver, and each user consists of two photodiodes (PDs) working as a NOMA pair. A single-reflection scenario is considered, and the corresponding delay spread, which is a function of the room dimensions, is calculated. The proposed approach uses two efficient power allocation methods, namely normalized gain difference power allocation (NGDPA) and gain ratio power allocation (GRPA). The measurable data rate and channel delay spread of the MIMO-NOMA-based VLC system are then investigated using these methods. The results show that NGDPA with NOMA achieves a sum-rate gain of up to 18.22% compared to GRPA with NOMA. Furthermore, the maximum channel delay spreads are 10.89 ns and 11.14 ns for GRPA and NGDPA, respectively, for two users.","PeriodicalId":234540,"journal":{"name":"2019 IEEE Conference on Information and Communication Technology","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131126265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Method for Data Synchronization in Mobile Database","authors":"Nidhi Singh, M. Hasan","doi":"10.1109/CICT48419.2019.9066122","DOIUrl":"https://doi.org/10.1109/CICT48419.2019.9066122","url":null,"abstract":"In the present era, the use of mobile gadgets is increasing; such gadgets have limited processing power, memory, bandwidth and transfer speed, so proper solutions are necessary. In a mobile computing environment there are many issues, such as database synchronization, information security and mobile transactions. One of the major challenges in mobile databases is synchronization. Multiple analyses have been conducted over time to minimize these problems and maintain better accuracy. Synchronization must preserve data accuracy while handling high traffic volumes and high time complexity, which requires the development of appropriate synchronization algorithms. This paper surveys the current state of data synchronization and offers new and better concepts for understanding it. It also helps to control problems of data deployment and maintains the possibility of accurate data using a new solution for data synchronization.","PeriodicalId":234540,"journal":{"name":"2019 IEEE Conference on Information and Communication Technology","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117296320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusion of Heterogeneous Range Sensors Dataset for High Fidelity Surface Generation","authors":"M. Singh, K. Venkatesh, A. Dutta","doi":"10.1109/CICT48419.2019.9066172","DOIUrl":"https://doi.org/10.1109/CICT48419.2019.9066172","url":null,"abstract":"Due to the need for depth data of higher quality than is possible with an individual range sensing approach, there has been growing interest in developing integrated depth sensing techniques by fusing different 3D acquisition approaches that are more precise than the individual devices. In this paper, a new unsupervised range data fusion method using distinct range sensors is presented for the extraction of an accurate surface model. In the fusion method, a Haar-wavelet analysis of the Kinect's depth data is used to identify regions requiring a finer scan by the laser range sensor. The fused data describe the characteristics of the surface more accurately. The experimental results show a high-quality reconstructed 3D model, which validates its correctness against the real surfaces.","PeriodicalId":234540,"journal":{"name":"2019 IEEE Conference on Information and Communication Technology","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129975683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decentralised and Distributed System for Organ/Tissue Donation and Transplantation","authors":"Pratyush Ranjan, Shubhanker Srivastava, Vidit Gupta, S. Tapaswi, Neetesh Kumar","doi":"10.1109/CICT48419.2019.9066225","DOIUrl":"https://doi.org/10.1109/CICT48419.2019.9066225","url":null,"abstract":"In today's era of digitisation, many technologies have evolved so that almost every manual task can be digitally automated. In this automation process, security and privacy are the most important and highly demanding aspects. Blockchain offers many features that can be used in almost every sphere of life: decentralisation, transparency and privacy make it an extremely useful technology. By making use of these features, several problems in the healthcare sector can be solved, such as removing the complex network of third parties and the lack of traceability of transactions. This paper presents a decentralised, secure and transparent organ and tissue transplant web application (also called a DApp), which not only nullifies the role of any third party involved in organ transplantation, but is also a cost-effective solution that saves patients from the high cost of transplantation. The details and Electronic Medical Records (EMRs) are hashed using IPFS (a distributed file system), which reduces the cost of upload to a great extent, as shown in the results section of this paper.","PeriodicalId":234540,"journal":{"name":"2019 IEEE Conference on Information and Communication Technology","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114560684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Thinning of Concentric Circular Antenna Array Using Binary Salp Swarm Algorithm","authors":"A. Mondal, Prerna Saxena","doi":"10.1109/CICT48419.2019.9066249","DOIUrl":"https://doi.org/10.1109/CICT48419.2019.9066249","url":null,"abstract":"Antenna array synthesis with the least possible number of elements for obtaining the desired radiation pattern is important in some applications, for example satellite communication, where the weight of the antennas is limited. The objective of this paper is the thinning of Concentric Circular Antenna Arrays (CCAA) to achieve a reduction in weight and cost. A swarm intelligence technique, the Binary Salp Swarm Algorithm (BSSA), is introduced for the synthesis of thinned CCAA. Thinning an antenna array requires removing some antennas from the array while attaining radiation characteristics similar to those of a densely occupied array. The BSSA approach is proposed to synthesize a CCAA that decreases the maximum side lobe level (MSLL) while simultaneously maintaining a thinning percentage of 50% or higher. A CCAA with 440 antennas is optimized by the BSSA approach and compared with other state-of-the-art approaches to demonstrate its efficacy.","PeriodicalId":234540,"journal":{"name":"2019 IEEE Conference on Information and Communication Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130625140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of Multitasking in Divided Attention using Machine Learning","authors":"Bhanu Pratap Singh Bankoti, C. Gupta, O. Bandyopadhyay, Mallika Banerjee","doi":"10.1109/CICT48419.2019.9066233","DOIUrl":"https://doi.org/10.1109/CICT48419.2019.9066233","url":null,"abstract":"Evaluation of cognitive functionality plays an important role in the career choices of students as well as in the selection of employees by employers. Divided attention is one such cognitive ability; it deals with the allocation of attention to multiple tasks simultaneously. An accurate analysis of divided attention would help identify cognitive decline, and also provides a quantifiable indicator of a salient feature, namely vigilance, which is highly relevant for defence personnel as well as for pilots and operators of air, water and road vehicles. Close observation of divided attention in the home or classroom environment is an essential component of the early detection of cognitive problems; it also helps in assessing the effectiveness of learning patterns. This work proposes a new method to determine relative divided-attention ability through unobtrusive monitoring of the use of a software game. The process measures the performance of a user (a college student) on multi-task cognitive software by computing scores as part of the test for divided attention. This metric indicates the user's ability to multitask under divided attention, i.e., whether the user is efficiently paying attention to all the tasks at once, or is primarily attending to one task at a time (sacrificing optimal performance). The data set is labelled based on statistical analysis. After classifying the data using a machine learning model (random forest), the academic performance of the user is analysed against the divided attention levels to establish a correlation between them.","PeriodicalId":234540,"journal":{"name":"2019 IEEE Conference on Information and Communication Technology","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122640919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}