{"title":"Determining \"OFF\" Time Duration of FPGA based GPS Tracking System","authors":"A. Anil, M. K, T. Panigrahi, Ankit Dubey","doi":"10.1109/INFOCOMTECH.2018.8722403","DOIUrl":"https://doi.org/10.1109/INFOCOMTECH.2018.8722403","url":null,"abstract":"In the world of configurable computing, field programmable gate arrays (FPGAs) play a prominent role. FPGAs make it possible to implement custom hardware functions using prebuilt logic blocks and programmable routing resources, and they combine the best aspects of both ASICs and processor-based systems. With the tremendous increase in FPGA applications, the need to secure the data they produce is immensely high. One key security capability that current FPGAs in the industry lack is determining the OFF-time duration. This is a major threat to the data produced by FPGAs in time-sensitive applications: if the OFF-time duration of the FPGA is not established, the data it produces may be obsolete or of little use without some pre-processing. Larger designs implemented on FPGAs are likely to have multiple clocks running on different paths, and because FPGAs place virtually no limit on clock generation, designers can create as many clocks as needed with the help of PLLs or DCMs. Determining the OFF time is therefore difficult and challenging. To tackle this problem, this paper presents a solution for determining the OFF duration of an FPGA with the help of an Arduino. One such application, a GPS tracking system, is implemented, and to improve its reliability the measured OFF time is verified against a real-time clock to detect whether the system was intentionally powered off.","PeriodicalId":175757,"journal":{"name":"2018 Conference on Information and Communication Technology (CICT)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114346232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing Code Smell Removal Sequences for Enhanced Software Maintainability","authors":"Yukti Mehta, Paramvir Singh, A. Sureka","doi":"10.1109/INFOCOMTECH.2018.8722418","DOIUrl":"https://doi.org/10.1109/INFOCOMTECH.2018.8722418","url":null,"abstract":"Code smells are surface indications that affect the maintainability of software. They degrade maintainability by starting a chain reaction of breakages in dependent modules, which makes the code difficult to read and modify. Applying appropriate refactoring sequences by prioritizing classes to obtain maintainable software is a tedious process for developers working under strict project deadlines. Recent research has explored various ways of ranking classes to improve maintainability. This work empirically investigates the impact of eliminating three prominent code smells by considering their six possible removal orderings. Our work prioritizes the object-oriented classes in the code that are in need of refactoring. To prioritize refactoring-prone classes, a proposed metric, the maintainability complexity index, is calculated using the maintainability index and relative logical complexity as inputs. The study outcomes report the values of maintainability-predicting metrics for each permutation of the code smell removal sequence. The work also aims to identify the sequence that yields software with maximum maintainability, so that developers and researchers can save effort and time in producing high-quality software.","PeriodicalId":175757,"journal":{"name":"2018 Conference on Information and Communication Technology (CICT)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132762671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Weed Detection in Farm Crops using Parallel Image Processing","authors":"S. Umamaheswari, R. Arjun, D. Meganathan","doi":"10.1109/INFOCOMTECH.2018.8722369","DOIUrl":"https://doi.org/10.1109/INFOCOMTECH.2018.8722369","url":null,"abstract":"The human community is increasingly aware of the environmental issues caused by pesticides and fertilizers used in agriculture, while agricultural producers face an ever-growing demand for food. IoT-based precision agriculture has evolved to reduce these environmental issues and address food security. Precision agriculture not only reduces cost and waste, but also improves productivity and quality. We propose a system to detect and locate weed plants among cultivated farm crops based on captured images of the farm, and to enhance its performance using parallel processing on a GPU so that it can be used in real time. The proposed system takes a real-time image of the farm as input for classification and detects the type and location of weeds in the image. The system is trained with images of crops and weeds under a deep learning framework that includes feature extraction and classification. The results can be used by automated weed detection systems for tasks in precision agriculture.","PeriodicalId":175757,"journal":{"name":"2018 Conference on Information and Communication Technology (CICT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130206287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optic Disc Segmentation using Vessel In-painting and Random Walk Algorithm","authors":"Neha Gour, P. Khanna","doi":"10.1109/infocomtech.2018.8722374","DOIUrl":"https://doi.org/10.1109/infocomtech.2018.8722374","url":null,"abstract":"Optic disc segmentation in fundus images is a fundamental step for the detection of retinal diseases such as glaucoma. Glaucoma affects the parts of the retina inside and around the optic disc, leading to the manifestation of various structural abnormalities. The work proposed in this paper presents an efficient optic disc segmentation methodology using the random walk algorithm, which divides the image into foreground and background regions based on initial seeds. The optic disc is segmented using a random walk with weights calculated from the color similarity and dissimilarity among neighboring pixels. The proposed method is tested on fundus images of the publicly available Drishti-GS1 database. The final performance is evaluated with respect to precision, sensitivity, specificity, F-score, Jaccard, Dice, and mean absolute distance measures, and compared with other optic disc segmentation approaches presented in the literature.","PeriodicalId":175757,"journal":{"name":"2018 Conference on Information and Communication Technology (CICT)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122041096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low Rank Representation based Discriminative Multi Manifold Analysis for Low-Resolution Face Recognition","authors":"Yashwanth Kumar Mydam, Shyam Singh Rajput, P. Chanak","doi":"10.1109/INFOCOMTECH.2018.8722393","DOIUrl":"https://doi.org/10.1109/INFOCOMTECH.2018.8722393","url":null,"abstract":"Practical face recognition algorithms are often faced with the problem of low-resolution probe images. Face images taken by monitoring cameras tend to be low-resolution (LR) and are further degraded by unconstrained poses, noise, lighting conditions, and occlusion. In this paper, we introduce a low-rank matrix mechanism for matching occluded or feature-deficient LR face images to a gallery of high-resolution (HR) face image representations. Previous research on matching an LR probe to a set of HR gallery images introduced a training-based super-resolution approach that transforms LR and HR face images into a common discriminant feature space (CDFS) for recognition. To recognize LR images affected by noise and occlusion, we present a low-rank matrix recovery system that combines robust principal component analysis (RPCA) with coupled discriminant multi-manifold analysis (CDMMA). Using RPCA, we recover a low-rank matrix from highly corrupted measurements for better representation ability, and then apply CDMMA in a supervised manner to strengthen the discriminative features used for recognition. Finally, a standard classification method is employed for identification.","PeriodicalId":175757,"journal":{"name":"2018 Conference on Information and Communication Technology (CICT)","volume":"197 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126052849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hiding the Secret Image Using Two Cover Images for Enhancing the Robustness of the Stego Image Using Haar DWT and LSB Techniques","authors":"Shrikant P Mudnur, Satish Raj Goyal, K. Jariwala, Warish D. Patel, Bhupendra Ramani","doi":"10.1109/INFOCOMTECH.2018.8722352","DOIUrl":"https://doi.org/10.1109/INFOCOMTECH.2018.8722352","url":null,"abstract":"Steganography techniques play a vital role in hiding secret information and hence in securing data during transmission and storage. In this paper, one of the two cover images is first chosen and the secret image is hidden in it using the LSB technique; an exclusive-OR (XOR) operation is performed on the LSB bits using the MSB bits for better security. The other cover image is then chosen, and the previously constructed stego image is embedded in it using the Haar wavelet transform. The Haar DWT decomposes the image into an approximation band (LL) and detail bands: horizontal (LH), vertical (HL), and diagonal (HH). The detail bands carry less of the original signal energy when the discrete wavelet transform is applied, so their coefficients are used to hide the secret information. The quality of the stego image is analysed using the PSNR value.","PeriodicalId":175757,"journal":{"name":"2018 Conference on Information and Communication Technology (CICT)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114853767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Power Optimization of Cell Free massive MIMO with Zero-forcing Beamforming Technique","authors":"Subodh Chand Tripathi, A. Trivedi, Shweta Rajoria","doi":"10.1109/INFOCOMTECH.2018.8722368","DOIUrl":"https://doi.org/10.1109/INFOCOMTECH.2018.8722368","url":null,"abstract":"Cell-free massive MIMO, comprising a very large number of distributed access points (APs), is a promising technology for providing high data rates, spectral efficiency (SE), and energy efficiency (EE). This paper considers cell-free massive MIMO with zero-forcing (ZF) beamforming, which is adopted because it is free from self-interference. The power control coefficients and the access point power are jointly optimized to improve the signal-to-interference-plus-noise ratio (SINR) and the rate of the system. The original problem is decomposed into two sub-problems, each of which is optimized individually, and the Lagrange method is applied to solve the optimization problem efficiently. It is observed that the maximum SINR and the downlink rate improve considerably.","PeriodicalId":175757,"journal":{"name":"2018 Conference on Information and Communication Technology (CICT)","volume":"47 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122052967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CICT 2018 Sponsors","authors":"","doi":"10.1109/infocomtech.2018.8722361","DOIUrl":"https://doi.org/10.1109/infocomtech.2018.8722361","url":null,"abstract":"","PeriodicalId":175757,"journal":{"name":"2018 Conference on Information and Communication Technology (CICT)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122932372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Drishtikon: An advanced navigational aid system for visually impaired people","authors":"Shashank Kotyan, Nishant Kumar, P. Sahu, Venkanna Udutalapally","doi":"10.1109/infocomtech.2018.8722376","DOIUrl":"https://doi.org/10.1109/infocomtech.2018.8722376","url":null,"abstract":"Today, most aid systems deployed for visually impaired people are built for a single purpose, be it navigation, object detection, or distance perception. Moreover, most deployed aid systems rely on indoor navigation, which requires prior knowledge of the environment. These aid systems often fail to help visually impaired people in unfamiliar scenarios. In this paper, we propose an aid system that uses object detection and depth perception to guide a person without colliding with objects. The developed prototype detects 90 different types of objects and computes their distances from the user. We also implemented a navigation feature that takes the target destination as input from the user and navigates the impaired person to his/her destination using the Google Directions API. With this system, we built a multi-feature, high-accuracy navigational aid that can be deployed in the wild and help visually impaired people in their daily lives by navigating them effortlessly to their desired destinations.","PeriodicalId":175757,"journal":{"name":"2018 Conference on Information and Communication Technology (CICT)","volume":"2004 38","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132968867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Glioma identification from brain MRI using superpixels and FCM clustering","authors":"N. Gupta, Shiwangi Mishra, P. Khanna","doi":"10.1109/INFOCOMTECH.2018.8722405","DOIUrl":"https://doi.org/10.1109/INFOCOMTECH.2018.8722405","url":null,"abstract":"This work presents a superpixel-based computer aided diagnosis (CAD) system for brain tumor segmentation, classification, and identification of glioma tumors. It utilizes superpixel and fuzzy c-means clustering concepts for tumor segmentation. First, dataset images are preprocessed with anisotropic diffusion and a dynamic stochastic resonance-based enhancement technique, and are then segmented using the proposed approach. Features based on the run length of centralized patterns are extracted from the segmented regions and classified with a naive Bayes classifier. The performance of the system is examined on two brain magnetic resonance imaging datasets for segmentation and identification of glioma tumors. Accuracy for tumor detection is observed to be 99.89% on the JMCD dataset and 100% on the BRATS dataset. For glioma identification, average accuracies of 97.94% and 98.67% are observed on the JMCD and BRATS datasets, respectively. The robustness of the system is examined by 10-fold cross-validation and statistical testing. Outcomes are also verified by domain experts in real time.","PeriodicalId":175757,"journal":{"name":"2018 Conference on Information and Communication Technology (CICT)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131183753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}