{"title":"Grid based routing in cognitive radio networks for concurrent communication","authors":"N. Dutta, Z. Pólkowski, C. Savulescu","doi":"10.1109/IC3.2016.7880194","DOIUrl":"https://doi.org/10.1109/IC3.2016.7880194","url":null,"abstract":"Cognitive Radio Network (CRN) exploits the spectrum hole in licensed bands by allowing co located cognitive users to communicate among themselves when licensed users are idle. In such opportunistic communication, the cognitive users need to vacate the channel as and when the licensed user becomes active. Most of the time such sudden termination of ongoing sessions of cognitive devices leads to unacceptable overhead and degrades service quality. So, in this paper a new routing protocol is proposed which emphasizes on concurrent communication of cognitive users rather than opportunistic communication. In concurrent communication cognitive devices are allowed to communicate even in the presence of licensed users activity by controlling the transmission power of cognitive devices such that they do not interfere with the ongoing conversation of licensed users. The algorithm is based on the full location information with the assumption that each of the cognitive device can detect the location information of its own and any other cognitive device in the network. All cognitive devices are arranged into grid cells and every grid selects a grid Head (gH) based on some predefined criteria who takes care of routing. Further, the protocol also suggests a routing zone and forwarding region based on the location of the destination in order to reduce routing overhead and faster route establishment. 
The simulated results of the protocol is compared with CAODV protocol and observed an improvement in packet delivery ratio and routing overhead.","PeriodicalId":294210,"journal":{"name":"2016 Ninth International Conference on Contemporary Computing (IC3)","volume":"28 17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116727764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Product review based on optimized facial expression detection","authors":"Vikrant Chaugule, D. Abhishek, Aadheeshwar Vijayakumar, P. B. Ramteke, S. Koolagudi","doi":"10.1109/IC3.2016.7880213","DOIUrl":"https://doi.org/10.1109/IC3.2016.7880213","url":null,"abstract":"This paper proposes a method to review public acceptance of products based on their brand by analyzing the facial expression of the customer intending to buy the product from a supermarket or hypermarket. In such cases, facial expression recognition plays a significant role in product review. Here, facial expression detection is performed by extracting feature points using a modified Harris algorithm. The modified Harris algorithm reduced the time complexity of the existing feature extraction Harris Algorithm. A comparison of time complexities of existing algorithms is done with proposed algorithm. The algorithm proved to be significantly faster and nearly accurate for the needed application by reducing the time complexity for corner points detection.","PeriodicalId":294210,"journal":{"name":"2016 Ninth International Conference on Contemporary Computing (IC3)","volume":"170 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115633859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visheshagya: Time based expertise model for bug report assignment","authors":"Anjali Goyal, Devina Mohan, Neetu Sardana","doi":"10.1109/IC3.2016.7880218","DOIUrl":"https://doi.org/10.1109/IC3.2016.7880218","url":null,"abstract":"The brisk escalation in scale of software systems has made bug triaging an imperative step in bug fixing process. A huge amount of bug reports is submitted daily to bug tracking repositories. Although this practice assists in building a reliable and error-free software product but handling a large amount of work becomes challenging. Bug assignment, an essential step in bug triaging, is the process of designating a suitable developer for the bug report who could make code changes in order to fix the bug. Various approaches ranging from semi to fully automatic bug assignment are proposed in literature. These approaches are mostly based on machine learning and information retrieval techniques. Since the information retrieval based activity profiling approach achieves higher accuracy, they are more often used in recent studies. Time factor based normalization in activity profiling could play a vital role in analyzing the level of expertise (or knowledge) of developers as the knowledge decays with time. This paper proposes a time oriented expertise model, Visheshagya, which utilizes the meta-fields of bug reports for developer selection. The proposed technique is used to prioritize the developers actively participating in software bug repository on the basis of their current knowledge. The proposed approach has been validated on two popular projects of Bugzilla repository, Mozilla and Eclipse. 
The result shows that time based activity profiling of developers outperforms existing information retrieval based bug report assignment and achieves an improvement of 14.3% and 9.95% in the accuracy of top-10 list size in Mozilla and Eclipse projects respectively.","PeriodicalId":294210,"journal":{"name":"2016 Ninth International Conference on Contemporary Computing (IC3)","volume":"262 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130694732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new method for optic disc localization in retinal images","authors":"Manish Kumar Aggarwal, V. Khare","doi":"10.1109/IC3.2016.7880263","DOIUrl":"https://doi.org/10.1109/IC3.2016.7880263","url":null,"abstract":"In this paper, a new method for localizing optic disc (OD) in retinal images has been proposed. Optic disc is an anatomical feature of retinal image which is considered as a first step for blood vessel segmentation and behaves as a land mark for retinal disease identification. Optic disc is a bright portion of retinal image and share similar properties with early pathology like exudates of Diabetic retinopathy disease. Thus, it is difficult to locate in retinal image due to the presence of similar pathology and non-uniform illumination in retinal image. In proposed approach pixel intensity is used as primary component and only bright pixel are considered for optic disc detection. Secondly, we have assumed that OD is a unique region in the retinal image having blood vessels converges on it. Therefore the optic disc can be localized by correlating every pixel value in the test image with a similar template having an OD at the centre. Finally a blood vessel map is extracted using large window sized median filter. The location of optic disc obtained is verified by the presence of blood vessel pixels in 10×10 window of the optic disc. 
The result of the proposed method is verified on two standard publicly available DRIVE [13] and DIARETDB1 [15] datasets comprises of 129 images, out of which optic disc in 126 retinal images is detected accurately.","PeriodicalId":294210,"journal":{"name":"2016 Ninth International Conference on Contemporary Computing (IC3)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125572604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A simulation annealing based anti-collision protocol for RFID tag identification","authors":"Adarsh Kumar, M. Goyal, K. Rajalakshmi, A. Aggarwal","doi":"10.1109/IC3.2016.7880243","DOIUrl":"https://doi.org/10.1109/IC3.2016.7880243","url":null,"abstract":"The advanced properties like low cost, increased communication range, own battery source etc. preferred Radio frequency identification (RFID) over bar codes for object identification and tracking. In RFID systems, reader-to-reader, tag-to-tag, reader-to-tag collisions are major challenges during object identification. The performance analysis of state-of-art protocols shows that the majority of probabilistic and deterministic algorithms are inefficient. In this work, the proposed protocol uses simulated annealing mechanism which concentrates on parameters of physical state of the RFID readers while selecting the reader for interrogation. If readers with best physical state are selected for interrogation then chances of better network performance also increases. The physical state parameters include energy efficiency, mobility, distance to tag etc. The simulation analysis of proposed scheme shows better performance with parameters like tag identification ratio, total successful interrogation cycles, total time cycles, total IDLE cycles, total number of collisions, delay, and total number of packets sent and received. 
It is also observed that a collision free RFID network is possible with integration of simulated annealing mechanism.","PeriodicalId":294210,"journal":{"name":"2016 Ninth International Conference on Contemporary Computing (IC3)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121854008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attitudinal data based server job scheduling using genetic algorithms: Client-centric job scheduling for single threaded servers","authors":"M. Chawla, Kriti Singh, C. Kumar","doi":"10.1109/IC3.2016.7880230","DOIUrl":"https://doi.org/10.1109/IC3.2016.7880230","url":null,"abstract":"With the evolution of web applications towards real time setups, like web sockets, the need for more efficient and setup-specific scheduling techniques escalates. Thus, an effective task scheduling mechanism becomes the prime necessity for any server handling large number of requests concurrently. The existing task scheduling algorithms do not suffice this purpose due to their focus on being generic and minimizing the execution time, while failing to use the characteristics of the system architecture and job profiles with the associated user-agent utility. Additionally, the traditional setups also fail to exploit the knowledge derived from the job profiling, considering the fact they are designed to be more generic. In order to come up with a comprehensive and efficient mechanism, this paper focuses primarily on task scheduling for single threaded environments. Moreover, the proposed scheduling mechanism is driven by Genetic Algorithms (GA), while taking the profile(s) of requests to be served by the targeted server and then using the derived knowledge for enhancing the performance. The experimental results show that the performance of the target server (in terms of efficiency) improves, along with optimizing the client user-agent utility. 
Furthermore, the proposed model demonstrated significant improvement in user-agent utility while maintaining a viably constant time for computing the same under carefully designed test runs capable of being conclusive by providing enough coverage.","PeriodicalId":294210,"journal":{"name":"2016 Ninth International Conference on Contemporary Computing (IC3)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122748214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CDIA-DS: A framework for efficient reconstruction of compound document image using data structure","authors":"Anand Gupta, Devendra Tiwari, Priyanshi Gupta, Ankit Kulshreshtha","doi":"10.1109/IC3.2016.7880258","DOIUrl":"https://doi.org/10.1109/IC3.2016.7880258","url":null,"abstract":"With the advancement of image acquisition technology, extensive research is being conducted to convert images of paper documentation into an editable electronic format. Various techniques have been developed to extract either Text, Table or Figure region in a document image. However, our finding from past research suggests that these techniques do not deal with documents containing a combination of two or more such regions. Moreover, we believe that in order to facilitate document recreation, the extracted information requires organization in terms of its semantic layout and formatting. Therefore, we advocate the need of a combined technique for extracting each of these regions and need of structuring the extracted information efficiently. In this paper, we propose an efficient two-stage framework CDIA-DS (Compound Document Image Analysis-Data Structure) to cater the aforementioned needs. In the first stage, the regions in document image are identified, and classified in the form of Views (Text/Table/Figure). Views are then organized in the second stage through the proposed tree based structure comprising of leaf and parent nodes in the form of Views and Layouts (arrangement of one or more Views) respectively. 
In the end experiments are done, to examine the efficiency of CDIA-DS using the proposed data structure.","PeriodicalId":294210,"journal":{"name":"2016 Ninth International Conference on Contemporary Computing (IC3)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122828949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Framework to extract context vectors from unstructured data using big data analytics","authors":"Tanvir Ahmad, R. Ahmad, Sarah Masud, Farheen Nilofer","doi":"10.1109/IC3.2016.7880229","DOIUrl":"https://doi.org/10.1109/IC3.2016.7880229","url":null,"abstract":"When multiple terms in the query point to a single concept, the solution is easy to map. But, when many morphologically similar terms refer to separate concepts (showing fuzzy behavior), then arriving at a solution becomes difficult. Before applying any knowledge generation or representation techniques to such polysemic words, word sense disambiguation becomes imperative. Unfortunately, with an exponential increase in data, the process of information extraction becomes difficult. For text data this information is represented in form of context vectors. But, the generation of context vectors is limited by the memory heap and RAM of traditional systems. The aim of this study is to examine and propose a framework for computing context vectors of large dimensions over Big Data, trying to overcome the bottleneck of traditional systems. The proposed framework is based on set of mappers and reducers, implemented on Apache Hadoop. With increase in the size of the input dataset, the dimensions of the related concepts (in form of resultant matrix) increases beyond the capacity of a single system. This bottleneck of handling large dimensions is resolved by clustering. 
As observed from the study, transition from a single system to a distributed system ensures that the process of information extraction runs smoothly, even with an increase in data.","PeriodicalId":294210,"journal":{"name":"2016 Ninth International Conference on Contemporary Computing (IC3)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114387132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An ontology based approach for formal modeling of structural design patterns","authors":"A. Dwivedi, Anand Tirkey, S. K. Rath","doi":"10.1109/IC3.2016.7880260","DOIUrl":"https://doi.org/10.1109/IC3.2016.7880260","url":null,"abstract":"The present day, software are becoming too much complex by nature. Hence there is a need for developing the right solution of complex requirements. To solve various design problems, a number of tools and techniques are available, one of them is known as design pattern, which helps to find a better solution for the problems, which are recurring in nature. Generally, the notation of design pattern is often described using UML (Unified Modeling Language), which is semi-formal in nature. The UML notation may lead to ambiguities and inconsistencies. In this study, attempts have been made to model the logics behind of structural design patterns using Web Ontology Language (OWL). For consistency checking of the OWL notation, an ontology reasoner i.e., Pellet has been considered. In order to analyze the composition of design patterns, description logic has been applied. For the evaluation of the presented study, an example i.e., online banking system is used.","PeriodicalId":294210,"journal":{"name":"2016 Ninth International Conference on Contemporary Computing (IC3)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125701615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulation and cost analysis of group authentication protocols","authors":"Adarsh Kumar, K. Gopal, A. Aggarwal","doi":"10.1109/IC3.2016.7880249","DOIUrl":"https://doi.org/10.1109/IC3.2016.7880249","url":null,"abstract":"Radio Frequency Identification (RFID) is an efficient technology for identification, tracking and group proof construction. The multi-round protocols for authentication and group proof construction increase the cost with increase in participants. In this work, computational and communication cost of multi-round protocol is calculated to identify the protocol with least cost and high security. The computational cost is computed using number of rounds and messages, Message Authentication Code (MACO) operations, messages sent per participant and messages received per participant. The communication cost is computed using size of message sent and size of message received. In order to reduce the computational and communication cost, two lightweight group authentication protocols are selected for refinement. The protocols are: Juel's protocol, and Saito and Sakurai protocol. Three refinements are proposed which convert these protocols from two-party group construction to n-party group construction. Results show that refinements reduce the cost compared to Juel's protocol and Saito and Sakurai's protocol. It is observed that high security in group proof construction is still infeasible, thus if security is required to be maximum then multi-round protocol should be preferred. In this work, a multi-round authentication protocol of [1] is analyzed for comparative security analysis. 
Simulation analysis shows that performance of proposed authentication protocol in multi-round category as well as single-round category is better than state-of-art protocols.","PeriodicalId":294210,"journal":{"name":"2016 Ninth International Conference on Contemporary Computing (IC3)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133285321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}