2016 5th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO): Latest Publications

Prototype of an intelligent system based on RFID and GPS technologies for women safety
S. M. Hussain, Shaikh Azeemuddin Nizamuddin, Rolito Asuncion, Chandrashekar Ramaiah, A. Singh
DOI: 10.1109/ICRITO.2016.7784986
Abstract: Security for women has become a major issue in many countries. Survey results show that around 25,000 crimes against women are booked across India every year, and over the last ten years the statistics on abuse and sexual harassment of women have been steadily rising. It has become essential to develop a solution that protects women from becoming victims and reduces such attacks. The main objective of this paper is to design and implement a highly reliable system for protecting women from harassment. We develop an intelligent women-safety system using Radio Frequency Identification (RFID) and the Global Positioning System (GPS). The main idea is to use an active RFID tag with a passive RFID reader to scan the information, which is then transferred to an AT89C52 microcontroller whose database stores the contacts of four to five people. Once the controller receives the information, it sends a message to those contacts through a GSM module, and the location is tracked through GPS. The simulation is done in ISIS Proteus.
Citations: 21
A new hybrid approch for palm print recognition in PCA based palm print recognition system
Shivkant Kaushik, Rajendra Singh
DOI: 10.1109/ICRITO.2016.7784958
Abstract: The major issue in identifying a palm print is searching the palm print database for the template that best matches the input test sample. The fundamental problem to be solved is selecting the palm features to be matched: features that discriminate palm prints from each other must show large divergence between samples of different users and small divergence between samples of the same user. For verification, principal lines and datum points have been used successfully as important palm print features; other features associated with a palm print include delta-point features, geometry features, minutiae features and wrinkle features. Building on existing techniques, we propose a distinct scheme that facilitates dynamic selection of the palm print pattern to match by combining various global and local features of a palm print in a hierarchical way. Our palm print matching system operates in two steps: user enrollment and user verification. In the enrollment step, several palm print samples are obtained from a user and stored as templates in the system; a palm print scanner captures the samples, which pass through preprocessing and feature extraction to create the templates stored in a predefined palm print database. In the verification step, the scanner captures a fresh palm print sample of the user, which again passes through preprocessing and feature extraction, and the extracted features are compared with the existing templates in the database to verify the user's identity. In this paper, we propose a hybrid approach for palm print recognition using a combination of three different image-processing approaches.
Citations: 9
Improvising the effectiveness of test suites using differential evolution technique
Shilpi, Karambir
DOI: 10.1109/ICRITO.2016.7784924
Abstract: Testing any software system is an arduous task that consumes a great deal of effort and is also expensive. The effort and time required for adequate and effective testing grow as the software becomes more complex, which can blow out the project budget, leave some test cases uncovered, or delay completion. A suitably generated test suite not only locates errors but also helps reduce the cost of the testing process. This paper applies an optimization technique called Differential Evolution to improve the effectiveness of test cases using the Average Percentage of Fault Detection (APFD) metric; APFD is taken as the fitness function to be optimized. We compare our approach with other existing prioritization approaches, and experimental computations show that the Differential Evolution technique achieves better APFD values than the other techniques.
Citations: 1
Improvement and analys security of WSN from passive attack
Gagandeep Kaur, Deepali, R. Kalra
DOI: 10.1109/ICRITO.2016.7784992
Abstract: Sensor nodes collect data from the environment and send it to a sink, but attackers can corrupt data in transit, so data security is a main concern of wireless sensor networks (WSN). In the proposed protocol, we reduce the passive attack on the sink node by decreasing the traffic at the sink. Simulation results demonstrate the proposed method, in which each node compresses its data before sending it to the cluster head; after compression the packet size decreases, which reduces the traffic overload. In this compression technique, we reduce the packet size by creating a code string of 0s and 1s.
Citations: 12
Sentimental analysis of social media using R language and Hadoop: Rhadoop
Sunny Kumar, Paramjeet Singh, S. Rani
DOI: 10.1109/ICRITO.2016.7784953
Abstract: The growth of World Wide Web technology has changed the way people express their views, opinions and sentiments about others, mostly through blogs, social sites and online discussions. This generates a massive amount of data, and gleaning information from such massive stores is a big challenge for companies today. This paper performs sentiment analysis of Twitter data using the R language, collecting sentiment information in the form of a positive score, a negative score, or something in between. We then analyze tweet data of terabyte scale, i.e. big data, using the R language and the RHadoop connector. The central problem is performance: extracting information from petabytes of data requires the analysis to scale. This paper presents a performance estimation on two platforms, the R language and the Rhadoop tool.
Citations: 15
Self-adaptive ontology-based focused crawling: A literature survey
Mohd. Aamir Khan, D. Sharma
DOI: 10.1109/ICRITO.2016.7785024
Abstract: Web crawlers have existed since the birth of the internet around 1990: web pages are interconnected, forming paths along which a crawler travels to fetch the information requested by the user. Traditional crawlers, however, cannot distinguish relevant from partially relevant web pages, so they fetch a huge amount of data even when much of it is not relevant to the user. This led to crawlers committed to a single topic given by the user, known as focused crawlers. Focused crawlers do not crawl the whole web as traditional crawlers do; they crawl only the part of the web related to the given topic. This paper summarizes the qualities of various current focused crawlers and divides them into two classes: semantic and social semantic. Semantic focused crawlers use an ontology to obtain topics that are contextually related to the given topic. Social semantic focused crawlers take advantage of social networking sites to obtain web pages that are contextually related to the given topic, usually pages shared by people with an interest in topics related to the queried topic.
Citations: 7
Generalized classification rules for entity identification
Umesh S. Bhoskar, Arati Manjaramkar
DOI: 10.1109/ICRITO.2016.7784951
Abstract: One of the essential tasks in data integration is entity resolution (ER), which recognizes records belonging to the same entity. Entity resolution is referred to by many other terms, such as duplicate detection and pattern matching. Nowadays, activities such as information integration, information retrieval, crowdsourcing and pay-as-you-go involve users in ER tasks, for example identifying whether two entity descriptions refer to the same entity. Previous ER work involves clustering and comparison approaches based on certain assumptions, and ER quality is poorer when those assumptions do not hold. In our approach, we present a new set of entity rules, where each rule enumerates the possibilities for identifying the correct entity of a record. Additionally, we propose an extended approach (GenR) for efficient and effective rule generation using a specialized form of term-based entropy measure. We experimentally evaluated the proposed approach on a data set with a large number of records and on data sets with different data characteristics, and we report promising empirical results that demonstrate the performance improvement from using a term-based quality measure.
Citations: 0
Binarization techniques for degraded document images — A review
Jyotsna, Shivani Chauhan, Ekta Sharma, Amit Doegar
DOI: 10.1109/ICRITO.2016.7784945
Abstract: Document image binarization is the segmentation of a document into foreground text and background. It is done to obtain clear images from which text can be retrieved easily; thresholding is used for the segmentation of document images. This paper presents a review of various document image binarization techniques and explains the metrics used to evaluate them. The performance of the binarization techniques is compared using metrics such as PSNR, F-Measure, NRM and MPM, evaluated on the DIBCO-2009 and DIBCO-2010 datasets.
Citations: 14
Recent developments in searching over encrypted cloud data
Sneha A. Mittal, C. Krishna
DOI: 10.1109/ICRITO.2016.7784977
Abstract: Cloud storage is currently one of the most widely used applications of the cloud. As cloud usage increases, critical and personal data is also being outsourced, making it important to maintain the confidentiality and integrity of this data. A basic way of protecting data is to encrypt it before outsourcing, but retrieving the required files from the encrypted cloud then becomes a problem that requires searching over the encrypted data. Various schemes have been proposed to address searching over encrypted cloud data, and work continues toward providing an optimal user search experience resembling plaintext search. This paper reviews research in this field, ranging from single-keyword to multi-keyword search, forward indexing to reverse indexing, and disjunctive to conjunctive multi-keyword search, with the target of making search over encrypted data resemble the plaintext search experience (such as "Google Search").
Citations: 0
A novel lower ultra wideband (UWB) compact planar Inverted-F Antenna for WBAN applications
Ritika Bansal, Jagriti Bhatia, A. Batth, H. Saini, N. Kumar
DOI: 10.1109/ICRITO.2016.7785000
Abstract: This paper proposes an ultra-wideband (UWB) planar inverted-F antenna (PIFA) for body area network (BAN) applications. A tapered feed is introduced to overcome the narrowband characteristics of the PIFA, and two shorting pins are placed in the middle of the patch to enhance bandwidth performance. The proposed antenna has a low-profile structure with an overall size of 32 mm × 36 mm × 8 mm and hence can easily be deployed in medical devices. The antenna is designed on Rogers Duroid 5880 and provides wideband coverage of 2.40 GHz to 5.8 GHz, which covers the lower UWB band. Good radiation performance and gain patterns are observed for the proposed antenna.
Citations: 6