2014 International Conference on Recent Trends in Information Technology: Latest Publications

Efficient host based intrusion detection system using Partial Decision Tree and Correlation feature selection algorithm
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996115
Authors: F. Lydia Catherine, Ravi Pathak, V. Vaidehi
Abstract: System security has become a significant issue in many organizations. Attacks such as DoS, U2R, R2L and Probing pose a serious threat to the proper operation of Internet services as well as of host systems. In recent years, intrusion detection systems have been designed to stop intruders in both host and network systems. Existing host-based intrusion detection systems detect intrusions using the complete feature set and are not fast enough to detect attacks. To overcome this problem, this paper proposes an efficient HIDS, the Correlation-based Partial Decision Tree algorithm (CPDT), which combines Correlation-based Feature Selection for selecting features with the Partial Decision Tree (PART) classifier for separating normal from abnormal packets. The algorithm has been implemented and validated on the KDD'99 dataset, where it gives better results than existing algorithms, reaching an accuracy of 99.9458%.
Citations: 11
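
The CPDT entry above pairs correlation-based feature selection (CFS) with the PART rule learner. As an illustrative sketch only, not the authors' code, the following Python reproduces the CFS merit heuristic with a greedy forward search and uses scikit-learn's plain decision tree as a stand-in for PART (which has no standard Python implementation); the data are random stand-ins for KDD'99.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cfs_merit(X, y, subset):
    """CFS merit: k*r_cf / sqrt(k + k*(k-1)*r_ff) for a feature subset."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward_select(X, y, max_features=10):
    """Greedy forward search: add the feature that most improves merit."""
    selected, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining and len(selected) < max_features:
        merit, j = max((cfs_merit(X, y, selected + [j]), j) for j in remaining)
        if merit <= best:          # stop once merit no longer improves
            break
        best = merit
        selected.append(j)
        remaining.remove(j)
    return selected

# Synthetic stand-in for KDD'99: two informative features among twenty.
rng = np.random.default_rng(0)
X = rng.random((500, 20))
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)
features = cfs_forward_select(X, y)
clf = DecisionTreeClassifier(random_state=0).fit(X[:, features], y)
print("selected features:", features)
```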

Efficient fingerprint lookup using Prefix Indexing Tablet
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996158
Authors: D. Priyadharshini, J. Angelina, K. Sundarakantham, S. Shalinie
Abstract: Backups protect file systems against disk and other hardware failures, software errors that corrupt the file system, and natural disasters. However, a single file may be present as multiple copies in the file system, so the search time needed to find and eliminate redundant data is high, and the redundant data consume extra storage space. Data de-duplication techniques address these issues, and fingerprint lookup is a key ingredient of efficient de-duplication. This paper proposes an efficient fingerprint lookup technique called Prefix Indexing Tablets, in which lookup is performed only on the necessary tablets; to further reduce lookup delay, only the prefix of the fingerprint is considered. Experiments on standard datasets show that the lookup latency of the proposed de-duplication method is reduced by 62% and the running time is improved.
Citations: 0
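
A minimal sketch of the prefix-indexing idea described above, under stated assumptions: fingerprints are partitioned into tablets keyed by a short prefix, so a lookup consults only one tablet. The prefix width and SHA-1 fingerprinting are illustrative choices, not taken from the paper.

```python
import hashlib

PREFIX_LEN = 4  # assumed prefix width; the paper does not fix a value here

class PrefixIndexingTablets:
    """Fingerprints partitioned into tablets keyed by their hex prefix."""

    def __init__(self):
        self.tablets = {}  # prefix -> set of full fingerprints

    def insert(self, chunk: bytes) -> bool:
        """Store a chunk's fingerprint; return False if it is a duplicate."""
        fp = hashlib.sha1(chunk).hexdigest()
        tablet = self.tablets.setdefault(fp[:PREFIX_LEN], set())
        if fp in tablet:
            return False   # duplicate found by touching a single tablet
        tablet.add(fp)
        return True

store = PrefixIndexingTablets()
print(store.insert(b"block-1"))  # True: new chunk
print(store.insert(b"block-1"))  # False: duplicate
```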

Nature-Inspired enhanced data deduplication for efficient cloud storage
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996211
Authors: G. Madhubala, R. Priyadharshini, P. Ranjitham, S. Baskaran
Abstract: Cloud computing is the delivery of computing as a service, specifically concerned with the storage of data, enabling ubiquitous, convenient access to shared resources provided to computers and other devices as a utility over a network. Storage, the key attribute, is hindered by the presence of redundant copies of data. Data deduplication is a specialized technique for data compression and duplicate detection that eliminates duplicate copies of data to make storage utilization efficient. Cloud service providers currently employ hashing to avoid redundant copies, but hashing has a few major pitfalls that can be overcome with a nature-inspired, genetic-programming approach to deduplication. Genetic programming is a systematic, domain-independent programming model that applies the ideas of biological evolution to complicated problems. A sequence-matching algorithm and Levenshtein's algorithm are used for text comparison, and genetic-programming concepts are then used to detect the closest match. The performance of these three algorithms and of the hashing technique is compared, and since the bio-inspired concepts prove more efficient, a nature-inspired approach to data deduplication in cloud storage is implemented.
Citations: 5
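
Levenshtein's algorithm, named in the abstract as one of the text-comparison methods, is the classic dynamic-programming edit distance; a compact reference implementation:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance via the classic two-row dynamic programme."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

Records whose distance falls below a chosen threshold would be treated as near-duplicate candidates; the genetic-programming matching stage the paper describes is not sketched here.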

An Enhanced Adaptive Scoring Job Scheduling algorithm for minimizing job failure in heterogeneous grid network
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996161
Authors: S. K. Aparnaa, K. Kousalya
Abstract: Grid computing involves sharing data storage and coordinating network resources. The complexity of scheduling increases with the heterogeneous nature of the grid, making effective scheduling highly difficult. The goal of grid job scheduling is to achieve high system performance while matching each job to an appropriate available resource. Owing to the dynamic nature of the grid, traditional scheduling algorithms such as First Come First Serve (FCFS) and First Come Last Serve (FCLS) do not adapt to the grid environment, and although many algorithms have been proposed to exploit the grid fully and schedule jobs efficiently, the existing ones do not consider the memory requirement of each cluster, one of the main resources for scheduling data-intensive jobs, so the job failure rate is very high. To solve this problem, an Enhanced Adaptive Scoring Job Scheduling algorithm is introduced. Jobs are classified as data-intensive or computation-intensive and scheduled accordingly, allocated by computing a Job Score (JS) together with the memory requirement of each cluster. Because resource status changes continually in a dynamic grid, the Job Score is recomputed each time and jobs are allocated to the most appropriate resources. The proposed algorithm minimizes the job failure rate and also reduces makespan.
Citations: 6
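
A hedged sketch of the scheduling idea (the abstract does not give the exact Job Score formula, so the weights and fields below are assumptions): clusters without enough free memory are filtered out first, which is the failure mode the algorithm targets, and the best-scoring remaining cluster wins.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    cpu_speed: float   # relative compute capacity
    bandwidth: float   # relative transfer capacity
    free_memory: int   # free memory in MB

def job_score(c: Cluster, data_intensive: bool) -> float:
    # Assumed weighting: data-intensive jobs favour bandwidth, others CPU.
    w_cpu, w_bw = (0.3, 0.7) if data_intensive else (0.7, 0.3)
    return w_cpu * c.cpu_speed + w_bw * c.bandwidth

def schedule(job_mem: int, data_intensive: bool, clusters):
    # Memory check first: clusters that cannot hold the job are excluded,
    # which is the job-failure mode the algorithm aims to avoid.
    eligible = [c for c in clusters if c.free_memory >= job_mem]
    if not eligible:
        raise RuntimeError("no cluster can hold the job")
    return max(eligible, key=lambda c: job_score(c, data_intensive))

clusters = [Cluster("A", 2.0, 1.0, 4096), Cluster("B", 1.0, 3.0, 8192)]
print(schedule(6000, True, clusters).name)  # "B": only B has the memory
```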

Towards secure audit services for outsourced data in cloud
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996214
Authors: Sumalatha M R, Hemalathaa S, Monika R, Ahila C
Abstract: The rapid growth of cloud computing introduces a myriad of security hazards for information and data. Data outsourcing relieves users of the burden of local data storage and maintenance, but it has security implications: a third-party service provider stores and maintains the cloud user's data, applications or infrastructure. Auditing methods and infrastructures therefore play an important role in cloud security strategies, and as the data and applications deployed in the cloud grow more sensitive, auditing systems that provide rapid analysis and quick responses become indispensable. In this work we provide a privacy-preserving data integrity protection mechanism that allows public auditing of cloud storage with the assistance of the data owner's identity, guaranteeing that the audit can be performed by a third party without fetching the entire data from the cloud. A data protection scheme is also outlined, providing a method for data to be encrypted in the cloud without loss of accessibility or functionality for authorized users.
Citations: 2
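
For flavour, a simplified challenge-response audit in the same spirit: the verifier keeps per-block tags and challenges a random sample of blocks, so it never fetches the whole file. This HMAC-based sketch is an illustration only; the paper's identity-assisted, privacy-preserving protocol is more involved, and the key name below is a hypothetical stand-in.

```python
import hmac, hashlib, random

KEY = b"owner-identity-derived-key"  # stand-in for the identity-based key

def tag(block: bytes, index: int) -> bytes:
    return hmac.new(KEY, index.to_bytes(8, "big") + block,
                    hashlib.sha256).digest()

blocks = [f"block-{i}".encode() for i in range(100)]   # data held in the cloud
tags = [tag(b, i) for i, b in enumerate(blocks)]       # kept by the verifier

def audit(fetch, n_challenges=10) -> bool:
    """Challenge a random sample of blocks instead of the whole file."""
    for i in random.sample(range(len(tags)), n_challenges):
        if not hmac.compare_digest(tag(fetch(i), i), tags[i]):
            return False
    return True

print(audit(lambda i: blocks[i]))  # True while the cloud returns intact blocks
```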

Automatic localization and segmentation of Optic Disc in retinal fundus images through image processing techniques
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996090
Authors: R. GeethaRamani, C. Dhanapackiam
Abstract: Optic Disc localization and extraction play a central role in the automatic analysis of retinal images. Ophthalmologists examine the Optic Disc to establish the presence or absence of retinal diseases such as glaucoma, diabetic retinopathy, occlusion, orbital lymphangioma, papilloedema, pituitary cancer and open-angle glaucoma. In this paper we localize and segment the Optic Disc region of retinal fundus images using a template matching method and morphological procedures. The optic nerve head lies in the brightest region of the retinal image and serves as the main landmark for detecting retinal diseases via the cup-to-disc ratio (CDR) and the ratio between the optic rim and the centre of the Optic Disc. The proposed work localizes and segments the Optic Disc and then determines the corresponding centre point and diameter for each retinal fundus image. We used the Gold Standard Database (available in a public repository), comprising 30 retinal fundus images, for our experiments. The Optic Disc is detected and segmented in all images, and the centre and diameter of the segmented disc are evaluated against the ground truth specified by expert ophthalmologists; the centres and diameters identified by our method lie close to this ground truth. The proposed system achieves 98.7% accuracy in locating the Optic Disc when compared with other detection methodologies such as Active Contour Models, Fuzzy C-Means and Artificial Neural Networks.
Citations: 18
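
A rough sketch of the template-matching step, assuming OpenCV: a bright circular template is slid over the smoothed green channel and the best match is taken as the disc location. The input path, template size, blur and matching score are illustrative choices, not the paper's parameters.

```python
import cv2
import numpy as np

img = cv2.imread("fundus.png")            # hypothetical fundus image path
green = cv2.GaussianBlur(img[:, :, 1], (15, 15), 0)  # green channel, smoothed

# Bright filled circle as a crude optic-disc template (size is illustrative).
template = np.zeros((80, 80), np.uint8)
cv2.circle(template, (40, 40), 35, 255, -1)

result = cv2.matchTemplate(green, template, cv2.TM_CCORR_NORMED)
_, _, _, top_left = cv2.minMaxLoc(result)  # location of the best match
center = (top_left[0] + 40, top_left[1] + 40)
print("estimated Optic Disc centre:", center)
```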

An effective enactment of broadcasting XML in wireless mobile environment
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996208
Authors: J. Briskilal, D. Satish
Abstract: Wireless communications are now popular in every respect; accordingly, to realise effective broadcasting, energy efficiency and latency efficiency are addressed by means of Lineage Encoding and twig pattern queries. Lineage Encoding converts XML from byte format into bit format, thereby using bandwidth effectively, and the encoding can also handle twig pattern queries. A twig pattern query answers users very quickly by performing multi-way searching over tree traversals. A novel structure named the G node, a group node holding a collection of multiple elements, supplies accurate information to users. We propose an XML automation tool that creates customized XML files, so there is no need to rely on a third party for XML files, nor to store the XML in a repository in order to extract data for further processing. G nodes can be added dynamically so that new events are introduced without interrupting an existing broadcast channel, and the automation tool places no depth restriction on the XML files it creates.
Citations: 2

Game theoretical approach for improving throughput capacity in wireless ad hoc networks
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996152
Authors: S. Suman, S. Porselvi, L. Bhagyalakshmi, Dhananjay Kumar
Abstract: In wireless ad hoc networks, Quality of Service (QoS) can be obtained efficiently through power control, achieved by incorporating cooperation among the available links. In this paper we propose an adaptive pricing scheme that lets the nodes determine the maximum allowable transmit power that avoids inducing interference in the other links of the network. Each node calculates the power which, when used for data transmission with the other nodes, attains a Nash Equilibrium (NE) of the utility function; this in turn maximizes frequency reuse and thereby improves throughput capacity. Numerical results show that the overall throughput of the network improves under this scheme.
Citations: 3
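
A minimal best-response sketch of priced power control, assuming a logarithmic utility log(1 + SINR) minus a linear power price (the paper's exact utility and pricing rule may differ); the fixed point of the iteration is a Nash equilibrium of the assumed game.

```python
import numpy as np

# G[i, j]: channel gain from transmitter j to receiver i (illustrative values).
G = np.array([[1.0, 0.1, 0.2],
              [0.2, 1.0, 0.1],
              [0.1, 0.2, 1.0]])
noise, price, p_max = 0.1, 2.0, 1.0
p = np.full(3, 0.5)                      # initial transmit powers

for _ in range(100):                     # synchronous best-response updates
    for i in range(3):
        interference = noise + G[i] @ p - G[i, i] * p[i]
        # argmax of log(1 + G_ii*p/I) - price*p  gives  p = 1/price - I/G_ii
        p[i] = np.clip(1.0 / price - interference / G[i, i], 0.0, p_max)

print("equilibrium powers:", np.round(p, 3))
```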

An improved dynamic data replica selection and placement in cloud
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996180
Authors: A. Rajalakshmi, D. Vijayakumar, Dr.K.G. Srinivasagan
Abstract: The cloud computing platform is attracting more and more attention as a new trend in data management, and data replication is widely used to speed up data access in the cloud; replica selection and placement are the major issues in replication. In this paper we propose an approach for dynamic data replication in the cloud. A replica management system allows users to create and manage replicas and to update them when the original data are modified. The proposed work concentrates on designing an algorithm for optimal replica selection and placement that increases the availability of data in the cloud. The method consists of two main phases, file application and replication operation: the first phase locates and creates replicas using a catalog and an index, and the second determines whether the destination has enough space to store the requested file. Replication aims to increase resource availability and to reduce access cost, shared bandwidth consumption and delay time. The proposed system was developed in the Eucalyptus cloud environment, and the results show that the proposed replica selection algorithm achieves better accessibility than other methods.
Citations: 26
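
A toy sketch of the two-phase flow described above, with illustrative names: a catalog records which sites hold each file (phase one), and placement checks a candidate's free space before copying (phase two).

```python
catalog = {"data.csv": ["site-A"]}        # file -> sites already holding it
sites = {"site-A": 500, "site-B": 2000}   # site -> free space in MB

def place_replica(filename: str, size_mb: int) -> str:
    # Phase two of the abstract: only destinations with enough free space
    # (and no existing replica) are considered; pick the roomiest one.
    candidates = [(free, s) for s, free in sites.items()
                  if s not in catalog.get(filename, []) and free >= size_mb]
    if not candidates:
        raise RuntimeError("no destination has enough free space")
    free, site = max(candidates)
    sites[site] -= size_mb
    catalog.setdefault(filename, []).append(site)  # phase one: update catalog
    return site

print(place_replica("data.csv", 800))     # -> "site-B"
```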

Hand based multibiometric authentication using local feature extraction
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996136
Authors: B. Bhaskar, S. Veluchamy
Abstract: Biometrics has wide applications in the fields of security and privacy. Since unimodal biometrics suffers from various recognition and security problems, multimodal biometrics is now used extensively for personal authentication. In this paper we propose an efficient personal identification system using two biometric identifiers, the palm print and the inner knuckle print, which in recent years have overtaken other biometric identifiers owing to their uniqueness, stability and novelty. The proposed feature extraction method for the palm print is Monogenic Binary Coding (MBC), an efficient approach to extracting palm print features. For inner knuckle print recognition we tried two algorithms, the Ridgelet Transform and the Scale Invariant Feature Transform (SIFT), and compared their recognition rates. A Support Vector Machine (SVM) then classifies the extracted feature vectors. Combining the knuckle print and the palm print for personal identification gives better security and accuracy.
Citations: 18
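
An illustrative fragment, assuming OpenCV and scikit-learn, of the SIFT-plus-SVM leg of the pipeline (MBC and the Ridgelet Transform are not sketched); the image paths, labels and padding scheme are hypothetical.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def sift_feature(path: str, n_kp: int = 32) -> np.ndarray:
    """Fixed-length vector from up to n_kp SIFT descriptors (zero-padded)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.SIFT_create(nfeatures=n_kp).detectAndCompute(img, None)
    padded = np.zeros((n_kp, 128), np.float32)
    if desc is not None:
        padded[:min(n_kp, len(desc))] = desc[:n_kp]
    return padded.ravel()

# Hypothetical gallery: two knuckle images of subject 0, one of subject 1.
X = np.vstack([sift_feature(p) for p in ["s0_a.png", "s0_b.png", "s1_a.png"]])
y = np.array([0, 0, 1])
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(sift_feature("s0_c.png").reshape(1, -1)))
```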