Int. J. Rough Sets Data Anal. Latest Articles

Weighted SVMBoost based Hybrid Rule Extraction Methods for Software Defect Prediction
Int. J. Rough Sets Data Anal. Pub Date: 2019-04-01 DOI: 10.4018/IJRSDA.2019040104
Jhansi Lakshmi Potharlanka, Maruthi Padmaja Turumella
Abstract: Software testing effort and cost can be mitigated by accurate automatic defect prediction models. Many automatic software defect prediction (SDP) models have been developed using machine learning methods, but end users find it difficult to comprehend the knowledge these models extract. Further, SDP data is unbalanced in nature, which hampers model performance. To address these problems, this paper presents hybrid weighted SVMBoost-based rule extraction models (WSVMBoost with Decision Tree, WSVMBoost with Ripper, and WSVMBoost with Bayesian Network) for SDP problems. Rules are extracted from the opaque SVMBoost model in two phases: (i) knowledge extraction and (ii) rule extraction. Experimental results on four NASA MDP datasets show that the WSVMBoost and Decision Tree hybrid yields better performance than the other hybrids and WSVM.
Citations: 14
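The two-phase pipeline in the abstract above can be sketched in a generic, pedagogical form (this is not the paper's exact method): in the knowledge-extraction phase the training inputs are relabeled with the opaque model's predictions, and in the rule-extraction phase a transparent learner (here a one-level decision stump standing in for the paper's decision tree) is fitted to the relabeled data and read back as a rule.

```python
def extract_rules(blackbox, X):
    """Pedagogical two-phase rule extraction from an opaque classifier.
    Phase (i): relabel inputs with the black-box model's output.
    Phase (ii): fit a transparent one-level decision stump to the
    relabeled data and return it as a human-readable rule."""
    y = [blackbox(x) for x in X]                     # phase (i): knowledge extraction
    best = None
    for j in range(len(X[0])):                       # phase (ii): search stumps
        for t in sorted({x[j] for x in X}):
            fidelity = sum((x[j] >= t) == bool(lbl)
                           for x, lbl in zip(X, y)) / len(X)
            if best is None or fidelity > best[0]:
                best = (fidelity, j, t)
    fidelity, j, t = best
    return f"IF feature[{j}] >= {t} THEN defective", fidelity
```

A stump is far weaker than the paper's Decision Tree / Ripper / Bayesian Network extractors, but it shows the same surrogate idea: the rule is judged by its fidelity to the black box, not by accuracy on the true labels.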
Impact of PDS Based kNN Classifiers on Kyoto Dataset
Int. J. Rough Sets Data Anal. Pub Date: 2019-04-01 DOI: 10.4018/IJRSDA.2019040105
K. Swathi, B. Rao
Abstract: This article compares the performance of different Partial Distance Search-based (PDS) kNN classifiers on the benchmark Kyoto 2006+ dataset for Network Intrusion Detection Systems (NIDS). The PDS classifiers are named after how their features are indexed: (i) Simple PDS kNN (SPDS), in which the features are not indexed; (ii) Variance-indexed PDS kNN (VIPDS), in which the features are indexed by their variance; and (iii) Correlation-coefficient-indexed PDS kNN (CIPDS), in which the features are indexed by their correlation coefficient with the class label. Computational time and accuracy are the performance measures for the comparative study. The experiments show that CIPDS gives better performance in terms of computational time, whereas VIPDS shows better accuracy, though the difference from CIPDS is not significant. The study suggests adopting CIPDS when class labels are available without ambiguity, and VIPDS otherwise.
Citations: 9
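The core trick shared by all three classifiers above is the partial distance search itself: while accumulating a squared distance feature by feature, a candidate is abandoned as soon as its partial sum exceeds the best distance found so far. A minimal 1-NN sketch (not taken from the paper) shows where the feature ordering of VIPDS/CIPDS plugs in:

```python
import numpy as np

def pds_nn(query, data, order=None):
    """1-NN with Partial Distance Search. `order` is a feature-index
    permutation: natural order for SPDS, descending variance for VIPDS,
    or |correlation with the class label| for CIPDS. A good ordering puts
    discriminative features first, so bad candidates are rejected early."""
    if order is None:                       # plain SPDS: natural feature order
        order = np.arange(data.shape[1])
    best_i, best_d = -1, np.inf
    for i, row in enumerate(data):
        d = 0.0
        for j in order:                     # accumulate one feature at a time
            d += (query[j] - row[j]) ** 2
            if d >= best_d:                 # partial distance already too large
                break
        else:                               # full distance computed and smaller
            best_i, best_d = i, d
    return best_i, best_d
```

The result is identical to an exhaustive nearest-neighbour search; only the number of feature-wise operations changes, which is why the paper compares the variants on computational time.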
Topological Properties of Multigranular Rough Sets on Fuzzy Approximation Spaces
Int. J. Rough Sets Data Anal. Pub Date: 2019-04-01 DOI: 10.4018/IJRSDA.2019040101
B. Tripathy, S. K. Parida, Sudam Charan Parida
Abstract: One extension of the basic rough set model introduced by Pawlak in 1982 is the notion of rough sets on fuzzy approximation spaces, based on a fuzzy proximity relation defined over a universe. An equivalence relation provides a granularization of the universe on which it is defined, but a single relation defines only a single granularization. To handle multiple granularities over a universe simultaneously, two notions of multigranulation have been introduced: optimistic and pessimistic multigranulation. The notion of multigranulation over fuzzy approximation spaces was introduced in 2018. Topological properties of rough sets are an important characteristic which, along with the accuracy measure, forms the two facets of rough set application, as noted by Pawlak. In this article, the authors introduce the concept of topological properties of multigranular rough sets on fuzzy approximation spaces and study their properties.
Citations: 26
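The optimistic/pessimistic distinction mentioned above is easiest to see in the crisp (non-fuzzy) case, sketched here under that simplifying assumption with two equivalence relations given as partitions: optimistically, an element belongs to the lower approximation if its class under either relation fits inside the target set; pessimistically, both classes must fit.

```python
def lower(partition, X):
    """Pawlak lower approximation of X for one equivalence relation,
    with the relation given as a partition (list of blocks)."""
    return {x for block in partition if block <= X for x in block}

def optimistic_lower(p1, p2, X):
    # x belongs if its class under R1 OR its class under R2 lies inside X
    return lower(p1, X) | lower(p2, X)

def pessimistic_lower(p1, p2, X):
    # x belongs only if its classes under BOTH relations lie inside X
    return lower(p1, X) & lower(p2, X)
```

The fuzzy approximation spaces of the paper replace the partitions with classes induced by a fuzzy proximity relation, but the union-versus-intersection shape of the two multigranulations is the same.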
Conditioned Slicing of Interprocedural Programs
Int. J. Rough Sets Data Anal. Pub Date: 2019-01-01 DOI: 10.4018/IJRSDA.2019010103
M. Sahu
Abstract: Program slicing is a technique for decomposing programs according to the control flow and data flow among the lines of code in a program. Conditioned slicing is a generalization of static slicing and dynamic slicing; a slicing criterion for conditioned slicing consists of a variable, the program point of interest, and a condition of interest. This paper proposes an approach, termed Node-Marking Conditioned Slicing (NMCS), to compute conditioned slices for programs containing multiple procedures. The first step is to build an intermediate representation of the given program, the System Dependence Graph (SDG); the next is an algorithm for computing conditioned slices. After constructing the SDG, the NMCS algorithm selects nodes that satisfy the given condition by marking and unmarking them, and computes conditioned slices for every variable at every statement along the way. The algorithm uses a stack to save the call context of a method, and certain SDG edges are labeled to identify statements that call a method. The proposed algorithm is implemented, and its performance is evaluated on several case-study projects.
Citations: 2
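Underneath the marking machinery described above sits ordinary graph reachability over a dependence graph. A minimal backward-slice sketch (generic slicing, not the NMCS algorithm itself, and without NMCS's call-context stack or edge labels) makes that step concrete:

```python
def backward_slice(deps, criterion):
    """Backward slice over a dependence graph: the transitive closure of
    control/data dependences reachable from the slicing-criterion node.
    deps maps each node to the set of nodes it depends on."""
    slice_nodes, worklist = set(), [criterion]
    while worklist:
        n = worklist.pop()
        if n not in slice_nodes:
            slice_nodes.add(n)
            worklist.extend(deps.get(n, ()))
    return slice_nodes
```

Conditioned slicing then amounts to first pruning the graph to the nodes that can execute under the given condition (NMCS's marking step) before running this traversal.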
Collaboration Network Analysis Based on Normalized Citation Count and Eigenvector Centrality
Int. J. Rough Sets Data Anal. Pub Date: 2019-01-01 DOI: 10.4018/IJRSDA.2019010104
Anand Bihari, Sudhakar Tripathi, A. Deepak
Abstract: In the research community, the scholarly impact of an individual is estimated using either citation-based indicators or network centrality measures. Network centrality measures such as degree, closeness, betweenness, and eigenvector centrality, and citation-based indicators such as the h-index, g-index, and i10-index, all give full credit to every author of an article, even though the contributions of the authors differ. To determine the actual contribution of an author to a particular article, we apply arithmetic, geometric, and harmonic counting methods. To find the prominent actor in the network, we apply eigenvector centrality. To validate the proposed analysis, an experimental study was conducted on a collaboration network of 186,007 authors extracted from IEEE Xplore. The experimental results show that geometric counting-based credit distribution among scholars gives better results than the others.
Citations: 1
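The three counting methods named above are standard rank-based credit schemes; a sketch under the usual formulations (the paper may normalize slightly differently) gives the i-th of n authors a share proportional to n+1-i (arithmetic), 2^(n-i) (geometric), or 1/i (harmonic), each normalized to sum to 1:

```python
def arithmetic_credit(n):
    """Share of author i (1-based) proportional to n + 1 - i."""
    total = n * (n + 1) / 2
    return [(n + 1 - i) / total for i in range(1, n + 1)]

def geometric_credit(n):
    """Share of author i proportional to 2**(n - i): each author gets
    twice the credit of the next one in the byline."""
    total = 2 ** n - 1
    return [2 ** (n - i) / total for i in range(1, n + 1)]

def harmonic_credit(n):
    """Share of author i proportional to 1 / i."""
    total = sum(1 / j for j in range(1, n + 1))
    return [(1 / i) / total for i in range(1, n + 1)]
```

For a three-author paper, geometric counting gives 4/7, 2/7, 1/7, the steepest first-author premium of the three, which is the scheme the study found most effective.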
A Comparative Study of Infomax, Extended Infomax and Multi-User Kurtosis Algorithms for Blind Source Separation
Int. J. Rough Sets Data Anal. Pub Date: 2019-01-01 DOI: 10.4018/IJRSDA.2019010101
Monorama Swain, Rutuparna Panda, P. Kabisatpathy
Abstract: For the separation of super-Gaussian and sub-Gaussian signals, this article considers the Multi-User Kurtosis (MUK), Infomax (information maximization), and Extended Infomax algorithms. Extended Infomax uses two different nonlinear functions with new coefficients, while Infomax uses a single nonlinear function. The MUK algorithm is derived as an iterative stochastic-gradient update of the MUK cost function, followed by Gram-Schmidt orthogonalization to project onto the criterion constraint. Among the various measures available for evaluating blind source separation, the cross-correlation coefficient and kurtosis are used to analyze the performance of the algorithms. An important finding of this study, evident from the performance table, is that the kurtosis and correlation coefficient values are most favorable for the Extended Infomax algorithm when compared with the others.
Citations: 1
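The kurtosis measure referred to above distinguishes the two source families: excess kurtosis is positive for super-Gaussian (peaky, heavy-tailed) signals, negative for sub-Gaussian (flat) signals, and near zero for Gaussian noise, which is exactly why Extended Infomax switches its nonlinearity on its sign. A minimal estimator:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis E[x^4]/E[x^2]^2 - 3 of a zero-meaned signal:
    > 0 for super-Gaussian sources, < 0 for sub-Gaussian, ~0 for Gaussian."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0
```

For example, Laplace-distributed samples (super-Gaussian, true excess kurtosis 3) come out clearly positive, and uniform samples (sub-Gaussian, true value -1.2) clearly negative.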
A New Bi-Level Encoding and Decoding Scheme for Pixel Expansion Based Visual Cryptography
Int. J. Rough Sets Data Anal. Pub Date: 2019-01-01 DOI: 10.4018/IJRSDA.2019010102
R. C. Barik, S. Changder, S. Sahu
Abstract: Mapping image-based object textures to ASCII characters can be a new variation on visual cryptography. Naor and Shamir opened a new dimension of information security with visual cryptography, a secret sharing scheme among N participants with pixel expansion. Many researchers later extended the visual secret sharing scheme to schemes with no expansion of pixel regions in binary and color images; by stacking k shares, the secret can be decoded with normal vision. In this paper the authors propose a modification to visual cryptography that converts the message into printable ASCII character-based numerical encoding patterns in a binary host image. The message is encoded as ASCII numerics, and a texture of those numerics is arranged to form a binary host image. N shares are then built, and after stacking all the shares, the message is decoded by converting the ASCII numerics back to the secret.
Citations: 10
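The Naor-Shamir pixel-expansion scheme the abstract builds on can be sketched in its simplest (2,2) form: each secret pixel expands into two subpixels, the first share's pattern is random, and the second share repeats it for a white pixel or complements it for a black one, so stacking (logical OR) leaves white pixels half dark and black pixels fully dark. This sketch is the classic scheme, not the paper's ASCII-based modification.

```python
import random

def make_shares(secret_bits, seed=0):
    """(2,2) Naor-Shamir visual secret sharing with 1 -> 2 pixel expansion.
    secret_bits: iterable of 0 (white) / 1 (black) secret pixels."""
    rng = random.Random(seed)
    share1, share2 = [], []
    for bit in secret_bits:
        a = rng.randint(0, 1)
        share1.append((a, 1 - a))                    # one dark, one light subpixel
        share2.append((a, 1 - a) if bit == 0         # white: same pattern
                      else (1 - a, a))               # black: complementary pattern
    return share1, share2

def stack(share1, share2):
    """Stacking transparencies = pixelwise OR. A black secret pixel yields
    two dark subpixels; a white one yields only one, read as white."""
    return [1 if sum(p | q for p, q in zip(u, v)) == 2 else 0
            for u, v in zip(share1, share2)]
```

Each share on its own is a uniformly random pattern (every pixel has exactly one dark subpixel regardless of the secret), which is the scheme's security argument.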
Securing Stored Biometric Template Using Cryptographic Algorithm
Int. J. Rough Sets Data Anal. Pub Date: 2018-10-01 DOI: 10.4018/IJRSDA.2018100103
M. Lakhera, M. Rauthan
Abstract: Biometric template protection techniques provide security in many authentication applications. Authentication based on biometrics has advantages over traditional methods such as password- and token-based authentication: the person must be physically present in order to be recognized. It is therefore essential to secure biometrics by combining them with cryptography. In the proposed algorithm, the AES algorithm is used to secure stored and transmitted biometric templates using helping data. The helping data is variable and changes at every registration attempt; the final AES symmetric key is a combination of the helping data and the actual AES symmetric key. Experimental analysis shows that a brute-force attack takes a long time to recover the original biometric template from the encrypted template, so the proposed technique provides sufficient security for stored biometric templates.
Citations: 24
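The abstract says the final AES key combines the helping data with the actual key but does not spell out the combination; one plausible, hypothetical construction (not from the paper) is to hash the two together, so each registration attempt encrypts the template under a fresh key:

```python
import hashlib

def derive_session_key(base_key: bytes, helping_data: bytes) -> bytes:
    """Hypothetical key combination: SHA-256 of the fixed base key
    concatenated with the per-attempt helping data, truncated to
    128 bits for use as an AES-128 key. Fresh helping data at every
    registration attempt yields a fresh encryption key."""
    return hashlib.sha256(base_key + helping_data).digest()[:16]
```

The derived 16-byte value would then be fed to an AES implementation as the template-encryption key; only the base key and the helping data need to be managed.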
Fuzzy Rough Set Based Technique for User Specific Information Retrieval: A Case Study on Wikipedia Data
Int. J. Rough Sets Data Anal. Pub Date: 2018-10-01 DOI: 10.4018/IJRSDA.2018100102
Nidhika Yadav, N. Chatterjee
Abstract: Information retrieval is widely used because of the extremely large volume of text and image data available on the web, so efficient retrieval is required. Text information retrieval is the branch of information retrieval that deals with text documents. Another key concern for a retrieval engine is user-specific information retrieval, which works according to a specific user. This article performs a preliminary investigation of the proposed fuzzy rough set-based model for user-specific text information retrieval. Using Wikipedia as the information source, the model improves on the computational time required to compute the approximations compared with the classical fuzzy rough set model. The technique also improves the accuracy of clustering obtained for user-specified classes.
Citations: 26
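The approximations whose computation the paper speeds up are, in the classical fuzzy rough set model, the Dubois-Prade lower and upper approximations of a fuzzy set under a fuzzy similarity relation. A direct (unoptimized) sketch of those definitions, assumed here as the baseline the paper improves on:

```python
def fuzzy_lower(R, mu):
    """Dubois-Prade fuzzy rough lower approximation:
    mu_low(x) = min over y of max(1 - R[x][y], mu[y]).
    R is an n x n fuzzy similarity matrix, mu a membership vector."""
    n = len(mu)
    return [min(max(1 - R[x][y], mu[y]) for y in range(n)) for x in range(n)]

def fuzzy_upper(R, mu):
    """Upper approximation: mu_up(x) = max over y of min(R[x][y], mu[y])."""
    n = len(mu)
    return [max(min(R[x][y], mu[y]) for y in range(n)) for x in range(n)]
```

Both loops are O(n^2) in the number of documents, which is why reducing the cost of computing the approximations matters at Wikipedia scale.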
Hindi Text Document Classification System Using SVM and Fuzzy: A Survey
Int. J. Rough Sets Data Anal. Pub Date: 2018-10-01 DOI: 10.4018/IJRSDA.2018100101
Shalini Puri, S. Singh
Abstract: In recent years, many information retrieval, character recognition, and feature extraction methodologies have been proposed for Devanagari, and especially Hindi, across different domains. Given the enormous amount of scanned data available, and to advance existing automated Hindi systems beyond optical character recognition, a new Hindi printed and handwritten document classification system using support vector machines and fuzzy logic is introduced. It first pre-processes and then classifies textual imaged documents into predefined categories. This article presents a feasibility study of such systems with respect to Hindi, a survey of statistical measurements of Hindi keywords obtained from different sources, and the inherent challenges found in printed and handwritten documents. Technical reviews are provided and represented graphically to compare parameters and to estimate the contents, forms, and classifiers used in various existing techniques.
Citations: 14