{"title":"ONTODB : Aplikasi untuk Transformasi Ontologi OWL ke Basis Data Relasi SQL","authors":"Amalia Mabrina Masbar Rus, Zulaiha Ali Othman","doi":"10.24815/JDA.V2I1.13094","DOIUrl":"https://doi.org/10.24815/JDA.V2I1.13094","url":null,"abstract":"OntoDB Application Tool adalah aplikasi desktop yang dikembangkan dengan menggunakan bahasa pemrograman Java. Aplikasi ini dikembangkan sebagai alat untuk mentransformasikan file ontologi dalam format OWL ke file kode basis data SQL. Transformasi OWL ke SQL diperlukan untuk menyimpan ontologi OWL ke dalam basis data, sehingga memudahkan dalam pencarian data dan digunakan sebagai sumber data untuk aplikasi. Namun, file ontologi yang berisi sejumlah besar kelas, properti, dan instance membuat pengembangan aplikasi sulit dilakukan apabila hanya menggunakan beberapa komponen ontologi. Oleh karena itu, aplikasi OntoDB dikembangkan untuk memenuhi kebutuhan akan sebuah aplikasi yang dapat menyimpan sebagian ontologi ke dalam basis data. Aplikasi ini akan menampilkan isi komponen ontologi utama seperti kelas, properti dan instance sehingga pengguna dapat dengan mudah memilih komponen mana yang akan ditransformasikan. Selanjutnya, aplikasi ini akan menghasilkan file SQL berdasarkan isi ontologi yang dipilih. Kemudian, file SQL yang dihasilkan dapat diimpor ke dalam sistem manajemen basis data, seperti MySQL, untuk dijadikan sumber data bagi sebuah aplikasi. OntoDB Application Tool is a desktop application developed using Java programming language. This application was developed as a tool to transform the ontology file in OWL format to an SQL basis data code file. Transforming OWL to SQL is necessary in order to store the OWL ontology into the basis data, thus makes it easier to query and to be used as a source of data for an application. However, the ontology file containing a large number of classes, properti, and instance make it difficult for the developers to develop an application which only using some components of the ontology. Therefore, OntoDB application was developed to meet the need for a tool that can store a part of ontology into the basis data. This application will display the contents of the main ontology components such as classes, properti and instance so that the user can easily select which components that will be transformed. Further, this application will generate an SQL file based on the selected content of ontology. Then, the generated SQL file can be imported into a basis data management system, such as MySQL, to be used as the source of data for an application.","PeriodicalId":38582,"journal":{"name":"International Journal of Data Analysis Techniques and Strategies","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86536470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stellar mass black hole optimisation for utility mining","authors":"K. Subramanian, K. Premalatha","doi":"10.1504/IJDATS.2019.10022546","DOIUrl":"https://doi.org/10.1504/IJDATS.2019.10022546","url":null,"abstract":"Major challenges in mining high utility itemsets from the transaction databases requires exponential search space and database-dependent minimum utility threshold. The search space is very large because of the large number of distinct items and size of the database. Data analysts need to specifying appropriate minimum utility thresholds for their data mining tasks though they may have no knowledge pertaining to their databases. To get rid of these problems, Stellar mass black hole optimisation (SBO) method is proposed to mine Top-K HUIs from the transaction database without specifying minimum utility threshold. To know the performance of SBO, the experiment results are compared with GA.","PeriodicalId":38582,"journal":{"name":"International Journal of Data Analysis Techniques and Strategies","volume":"14 1","pages":"222-245"},"PeriodicalIF":0.0,"publicationDate":"2019-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75192984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Memetic particle swarm optimisation for missing value imputation","authors":"R. Sivaraj, R. Priya","doi":"10.1504/IJDATS.2019.10022547","DOIUrl":"https://doi.org/10.1504/IJDATS.2019.10022547","url":null,"abstract":"Incomplete values in databases stand as a major concern for data analysts and many methods have been devised to handle them in different missing scenarios. Many researchers are increasingly using evolutionary algorithms for handling them. In this paper, a memetic algorithm based approach is proposed which integrates the principles of particle swarm optimisation and simulated annealing, a local search method. A novel initialisation strategy for PSO is also proposed in order to seed good particles into the population. Simulated annealing prevents PSO from premature convergence and helps it in reaching global optimum. PSO algorithm exhibits explorative behaviour and SA exhibits exploitative behaviour and serves as the right combination for memetic algorithm implementation. The proposed algorithm is implemented in different datasets to estimate the missing values and the imputation accuracy and the time taken for execution is found to be better than other standard methods.","PeriodicalId":38582,"journal":{"name":"International Journal of Data Analysis Techniques and Strategies","volume":"1 1","pages":"273-289"},"PeriodicalIF":0.0,"publicationDate":"2019-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74855743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature selection methods for document clustering: a comparative study and a hybrid solution","authors":"Asmaa Benghabrit, B. Ouhbi, B. Frikh, E. Zemmouri, Hicham Behja","doi":"10.1504/IJDATS.2019.10022545","DOIUrl":"https://doi.org/10.1504/IJDATS.2019.10022545","url":null,"abstract":"The web proliferation makes the exploration and the use of the huge amount of available unstructured text documents challenged, which drives the need of document clustering. Hence, improving the performances of this mechanism by using feature selection seems worth investigation. Therefore, this paper proposes an efficient way to highly benefit from feature selection for document clustering. We first present a review and comparative studies of feature selection methods in order to extract efficient ones. Then we propose a sequential and hybrid combination modes of statistical and semantic techniques in order to benefit from crucial information that each of them provides for document clustering. Extensive experiments prove the benefit of the proposed combination approaches. The performance of document clustering is highest when the measures based on Chi-square statistic and the mutual information are linearly combined. Doing so, it avoids the unwanted correlation that the sequential approach creates between the two treatments.","PeriodicalId":38582,"journal":{"name":"International Journal of Data Analysis Techniques and Strategies","volume":"50 1","pages":"246-272"},"PeriodicalIF":0.0,"publicationDate":"2019-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73617003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparative study of unsupervised image clustering systems","authors":"Safa Bettoumi, Chiraz Jlassi, N. Arous","doi":"10.1504/IJDATS.2019.10022548","DOIUrl":"https://doi.org/10.1504/IJDATS.2019.10022548","url":null,"abstract":"The purpose of clustering algorithms is to give sense and extract value from large sets of structured and unstructured data. Thus, clustering is present in all science areas that use automatic learning. Therefore, we present in this paper a comparative study and an evaluation of different clustering methods proposed in the literature such as prototype based clustering, fuzzy and probabilistic clustering, hierarchical clustering and density based clustering. We present also an analysis of advantages and disadvantages of these clustering methods based essentially on experimentation. Extensive experiments are conducted on three real-world high dimensional datasets to evaluate the potential and the effectiveness of seven well-known methods in terms of accuracy, purity and normalised mutual information.","PeriodicalId":38582,"journal":{"name":"International Journal of Data Analysis Techniques and Strategies","volume":"22 1","pages":"197-221"},"PeriodicalIF":0.0,"publicationDate":"2019-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82501226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing the involvement of decision makers in data mart design","authors":"Fahmi Bargui, H. Ben-Abdallah, J. Feki","doi":"10.1504/IJDATS.2019.10018909","DOIUrl":"https://doi.org/10.1504/IJDATS.2019.10018909","url":null,"abstract":"The design phase of a data warehousing project remains difficult for both decision makers and requirements analysts. In this paper, we tackle this difficulty through two contributions. First, we propose a natural language based and goal-oriented template for requirements specification that includes all concepts of the decision-making process. The use of familiar concepts and natural language makes our template more accessible and helps decision makers in validating the specified requirements, which avoids producing data mart that does not meet their needs. Secondly, we propose a decision-making ontology that provides for a systematic decomposition of decision-making goals, which allows new requirements to emerge. This automatic requirements elicitation helps analysts to overcome their lack of domain knowledge, which avoids producing erroneous requirements.","PeriodicalId":38582,"journal":{"name":"International Journal of Data Analysis Techniques and Strategies","volume":"130 1","pages":"148-175"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85767036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An effective feature selection method based on maximum class separability for fault diagnosis of ball bearing","authors":"T. Thelaidjia, Abdelkrim Moussaoui, S. Chenikher","doi":"10.1504/ijdats.2019.10018906","DOIUrl":"https://doi.org/10.1504/ijdats.2019.10018906","url":null,"abstract":"The paper deals with the development of a novel feature selection approach for bearing fault diagnosis to overcome drawbacks of the distance evaluation technique (DET); one of the well-established feature selection approaches. Its drawbacks are the influence of its effectiveness by the noise and the selection of salient features regardless of the classification system. To overcome these shortcomings, an optimal discrete wavelet transform (DWT) is firstly used to decompose the bearing vibration signal at different decomposition depths to enhance the signal to noise ratio. Then, a combination of DET with binary particle swarm optimisation (BPSO) algorithm and a criterion based on scatter matrices employed as an objective function are suggested to improve the classification performances and to reduce the computational time. Finally, support vector machine is utilised to automate the identification of different bearing conditions. From the obtained results, the effectiveness of the suggested method is proven.","PeriodicalId":38582,"journal":{"name":"International Journal of Data Analysis Techniques and Strategies","volume":"11 1","pages":"115-132"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86807034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A method to rank the efficient units based on cross efficiency matrix without involving the zero weights","authors":"Marziye Mirzadeh Tahroodi, A. Payan","doi":"10.1504/IJDATS.2019.10018908","DOIUrl":"https://doi.org/10.1504/IJDATS.2019.10018908","url":null,"abstract":"One of the basic objections of the previous models of cross efficiency (CE) is the possibility for the weights to equal zero. This case takes place for the inputs and the outputs in the efficient responses in CE models. Therefore, the input and the output weights which equal zero do not play a role in computing the score of the CE. In this paper, to overcome this problem, an idea to prevent the optimal weights to equal zero in the CE method is offered. This new method can be expanded to all CE models. Based on the offered method, a zero-one mixed linear programming problem is proposed to obtain a set of non-zero weights among the optimal solutions of the preliminary CE model. Following, the zero-one mixed linear programming problem is changed into an equivalent linear program. Then, according to a consistent CE matrix the efficient units are ranked. In order to explain the model and indicate its advantage, an example is given.","PeriodicalId":38582,"journal":{"name":"International Journal of Data Analysis Techniques and Strategies","volume":"1 1","pages":"101-114"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91106708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Review on recent developments in frequent itemset based document clustering, its research trends and applications","authors":"D. Rajput","doi":"10.1504/IJDATS.2019.10018907","DOIUrl":"https://doi.org/10.1504/IJDATS.2019.10018907","url":null,"abstract":"The document data is growing at an exponential rate. It is heterogeneous, dynamic and highly unstructured in nature. These characteristics of document data pose new challenges and opportunities for the development of various models and approaches for documents clustering. Different methods adopted for the development of these models. But these techniques have their advantages and disadvantages. The primary focus of the study is to the analysis of existing methods and approaches for document clustering based on frequent itemsets. Subsequently, this research direction facilitates the exploration of the emerging trends for each extension with applications. In this paper, more than 90 recent (published after 1990) research papers are summarised that are published in various reputed journals like IEEE Transaction, ScienceDirect, Springer-link, ACM and few fundamental authoritative articles.","PeriodicalId":38582,"journal":{"name":"International Journal of Data Analysis Techniques and Strategies","volume":"38 12","pages":"176-195"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72431601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new feature subset selection model based on migrating birds optimisation","authors":"Naoual El Aboudi, Laila Benhlima","doi":"10.1504/IJDATS.2019.10018910","DOIUrl":"https://doi.org/10.1504/IJDATS.2019.10018910","url":null,"abstract":"Feature selection represents a fundamental preprocessing phase in machine learning as well as data mining applications. It reduces the dimensionality of feature space by dismissing irrelevant and redundant features, which leads to better classification accuracy and less computational cost. This paper presents a new wrapper feature subset selection model based on a recently designed optimisation technique called migrating birds optimisation (MBO). Initialisation issue regarding MBO is explored to study its implications on the model behaviour by experimenting different initialisation strategies. A neighbourhood based on information gain was designed to improve the search effectiveness. The performance of the proposed model named MBO-FS is compared with some state-of-the-art methods regarding the task of feature selection on 11 UCI datasets. Simulation results show that MBO-FS method achieves promising classification accuracy using a smaller feature set.","PeriodicalId":38582,"journal":{"name":"International Journal of Data Analysis Techniques and Strategies","volume":"231 1","pages":"133-147"},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76219118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}