{"title":"An Evolutionary Multi-objective Discretization based on Normalized Cut","authors":"M. Hajizadeh-Tahan, M. Ghasemzadeh","doi":"10.22044/JADM.2019.8507.1989","DOIUrl":"https://doi.org/10.22044/JADM.2019.8507.1989","url":null,"abstract":"Learning models and related results depend on the quality of the input data. If raw data is not properly cleaned and structured, the results are tending to be incorrect. Therefore, discretization as one of the preprocessing techniques plays an important role in learning processes. The most important challenge in the discretization process is to reduce the number of features’ values. This operation should be applied in a way that relationships between the features are maintained and accuracy of the classification algorithms would increase. In this paper, a new evolutionary multi-objective algorithm is presented. The proposed algorithm uses three objective functions to achieve high-quality discretization. The first and second objectives minimize the number of selected cut points and classification error, respectively. The third objective introduces a new criterion called the normalized cut, which uses the relationships between their features’ values to maintain the nature of the data. The performance of the proposed algorithm was tested using 20 benchmark datasets. According to the comparisons and the results of nonparametric statistical tests, the proposed algorithm has a better performance than other existing major methods.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":"8 1","pages":"25-37"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68374824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Capturing Outlines of Planar Generic Images by Simultaneous Curve Fitting and Sub-division","authors":"A. Ebrahimi, G. B. Loghmani, M. Sarfraz","doi":"10.22044/JADM.2019.6727.1788","DOIUrl":"https://doi.org/10.22044/JADM.2019.6727.1788","url":null,"abstract":"In this paper, a new technique has been designed to capture the outline of 2D shapes using cubic B´ezier curves. The proposed technique avoids the traditional method of optimizing the global squared fitting error and emphasizes the local control of data points. A maximum error has been determined to preserve the absolute fitting error less than a criterion and it administers the process of curve subdivision. Depending on the specified maximum error, the proposed technique itself subdivides complex segments, and curve fitting is done simultaneously. A comparative study of experimental results embosses various advantages of the proposed technique such as accurate representation, low approximation errors and efficient computational complexity.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":"8 1","pages":"105-118"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68374369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Salt and Pepper Noise Removal using Pixon-based Segmentation and Adaptive Median Filter","authors":"S. A. Amiri","doi":"10.22044/JADM.2019.7921.1930","DOIUrl":"https://doi.org/10.22044/JADM.2019.7921.1930","url":null,"abstract":"Removing salt and pepper noise is an active research area in image processing. In this paper, a two-phase method is proposed for removing salt and pepper noise while preserving edges and fine details. In the first phase, noise candidate pixels are detected which are likely to be contaminated by noise. In the second phase, only noise candidate pixels are restored using adaptive median filter. In terms of noise detection, a two-stage method is utilized. At first, a thresholding is applied on the image to initial estimation of the noise candidate pixels. Since some pixels in the image may be similar to the salt and pepper noise, these pixels are mistakenly identified as noise. Hence, in the second step of the noise detection, the pixon-based segmentation is used to identify the salt and pepper noise pixels more accurately. Pixon is the neighboring pixels with similar gray levels. The proposed method was evaluated on several noisy images, and the results show the accuracy of the proposed method in salt and pepper noise removal and outperforms to several existing methods.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":"8 1","pages":"119-126"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68374677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image Segmentation using Improved Imperialist Competitive Algorithm and a Simple Post-processing","authors":"V. Naghashi, S. Lotfi","doi":"10.22044/JADM.2019.3935.1464","DOIUrl":"https://doi.org/10.22044/JADM.2019.3935.1464","url":null,"abstract":"Image segmentation is a fundamental step in many of image processing applications. In most cases the image’s pixels are clustered only based on the pixels’ intensity or color information and neither spatial nor neighborhood information of pixels is used in the clustering process. Considering the importance of including spatial information of pixels which improves the quality of image segmentation, and using the information of the neighboring pixels, causes enhancing of the accuracy of segmentation. In this paper the idea of combining the K-means algorithm and the Improved Imperialist Competitive algorithm is proposed. Also before applying the hybrid algorithm, a new image is created and then the hybrid algorithm is employed. Finally, a simple post-processing is applied on the clustered image. Comparing the results of the proposed method on different images, with other methods, shows that in most cases, the accuracy of the NLICA algorithm is better than the other methods.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":"7 1","pages":"507-519"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47736179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Morphological Exudate Detection in Retinal Images using PCA-based Optic Disc Removal","authors":"J. Darvish, M. Ezoji","doi":"10.22044/JADM.2019.1488","DOIUrl":"https://doi.org/10.22044/JADM.2019.1488","url":null,"abstract":"Diabetic retinopathy lesion detection such as exudate in fundus image of retina can lead to early diagnosis of the disease. Retinal image includes dark areas such as main blood vessels and retinal tissue and also bright areas such as optic disk, optical fibers and lesions e.g. exudate. In this paper, a multistage algorithm for the detection of exudate in foreground is proposed. The algorithm segments the background dark areas in the proper channels of RGB color space using morphological processing such as closing, opening and top-hat operations. Then an appropriate edge detector discriminates between exudates and cotton-like spots or other artificial effects. To tackle the problem of optical fibers and to discriminate between these brightness and exudates, in the first stage, main vessels are detected from the green channel of RGB color space. Then the optical fiber areas around the vessels are marked up. An algorithm which uses PCA-based reconstruction error is proposed to discard another fundus bright structure named optic disk. Several experiments have been performed with HEI-MED standard database and evaluated by comparing with ground truth images. These results show that the proposed algorithm has a detection accuracy of 96%.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":"7 1","pages":"487-493"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45488710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Depth Improvement for FTV Systems Based on the Gradual Omission of Outliers","authors":"H. Hosseinpour, Seyed A. Moosavie nia, M. Pourmina","doi":"10.22044/JADM.2019.7278.1864","DOIUrl":"https://doi.org/10.22044/JADM.2019.7278.1864","url":null,"abstract":"Virtual view synthesis is an essential part of computer vision and 3D applications. A high-quality depth map is the main problem with virtual view synthesis. Because as compared to the color image the resolution of the corresponding depth image is low. In this paper, an efficient and confided method based on the gradual omission of outliers is proposed to compute reliable depth values. In the proposed method depth values that are far from the mean of depth values are omitted gradually. By comparison with other state of the art methods, simulation results show that on average, PSNR is 2.5dB (8.1 %) higher, SSIM is 0.028 (3%) more, UNIQUE is 0.021 (2.4%) more, the running time is 8.6s (6.1%) less and wrong pixels are 1.97(24.8%) less.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":"7 1","pages":"563-574"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47660767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction and Diagnosis of Diabetes Mellitus Using a Water Wave Optimization Algorithm","authors":"S. T. Dehkordi, A. K. Bardsiri, M. Zahedi","doi":"10.22044/JADM.2018.6446.1758","DOIUrl":"https://doi.org/10.22044/JADM.2018.6446.1758","url":null,"abstract":"Data mining is an appropriate way to discover information and hidden patterns in large amounts of data, where the hidden patterns cannot be easily discovered in normal ways. One of the most interesting applications of data mining is the discovery of diseases and disease patterns through investigating patients' records. Early diagnosis of diabetes can reduce the effects of this devastating disease. A common way to diagnose this disease is performing a blood test, which, despite its high precision, has some disadvantages such as: pain, cost, patient stress, lack of access to a laboratory, and so on. Diabetic patients’ information has hidden patterns, which can help you investigate the risk of diabetes in individuals, without performing any blood tests. Use of neural networks, as powerful data mining tools, is an appropriate method to discover hidden patterns in diabetic patients’ information. In this paper, in order to discover the hidden patterns and diagnose diabetes, a water wave optimization(WWO) algorithm; as a precise metaheuristic algorithm, was used along with a neural network to increase the precision of diabetes prediction. The results of our implementation in the MATLAB programming environment, using the dataset related to diabetes, indicated that the proposed method diagnosed diabetes at a precision of 94.73%,sensitivity of 94.20%, specificity of 93.34%, and accuracy of 95.46%, and was more sensitive than methods such as: support vector machines, artificial neural networks, and decision trees.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":"7 1","pages":"617-630"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48221231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Entropy-based Consensus for Distributed Data Clustering","authors":"M. Owhadi-Kareshki, M. Akbarzadeh-T.","doi":"10.22044/JADM.2018.4237.1514","DOIUrl":"https://doi.org/10.22044/JADM.2018.4237.1514","url":null,"abstract":"The increasingly larger scale of available data and the more restrictive concerns on their privacy are some of the challenging aspects of data mining today. In this paper, Entropy-based Consensus on Cluster Centers (EC3) is introduced for clustering in distributed systems with a consideration for confidentiality of data; i.e. it is the negotiations among local cluster centers that are used in the consensus process, hence no private data are transferred. With the proposed use of entropy as an internal measure of consensus clustering validation at each machine, the cluster centers of the local machines with higher expected clustering validity have more influence in the final consensus centers. We also employ relative cost function of the local Fuzzy C-Means (FCM) and the number of data points in each machine as measures of relative machine validity as compared to other machines and its reliability, respectively. The utility of the proposed consensus strategy is examined on 18 datasets from the UCI repository in terms of clustering accuracy and speed up against the centralized version of FCM. Several experiments confirm that the proposed approach yields to higher speed up and accuracy while maintaining data security due to its protected and distributed processing approach.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":"7 1","pages":"551-561"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43215744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Architecture for Detecting Phishing Webpages using Cost-based Feature Selection","authors":"A. Zangooei, V. Derhami, F. Jamshidi","doi":"10.22044/JADM.2019.7183.1852","DOIUrl":"https://doi.org/10.22044/JADM.2019.7183.1852","url":null,"abstract":"Phishing is one of the luring techniques used to exploit personal information. A phishing webpage detection system (PWDS) extracts features to determine whether it is a phishing webpage or not. Selecting appropriate features improves the performance of PWDS. Performance criteria are detection accuracy and system response time. The major time consumed by PWDS arises from feature extraction that is considered as feature cost in this paper. Here, two novel features are proposed. They use semantic similarity measure to determine the relationship between the content and the URL of a page. Since suggested features don't apply third-party services such as search engines result, the features extraction time decreases dramatically. Login form pre-filer is utilized to reduce unnecessary calculations and false positive rate. In this paper, a cost-based feature selection is presented as the most effective feature. The selected features are employed in the suggested PWDS. Extreme learning machine algorithm is used to classify webpages. The experimental results demonstrate that suggested PWDS achieves high accuracy of 97.6% and short average detection time of 120.07 milliseconds.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":"7 1","pages":"607-616"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48983466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Iris Recognition in Unconstrained Environments","authors":"A. Noruzi, M. Mahlouji, A. Shahidinejad","doi":"10.22044/JADM.2019.7434.1884","DOIUrl":"https://doi.org/10.22044/JADM.2019.7434.1884","url":null,"abstract":"A biometric system provides automatic identification of an individual based on a unique feature or characteristic possessed by him/her. Iris recognition (IR) is known to be the most reliable and accurate biometric identification system. The iris recognition system (IRS) consists of an automatic segmentation mechanism which is based on the Hough transform (HT). This paper presents a robust IRS in unconstrained environments. Through this method, first a photo is taken from the iris, then edge detection is done, later on a contrast adjustment is persecuted in pre-processing stage. Circular HT is subsequently utilized for localizing circular area of iris inner and outer boundaries. The purpose of this last stage is to find circles in imperfect image inputs. Also, through applying parabolic HT, boundaries are localized between upper and lower eyelids. The proposed method, in comparison with available IRSs, not only enjoys higher accuracy, but also competes with them in terms of processing time. Experimental results on images available in UBIRIS, CASIA and MMUI database show that the proposed method has an accuracy rate of 99.12%, 98.80% and 98.34%, respectively.","PeriodicalId":32592,"journal":{"name":"Journal of Artificial Intelligence and Data Mining","volume":"7 1","pages":"495-506"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43809873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}