{"title":"TRAFFIC SIGN BOARD DETECTION AND RECOGNITION FOR AUTONOMOUS VEHICLES AND DRIVER ASSISTANCE SYSTEMS","authors":"Y. Chincholkar, Ayush Kumar","doi":"10.21917/ijivp.2019.0277","DOIUrl":"https://doi.org/10.21917/ijivp.2019.0277","url":null,"abstract":"In recent years, many approaches that use image processing algorithms to detect traffic sign boards have been proposed. Edge detection is used here to avoid the segmentation problems of existing methods: color-based segmentation depends on adaptive thresholding, which fails in real-time scenarios. The proposed algorithm is another approach to detecting traffic sign boards in video sequences. The first step of this work is pre-processing of the video frame, achieved by grayscale conversion and edge detection; the second step is extraction of the objects. The Hough Transform is then applied to measure properties of image regions for further analysis. Feature points including perimeter, area, filled area, solidity and centroid are extracted for detection of the traffic sign board. Feature generation and classification are performed on the recognition side to obtain the class of the detected object. The input for the project is video sequences taken from a camera placed on the","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44136173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CERVICAL CANCER DETECTION AND CLASSIFICATION BY USING EFFECTUAL INTEGRATION OF DIRECTIONAL GABOR TEXTURE FEATURE EXTRACTION AND HYBRID KERNEL BASED SUPPORT VECTOR CLASSIFICATION","authors":"S. Athinarayanan, K. Navaz, R. Kavitha, S. Sameena","doi":"10.21917/ijivp.2019.0274","DOIUrl":"https://doi.org/10.21917/ijivp.2019.0274","url":null,"abstract":"Medical image analysis is a difficult and challenging task because of the complexity of the images and the absence of anatomical models that fully capture the variations in each structure. Cervical cancer is one of the leading causes of death among the various types of cancer affecting women worldwide. Accurate and timely diagnosis can, to a considerable extent, save lives. We therefore propose a reliable computerized system for the diagnosis of cervical cancer using texture features and a machine learning algorithm on Pap smear images; it is very useful for predicting the disease and also increases the reliability of diagnosis. The proposed framework is a multi-stage system for cell nucleus extraction and cancer diagnosis. First, noise removal is performed on the Pap smear images in the preprocessing step. Texture features are then extracted from these noise-free Pap smear images. The next phase of the proposed system is classification based on these extracted features, for which an SVM classifier is used. Over 94% accuracy is achieved in the classification stage, demonstrating that the proposed algorithm is effective at recognizing cancer in Pap smear images.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47561774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TEMPORAL REDUNDANCY REDUCTION IN WAVELET BASED VIDEO COMPRESSION FOR HIGH DEFINITION VIDEOS","authors":"S. Sowmyayani, P. Rani","doi":"10.21917/ijivp.2018.0263","DOIUrl":"https://doi.org/10.21917/ijivp.2018.0263","url":null,"abstract":"Data storage and communication play a significant role in everyone's life. Digital images and videos are stored on mobile and other storage devices. Video data in particular requires a huge amount of storage space, making storage devices more expensive; hence there is a need to reduce the storage space required. Video compression is a common research topic. In this work, the role of wavelets in video compression is studied. Temporally redundant data is converted to spatial data, which is then transformed into wavelet coefficients, and the low-frequency components are removed from these wavelet coefficients. The proposed method is tested on several video sequences. Its performance is analyzed by comparing it with recent existing methods and with the state-of-the-art H.265 video coding standard. The experimental results show that the proposed method achieves 3.8 dB higher PSNR than H.265 and 1.6 dB higher PSNR than recent wavelet-based video codecs.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47002928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"REMOVAL OF UNWANTED OBJECTS FROM IMAGES USING STATISTICS","authors":"S. Kulkarni","doi":"10.21917/ijivp.2018.0268","DOIUrl":"https://doi.org/10.21917/ijivp.2018.0268","url":null,"abstract":"Nowadays, with cheap digital cameras and the easy availability of camera-enabled smartphones, people take a lot of photos. Often, however, the photos contain unwanted objects that partially or completely obscure the subject, or whose presence spoils the quality of the photo in some way. Modern powerful image editors and in-painting algorithms are capable of removing these artifacts in post-processing to produce a convincing output image, but they often involve manual work, which makes them time-consuming, and they require either an expert user or complex algorithms. The algorithms and methods discussed in this paper aim to provide a much simpler approach to these problems using basic statistics. A comparative analysis of their efficiency is also provided.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46053133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"IMPROVED AUTOMATIC DETECTION OF GLAUCOMA USING CUP-TO-DISK RATIO AND HYBRID CLASSIFIERS","authors":"D. K. Prasad, L. Vibha, K. Venugopal","doi":"10.21917/ijivp.2018.0270","DOIUrl":"https://doi.org/10.21917/ijivp.2018.0270","url":null,"abstract":"Glaucoma is one of the most complicated disorders of the human eye, gradually causing permanent vision loss if not detected at an early stage. It can damage the optic nerve without any symptoms or warnings. Various automated glaucoma detection systems have been developed for early-stage analysis of glaucoma but lack good detection accuracy. This paper proposes a novel automated glaucoma detection system that effectively processes digital colour fundus images using hybrid classifiers. The proposed system uses both the Cup-to-Disk Ratio (CDR) and additional features to improve the accuracy of glaucoma detection. A Morphological Hough Transform Algorithm (MHTA) is designed for optic disc segmentation, and an intensity-based elliptic curve method is used to effectively separate the optic cup. Features are then extracted and the CDR value is estimated. Finally, classification is performed with a combination of the Naive Bayes and K-Nearest Neighbour (KNN) classifiers. The proposed system is evaluated on the High Resolution Fundus (HRF) database and outperforms earlier methods in the literature on various performance metrics.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44924968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PERFORMANCE EVALUATION AND COMPARATIVE ANALYSIS OF WATERMARKING ALGORITHM BASED ON ADAPTIVE PREDICTION METHOD","authors":"Chetna Sharma, Neeraj Jain","doi":"10.21917/ijivp.2018.0266","DOIUrl":"https://doi.org/10.21917/ijivp.2018.0266","url":null,"abstract":"","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43634583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BOUNDING BOX METHOD BASED ACCURATE VEHICLE NUMBER DETECTION AND RECOGNITION FOR HIGH SPEED APPLICATIONS","authors":"V. Baranidharan, K. Varadharajan","doi":"10.21917/ijivp.2018.0264","DOIUrl":"https://doi.org/10.21917/ijivp.2018.0264","url":null,"abstract":"License plate detection and recognition is one of the major applications of image processing techniques in intelligent transport systems. Detecting the exact location of the license plate in a vehicle image at very high speed is one of the most crucial steps for vehicle plate detection systems. This paper proposes an algorithm that detects the license plate region and processes edges both vertically and horizontally to improve system performance in high-speed applications. During detection and recognition, the original images are detected, filtered both vertically and horizontally, and thresholded based on the bounding box method. The whole system was tested on more than twenty-five cars with various Indian-style license plates under different weather conditions. The overall recognition accuracy is 93% in sunlight, 72% in cloudy conditions, and 71% in shaded conditions.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42459384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ANALYSIS OF SULPHUR CONTENT IN COPRA","authors":"A. Sagayaraj, G. Ramya, N. Dhanaraj","doi":"10.21917/ijivp.2018.0267","DOIUrl":"https://doi.org/10.21917/ijivp.2018.0267","url":null,"abstract":"Agriculture is the largest economic sector in India, and the coconut is one of the most in-demand fruits. Dried coconut, copra, is the main source of coconut oil: it naturally contains 70% moisture and is dried to about 7% for coconut oil production. Sulphur is added as a preservative that acts as an anti-microbial agent against bacteria, fungi, etc. However, sulphur is a toxic food preservative that restricts lung performance and leads to direct allergenic reactions. A World Health Organisation survey reports that 65% of asthmatic children are sensitive to sulphur and 75% of children exposed to sulphur exhibit changes in their behaviour. Sulphur fumigation of coconut affects humans both externally and internally, and can lead to cancer and environmental pollution. To prevent these devastating effects, copra is examined using image processing. The proposed idea is to identify the presence and percentage of the sulphur region in copra. The region of interest is segmented by a superimposition method, thereby segmenting the white layers in the copra. RGB colour features are extracted to differentiate sulphur-added copra from normal copra. The coconut is dried at 60°C in a tray drier, and the shapes of the copra, which decrease at regular intervals of time, are extracted using image processing; the decreasing percentage of shape features is measured to identify sulphur added to the copra. The k-means clustering technique is used to discriminate the copra at different levels, and the segmented patch area is measured to determine the percentage of sulphur present. The percentage of sulphur on the copra is divided into three levels (low, medium and high sulphur-added regions). K-Nearest Neighbour classification is also used to classify the sulphur-added copra at these different levels. The proposed algorithm classifies sulphur-added copra into the three levels with 86% accuracy.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44305394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"APPLICATION OF SVM AND SOFT FEATURES TO AZERBAIJANI TEXT RECOGNITION","authors":"E. Ismayilov","doi":"10.21917/ijivp.2018.0265","DOIUrl":"https://doi.org/10.21917/ijivp.2018.0265","url":null,"abstract":"The purpose of this study is to build a more accurate and less time-consuming system for Azerbaijani text recognition. The main problem in investigating and developing recognition systems is feature extraction, since most current recognition systems use features that are unintelligible to the human mind and are intended to be operated on by computers. To eliminate this problem, this paper offers \u201csoft\u201d features extracted on the basis of human-oriented techniques. To validate the SVM approach and the \u201csoft\u201d features proposed in this paper, experiments were executed using various feature classes proposed for Azerbaijani handprinted characters and different methods.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48086687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SELECTED SINGLE FACE TRACKING IN TECHNICALLY CHALLENGING DIFFERENT BACKGROUND VIDEO SEQUENCES USING COMBINED FEATURES","authors":"S. Ranganatha, Y. P. Gowramma, G. N. Karthik, A. Sharan","doi":"10.21917/ijivp.2018.0271","DOIUrl":"https://doi.org/10.21917/ijivp.2018.0271","url":null,"abstract":"The commonly identified limitations of video face trackers are the inability to track a human face in video sequences with different backgrounds under conditions such as occlusion, low quality and abrupt motion, and the failure to track a single face when the sequence contains multiple faces. In this paper, we propose a novel algorithm to track a human face in different background video sequences under the conditions listed above. The proposed algorithm describes an improved KLT tracker: we collect Eigen, FAST and HOG features, combine them together, and feed the combined features to the tracker to track the face. The proposed algorithm is tested on challenging dataset videos and its performance is measured using standard metrics.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43676962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}