{"title":"NEW HUMAN SEMEN ANALYSIS SYSTEM (CASA) USING MICROSCOPIC IMAGE PROCESSING TECHNIQUES","authors":"M. ChaudhariN, B. Pawar","doi":"10.21917/IJIVP.2016.0201","DOIUrl":"https://doi.org/10.21917/IJIVP.2016.0201","url":null,"abstract":"Computer assisted semen analysis (CASA) helps the pathologist or fertility specialist to evaluate the human semen. Detail analysis of spermatozoa like morphology and motility is very important in the process of intrauterine insemination (IUI) or In-vitro fertilization (IVF) in infertile couple. The main objective for this new semen analysis is to provide a low cost solution to the pathologist and gynecologist for the routine raw semen analysis, finding the concentration of the semen with dynamic background removal and classify the spermatozoa type (grade) according to the motility and structural abnormality as per the WHO criteria. In this paper a new system , computer assisted semen analysis system is proposed in which hybrid approach is used to identify the moving object, scan line algorithm is applied for confirmation of the objects having tails, so that we can count the actual number of spermatozoa. For removal of background initially the dynamic background generation algorithm is proposed to create a background for background subtraction stage. The standard data set is created with 40× and 100× magnification from the different raw semen s. For testing the efficiency of proposed algorithm, same frames are applied to the existing algorithm. Another module of the system is focused on finding the motility and Type classification of individual spermatozoa.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":"7 1","pages":"1381-1391"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68388727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SINGLE FRAME SUPER RESOLUTION OF NONCOOPERATIVE IRIS IMAGES","authors":"A. Deshpande, P. Patavardhan","doi":"10.21917/IJIVP.2016.0198","DOIUrl":"https://doi.org/10.21917/IJIVP.2016.0198","url":null,"abstract":"Image super-resolution, a process to enhance image resolution, has important applications in biometrics, satellite imaging, high definition television, medical imaging, etc. The long range captured iris identification systems often suffer from low resolution and meager focus of the captured iris images. These degrade the iris recognition performance. This paper proposes enhanced iterated back projection (EIBP) method to super resolute the long range captured iris polar images. The performance of proposed method is tested and analyzed on CASIA long range iris database by comparing peak signal to noise ratio (PSNR) and structural similarity index (SSIM) with state-of-the-art super resolution (SR) algorithms. It is further analyzed by increasing the up-sampling factor. Performance analysis shows that the proposed method is superior to state-of-the-art algorithms, the peak signal-tonoise ratio improved about 0.1-1.5 dB. The results demonstrate that the proposed method is well suited to super resolve the iris polar images captured at a long distance.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":"7 1","pages":"1362-1365"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68388655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AUTOMATED DETECTION OF MITOTIC FIGURES IN BREAST CANCER HISTOPATHOLOGY IMAGES USING GABOR FEATURES AND DEEP NEURAL NETWORKS","authors":"Maqlin Paramanandam, Robinson Thamburaj, J. Mammen","doi":"10.21917/IJIVP.2016.0200","DOIUrl":"https://doi.org/10.21917/IJIVP.2016.0200","url":null,"abstract":"The count of mitotic figures in Breast cancer histopathology slides is the most significant independent prognostic factor enabling determination of the proliferative activity of the tumor. In spite of the strict protocols followed, the mitotic counting activity suffers from subjectivity and considerable amount of observer variability despite being a laborious task. Interest in automated detection of mitotic figures has been rekindled with the advent of Whole Slide Scanners. Subsequently mitotic detection grand challenge contests have been held in recent years and several research methodologies developed by their participants. This paper proposes an efficient mitotic detection methodology for Hematoxylin and Eosin stained Breast cancer Histopathology Images using Gabor features and a Deep Belief NetworkDeep Neural Network architecture (DBN-DNN). The proposed method has been evaluated on breast histopathology images from the publicly available dataset from MITOS contest held at the ICPR 2012 conference. It contains 226 mitoses annotated on 35 HPFs by several pathologists and 15 testing HPFs, yielding an F-measure of 0.74. In addition the said methodology was also tested on 3 slides from the MITOSISATYPIA grand challenge held at the ICPR 2014 conference, an extension of MITOS containing 749 mitoses annotated on 1200 HPFs, by pathologists worldwide. This study has employed 3 slides (294 HPFs) from the MITOS-ATYPIA training dataset in its evaluation and the results showed F-measures 0.65, 0.72and 0.74 for each slide. The proposed method is fast and computationally simple yet its accuracy and specificity is comparable to the best winning methods of the aforementioned grand challenges.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":"7 1","pages":"1366-1372"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68388718","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ANALYSIS OF ABNORMALITIES IN COMMON CAROTID ARTERY IMAGES USING MULTIWAVELETS","authors":"R. Nandakumar, B. JayanthiK.","doi":"10.21917/IJIVP.2016.0195","DOIUrl":"https://doi.org/10.21917/IJIVP.2016.0195","url":null,"abstract":"According to the report given by World Health Organization, by 2030 almost 23.6 million people will die from cardiovascular diseases (CVD), mostly from heart disease and stroke. The main objective of this work is to develop a classifier for the diagnosis of abnormal Common Carotid Arteries (CCA). This paper proposes a new approach for the analysis of abnormalities in longitudinal B-mode ultrasound CCA images using multiwavelets. Analysis is done using HM and GHM multiwavelets at various levels of decomposition. Energy values of the coefficients of approximation, horizontal, vertical and diagonal details are calculated and plotted for different levels. Plots of energy values show high correlation with the abnormalities of CCA and offer the possibility of improved diagnosis of CVD. It is clear that the energy values can be used as an index of individual atherosclerosis and to develop a cost effective system for cardiovascular risk assessment at an early stage.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":"7 1","pages":"1345-1350"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68388573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CHARACTER RECOGNITION OF VIDEO SUBTITLES","authors":"Satish S Hiremath, K. Suresh","doi":"10.21917/IJIVP.2016.0196","DOIUrl":"https://doi.org/10.21917/IJIVP.2016.0196","url":null,"abstract":"An important task in content based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are variation of illumination on each video frame with text, the text present on the complex background and different font size of the text. Using various image processing algorithms like morphological operations, blob detection and histogram of oriented gradients the character recognition of video subtitles is implemented. Segmentation, feature extraction and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":"40 1","pages":"1351-1356"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68388586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A SURVEY OF RETINA BASED DISEASE IDENTIFICATION USING BLOOD VESSEL SEGMENTATION","authors":"P. Kuppusamy, B. Divya","doi":"10.21917/IJIVP.2016.0197","DOIUrl":"https://doi.org/10.21917/IJIVP.2016.0197","url":null,"abstract":"The colour retinal photography is one of the most essential features to identify the confirmation of various eye diseases. The iris is primary attribute to authenticate the human. This research work presents the survey and comparison of various blood vessel related feature identification, segmentation, extraction and enhancement methods. Additionally, this study is observed the various databases performance for storing the images and testing in minimal time. This paper is also provides the better performance techniques based on the survey.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":"7 1","pages":"1357-1361"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68388591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"REAL-TIME OBJECT DETECTION IN PARALLEL THROUGH ATOMIC TRANSACTIONS","authors":"Kavinayan Sivakumar, P. Shanmugapriya","doi":"10.21917/IJIVP.2016.0199","DOIUrl":"https://doi.org/10.21917/IJIVP.2016.0199","url":null,"abstract":"Object detection and tracking is important operation involved in embedded systems like video surveillance, Traffic monitoring, campus security system, machine vision applications and other areas. Detecting and tracking multiple objects in a video or image is challenging problem in machine vision and computer vision based embedded systems. Implementation of such an object detection and tracking systems are done in sequential way of processing and also it was implemented using hardware synthesize tools like verilog HDL with FPGA, achieves considerably lesser performance in speed and it does support lesser atomic transactions. There are many object detection and tracking algorithm were proposed and implemented, among them background subtraction is one of them. This paper proposes an implementation of detecting and tracking multiple objects based on background subtraction algorithm using java and .NET and also discuss about the architecture concept for object detection through atomic transactional, modern hardware synthesizes language called Bluespec.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":"7 1","pages":"1373-1380"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68388671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AN EFFICIENT SELF-UPDATING FACE RECOGNITION SYSTEM FOR PLASTIC SURGERY FACE","authors":"A. Devi, A. Marimuthu","doi":"10.21917/ijivp.2016.0191","DOIUrl":"https://doi.org/10.21917/ijivp.2016.0191","url":null,"abstract":"Facial recognition system is fundamental a computer application for the automatic identification of a person through a digitized image or a video source. The major cause for the overall poor performance is related to the transformations in appearance of the user based on the aspects akin to ageing, beard growth, sun-tan etc. In order to overcome the above drawback, Self-update process has been developed in which, the system learns the biometric attributes of the user every time the user interacts with the system and the information gets updated automatically. The procedures of Plastic surgery yield a skilled and endurable means of enhancing the facial appearance by means of correcting the anomalies in the feature and then treating the facial skin with the aim of getting a youthful look. When plastic surgery is performed on an individual, the features of the face undergo reconstruction either locally or globally. But, the changes which are introduced new by plastic surgery remain hard to get modeled by the available face recognition systems and they deteriorate the performances of the face recognition algorithm. Hence the Facial plastic surgery produces changes in the facial features to larger extent and thereby creates a significant challenge to the face recognition system. This work introduces a fresh Multimodal Biometric approach making use of novel approaches to boost the rate of recognition and security. The proposed method consists of various processes like Face segmentation using Active Appearance Model (AAM), Face Normalization using Kernel Density Estimate/Point Distribution Model (KDE-PDM), Feature extraction using Local Gabor XOR Patterns (LGXP) and Classification using Independent Component Analysis (ICA). Efficient techniques have been used in each phase of the FRAS in order to obtain improved results.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":"7 1","pages":"1307-1317"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68388232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comprehensive Study on Text Information Extraction from Natural Scene Images","authors":"Anit V. Manjaly, S. B","doi":"10.21917/ijivp.2016.0188","DOIUrl":"https://doi.org/10.21917/ijivp.2016.0188","url":null,"abstract":"In Text Information Extraction (TIE) process, the text regions are localized and extracted from the images. It is an active research problem in computer vision applications. Diversity in text is due to the differences in size, style, orientation, alignment of text, low image contrast and complex backgrounds. The semantic information provided by an image can be used in different applications such as content based image retrieval, sign board identification etc. Text information extraction comprises of text image classification, text detection, localization, segmentation, enhancement and recognition. This paper contains a quick review on various text localization methods for localizing texts from natural scene images.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":"7 1","pages":"1290-1294"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68388214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EVALUATION OF IMMUNOHISTOCHEMISTRY (IHC) MARKER HER2 IN BREAST CANCER","authors":"Prasanna G. Shete, Gajanan K. Kharate","doi":"10.21917/ijivp.2016.0192","DOIUrl":"https://doi.org/10.21917/ijivp.2016.0192","url":null,"abstract":"The paper discusses a novel approach involving algorithm implementation and hardware Devkit processing for estimating the extent of cancer in a breast tissue sample. The process aims at providing a reliable, repeatable, and fast method that could replace the traditional method of manual examination and estimation. Immunohistochemistry (IHC) and Fluorescence in situ Hybridization (FISH) are the two main methods used to detect the marker status in clinical practice. FISH is though more reliable than IHC, but IHC is widely used as it is cheaper, convenient to operate and conserve, the morphology is clear. The IHC markers are Estrogen receptor (ER, Progesterone receptor (PR), Human Epidermal Growth Factor (HER2) that give clear indications of the presence of cancer cells in the tissue sample. HER2 remains the most reliable marker for the detection of breast cancer. The Human Epidermal Growth Factor Receptor (HER2) markers are discussed in the paper, as it gives clear indications of the presence of cancer cells in the tissue sample. HER2 is identified based on the color and intensity of the cell membrane staining. The color and intensity is obviously based on the thresholding for classifying the cancerous cells into severity levels in terms of score to estimate the extent of spread of cancer in breast tissue. For HER2 evaluation, the percentage of staining is calculated in terms of ratio of stain pixel count to the total pixel count. The evaluation of HER2 is obtained through simulation software (MATLAB) using intensity based algorithm and same is run on embedded processor evaluation board Devkit 8500. The results are validated with doctors.","PeriodicalId":30615,"journal":{"name":"ICTACT Journal on Image and Video Processing","volume":"7 1","pages":"1318-1323"},"PeriodicalIF":0.0,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"68388407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}