{"title":"In-between and cross-frequency dependence-based summarization of resting-state fMRI data","authors":"Maziar Yaesoubi, Rogers F. Silva, V. Calhoun","doi":"10.1109/SSIAI.2018.8470314","DOIUrl":"https://doi.org/10.1109/SSIAI.2018.8470314","url":null,"abstract":"Various data summarization approaches which consist of basis transformation and dimension reduction have been commonly used for information retrieval from brain imaging data including functional magnetic resonance imaging (fMRI). However, most approaches do not include frequency variation of the temporal data in the basis transformation. Here we propose a novel approach to incorporate in-between and cross-frequency dependence for summarization of resting-state fMRI data.","PeriodicalId":422209,"journal":{"name":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129488251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Drive-Net: Convolutional Network for Driver Distraction Detection","authors":"Mohammed S. Majdi, Sundaresh Ram, Jonathan T. Gill, Jeffrey J. Rodríguez","doi":"10.1109/SSIAI.2018.8470309","DOIUrl":"https://doi.org/10.1109/SSIAI.2018.8470309","url":null,"abstract":"To help prevent motor vehicle accidents, there has been significant interest in finding an automated method to recognize signs of driver distraction, such as talking to passengers, fixing hair and makeup, eating and drinking, and using a mobile phone. In this paper, we present an automated supervised learning method called Drive-Net for driver distraction detection. Drive-Net uses a combination of a convolutional neural network (CNN) and a random decision forest for classifying images of a driver. We compare the performance of our proposed Drive-Net to two other popular machine-learning approaches: a recurrent neural network (RNN), and a multi-layer perceptron (MLP). We test the methods on a publicly available database of images acquired under a controlled environment containing about 22425 images manually annotated by an expert. Results show that Drive-Net achieves a detection accuracy of 95%, which is 2% more than the best results obtained on the same database using other methods.","PeriodicalId":422209,"journal":{"name":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125236171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graph Modularity and Randomness Measures : A Comparative Study","authors":"V. Vergara, Qingbao Yu, V. Calhoun","doi":"10.1109/SSIAI.2018.8470322","DOIUrl":"https://doi.org/10.1109/SSIAI.2018.8470322","url":null,"abstract":"The human brain connectome exhibits a specific structure diagram that is understood to not be wired for randomness. However, aberrant connectivity has been detected and moreover linked to multiple neuropsychiatric and neurological diseases. Graph theory has provided a set of methods to evaluate disruption of brain structure organization. An alternative approach evaluates the difference between brain connectivity matrices and random matrices aiming at assessing randomness. This work compares both approaches within the context of random connectivity. Results indicate the correlation between the two assessments depends on the degree and can be as high as 0.3. Consequently, the two concepts can be treated as complementary, but addressing different aspects of randomness.","PeriodicalId":422209,"journal":{"name":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121735125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Ground-Truth Fusion Method for Image Segmentation Evaluation","authors":"Sree Ramya S. P. Malladi, Sundaresh Ram, Jeffrey J. Rodríguez","doi":"10.1109/SSIAI.2018.8470317","DOIUrl":"https://doi.org/10.1109/SSIAI.2018.8470317","url":null,"abstract":"Image segmentation evaluation is popularly categorized into two different approaches based on whether the evaluation uses a human expert’s manual segmentation as a reference or not. When comparing automated segmentation against manual segmentation, also referred to as the ground-truth segmentation, multiple ground-truths are usually available. Much research has been done on analysis of segmentation algorithms and performance metrics, but very little study has been done on analyzing techniques for ground-truth fusion from multiple ground-truth segmentations. We propose a hybrid ground-truth fusion technique for image segmentation evaluation and compare it with other existing ground-truth fusion methods on a data set having multiple ground-truths at various coarseness levels. Qualitative and quantitative results show that the proposed method provides improved segmentation evaluation performance.","PeriodicalId":422209,"journal":{"name":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130281860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SHAPE ADAPTIVE ACCELERATED PARAMETER OPTIMIZATION","authors":"A. Yezzi, N. Dahiya","doi":"10.1109/SSIAI.2018.8470380","DOIUrl":"https://doi.org/10.1109/SSIAI.2018.8470380","url":null,"abstract":"Computer vision based localization and pose estimation of known objects within camera images is often approached by optimizing some sort of fitting cost with respect to a small number of parameters including both pose parameters as well as additional parameters which describe a limited set of variations of the object shape learned through training. Gradient descent based searches are typically employed but the problem of how to \"weigh\" the gradient components arises and can often impact successful localization. This paper describes an automated, shape-adaptive way to choose the parameter weighting dynamically during the fitting process applicable to both standard gradient descent or momentum based accelerated gradient descent approaches.","PeriodicalId":422209,"journal":{"name":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124413826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conjointly Space and 2D Frequency Localized Filterbanks","authors":"P. Tay, Yanjun Yan","doi":"10.1109/SSIAI.2018.8470386","DOIUrl":"https://doi.org/10.1109/SSIAI.2018.8470386","url":null,"abstract":"This paper proposes a conjointly space-frequency well localized separable 2D filters. The separable 2D filterbanks constitute a perfect or near perfect reconstruction system. The novel space-frequency localization measure to determine optimality is the product of a filter’s 2D variance in space and 2D frequency variance. The particle swarm optimization method is efficiently applied to determine perfect or near perfect reconstruction optimal 2D filterbanks.","PeriodicalId":422209,"journal":{"name":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126837386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A NOVEL SEMI-SUPERVISED DETECTION APPROACH WITH WEAK ANNOTATION","authors":"Eric K. Tokuda, Gabriel B. A. Ferreira, Cláudio T. Silva, R. M. C. Junior","doi":"10.1109/SSIAI.2018.8470307","DOIUrl":"https://doi.org/10.1109/SSIAI.2018.8470307","url":null,"abstract":"In this work we propose a semi-supervised learning approach for object detection where we use detections from a preexisting detector to train a new detector. We differ from previous works by coming up with a relative quality metric which involves simpler labeling and by proposing a full framework of automatic generation of improved detectors. To validate our method, we collected a comprehensive dataset of more than two thousand hours of streaming from public traffic cameras that contemplates variations in time, location and weather. We used these data to generate and assess with weak labeling a car detector that outperforms popular detectors on hard situations such as rainy weather and low resolution images. Experimental results are reported, thus corroborating the relevance of the proposed approach.","PeriodicalId":422209,"journal":{"name":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133373325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artifact Detection Maps Learned using Shallow Convolutional Networks","authors":"T. Goodall, A. Bovik","doi":"10.1109/SSIAI.2018.8470369","DOIUrl":"https://doi.org/10.1109/SSIAI.2018.8470369","url":null,"abstract":"Automatically identifying the locations and severities of video artifacts is a difficult problem. We have developed a general method for detecting local artifacts by learning differences between distorted and pristine video frames. Our model, which we call the Video Impairment Mapper (VID-MAP), produces a full resolution map of artifact detection probabilities based on comparisons of exitatory and inhibatory convolutional responses. Validation on a large database shows that our method outperforms the previous state-of-the-art. A software release of VID-MAP that was trained to produce upscaling and combing detection probability maps is available online: http://live.ece.utexas.edu/research/quality/VIDMAP release.zip for public use and evaluation.","PeriodicalId":422209,"journal":{"name":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127049525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Natural Scene Statistics for Noise Estimation","authors":"Praful Gupta, C. Bampis, Yize Jin, A. Bovik","doi":"10.1109/SSIAI.2018.8470313","DOIUrl":"https://doi.org/10.1109/SSIAI.2018.8470313","url":null,"abstract":"We investigate the scale-invariant properties of divisively normalized bandpass responses of natural images in the DCT-filtered domain. We found that the variance of the normalized DCT filtered responses of a pristine natural image is scale invariant. This scale invariance property does not hold in the presence of noise and thus it can be used to devise an efficient blind image noise estimator. The proposed noise estimation approach outperforms other statistics-based methods especially for higher noise levels and competes well with patch-based and filter-based approaches. Moreover, the new variance estimation approach is also effective in the case of non-Gaussian noise. The research code of the proposed algorithm can be found at https://github.com/guptapraful/Noise Estimation.","PeriodicalId":422209,"journal":{"name":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131601541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust Head Detection in Collaborative Learning Environments Using AM-FM Representations","authors":"Wenjing Shi, M. Pattichis, Sylvia Celedón-Pattichis, Carlos A. LópezLeiva","doi":"10.1109/SSIAI.2018.8470355","DOIUrl":"https://doi.org/10.1109/SSIAI.2018.8470355","url":null,"abstract":"The paper introduces the problem of robust head detection in collaborative learning environments. In such environments, the camera remains fixed while the students are allowed to sit at different parts of a table. Example challenges include the fact that students may be facing away from the camera or exposing different parts of their face to the camera. To address these issues, the paper proposes the development of two new methods based on Amplitude Modulation-Frequency Modulation (AM-FM) models. First, a combined approach based on color and FM texture is developed for robust face detection. Secondly, a combined approach based on processing the AM and FM components is developed for robust, back of the head detection. The results of the two approaches are also combined to detect all of the students sitting at each table. The robust face detector achieved 79% accuracy on a set of 1000 face image examples. The back of the head detector achieved 91% accuracy on a set of 363 test image examples.","PeriodicalId":422209,"journal":{"name":"2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122432543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}