{"title":"Weakly Supervised Deep Learning for Detecting and Counting Dead Cells in Microscopy Images","authors":"Siteng Chen, Ao Li, Kathleen Lasick, Julie M. Huynh, Linda S. Powers, Janet Roveda, A. Paek","doi":"10.1109/ICMLA.2019.00282","DOIUrl":"https://doi.org/10.1109/ICMLA.2019.00282","url":null,"abstract":"Counting dead cells is a key step in evaluating the performance of chemotherapy treatment and drug screening. Deep convolutional neural networks (CNNs) can learn complex visual features, but require massive ground truth annotations which is expensive in biomedical experiments. Counting cells, especially dead cells, with very few ground truth annotations remains unexplored. In this paper, we automate dead cell counting using a weakly supervised strategy. We took advantage of the fact that cell death is low before chemotherapy treatment and increases after treatment. Motivated by the contrast, we first design image level supervised only classification neural networks to detect dead cells. Based on the class response map in classification networks, we calculate a Dead Confidence Map (DCM) to specify confidence of each dead cell. Associated with peak clustering, local maximums in the DCM are used to count the number of dead cells. In addition, a biological experiment based weakly supervised data preparation strategy is proposed to minimize human intervention. We show classification performance compared to general purpose and cell classification networks, and report results for the image-level supervised counting task.","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114998904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusing Visual and Textual Information to Determine Content Safety","authors":"Rodrigo Leonardo, Amber Hu, M. Uzair, Qiujing Lu, Iris Fu, Keishin Nishiyama, Sooraj Mangalath Subrahmannian, D. Ravichandran","doi":"10.1109/ICMLA.2019.00324","DOIUrl":"https://doi.org/10.1109/ICMLA.2019.00324","url":null,"abstract":"In advertising, identifying the content safety of web pages is a significant concern since advertisers do not want brands to be associated with threatening content. At the same time, publishers would like to maximize the number of web pages on which they can place ads. Thus, a fine balance must be achieved while classifying content safety in order to satisfy both advertisers and publishers. In this paper, we propose a multimodal machine learning framework that fuses visual and textual information from web pages to improve current predictions of content safety. The primary focus is on late fusion, which involves combining final model outputs of separate modalities, such as images and text, to arrive at a single decision. This paper presents a fully automated machine learning framework that performs binary and multilabel classification using late fusion techniques. We also introduce additional work in early fusion, which involves extracting and fusing intermediate features from the two separate models. Our algorithms are applied to data extracted from relevant web pages in the advertising industry. Both of our late and early fusion methods obtain significant improvements over algorithms currently in use.","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"410 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115231710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiscale Geometric Data Analysis via Laplacian Eigenvector Cascading","authors":"Joshua L. Mike, Jose A. Perea","doi":"10.1109/ICMLA.2019.00183","DOIUrl":"https://doi.org/10.1109/ICMLA.2019.00183","url":null,"abstract":"We develop here an algorithmic framework for constructing consistent multiscale Laplacian eigenfunctions (vectors) on data. Consequently, we address the unsupervised machine learning task of finding scalar functions capturing consistent structure across scales in data, in a way that encodes intrinsic geometric and topological features. This is accomplished by two algorithms for eigenvector cascading. We show via examples that cascading accelerates the computation of graph Laplacian eigenvectors, and more importantly, that one obtains consistent bases of the associated eigenspaces across scales. Finally, we present an application to TDA mapper, showing that our multiscale Laplacian eigenvectors identify stable flair-like structures in mapper graphs of varying granularity.","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115344320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Evolutionary Architecture Search for CNN Optimization on GTSRB","authors":"Fabio Marco Johner, J. Wassner","doi":"10.1109/ICMLA.2019.00018","DOIUrl":"https://doi.org/10.1109/ICMLA.2019.00018","url":null,"abstract":"Neural network inference on embedded devices has to meet accuracy and latency requirements under tight resource constraints. The design of suitable network architectures is a challenging and time-consuming task. Therefore, automatic discovery and optimization of neural networks is considered important for continuing the trend of moving classification tasks from cloud to edge computing. This paper presents an evolutionary method to optimize a convolutional neural network (CNN) architecture for classification tasks. The method runs efficiently on a single GPU-workstation and provides simple means to direct the tradeoff between complexity and accuracy of the evolved network. Using this method, we achieved a 11x reduction in the number of multiply-accumulate (MAC) operations of the winning network for the German Traffic Sign Recognition Benchmark (GTSRB) without accuracy reduction. An ensemble of four of our evolved networks competes the winning ensemble with a 0.1% lower accuracy but 70x reduction in MACs and 14x reduction in parameters.","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124390439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fuzzy-Rough Cognitive Networks: Building Blocks and Their Contribution to Performance","authors":"M. Vanloffelt, G. Nápoles, K. Vanhoof","doi":"10.1109/ICMLA.2019.00159","DOIUrl":"https://doi.org/10.1109/ICMLA.2019.00159","url":null,"abstract":"Pattern classification is a popular research field within the Machine Learning discipline. Black-box models have proven to be potent classifiers in this particular field. However, their inability to provide a transparent decision mechanism is often regarded as an undesirable feature. Fuzzy-Rough Cognitive Networks are granular classifiers that have proven competitive and effective in such tasks. In this paper, we examine the contribution of the FRCN's main building blocks, being the causal weight matrix and the activation values of the neurons, to the model's average performance. Noise injection is employed to this end. Our findings suggest that optimising the weight matrix might not be as beneficial to the model's performance as suggested in previous research. Furthermore, we found that a powerful activation of the neurons included in the model topology is crucial to performance, as expected. Further research should as such focus on finding more powerful ways to activate these neurons, rather than focus on optimising the causal weight matrix.","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116750770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Analysis of Univariate and Multivariate Electrocardiography Signal Classification","authors":"Nelly Elsayed, A. Maida, M. Bayoumi","doi":"10.1109/ICMLA.2019.00074","DOIUrl":"https://doi.org/10.1109/ICMLA.2019.00074","url":null,"abstract":"Heart diseases are mainly diagnosed by the electrocardiogram (ECG) or (EKG). The correct classification of ECG signals helps in diagnosing heart diseases. In this paper, we study and analyze the univariate and multivariate ECG signal classification problems to find the optimal classifier for ECG signals from existing state-of-the-art time series classification models.","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117108293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MobileNet-Tiny: A Deep Neural Network-Based Real-Time Object Detection for Rasberry Pi","authors":"Nithesh Singh Sanjay, A. Ahmadinia","doi":"10.1109/ICMLA.2019.00118","DOIUrl":"https://doi.org/10.1109/ICMLA.2019.00118","url":null,"abstract":"In this paper, we present a new neural network architecture, MobileNet-Tiny that can be used to harness the power of GPU based real-time object detection in raspberry-pi and also in devices with the absence of a GPU and limited graphic processing capabilities such as mobile phones, laptops, etc. MobileNet-Tiny trained on COCO dataset running on a non-Gpu laptop dell XPS 13, achieves an accuracy of 19.0 mAP and a speed of 19.4 FPS which is 3 times as fast as MobileNetV2, and when running on a raspberry pi, it achieves a speed of 4.5 FPS which is up to 7 times faster than MobileNetV2. MobileNet-Tiny was modeled to offer a compact, quick, and well-balanced object detection solution to a variety of GPU restricted devices.","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127477763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Radar-Based Non-intrusive Fall Motion Recognition using Deformable Convolutional Neural Network","authors":"Y. Shankar, Souvik Hazra, Avik Santra","doi":"10.1109/ICMLA.2019.00279","DOIUrl":"https://doi.org/10.1109/ICMLA.2019.00279","url":null,"abstract":"Radar is an attractive sensing technology for remote and non-intrusive human health monitoring and elderly fall detection due to its ability to work in low lighting conditions, its invariance to the environment, and its ability to operate through obstacles. Radar reflections from humans produce unique micro-Doppler signatures that can be used for classifying human activities and fall motion. However, radar-based elderly fall detection need to handle the indistinctive inter-class differences and large intra-class variations of human fall-motion in a real-world situation. Further, the radar placement in the room and varying aspect angle of the falling subject could result in differing radar micro-Doppler signature of human fall-motion. In this paper, we use a compact short-range 60-GHz frequency modulated continuous wave radar for detecting human fall motion using a novel deformable deep convolutional neural network with novel 1-class contrastive loss function in conjunction to focus loss to recognize elderly fall and address several of these signal processing system challenges. We demonstrate the performance of our proposed system in laboratory conditions under staged fall motion.","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125894594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Fast and Light Weight Deep Convolution Neural Network Model for Cancer Disease Identification in Human Lung(s)","authors":"Siva Skandha Sanagala, S. Gupta, V. K. Koppula, M. Agarwal","doi":"10.1109/ICMLA.2019.00225","DOIUrl":"https://doi.org/10.1109/ICMLA.2019.00225","url":null,"abstract":"In the proposed work, a convolution neural network (CNN) based model has been used to identify the cancer disease in human lung(s). Moreover, this approach identifies the single or multi-module in lungs by analyzing the Computer Tomography (CT) scan. For the purpose of the experiment, publicly available dataset named as Early Lung Cancer Action Program (ELCAP) has been used. Moreover, the performance of proposed CNN model has been compared with traditional machine learning approaches i.e. support vector machine, k-NN, Decision Tree, Random Forest, etc under various parameters i.e. accuracy, precision, recall, Cohen Kappa. The performance of proposed model is also compared with famous CNN models i.e. VGG16, Inception V3 in terms of accuracy, storage space and inference time. The experimental results show the efficacy of proposed algorithms over traditional machine learning and pre-trained models by achieving the accuracy of 99.5%","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125509735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Encoding in Neural Networks","authors":"S. Bharitkar","doi":"10.1109/ICMLA.2019.00065","DOIUrl":"https://doi.org/10.1109/ICMLA.2019.00065","url":null,"abstract":"Data transforms, parameter re-normalization, and activation functions have gained significant attention in the neural network community over the past several years for improving convergence speed. The results in the literature are for computer vision applications, with batch-normalization (BN) and the Rectified Linear Unit (ReLU) activation attracting attention. In this paper, we present a new approach in data transformation in the context of regression during the synthesis of Head-related Transfer Functions (HRTFs) in the field of audio. The encoding technique whitens the real-valued input data delivered to the first hidden layer of a fully-connected neural network (FCNN) thereby providing the training speedup. The experimental results demonstrate, in a statistically significant way, that the presented data encoding approach outperforms other forms of normalization in terms of convergence speed, lower mean-square error, and robustness to network parameter initialization. Towards this, we used some popular first-and second-order gradient techniques such as scaled conjugate gradient, Extreme Learning Machine (ELM), and stochastic gradient descent with momentum and batch normalization. The improvements, as shown through t-SNE based depiction and analysis on the input covariance matrix, confirm the reduction in the condition number of the input covariance matrix (a process similar to whitening).","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116124697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}