{"title":"Towards Intelligent Reading through Multimodal and Contextualized Word LookUp","authors":"Swetha Govindu, Raviteja Vidya Guttula, Swati Kohli, Poonam Patil, Anagha Kulkarni, Ilmi Yoon","doi":"10.1109/ICMLA52953.2021.00203","DOIUrl":"https://doi.org/10.1109/ICMLA52953.2021.00203","url":null,"abstract":"This paper presents Koob, an eBook Reader app that coalesces three key ideas to enhance students’ language learning, specifically for ambiguous words. The first idea is to improve the effectiveness of word lookup functionality through contextualization – by incorporating word sense disambiguation (WSD) techniques to show the contextually relevant definition at the top. The second idea is to augment WSD results with crowd-sourcing solutions. The last idea seeks to reinforce students’ learning by augmenting textual information with a visual aid, pictures related to the word, as part of the word lookup functionality. An empirical evaluation demonstrates that existing WSD techniques can successfully employed to dynamically reorder definitions such that the most relevant definition is at the top of the list for more than 80% of the instances.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"10 1","pages":"1249-1252"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88640636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improve Learner-based Recommender System with Learner’s Mood in Online Learning Platform","authors":"Qing Tang, Marie-Hélène Abel, E. Negre","doi":"10.1109/ICMLA52953.2021.00271","DOIUrl":"https://doi.org/10.1109/ICMLA52953.2021.00271","url":null,"abstract":"Learning with huge amount of online educational resources is challenging, especially when variety resources come from different online systems. Recommender systems are used to help learners obtain appropriate resources efficiently in online learning. To improve the performance of recommender system, more and more learner’s attributes (e.g. learning style, learning ability, knowledge level, etc.) have been considered. We are committed to proposing a learner-based recommender system, not just consider learner’s physical features, but also learner’s mood while learning. This recommender system can make recommendations according to the links between learners, and can change the recommendation strategy as learner’s mood changes, which will have a certain improvement in recommendation accuracy and makes recommended results more reasonable and interpretable.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"46 1","pages":"1704-1709"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85686811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-Driven State of Charge Estimation of Li-ion Batteries using Supervised Machine Learning Methods","authors":"Yichun Li, Mina Maleki, Shadi Banitaan, Ming-Jie Chen","doi":"10.1109/ICMLA52953.2021.00144","DOIUrl":"https://doi.org/10.1109/ICMLA52953.2021.00144","url":null,"abstract":"Recently, electrical vehicles (EVs) have attracted considerable attention from researchers due to the transition of the transportation industry and the increasing demand in the clean energy domain. State of charge (SOC) of Li-ion batteries has a significant role in improving the efficiency, performance, and reliability of EVs. Estimating the SOC of the Li-ion battery cannot be done directly from inner measurements due to the complex and dynamic nature of these kinds of batteries. Several data-driven approaches have recently been used to estimate the SOC of Li-ion batteries, benefiting from the availability of battery data and hardware computing capacity. However, selecting the discriminative features and best supervised machine learning (ML) models for accurate battery states estimation is still challenging. Thus, this paper investigates the effect of different ML models and extracted input features of Li-ion batteries, including Electrochemical Impedance Spectroscopy (EIS) and multi-channel feature set on the SOC prediction. The results on the public Panasonic dataset indicate that using EIS feature set as an input to the deep neural network (DNN) model is more efficient than the multi-channel feature set. Moreover, the DNN model outperforms the Gaussian process regression (GPR) model in terms of the mean squared error, mean absolute error, and root mean squared error rates for the SOC prediction.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"1 1","pages":"873-878"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86230746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Purrai: A Deep Neural Network based Approach to Interpret Domestic Cat Language","authors":"Weilin Sun, V. Lu, Aaron Truong, Hermione Bossolina, Yuan Lu","doi":"10.1109/ICMLA52953.2021.00104","DOIUrl":"https://doi.org/10.1109/ICMLA52953.2021.00104","url":null,"abstract":"Being able to understand and communicate with domestic cats has always been fascinating to humans, although it is considered a difficult task even for phonetics experts. In this paper, we present our approach to this problem: Purrai, a neural-network-based machine learning platform to interpret cat’s language. Our framework consists of two parts. First, we build a comprehensively constructed cat voice dataset that is 3.7x larger than any existing public available dataset [1]. To improve accuracy, we also use several techniques to ensure labeling quality, including rule-based labeling, cross validation, cosine distance, and outlier detection, etc. Second, we design a two-stage neural network structure to interpret what cats express in the context of multiple sounds called sentences. The first stage is a modification of Google’s Vggish architecture [2] [3], which is a Convolutional Neural Network (CNN) architecture that focuses on the classification of nine primary cat sounds. The second stage takes the probability outputs of a sequence of sound classifications from the first stage and determines the emotional meaning of a cat sentence. Our first stage architecture generates a top-l and top-2 accuracy of 74.1% and 92.1%, better than that of the state-of-the-art approach: 64.9% and 83.4% [4]. Our sentence-based AI model achieves an accuracy of 81.1% for emotion prediction.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"11 1","pages":"622-627"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82592225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tiny Generative Image Compression for Bandwidth-Constrained Sensor Applications","authors":"Nikolai Körber, A. Siebert, S. Hauke, Daniel Mueller-Gritschneder","doi":"10.1109/ICMLA52953.2021.00094","DOIUrl":"https://doi.org/10.1109/ICMLA52953.2021.00094","url":null,"abstract":"Deep image compression algorithms based on Generative Adversarial Networks (GANs) are a promising direction to address the strict communication bandwidth limitations commonly encountered in IoT sensor networks (e.g. Low Power Wide Area Networks). However, current methods do not consider that the sensor nodes, which perform the image encoding, usually only offer very limited computation and memory capabilities, e.g. a resource-constrained tiny device such as a micro-controller. In this paper, we propose the first tiny generative image compression method specifically designed for image compression on micro-controllers. We base our encoder on the well-known MobileNetV2 network architecture, while keeping the decoder side fixed. To cope with the resulting asymmetric design of the compression pipeline, we investigate the impact of different training strategies (end-to-end, knowledge distillation) and integer quantization techniques (post-training, quantization-aware training) on the GAN-training stability. On the Cityscapes dataset, we achieve a compression performance that is very close to the state-of-the-art, while requiring 99% less SRAM size, 97% smaller flash storage and 87% less multiply-add operations. Our findings suggest that tiny generative image compression is particularly well suited for application-specific domains.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"84 1","pages":"564-569"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89856831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Attention Mechanism in GANs for Molecule Generation","authors":"S. Chinnareddy, Pranav Grandhi, Apurva Narayan","doi":"10.1109/ICMLA52953.2021.00017","DOIUrl":"https://doi.org/10.1109/ICMLA52953.2021.00017","url":null,"abstract":"In discrete sequence based Generative Adversarial Networks (GANs), it is important to both land the samples in the initial distribution and drive the generation towards desirable properties. However, in the case of longer molecules, the existing models seem to under-perform in producing new molecules. In this work, we propose the use of Self-Attention mechanism for Generative Adversarial Networks to allow long range dependencies. Self-Attention mechanism has produced improved rewards in novelty and promising results in generating molecules.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"64 1","pages":"57-60"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83747506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Con Connections: Detecting Fraud from Abstracts using Topological Data Analysis","authors":"Sarah Tymochko, Julien Chaput, T. Doster, Emilie Purvine, Jackson Warley, T. Emerson","doi":"10.1109/ICMLA52953.2021.00069","DOIUrl":"https://doi.org/10.1109/ICMLA52953.2021.00069","url":null,"abstract":"In this paper we present a novel approach for identifying fraudulent papers from their titles and abstracts. The premise of the approach is that there are holes in the presentation of the approach and findings of fraudulent research papers. As an abstract is intended to highlight key features of the approach as well as important conclusions the authors seek to determine if the assumed existence of holes can be identified from analysis of abstracts alone. The data set considered is derived from papers sharing a single author with labels determined based on a formal linguistic analysis of the complete documents. To detect these logical and literary holes we utilize techniques from topological data analysis which summarizes data based on the presence of multi-dimensional, topological holes. We find that, in fact, topological features derived through a combination of techniques in natural language processing and time-series analysis allow for superior detection of the fraudulent papers than the natural language processing tools alone. Thus we conclude that the connections and holes present in the abstracts of research cons contributes to an ability to infer the scientific validity of the corresponding work.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"92 1","pages":"403-408"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74474223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Guided-Generative Network for noise detection in Monte-Carlo rendering","authors":"Jérôme Buisine, F. Teytaud, S. Delepoulle, C. Renaud","doi":"10.1109/ICMLA52953.2021.00018","DOIUrl":"https://doi.org/10.1109/ICMLA52953.2021.00018","url":null,"abstract":"Estimating the features to be extracted from an image for classification tasks are sometimes difficult, especially if images are related to a particular kind of noise. The aim of this paper is to propose a neural network architecture named Guided-Generative Network (GGN) to extract refined information that allows to correctly quantify the noise present in a sliding window of images. GNN tends to find the desired features to address such a problem in order to emit a detection criterion of this noise. The proposed GGN is applied on photorealistic images which are rendered by Monte-Carlo methods by evaluating a large number of samples per pixel. An insufficient number of samples per pixel tends to result in residual noise which is very noticeable to humans. This noise can be reduced by increasing the number of samples, as proven by Monte-Carlo theory, but this involves considerable computational time. Finding the right number of samples needed for human observers to perceive no noise is still an open problem. The results obtained show that GGN can correctly solve the problem without prior knowledge of the noise while being competitive with existing methods.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"17 1","pages":"61-66"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79267064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptually Constrained Fast Adversarial Audio Attacks","authors":"Jason Henry, Mehmet Ergezer, M. Orescanin","doi":"10.1109/ICMLA52953.2021.00135","DOIUrl":"https://doi.org/10.1109/ICMLA52953.2021.00135","url":null,"abstract":"Audio adversarial attacks on deep learning models are of great interest given the commercial success and proliferation of these technologies. These types of attacks have been successfully demonstrated, however, artifacts introduced in the adversarial audio are easily detectable by a human observer. In this work, an expansion of the fast audio adversarial perturbation framework is proposed that can produce an adversarial attack that is imperceptible to a human observer in near-real time using black-box attacks. This is achieved by proposing a perceptually motivated penalty function. We propose a perceptual fast audio adversarial perturbation generator (PFAPG) that employs a loudness constrained loss function, in lieu of a conventional L-2 norm, between the adversarial example and original audio signal. We compare the performance of PFAPG against the conventional constraint based on the MSE on three audio recognition datasets: speaker recognition, speech command, and the Ryerson audiovisual database of emotional speech and song. Our results indicate that, on average, PFAPG equipped with the loudness-constrained loss function yields a 11% higher success rate, while reducing the undesirable distortion artifacts in adversarial audio by 10% dB compared to the prevalent MSE constraints.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"3 1","pages":"819-824"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75289722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Argue to Learn: Accelerated Argumentation-Based Learning","authors":"H. Ayoobi, M. Cao, R. Verbrugge, B. Verheij","doi":"10.1109/ICMLA52953.2021.00183","DOIUrl":"https://doi.org/10.1109/ICMLA52953.2021.00183","url":null,"abstract":"Human agents can acquire knowledge and learn through argumentation. Inspired by this fact, we propose a novel argumentation-based machine learning technique that can be used for online incremental learning scenarios. Existing methods for online incremental learning problems typically do not generalize well from just a few learning instances. Our previous argumentation-based online incremental learning method outperformed state-of-the-art methods in terms of accuracy and learning speed. However, it was neither memory-efficient nor computationally efficient since the algorithm used the power set of the feature values for updating the model. In this paper, we propose an accelerated version of the algorithm, with polynomial instead of exponential complexity, while achieving higher learning accuracy. The proposed method is at least $200times$ faster than the original argumentation-based learning method and is more memory-efficient.","PeriodicalId":6750,"journal":{"name":"2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)","volume":"54 1","pages":"1118-1123"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80865779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}