{"title":"Mobile Based Continuous Authentication Using Deep Features","authors":"Mario Parreño Centeno, Yu Guan, A. Moorsel","doi":"10.1145/3212725.3212732","DOIUrl":"https://doi.org/10.1145/3212725.3212732","url":null,"abstract":"Continuous authentication is a promising approach to validate the user's identity during a work session, e.g., for mobile banking applications. Recently, it has been demonstrated that changes in the motion patterns of the user may help to detect the unauthorised use of mobile devices. Several approaches have been proposed in this area, but with relatively weak performance results. In this work, we propose an approach which uses a Siamese convolutional neural network to learn the signatures of users' motion patterns and achieves a competitive verification accuracy of up to 97.8%. We also find that our algorithm is not very sensitive to the sampling frequency or the length of the sequence.","PeriodicalId":419019,"journal":{"name":"Proceedings of the 2nd International Workshop on Embedded and Mobile Deep Learning","volume":"66 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131550514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
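The verification step of a Siamese approach like the one above can be sketched in a few lines. This is a toy illustration, not the paper's method: `embed` is a hypothetical stand-in for the learned CNN branch (here just summary statistics of a motion sequence), and the distance threshold is arbitrary.

```python
import math

def embed(seq):
    """Toy stand-in for the learned Siamese branch: summary statistics
    (mean, variance, mean absolute step) of a motion sequence."""
    n = len(seq)
    mean = sum(seq) / n
    var = sum((x - mean) ** 2 for x in seq) / n
    step = sum(abs(b - a) for a, b in zip(seq, seq[1:])) / (n - 1)
    return (mean, var, step)

def verify(enrolled_seq, probe_seq, threshold=0.5):
    """Accept the probe as the enrolled user if the two embeddings
    are close in Euclidean distance."""
    return math.dist(embed(enrolled_seq), embed(probe_seq)) <= threshold

# Example: a steady gait-like pattern vs. itself and vs. a jittery imposter.
user = [0.1, 0.2, 0.1, 0.2, 0.1, 0.2]
imposter = [0.9, 0.1, 0.8, 0.0, 0.9, 0.1]
```

In the real system both branches share the learned CNN weights; only the distance-and-threshold decision rule is sketched faithfully here.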
{"title":"Proceedings of the 2nd International Workshop on Embedded and Mobile Deep Learning","authors":"Hongkai Wen, Petko Georgiev, Erran L. Li, Samir Kumar, A. Balasubramanian, Youngki Lee","doi":"10.1145/3212725","DOIUrl":"https://doi.org/10.1145/3212725","url":null,"abstract":"In recent years, breakthroughs from the field of deep learning have transformed how sensor data (e.g., images, audio, and even accelerometers and GPS) can be interpreted to extract the high-level information needed by bleeding-edge sensor-driven systems like smartphone apps and wearable devices. Today, the state-of-the-art computational models that, for example, recognize a face, track user emotions, or monitor physical activities are increasingly based on deep learning principles and algorithms. Unfortunately, deep models typically exert severe demands on local device resources, and this conventionally limits their adoption within mobile and embedded platforms. As a result, in far too many cases existing systems process sensor data with machine learning methods that were superseded by deep learning years ago.","PeriodicalId":419019,"journal":{"name":"Proceedings of the 2nd International Workshop on Embedded and Mobile Deep Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121057996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On-the-fly deterministic binary filters for memory efficient keyword spotting applications on embedded devices","authors":"J. Fernández-Marqués, V. W. Tseng, S. Bhattacharya, N. Lane","doi":"10.1145/3212725.3212731","DOIUrl":"https://doi.org/10.1145/3212725.3212731","url":null,"abstract":"Lightweight keyword spotting (KWS) applications are often used to trigger the execution of more complex speech recognition algorithms that are computationally demanding and therefore cannot run constantly on the device. KWS applications are often executed on small microcontrollers with very constrained memory (e.g. 128kB) and compute capabilities (e.g. a CPU at 80MHz), limiting the complexity of deployable KWS systems. We present a compact binary architecture with 60% fewer parameters and 50% fewer operations (OPs) during inference compared to the current state of the art for KWS applications, at the cost of a 3.4% accuracy drop. It uses binary orthogonal codes to analyse speech features from a voice command, resulting in a model with a minimal memory footprint that is computationally cheap, making it deployable on very resource-constrained microcontrollers with less than 30kB of on-chip memory. Our technique offers a different perspective on how filters in neural networks can be constructed at inference time instead of being loaded directly from disk.","PeriodicalId":419019,"journal":{"name":"Proceedings of the 2nd International Workshop on Embedded and Mobile Deep Learning","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134348061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
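Binary orthogonal codes of the kind the abstract mentions can be generated on the fly rather than stored. A minimal sketch, assuming Sylvester-construction Hadamard codes, a common family of ±1 orthogonal codes; the paper's exact construction may differ:

```python
def hadamard(n):
    """Sylvester construction: n must be a power of two. Rows are
    mutually orthogonal +/-1 codes, generated on the fly."""
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    H = [[1]]
    while len(H) < n:
        # H_{2k} = [[H_k, H_k], [H_k, -H_k]]
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def filter_bank(num_filters, length):
    """Derive binary filters from orthogonal codes instead of storing
    trained weights: only each code's index needs to be kept on-chip."""
    return hadamard(length)[:num_filters]

def apply_filter(code, feature):
    """Correlate a +/-1 code with a feature vector: additions and
    subtractions only, no multiplies needed on a microcontroller."""
    return sum(c * f for c, f in zip(code, feature))
```

For example, `apply_filter(filter_bank(2, 4)[1], features)` reconstructs filter 1 from scratch at inference time, which is the memory/compute trade-off the abstract describes.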
{"title":"Neural Network Syntax Analyzer for Embedded Standardized Deep Learning","authors":"Myungjae Shin, Joongheon Kim, Aziz Mohaisen, Jaebok Park, KyungHee Lee","doi":"10.1145/3212725.3212727","DOIUrl":"https://doi.org/10.1145/3212725.3212727","url":null,"abstract":"Deep learning frameworks based on the neural network model have recently attracted a lot of attention for their potential in various applications. Accordingly, recent developments in deep learning configuration platforms have led to renewed interest in a neural network unified format (NNUF) for standardized deep learning computation. Building an NNUF is quite challenging because the primarily used platforms change over time and the structures of deep learning computation models are continuously evolving. This paper presents the design and implementation of a parser of NNUF for standardized deep learning computation. We refer to the platform implemented with the Neural Network Exchange Format (NNEF) standard as the NNUF. This framework provides platform-independent processes for configuring and training deep learning neural networks, where the independence is offered by the NNUF model. This model allows us to configure all components of neural network graphs. Our framework also allows the resulting graph to be easily shared with other platform-dependent descriptions, which configure various neural network architectures in their own ways. This paper presents the details of the parser design, its JavaCC-based implementation, and initial results.","PeriodicalId":419019,"journal":{"name":"Proceedings of the 2nd International Workshop on Embedded and Mobile Deep Learning","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133781701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
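The assignment form at the heart of a graph description like NNEF's can be handled with a tiny tokenizer plus a structural check. This is a toy sketch of the parsing idea only, with a made-up mini-grammar; the paper's JavaCC parser covers the full standard syntax:

```python
import re

# Toy assignment grammar, loosely inspired by NNEF-style graph text:
#   assignment := IDENT '=' IDENT '(' IDENT (',' IDENT)* ')'
TOKEN = re.compile(r"\s*([A-Za-z_]\w*|\d+|[=(),])")

def tokenize(text):
    pos, tokens = 0, []
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if not m:
            raise SyntaxError("bad character at position %d" % pos)
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def parse_assignment(text):
    """Parse `out = op(arg, ...)` into (out, op, [args])."""
    toks = tokenize(text)
    if len(toks) < 5 or toks[1] != "=" or toks[3] != "(" or toks[-1] != ")":
        raise SyntaxError("expected IDENT = IDENT ( args )")
    args = [t for t in toks[4:-1] if t != ","]
    return toks[0], toks[2], args
```

For instance, `parse_assignment("conv1 = conv(input, weights, 2)")` yields the triple a later code generator could map onto any platform-dependent description.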
{"title":"Efficiently Combining SVD, Pruning, Clustering and Retraining for Enhanced Neural Network Compression","authors":"Koen Goetschalckx, Bert Moons, P. Wambacq, M. Verhelst","doi":"10.1145/3212725.3212733","DOIUrl":"https://doi.org/10.1145/3212725.3212733","url":null,"abstract":"","PeriodicalId":419019,"journal":{"name":"Proceedings of the 2nd International Workshop on Embedded and Mobile Deep Learning","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115507203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
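The record above carries no abstract, but the title names the pipeline. A toy pure-Python sketch of two of its stages, magnitude pruning and weight clustering, with assumed thresholds and centroids; the SVD and retraining stages need a numerical/training stack and are omitted:

```python
def prune(weights, threshold):
    """Magnitude pruning: zero out weights with small absolute value."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def cluster(weights, centroids):
    """Weight sharing: snap each surviving weight to its nearest
    centroid, so only the centroid table plus small per-weight
    indices need to be stored."""
    def nearest(w):
        return min(centroids, key=lambda c: abs(c - w))
    return [nearest(w) if w != 0.0 else 0.0 for w in weights]

# Hypothetical layer weights, threshold, and centroid table.
w = [0.91, -0.02, 0.48, -0.55, 0.03, 1.02]
pruned = prune(w, 0.1)
shared = cluster(pruned, [-0.5, 0.5, 1.0])
```

After both steps the six weights are described by two zeros plus indices into a three-entry table, which is the kind of compounding the title's "combining" refers to; retraining would then recover accuracy lost to the approximation.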
{"title":"D-Pruner: Filter-Based Pruning Method for Deep Convolutional Neural Network","authors":"Huynh Nguyen Loc, Youngki Lee, R. Balan","doi":"10.1145/3212725.3212730","DOIUrl":"https://doi.org/10.1145/3212725.3212730","url":null,"abstract":"The emergence of augmented reality devices such as Google Glass and Microsoft Hololens has opened up a new class of vision sensing applications. Those applications often require the ability to continuously capture and analyze contextual information from video streams. They often adopt various deep learning algorithms such as convolutional neural networks (CNNs) to achieve high recognition accuracy, while facing severe challenges in running computationally intensive deep learning algorithms on resource-constrained mobile devices. In this paper, we propose and explore a new class of compression technique called D-Pruner to efficiently prune redundant parameters within a CNN model so that it runs efficiently on mobile devices. D-Pruner removes redundancy by embedding a small additional network. This network evaluates the importance of filters and removes them during the fine-tuning phase to efficiently reduce the size of the model while maintaining the accuracy of the original model. We evaluated D-Pruner on datasets such as CIFAR-10 and CIFAR-100 and showed that it can reduce the number of parameters by up to 4.4 times on many existing models while keeping the accuracy drop below 1%.","PeriodicalId":419019,"journal":{"name":"Proceedings of the 2nd International Workshop on Embedded and Mobile Deep Learning","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116555145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
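The removal step that follows filter scoring can be pictured as a top-k selection. The importance scores below are hypothetical stand-ins for the values the paper learns with its embedded auxiliary network:

```python
def prune_filters(filters, importance, keep_ratio):
    """Keep the most important fraction of a layer's filters, in their
    original order; D-Pruner learns the importance scores with a small
    auxiliary network during fine-tuning (stubbed out here)."""
    k = max(1, int(len(filters) * keep_ratio))
    ranked = sorted(range(len(filters)), key=lambda i: importance[i], reverse=True)
    keep = sorted(ranked[:k])
    return [filters[i] for i in keep]

# Four filters of a hypothetical conv layer with made-up scores;
# keeping the top half drops f0 and f2.
conv_filters = ["f0", "f1", "f2", "f3"]
scores = [0.10, 0.80, 0.05, 0.60]
```

Pruning whole filters (rather than individual weights) is what makes the resulting model smaller and faster without needing sparse-matrix support on the device.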
{"title":"QualityDeepSense","authors":"Shuochao Yao, Yiran Zhao, Shaohan Hu, T. Abdelzaher","doi":"10.1145/3212725.3212729","DOIUrl":"https://doi.org/10.1145/3212725.3212729","url":null,"abstract":"Deep neural networks are becoming increasingly popular in mobile sensing and computing applications. Their capability of fusing multiple sensor inputs and extracting temporal relationships can enhance intelligence in a wide range of applications. One key problem, however, is noisy on-device sensors, whose characteristics are heterogeneous and vary over time. Existing mobile deep learning frameworks usually treat every sensor input equally over time, lacking the ability to identify and exploit the heterogeneity of sensor noise. In this work, we propose QualityDeepSense, a deep learning framework that can automatically balance the contributions of sensor inputs over time according to their sensing quality. We propose a sensor-temporal attention mechanism to learn the dependencies among sensor inputs over time. These correlations are used to infer the qualities and reassign the contributions of sensor inputs. QualityDeepSense can thus focus on more informative sensor inputs for prediction. We demonstrate the effectiveness of QualityDeepSense on the noise-augmented heterogeneous human activity recognition task, where it outperforms state-of-the-art methods by a clear margin. In addition, we show that QualityDeepSense imposes only a limited resource-consumption burden on embedded devices.","PeriodicalId":419019,"journal":{"name":"Proceedings of the 2nd International Workshop on Embedded and Mobile Deep Learning","volume":"27 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125683134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
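An attention-based reweighting of the general kind described can be sketched as a softmax over quality scores. This is a minimal illustration with made-up scores, not the paper's learned sensor-temporal attention module:

```python
import math

def softmax(scores):
    """Normalize scores into positive weights that sum to one."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(sensor_inputs, quality_scores):
    """Reweight sensor inputs by inferred quality: noisy sensors get
    low attention, informative ones dominate the fused value."""
    weights = softmax(quality_scores)
    fused = sum(w * x for w, x in zip(weights, sensor_inputs))
    return weights, fused

# Two readings of the same quantity: a clean sensor reporting 1.0
# and a noisy one reporting 10.0, with hypothetical quality scores.
weights, fused = attend([1.0, 10.0], [2.0, -2.0])
```

The fused value stays close to the clean sensor's reading, which is the behavior the abstract attributes to focusing on more informative inputs.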
{"title":"Mayo: A Framework for Auto-generating Hardware Friendly Deep Neural Networks","authors":"Yiren Zhao, Xitong Gao, R. Mullins, Chengzhong Xu","doi":"10.1145/3212725.3212726","DOIUrl":"https://doi.org/10.1145/3212725.3212726","url":null,"abstract":"Deep Neural Networks (DNNs) have proved to be a convenient and powerful tool for a wide range of problems. However, their extensive computational and memory resource requirements hinder the adoption of DNNs in resource-constrained scenarios. Existing compression methods have been shown to significantly reduce the computation and memory requirements of many popular DNNs. These methods, however, remain elusive to non-experts, as they demand extensive manual tuning of hyperparameters. The effects of combining various compression techniques also remain underexplored because of the large design space. To alleviate these challenges, this paper proposes an automated framework, Mayo, which is built on top of TensorFlow and can compress DNNs with minimal human intervention. First, we present overriders, which are recursively compositional and can be configured to effectively compress individual components (e.g. weights, biases, layer computations and gradients) in a DNN. Second, we introduce novel heuristics and a global search algorithm to efficiently optimize hyperparameters. We demonstrate that, without any manual tuning, Mayo generates a sparse ResNet-18 that is 5.13x smaller than the baseline with no loss in test accuracy. By composing multiple overriders, our tool produces a sparse 6-bit CIFAR-10 classifier with only 0.16% top-1 accuracy loss and a 34x compression rate. Mayo and all compressed models are publicly available. To our knowledge, Mayo is the first framework that supports overlapping multiple compression techniques and automatically optimizes their hyperparameters.","PeriodicalId":419019,"journal":{"name":"Proceedings of the 2nd International Workshop on Embedded and Mobile Deep Learning","volume":"2649 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127483494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
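Composable "overriders" can be pictured as weight transformations chained in sequence. A toy sketch under that reading, with made-up sparsity and quantization settings; Mayo's real overriders act inside TensorFlow graphs and are tuned by its search algorithm:

```python
def compose(*overriders):
    """Chain overriders: each transforms a weight list and hands the
    result to the next, mirroring the recursively-compositional idea."""
    def apply(weights):
        for o in overriders:
            weights = o(weights)
        return weights
    return apply

def sparsify(threshold):
    """Overrider factory: zero out small-magnitude weights."""
    return lambda ws: [w if abs(w) >= threshold else 0.0 for w in ws]

def quantize(step):
    """Overrider factory: snap weights to a fixed grid of values."""
    return lambda ws: [round(w / step) * step for w in ws]

# A pipeline combining two compression techniques, as in the
# sparse low-bit classifier the abstract describes.
pipeline = compose(sparsify(0.1), quantize(0.25))
```

Because each overrider has the same list-in/list-out shape, any ordering or nesting composes cleanly, which is what lets the combined design space be searched automatically.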
{"title":"HARNet","authors":"Prahalathan Sundaramoorthy, Gautham Krishna Gudur, Manav Rajiv Moorthy, R. Bhandari, Vineeth Vijayaraghavan","doi":"10.1145/3212725.3212728","DOIUrl":"https://doi.org/10.1145/3212725.3212728","url":null,"abstract":"Recent advancements in the domain of pervasive computing have seen the incorporation of sensor-based Deep Learning algorithms in Human Activity Recognition (HAR). Contemporary Deep Learning models are engineered to alleviate the difficulties posed by conventional Machine Learning algorithms, which require extensive domain knowledge to obtain heuristic hand-crafted features. Upon training and deployment of these Deep Learning models on ubiquitous mobile/embedded devices, it must be ensured that the model adheres to their computation and memory limitations, in addition to addressing the various mobile- and user-based heterogeneities prevalent in practice. To handle this, we propose HARNet, a resource-efficient and computationally viable network that enables on-line Incremental Learning and User Adaptability as a mitigation technique for anomalous user behavior in HAR. The Heterogeneity Activity Recognition Dataset was used to evaluate HARNet and other proposed variants, utilizing acceleration data acquired from diverse mobile platforms across three different modes from a practical application perspective. We use decimation as a down-sampling technique to generalize sampling frequencies across mobile devices, and the Discrete Wavelet Transform to preserve information across frequency and time. Systematic evaluation of HARNet on User Adaptability yields an increase in accuracy of ~35% by leveraging the model's capability to extract discriminative features across activities in heterogeneous environments.","PeriodicalId":419019,"journal":{"name":"Proceedings of the 2nd International Workshop on Embedded and Mobile Deep Learning","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131949776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
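The decimation and wavelet preprocessing steps mentioned in the abstract are simple enough to sketch directly. The one-level Haar transform below is the standard textbook choice; the abstract does not state which wavelet the paper actually uses:

```python
def decimate(signal, factor):
    """Down-sample by keeping every `factor`-th sample, roughly
    normalising different devices' sampling rates (a real pipeline
    would low-pass filter first to avoid aliasing)."""
    return signal[::factor]

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: pairwise
    averages keep the coarse (low-frequency) shape, pairwise
    differences keep the (high-frequency) detail."""
    approx = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

# Example: haar_dwt([2, 4, 6, 8]) -> ([3.0, 7.0], [-1.0, -1.0])
```

Feeding both the approximation and detail coefficients to the network is what "preserving information across frequency and time" amounts to in practice.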