{"title":"Can Mobile Phone Usage Affect Hypothalamus-Pituitary-Adrenal Axis Response?","authors":"N. K. Uluaydin, O. Cerezci, S. Seker","doi":"10.1109/CCWC47524.2020.9031168","DOIUrl":"https://doi.org/10.1109/CCWC47524.2020.9031168","url":null,"abstract":"There are many academic studies, with solid findings, on the effects of mobile phones' electromagnetic (EM) radiation on the salivary glands. These findings include anomalies such as increased superoxide dismutase enzyme activity, elevated cortisol levels in saliva, and other indicators of oxidative stress. Other intracranial endocrine glands, such as the hypothalamus, the pituitary, and the pineal glands, are exposed to similar electromagnetic radiation. These glands control the body's 24-hour cycle of biological processes and its response to stress. However, clinical studies on these glands require invasive methods, and invasive methods cannot provide reliable information, since these glands are too sensitive for any surgical intervention. Therefore, the authors introduce the salivary, hypothalamus, pituitary, and pineal glands, with their intrinsic properties and positions, into the IEEE phantom head specific absorption rate (SAR) model. Then, they run finite element method (FEM) simulations to find the electric field values, SAR, and thermal effects on these glands in the model. The simulations solve the vector Helmholtz equations at a given frequency and provide the electric field, bioheat, and SAR values. The results are compared with those of studies where EM effects were observed on the salivary glands.
By taking advantage of the similarities between the hypothalamus, pituitary, pineal, and salivary glands, and by extrapolating the published EM findings on the salivary glands to the hypothalamus, pituitary, and pineal glands, the authors discuss whether mobile phone usage can have EM effects on the human endocrine system and circadian rhythm through the hypothalamus-pituitary-adrenal (HPA) axis and the pineal gland.","PeriodicalId":161209,"journal":{"name":"2020 10th Annual Computing and Communication Workshop and Conference (CCWC)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127300857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Scalable Approach to Time Series Anomaly Detection & Failure Analysis for Industrial Systems","authors":"S. Karim, N. Ranjan, Darshit Shah","doi":"10.1109/CCWC47524.2020.9031262","DOIUrl":"https://doi.org/10.1109/CCWC47524.2020.9031262","url":null,"abstract":"Modern industrial systems are complex and require continuous monitoring for smooth operation. Even a small anomaly in an important variable could lead to suboptimal performance or, worse, a system failure. In critical systems, anomalies that go unaccounted for can lead to increased maintenance and operating costs. For this reason, industrial systems opt for algorithms that can predict these anomalies. Modern industrial systems have tens or hundreds of variables with potential correlation with an anomaly, so a method for detecting anomalies and their key failure factors is developed. By finding the key factors of a failure, we gain better insight into that anomaly and can avoid it in the future. Creating a scalable anomaly detection and key factor analysis framework for different industrial systems is difficult, as the systems are highly dynamic and varied. In our work, we have proposed a stochastic anomaly detection and key factor analysis framework that is scalable across industries, reducing downtime costs and maintenance overheads and increasing system efficacy. We have used a combination of Bayes' theorem and bitmap detection to detect anomalies in time series data. Then, we have aggregated the anomalies and built a mapping tree to find the key factors of the anomalies.
We have successfully scaled our work, achieving high-accuracy anomaly detection and precise key factor analysis across different industries.","PeriodicalId":161209,"journal":{"name":"2020 10th Annual Computing and Communication Workshop and Conference (CCWC)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128823183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image Classification on NXP i.MX RT1060 using Ultra-thin MobileNet DNN","authors":"Saurabh Desai, Debjyoti Sinha, M. El-Sharkawy","doi":"10.1109/CCWC47524.2020.9031165","DOIUrl":"https://doi.org/10.1109/CCWC47524.2020.9031165","url":null,"abstract":"Deep Neural Networks play a very significant role in computer vision applications such as image classification, object recognition, and detection. They have achieved great success in this field, but the main obstacles to deploying a DNN model on an Advanced Driver Assistance System (ADAS) platform are limited memory, constrained resources, and limited power. MobileNet is a very efficient and light DNN model developed mainly for embedded and computer vision applications, but researchers still face many constraints and challenges in deploying the model on resource-constrained microprocessor units. Design Space Exploration of such CNN models can make them more memory-efficient and less computationally intensive. We have used the Design Space Exploration technique to modify the baseline MobileNet V1 model and develop an improved version of it. This paper proposes seven modifications to the existing baseline architecture to develop a new and more efficient model. We use Separable Convolution layers and the width multiplier hyperparameter, alter the channel depth, and eliminate layers with the same output shape to reduce the size of the model. We achieve good overall accuracy by using the Swish activation function, the Random Erasing technique, and a good choice of optimizer. We call the new model Ultra-thin MobileNet; it has a much smaller size, fewer parameters, less average computation time per epoch, and negligible overfitting, with slightly higher accuracy than the baseline MobileNet V1. Generally, when an attempt is made to make an existing model more compact, the accuracy decreases. Here, however, there is no trade-off between accuracy and model size.
The proposed model is developed with the intent of making it deployable on a real-time autonomous development platform with limited memory and power, keeping the model size within 5 MB. It could be successfully deployed on the NXP i.MX RT1060 ADAS platform due to its small model size of 3.9 MB. It classifies images of different classes in real time, with an accuracy of more than 90%, when run on the above-mentioned ADAS platform. We have trained and tested the proposed architecture from scratch on the CIFAR-10 dataset.","PeriodicalId":161209,"journal":{"name":"2020 10th Annual Computing and Communication Workshop and Conference (CCWC)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126253250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Tenant Big Data Analytics on AWS Cloud Platform","authors":"V. Khedekar, Yun Tian","doi":"10.1109/CCWC47524.2020.9031133","DOIUrl":"https://doi.org/10.1109/CCWC47524.2020.9031133","url":null,"abstract":"Big data analytics is a crucial part of today's internet world, especially for multi-tenant systems connected over the internet, which produce huge amounts of multi-structured data. This paper describes multi-tenant data analytics on a serverless cloud platform, namely Amazon Web Services (AWS). We perform big data analytics on two different types of applications: one produces static data and the other produces real-time dynamic data. We then analyze the performance of the big data analytics in terms of time, data traffic, and data size.","PeriodicalId":161209,"journal":{"name":"2020 10th Annual Computing and Communication Workshop and Conference (CCWC)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125285512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unravelling of Convolutional Neural Networks through Bharatanatyam Mudra Classification with Limited Data","authors":"Anuja P. Parameshwaran, Heta P. Desai, M. Weeks, Rajshekhar Sunderraman","doi":"10.1109/CCWC47524.2020.9031185","DOIUrl":"https://doi.org/10.1109/CCWC47524.2020.9031185","url":null,"abstract":"Non-verbal forms of communication are universal, being free of any language barrier, and are widely used in all art forms. For example, in Bharatanatyam, an ancient Indian dance form, artists use different hand gestures, body postures, and facial expressions to convey the story line. As the identification and classification of these complex and multivariant visual images are difficult, they are now being addressed with the help of advanced computer vision techniques and deep neural networks. This work deals with the automation of identification, classification, and labelling of selected Bharatanatyam gestures, as part of our efforts to preserve this rich cultural heritage for future generations. The classification of the mudras against their true labels was carried out using different singular pre-trained / non-pre-trained as well as stacked ensemble convolutional neural network (CNN) architectures. In all, twenty-seven classes of asamyukta hasta (single hand gesture) data were collected from Google, YouTube, and a few real-time performances by artists. Since the backgrounds in many frames are highly diverse, the acquired data are real and dynamic compared to images from closed laboratory settings. Mislabeled data were cleansed from the dataset through label transferring based on a distance-based similarity metric using a convolutional Siamese neural network. The classification of mudras was done using different CNN architectures: i) singular models, ii) ensemble models, and iii) a few specialized models. This study achieved an accuracy of >95% for both single and double transfer learning models, as well as for their stacked ensemble model.
The results emphasize the crucial role of domain similarity between the pre-training and training datasets for improved classification accuracy, and also indicate that doubly pre-trained CNN models yield the highest accuracy.","PeriodicalId":161209,"journal":{"name":"2020 10th Annual Computing and Communication Workshop and Conference (CCWC)","volume":"222 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122527391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High Impulse Noise Intensity Removal in Natural Images Using Convolutional Neural Network","authors":"M. Mafi, Walter Izquierdo, M. Adjouadi","doi":"10.1109/CCWC47524.2020.9031200","DOIUrl":"https://doi.org/10.1109/CCWC47524.2020.9031200","url":null,"abstract":"This paper introduces a new image smoothing filter based on a feed-forward convolutional neural network (CNN) for images corrupted by impulse noise. The smoothing filter integrates a very deep architecture, a regularization method, and a batch normalization process. This fully integrated approach yields an effectively denoised and smoothed image with a high similarity measure to the original noise-free image. Specific structural metrics are used to assess the denoising process and how effectively the impulse noise was removed. The CNN model can also deal with noise levels not seen during the training phase. The proposed CNN model is constructed as a 20-layer network trained on 400 images from the Berkeley Segmentation Dataset (BSD). Results are obtained using a standard testing set of 8 natural images not seen in the training phase.
The merits of the proposed method are weighed in terms of high similarity measures and structural metrics that conform to the original image and compare favorably with the results obtained using state-of-the-art denoising filters.","PeriodicalId":161209,"journal":{"name":"2020 10th Annual Computing and Communication Workshop and Conference (CCWC)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122856169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ticket-Based Authentication for Securing Internet of Things","authors":"A. P. Shrestha, S. Islam, K. Kwak","doi":"10.1109/CCWC47524.2020.9031254","DOIUrl":"https://doi.org/10.1109/CCWC47524.2020.9031254","url":null,"abstract":"The Internet of Things comprises nodes with different functionalities, storage capacities, battery lives, and computing capabilities. Spatially dispersed, dedicated low-powered wireless sensor devices contribute tremendously to enabling the Internet of Things. However, direct access to the data sensed by these sensor devices is restricted for users of foreign networks due to security threats. In this paper, we propose a ticket-based authentication scheme between a low-powered sensor node and a mobile device that belongs to a foreign network. Considering the capacity limitations of sensor nodes, the key derivation and distribution load is moved to an authentication server in the node's administrative domain. A security analysis is also presented to confirm the soundness of the presented technique.","PeriodicalId":161209,"journal":{"name":"2020 10th Annual Computing and Communication Workshop and Conference (CCWC)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133501210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of Oblique Saccade Trajectories Using Learned Velocity Profile Parameter Mappings","authors":"Henry K. Griffith, Samantha Aziz, Oleg V. Komogortsev","doi":"10.1109/CCWC47524.2020.9031274","DOIUrl":"https://doi.org/10.1109/CCWC47524.2020.9031274","url":null,"abstract":"This manuscript proposes and validates two techniques for predicting the trajectory of oblique saccades using a Gaussian velocity profile model. Profile parameters and event duration are estimated at the onset of each saccade using support vector machine regression models. The proposed techniques are evaluated using a set of 47,652 saccades with a mean amplitude of 12.25 degrees of the visual angle gathered from 322 subjects during a random saccade task. Numerous performance metrics are evaluated for predictions made at various fractions of the saccade duration. An average landing point estimation error of less than three degrees of the visual angle is obtained for predictions formed at 30% of the saccade duration.","PeriodicalId":161209,"journal":{"name":"2020 10th Annual Computing and Communication Workshop and Conference (CCWC)","volume":"333 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133940288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multi Sensor Real-time Tracking with LiDAR and Camera","authors":"Surya Kollazhi Manghat, M. El-Sharkawy","doi":"10.1109/CCWC47524.2020.9031247","DOIUrl":"https://doi.org/10.1109/CCWC47524.2020.9031247","url":null,"abstract":"Self-driving cars are equipped with various driver-assistance (ADAS) technologies such as Forward Collision Warning (FCW), Adaptive Cruise Control, and Collision Mitigation by Braking (CMbB) to ensure safety. Tracking plays an important role in ADAS systems for understanding the dynamic environment. This paper proposes a 3D multi-target tracking method that follows a lean, detection-based implementation with real-time operation in mind. Object tracking is an integral part of environment sensing, enabling the vehicle to estimate the surrounding objects' trajectories to accomplish motion planning. Advances in object detection methodologies greatly benefit the tracking-by-detection approach. The proposed method implements 2D tracking on camera data and 3D tracking on LiDAR point cloud data. The estimated states from each sensor are fused together to produce a more optimal state of the objects present in the surroundings. The multi-object tracking performance was evaluated on the publicly available KITTI dataset.","PeriodicalId":161209,"journal":{"name":"2020 10th Annual Computing and Communication Workshop and Conference (CCWC)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131828561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On-device Training for Breast Ultrasound Image Classification","authors":"Dennis Hou, Raymond Hou, Janpu Hou","doi":"10.1109/CCWC47524.2020.9031146","DOIUrl":"https://doi.org/10.1109/CCWC47524.2020.9031146","url":null,"abstract":"Most on-device AI pre-trains a neural network model on a cloud-based server and then deploys it to an edge device for inference. On-device training can not only build personalized models but also enable distributed training, such as federated learning, to train accurate models from scratch using small updates from many devices. In this work, we implement a semi-supervised convolutional neural network based on successive subspace learning and use a dataset of breast ultrasound (BUS) images to demonstrate a proof of concept of true on-device training. An important advantage of such a network is that key feature vectors can be extracted with CNN architectures without backpropagation, making it suitable for portable ultrasound. The device can thus acquire the ultrasound image and train the CNN classifier on the portable device without a cloud-based server. We evaluate the model using a set of BUS images that includes benign and malignant breast tumors. We obtain 94.8% accuracy in this study and demonstrate the applicability of the proposed on-device training model for improving the diagnosis of BUS images.","PeriodicalId":161209,"journal":{"name":"2020 10th Annual Computing and Communication Workshop and Conference (CCWC)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134319655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}