{"title":"FallFree: Multiple Fall Scenario Dataset of Cane Users for Monitoring Applications Using Kinect","authors":"M. Alzahrani, Salma Kammoun Jarraya, M. Salamah, H. Ben-Abdallah","doi":"10.1109/SITIS.2017.61","DOIUrl":"https://doi.org/10.1109/SITIS.2017.61","url":null,"abstract":"No one refutes the importance of datasets in the development of any new approach. Despite their importance, open access datasets in computer vision remain insufficient for some applications. This paper introduces a FallFree, new and rich dataset that can be used for the evaluation/development of computer vision-based applications pertinent to people who use a cane as a mobility aid, e.g., fall detection, activity recognition. In particular, the FallFree dataset includes video streams captured with Kinect which offers a wide range of visual information. It is organized hierarchically, in terms of scenarios each of which is structured in terms of its features. The current FallFree dataset version covers all fall scenarios of the cane users along with various non-fall scenarios grouped into one set. Each scenario is represented through a rich set of features that can be extracted from Kinect. To widen its usability, the dataset was constructed while accounting for existing datasets' organization, size, scope, streams, types and hypotheses.","PeriodicalId":153165,"journal":{"name":"2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126955278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Post-Traumatic Epilepsy in Rats: An Algorithm for Detection of Suspicious EEG Activity","authors":"I. Kershner, Y. Obukhov, I. G. Komoltsev","doi":"10.1109/SITIS.2017.36","DOIUrl":"https://doi.org/10.1109/SITIS.2017.36","url":null,"abstract":"Due to the fact that there are problems in neurophysiological research on post-traumatic epilepsy in order to find sleep spindles and epileptiform discharges in long-term (day or more) recordings of electroencephalography (EEG) there is a need for algorithms for automatic detection of suspicious EEG activity (we call as suspicious activity any EEG activity that differs from the background activity). There are many methods of signal processing. The most common and straightforward are the methods of transition from the temporal representation of the signal to the time-frequency representation. One of them is the wavelet transform. For the wavelet spectrograms, the ridges of the wavelet spectrograms are calculated. Method of detecting the suspicious activity involves an analysis of points of the ridges. The spectrogram of ridge points are calculated, after which the points of the ridge are divided into two groups: those that relate to the background activities and those that relate to suspicious activity. 
Suspicious activity that does not meet the requirements of neuroscientists is eliminated.","PeriodicalId":153165,"journal":{"name":"2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121253022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting Number of Connections on Video Streaming Server with Machine Learning Approach","authors":"Amit Shrivastava, A. Rajavat, R. Deshmukh","doi":"10.1109/SITIS.2017.26","DOIUrl":"https://doi.org/10.1109/SITIS.2017.26","url":null,"abstract":"Predicting a number of connection on the streaming server would be a useful parameter to improve the performance of the server. It can be also proved helpful to understand server behavior. Prediction can impact in improving resources on the server for providing quality video streaming. Streaming of videos from a server is resource hungry process and depends on many features like memory, processor, type of video codec, the bandwidth available and different network parameters (delay, jitter, drop, packet size). In this paper, we will use supervised learning technique on a captured dataset. Dataset is created from four different hardware based streaming server. Our first approach is to formulate a process for capturing data. We have calibrated a lab-based experiment setup on fifty mobiles and four different hardware based streaming server. Graph-based analysis on captured data is done to understand the behavior of the video streaming server. Performed the feature engineering to understand relationship among different features. Prediction is implemented using regression, and decision tree (DT) method. In regression we apply Linear regression, Ridge and Least Absolute Shrinkage and Selection Operator (LASSO). Basic DT and Random Forest (RF). So with this research work, we will show that it is possible to predict the number of connection on the server with exploiting server resources as features. Finally, compare algorithms these machine learning (ML) algorithms. 
In which RF prove to be best for prediction of connection on streaming server.","PeriodicalId":153165,"journal":{"name":"2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126136813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Workflows and Challenges Involved in Creation of Realistic Immersive Virtual Museum, Heritage, and Tourism Experiences: A Comprehensive Reference for 3D Asset Capturing","authors":"H. Esmaeili, H. Thwaites, P. Woods","doi":"10.1109/SITIS.2017.82","DOIUrl":"https://doi.org/10.1109/SITIS.2017.82","url":null,"abstract":"This study provides a technical review of the current state of immersive virtual museum, heritage, and tourism focusing on workflows and challenges involved in realistic asset creation. The workflow includes two parts i.e. virtualization of historic objects and creation of environment. However, in some instances the environment itself is a cultural heritage site e.g. an old castle that can be considered as historic object. Otherwise, the environment is just a conceptual virtual place (created using traditional 3D modeling methods) to mimic museum experience, embedding smaller historic objects, which are virtualized. Although tools and technologies such as photogrammetry, 3D scanning, or aerial 3D mapping have made the process of virtualization of historic/cultural objects considerably easier for basic users, challenges and limitations still remain as these automatic processes are not always accompanied by flawless outcomes. This study addresses some of those challenges and limitations faced during preparation of experimental immersive virtual museum for exhibition purposes. This covers various ranges of topics from lighting, texturing, and topology to limitations related to opacity, dark colors, and small details. This paper also provides a comprehensive overview of the technical details when it comes to preparation of virtual cultural heritage environments specifically for immersive experiences. Areas such as user interaction, navigation, space optimization, quality and viewing distance, access, purpose and objectives, degree of realism, etc. are covered in this review. 
The major processes illustrated in this study include photogrammetry, aerial 3D mapping, polygon modeling, 3D sculpting, 3D painting, UV Mapping, etc. The major software/tools used in this workflow include Agisoft Photoscan, Autodesk Remake, Pixologic ZBrush, xNormal, Autodesk 3ds Max, Unity, SteamVR, HTC Vive, including other relevant plugins and scripts. However, this study is not a step by step guide or a tutorial, but a reference for the currently available technologies to create immersive virtual museum, cultural heritage, and tourism aiming to distinguish the lines between different levels of processes involved. The objective is to provide a clear understanding of the challenges involved. Based on the literature review done prior to this study, a comprehensive academic reference (covering the mentioned areas) for digital heritage researchers is lacking (to date). The authors believe that due to the increasing availability and affordability of the current immersive virtual reality technologies for basic users this is a proper time for gathering major processes/challenges involved in creation of such environments and present them in form of a comprehensive reference. 
Although the main focus of this study is on digital heritage, the processes undertaken and explained can be generalized to be used by researchers in other fields where applicable.","PeriodicalId":153165,"journal":{"name":"2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114362285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A System to Improve the Accuracy of Numeric Weather Prediction (NWP) for Flood Forecasting Systems","authors":"Joel Tanzouak, Ndiouma Bame, B. Yenke, Idrissa Sarr","doi":"10.1109/SITIS.2017.23","DOIUrl":"https://doi.org/10.1109/SITIS.2017.23","url":null,"abstract":"Data provided by EPS (Ensemble Prediction Systems) are crucial for Flood Forecasting Systems (FFS). In fact, most of known FFS such as those with hydraulic models give flooding alerts thanks to raw data provided by weather predictions. However, frequent change of atmosphere behavior due to anthropic factors may alter the forecast of precipitation as well as the temperature variation. Moreover, existing FFS rely entirely on EPS raw data without any pretreatment that aims to face inaccuracy of weather predictions. As a consequence, it is almost impossible to get the precise flood predictions enough earlier to allow authorities or populations taking the special cares. Bearing this in mind, it is primordial to improve the quality of data obtained from EPS in order to increase the accuracy of FFS. The goal of this paper is to propose an extension of a FFS by introducing a correction module that use real-time data collected from sensor networks combined with past and forecasted data of EPS. 
The results obtained from empiric experiments show the benefits of our correction mechanism in flood predictions.","PeriodicalId":153165,"journal":{"name":"2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114627590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification and Aesthetic Evaluation of Paintings and Artworks","authors":"Tarpit Sahu, Arjun Tyagi, Sonu Kumar, A. Mittal","doi":"10.1109/SITIS.2017.39","DOIUrl":"https://doi.org/10.1109/SITIS.2017.39","url":null,"abstract":"Painters and Artists have contributed to the field of art over the years with their exceptional talent and skills. The Internet is full of their creativity and imagination where one can find most of their work. Like any other information present on the Internet, paintings are also not well organized. In this paper, a method is proposed to classify paintings with the help of support vector machine classifier using features extracted by a pre trained convolutional neural network-AlexNet. A painting is not only an art on paper but is a medium to arouse emotions and sense of pleasure within the audience. Aesthetic Evaluation aims at evaluation/rating a painting or an artwork on the basis of various parameters like style, topic, emotional engagement etc. which cannot be done by a machine alone. So we cannot leave behind the human inputs while determining the aesthetic value of a painting or an artwork. In this paper we also propose a method to judge or evaluate the aesthetic value of a painting by training a regression model with several image features, like Local Binary Pattern for texture, color histogram for color, Histogram of Oriented Gradients for edges and GIST for scene recognition in the painting, against human ratings for each painting. A dataset constituting of 1225 digital images of paintings of 7 categories is used for classifying and evaluating the aesthetic value. 
The classification phase was found to have 92.73% accuracy and the evaluation phase performed with an accuracy of 64.15%.","PeriodicalId":153165,"journal":{"name":"2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131651643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Deep Learning Hybrid CNN Framework Approach for Vegetation Cover Mapping Using Deep Features","authors":"Rahul Nijhawan, Himanshu Sharma, H. Sahni, Ashita Batra","doi":"10.1109/SITIS.2017.41","DOIUrl":"https://doi.org/10.1109/SITIS.2017.41","url":null,"abstract":"Vegetation cover mapping is an imperative task of monitoring the change in vegetation as it can help us meet sustenance requirements. In this study, we explore the future potential of multilayer Deep learning framework (DL) that comprises of hybrid of CNN's, for mapping vegetation cover area as DL is a congenial state-of-art algorithm for implementing image processing. This study proposes a novel DL framework exploiting hybrids of CNN's with Local binary pattern and GIST features. Every CNN is fed with disparate combination of multi-spectral Sentinel 2 satellite imagery bands (spatial resolution of 10m), texture and topographic parameters of Uttarakhand (30° 15' N, 79° 15' E) region, India. Our proposed DL framework outperformed the state-of-art algorithms with a classification accuracy of 88.43%.","PeriodicalId":153165,"journal":{"name":"2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132923558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Energy Efficient Virtual Machine Placement in Cloud Data Centers Using Modified Intelligent Water Drop Algorithm","authors":"C. Verma, V. Reddy, G. Gangadharan, A. Negi","doi":"10.1109/SITIS.2017.14","DOIUrl":"https://doi.org/10.1109/SITIS.2017.14","url":null,"abstract":"Cloud Computing is an emerging distributed computing paradigm for the dynamic provisioning of computing services on demand over the internet. Due to heavy demand of various IT services over the cloud, energy consumption by data centers is growing significantly worldwide. The intense use of data centers leads to high energy consumptions, excessive CO2 emission and increase in the operating cost of the data centers. Although many virtual machine (VM) placement approaches have been proposed to improve the resource utilization and energy efficiency, most of these works assume a homogeneous environment in the data centers. However, the physical server configurations in heterogeneous data centers lead to varying energy consumption characteristics. In this paper, we model and implement a modified Intelligent Water Drop algorithm (MIWD) algorithm for dynamic provisioning of virtual machines on hosts in homogeneous and heterogeneous environments such that total energy consumption of a data center in cloud computing environment can be minimized. 
Experimental results indicate that our proposed MIWD algorithm is giving superior results.","PeriodicalId":153165,"journal":{"name":"2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125166000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recurrent Neural Network Based Action Recognition from 3D Skeleton Data","authors":"Parul Shukla, K. K. Biswas, P. Kalra","doi":"10.1109/SITIS.2017.63","DOIUrl":"https://doi.org/10.1109/SITIS.2017.63","url":null,"abstract":"In this paper, we present an approach for human action recognition from 3D skeleton data. The proposed method utilizes Recurrent Neural Network (RNN) and Long Short Term Memory (LSTM) to learn the temporal dependency between joints' positions. The proposed architecture uses a hierarchical scheme for aggregating the learned responses of various RNN units. We demonstrate the effectiveness of using only a few joints as opposed to all the available joints' position for action recognition. The proposed approach is evaluated on well-known publicly available MSR-Action3D dataset.","PeriodicalId":153165,"journal":{"name":"2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130916348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Integrated Deep Learning Framework Approach for Nail Disease Identification","authors":"Rahul Nijhawan, Rose Verma, Ayushi, Shashank Bhushan, Rajat Dua, A. Mittal","doi":"10.1109/SITIS.2017.42","DOIUrl":"https://doi.org/10.1109/SITIS.2017.42","url":null,"abstract":"Nail Diseases refer to some kind of deformity in the nail unit. Although the nail unit is a skin accessory, it has its own distinct class of diseases as these diseases have their own set of signs, symptoms, causes and effects that may or may not relate to other medical conditions. Recognizing nail diseases still remains an unexplored and a challenging endeavor in itself. This paper proposes a novel deep learning framework to detect and classify nail diseases from images. A distinct class of eleven diseases i.e. onychomycosis, subungulal hematoma, beau's lines, yellow nail syndrome, psoriasis, hyperpigmentation, koilonychias, paroncychia, pincer nails, leukonychia, and onychorrhexis. The framework uses a hybrid of Convolutional Neural Network (CNNs) for feature extraction. Due to the non-existence of a meticulous dataset, a new dataset was built for testing the enactment of our proposed framework. This work has been tested on our dataset and has also been compared with other state-of-the-art algorithms (SVM, ANN, KNN, and RF) that have been shown to have an excelled performance in the area of feature extraction. 
The results have shown a comparable performance, in terms of differentiating amongst the wide spectrum of nail diseases and are able to recognize them with an accuracy of 84.58%.","PeriodicalId":153165,"journal":{"name":"2017 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128987268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}