{"title":"Exploring the Specificities and Challenges of Testing Big Data Systems","authors":"D. Staegemann, M. Volk, Abdulrahman Nahhas, Mohammad Abdallah, K. Turowski","doi":"10.1109/SITIS.2019.00055","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00055","url":null,"abstract":"Today, the amount and complexity of data that is globally produced increases continuously, surpassing the abilities of traditional approaches. Therefore, to capture and analyze those data, new concepts and techniques are utilized to engineer powerful big data systems. However, despite the existence of sophisticated approaches for the engineering of those systems, the testing is not sufficiently researched. Hence, in this contribution, a comparison of traditional software testing, as a common procedure, and the requirements of big data testing is drawn. The determined specificities in the big data domain are mapped to their implications on the implementation and the consequent challenges. Furthermore, those findings are transferred into six guidelines for the testing of big data systems. In the end, limitations and future prospects are highlighted.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121056398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Manifold Extraction in Fluorescent Stack via Deep Learning","authors":"Jianfeng Cao, Hong Yan","doi":"10.1109/SITIS.2019.00032","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00032","url":null,"abstract":"Fueled by the development of advanced imaging techniques, the biological research has recently experienced ever-growing improvements, especially on microscopy analysis. The utilization of microscopies, however, is hampered by either the quality or quantity of these images. At the same time, the equipment is inevitably constrained by the physical limitations. Here we present MF-Net, a framework to automate the extraction of 2.5D membrane manifold from 3D blurred stack image. MF-Net realizes the transformation from 3D to 2D index map, and further to 2.5D manifold efficiently. Accompanied with a scheme to synthesize data, MF-Net is trained without manual annotations. Out of the box, MF-Net gets promising results on both synthetic and real microscopy images. Source code is available at https://github.com/cao13jf/MF-Net","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115883888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human Tracking for Children Behavior Analysis in Nursery Schools","authors":"Yuan Lin, Y. Obuchi, Xueting Wang, T. Yamasaki, S. Toriumi, M. Hayashi, S. Nozawa, Midori Takahashi, T. Endo, K. Akita","doi":"10.1109/SITIS.2019.00044","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00044","url":null,"abstract":"Children's care and education is an eternally important element of society. In nursery schools, because children are active and sensitive, it is always difficult to effectively analyze the natural behavior of children if an observer (a person holding a camera) exists. In this paper, we collect a large amount of video data from different nursery schools by using stationary cameras without attaching motion sensors on children. We construct a system for children's behavior analysis based on human tracking from the videos in nursery schools. To achieve effective human tracking in videos of nursery schools, we combine multiple computer version technologies such as person detection, object tracking, and person re-identification. For each technique, we comprehensively compare various methods and identified the best one. With the system, we analyze some behavioral patterns of children in nursery schools. Specifically, we identify the popular areas in the room, the amount of exercise, and the gregariousness of each child.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127598226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Online Tool for Displaying and Processing Spectral Reflectance Images","authors":"P. Colantoni, Jean-Baptiste Thomas, M. Hébert, A. Trémeau","doi":"10.1109/SITIS.2019.00118","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00118","url":null,"abstract":"Modern web browsers allow to manipulate different types of multimedia files and can be adapted, with standardized technologies (WebAssembly, WebGL, etc.), to an ever-increasing number of contents. In this article, we describe how we were able to set up the necessary data structures and software techniques to enable web browsers to manipulate and visualize multi-and hyper-spectral images. A demonstrator, based on two images from a SpecimIQ hyperspectral sensor, is also presented as showcase.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125845467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cycle-Consistent InfoGAN for Speech Enhancement in Various Background Noises","authors":"Wonsup Shin, Sung-Bae Cho","doi":"10.1109/SITIS.2019.00043","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00043","url":null,"abstract":"Speech enhancement is one of the crucial research topics applied to various fields. In addition, due to the progress of wireless communication technology, the need for speech enhancement research to remove various background noise occurring in the real world is increasing. Recently, a speech enhancement model based on generative adversarial learning, which can build a significant loss function by itself, has been proposed and outperformed the conventional methods. However, these models assume parallel datasets for learning, and there is a problem that the performance decreases for the signal containing various kinds of noise. This paper proposes a novel speech enhancement model based on generative adversarial network (GAN). The proposed method additionally uses cycle-consistency loss for learning on non-parallel datasets, where the InfoGAN mechanism is used to cluster noise information in an unsupervised learning manner. The proposed model can form cluster-specific mapping by using the obtained clustering information. We quantitatively verify the speech enhancement performance of the proposed method through several metrics such as MOS, SI-SNR, and PESQ, and achieve about 55% better MOS performance than the previous GAN-based models.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127038425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid Data Assimilation: An Ensemble-Variational Approach","authors":"Edward M. Lim, Miguel Molina-Solana, C. Pain, Yi-Ke Guo, Rossella Arcucci","doi":"10.1109/SITIS.2019.00104","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00104","url":null,"abstract":"Data Assimilation (DA) is a technique used to quantify and manage uncertainty in numerical models by incorporating observations into the model. Variational Data Assimilation (VarDA) accomplishes this by minimising a cost function which weighs the errors in both the numerical results and the observations. However, large-scale domains pose issues with the optimisation and execution of the DA model. In this paper, ensemble methods are explored as a means of sampling the background error at a reduced rank to condition the problem. The impact of ensemble size on the error is evaluated and benchmarked against other preconditioning methods explored in previous work such as using truncated singular value decomposition (TSVD). Localisation is also investigated as a form of reducing the long-range spurious errors in the background error covariance matrix. Both the mean squared error (MSE) and execution time are used as measure of performance. Experimental results for a 3D case for pollutant dispersion within an urban environment are presented with promise for future work using dynamic ensembles and 4D state vectors.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124451565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised Spectral Clustering of Music-Related Brain Activity","authors":"S. Ntalampiras","doi":"10.1109/SITIS.2019.00041","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00041","url":null,"abstract":"The recent advancements in Music Information Retrieval are now giving birth to new exciting fields, one of which is concerned with understanding the relationship existing between brain activity and the music stimuli evoking it. Thus, Music Imagery Information Retrieval (MIIR) has emerged with its goal being to bridge the gap existing between encephalographic responses and the respective music signal. This paper employs the OpenMIIR dataset which includes synchronized recordings of brain activity and music signals, thus facilitating MIIR research. Three tasks have been defined, i.e. stimuli identification, group and meter classification, which examine the problem from different viewpoints. After extracting parameters of linear time-invariant models elaborating on electroencephalographic responses, we demonstrate a suitably-designed unsupervised spectral clustering scheme. Such a scheme highlights the connection existing between responses and the audio structure of the music classes corresponding to the three tasks. We show that there is a strong connection w.r.t stimuli identification and meter classification tasks; however that is not true for the group classification case.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122070921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How to Identify Competence from Interactions","authors":"Hocine Merzouki, N. Matta, Hassan Atifi","doi":"10.1109/SITIS.2019.00101","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00101","url":null,"abstract":"The concept of competence generally refers to the knowledge, experiences, skills, attitudes, abilities and behavior that enable effective action in a work environment. Since knowledge is linked to action, the part of an individual's knowledge used and put to work every day, mixed with the organization's knowledge, characterizes the competencies that allow a group of people to make complex tasks. The knowledge resides primarily in the heads of beings. Interactions between these persons allow to observe their knowledge through different media as documents, meetings, telephone conversations, or computer communication networks. Many works focusing on the analyse of mediated interactions to different purposes exist. So, we use these works to propose a methodology of identifying competence in professional mediated communications exchanges.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123222555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual Navigation Using a Webcam Based on Semantic Segmentation for Indoor Robots","authors":"Miho Adachi, Sara Shatari, R. Miyamoto","doi":"10.1109/SITIS.2019.00015","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00015","url":null,"abstract":"The realization of a moving robot that can autonomously work in an actual environment has become important. A three-dimensional dense map that was created using three-dimensional (3D) depth sensors, such as light detection and ranging (LiDAR), is popular in the research field of autonomous moving robots. However, this approach has a few disadvantages: the price of 3D sensing devices and the robustness of localization in practical scenarios with many movable obstacles. To solve this problem, this paper proposes a vision-based navigation scheme that enables autonomous movement in indoor scenes; only a webcam is used as an external sensor. The experimental results from an experiment conducted in a university building demonstrated that a robot can move around on a floor.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"284 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131416932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breast Ultrasound Image Classification Using a Pre-Trained Convolutional Neural Network","authors":"M. Daoud, Samir Abdel-Rahman, R. Alazrai","doi":"10.1109/SITIS.2019.00037","DOIUrl":"https://doi.org/10.1109/SITIS.2019.00037","url":null,"abstract":"Breast ultrasound (BUS) imaging is commonly used for breast cancer diagnosis, but the interpretation of BUS images varies based on the radiologist's experience. Computer-aided diagnosis (CAD) systems have been proposed to provide the radiologist with an objective, computer-based classification of BUS images. Nevertheless, the majority of these systems are based on handcrafted features that are designed manually to quantify the tumor. Hence, the accuracy of these CAD systems depends on the capability of the handcrafted features to differentiate between benign and malignant tumors. Convolutional neural networks (CNNs) provide a promising approach to improve the classification of BUS images due to their ability to achieve data-driven extraction of objective, accurate, and generalizable image representations. However, the limited size of the available BUS image databases might restrict the capability of training the CNNs from scratch. To address this limitation, we investigate the use of two approaches, namely the deep features extraction approach and transfer learning approach, to enable the use of a pre-trained CNN model to achieve accurate classification of BUS images. The results show that the deep features extraction approach outperforms the transfer learning approach. Moreover, the results indicate that the extraction of deep features from the pre-trained CNN model, which is combined with effective features selection, has enabled accurate BUS image classification with accuracy, sensitivity, and specificity values of 93.9%, 95.3%, and 92.5%, respectively. These results suggest the feasibility of combining deep features extracted from pre-trained CNN models with effective features selection algorithms to achieve accurate BUS image classification.","PeriodicalId":301876,"journal":{"name":"2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)","volume":"186 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126505889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}