Mojgan Hafezi Fard, K. Petrova, N. Kasabov, Grace Y. Wang
{"title":"Studying Transfer of Learning using a Brain-Inspired Spiking Neural Network in the Context of Learning a New Programming Language","authors":"Mojgan Hafezi Fard, K. Petrova, N. Kasabov, Grace Y. Wang","doi":"10.1109/CSDE53843.2021.9718472","DOIUrl":"https://doi.org/10.1109/CSDE53843.2021.9718472","url":null,"abstract":"Transfer of learning (TL) has been an important research area for scholars, educators, and cognitive psychologists for over a century. However, it is not yet understood why applying existing knowledge and skills in a new context does not always follow expectations, and how to facilitate the activation of prior knowledge to enable TL. This research uses cognitive load theory (CLT) and a neuroscience approach to investigate the relationship between cognitive load and prior knowledge in the context of learning a new programming language. According to CLT, reducing cognitive load improves memory performance and may lead to better retention and transfer performance. A number of different frequency-based features of EEG data may be used for measuring cognitive load. This study focuses on analysing spatio-temporal brain data (STBD) gathered experimentally using an EEG device. An SNN-based computational architecture, NeuCube, was used to create a brain-like computation model and visualise the neural connectivity and spike activity patterns formed when an individual is learning a new programming language. 
The results indicate that cognitive load and the associated Theta and Alpha band frequencies can be used as a measure of the TL process and, more specifically, that the neuronal connectivity and spike activity patterns visualised in the NeuCube model can be interpreted with reference to the brain activities associated with the TL process.","PeriodicalId":166950,"journal":{"name":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116187445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
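The abstract above treats Theta (roughly 4-8 Hz) and Alpha (roughly 8-13 Hz) band activity as a measure of cognitive load. As a minimal, hypothetical sketch (not the NeuCube pipeline, whose details are not given in this record), the relative power of a frequency band can be computed from a raw signal with a naive DFT:

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Relative power of the [f_lo, f_hi) Hz band, via a naive O(n^2) DFT.

    Illustrative only -- real EEG pipelines use windowed FFTs (e.g. Welch's
    method) over many channels, not a single raw DFT.
    """
    n = len(signal)
    total = band = 0.0
    for k in range(1, n // 2):                  # positive-frequency bins
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        power = abs(coeff) ** 2
        total += power
        if f_lo <= k * fs / n < f_hi:           # bin k corresponds to k*fs/n Hz
            band += power
    return band / total if total else 0.0

# Synthetic example: a pure 10 Hz (Alpha-band) sine sampled at 128 Hz
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(sig, fs, 8, 13)   # nearly all power falls here
theta = band_power(sig, fs, 4, 8)    # almost none here
```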
Zaharaddeen Karami Lawal, Hayati Yassin, R. Zakari
{"title":"Flood Prediction Using Machine Learning Models: A Case Study of Kebbi State Nigeria","authors":"Zaharaddeen Karami Lawal, Hayati Yassin, R. Zakari","doi":"10.1109/CSDE53843.2021.9718497","DOIUrl":"https://doi.org/10.1109/CSDE53843.2021.9718497","url":null,"abstract":"Machine Learning (ML) models for flood prediction can be beneficial for flood alerts and flood reduction or prevention. To that end, ML techniques have gained popularity due to their low computational requirements and reliance mostly on observational data. This study aimed to create a machine learning model that can predict floods in Kebbi state based on a thirty-three (33) year historical rainfall dataset, so that it can be used in other Nigerian states with high flood risk. In this article, the Accuracy, Recall, and Receiver Operating Characteristics (ROC) scores of three machine learning algorithms, namely Decision Tree, Logistic Regression, and Support Vector Classification (SVC), were evaluated and compared. Compared with the other two algorithms, Logistic Regression gives the most accurate results, providing high accuracy and recall. In addition, the Decision Tree outperformed the Support Vector Classifier, performing reasonably well with an above-average accuracy score and a below-average recall score. 
We found that Support Vector Classification performed poorly on the small dataset, with a recall score of 0, a below-average accuracy score, and a merely average ROC score.","PeriodicalId":166950,"journal":{"name":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116432605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
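The accuracy, recall, and ROC-AUC scores this study compares can be computed directly from labels and predictions. The sketch below is a standard pure-Python formulation (using the rank interpretation of ROC-AUC); the example labels and scores are made up for illustration, not taken from the Kebbi rainfall dataset:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred):
    """Fraction of true positives (e.g. actual floods) that were predicted."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn) if tp + fn else 0.0

def roc_auc(y_true, scores):
    """Probability that a random positive outscores a random negative
    (ties count half) -- equivalent to the area under the ROC curve."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    if not pos or not neg:
        return 0.5
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative labels: 1 = flood year, 0 = no flood
y_true = [1, 1, 0, 0]
y_pred = [1, 0, 0, 0]
scores = [0.9, 0.4, 0.35, 0.1]
acc = accuracy(y_true, y_pred)   # 3 of 4 correct -> 0.75
rec = recall(y_true, y_pred)     # 1 of 2 floods caught -> 0.5
auc = roc_auc(y_true, scores)    # every positive outscores every negative -> 1.0
```

A recall of 0, as reported for SVC, would mean the classifier predicted no positive case at all.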
Aditya Gunjal, Atharva Kulkarni, C. Joshi, Ketaki Gokhale
{"title":"Reconstruction and Upscaling of 3D Models from Single or Multiple Views","authors":"Aditya Gunjal, Atharva Kulkarni, C. Joshi, Ketaki Gokhale","doi":"10.1109/CSDE53843.2021.9718448","DOIUrl":"https://doi.org/10.1109/CSDE53843.2021.9718448","url":null,"abstract":"In recent years, research related to 3D reconstruction from 2D images has gained traction and several approaches have been introduced. However, most conventional methods for 3D reconstruction are time-consuming and tedious. Additionally, they produce low-resolution results and have their own limitations. Our approach attempts to resolve these limitations by using a modified encoder-decoder architecture which generates a low-resolution 3D coarse volume from a set of 2D images of an object. In order to improve the quality of the generated model, a pseudo high-resolution 3D volume is generated by upsampling the low-resolution volume, which has multiple missing features. In parallel, RGB-D images from different angles are generated using the Blender software. Furthermore, these RGB-D images are upscaled to high-resolution images using a CNN image upscaler and a depth map is extracted. These newly generated depth values assist in identifying the missing features from the pseudo 3D volume, thereby generating a final high-quality 3D coarse volume. 
Our results show that this approach outperforms the existing methods.","PeriodicalId":166950,"journal":{"name":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129836193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
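To make the "pseudo high-resolution volume" step concrete: upsampling a voxel grid can be done, in its simplest form, by nearest-neighbour replication. This is an illustrative stand-in only; the paper's actual upsampling network is not reproduced here:

```python
def upsample_voxels(vol, factor):
    """Nearest-neighbour upsampling of a binary voxel grid.

    `vol` is a list of lists of lists (x, y, z); each output voxel copies
    the coarse voxel it falls inside. Features missing from the coarse
    volume stay missing -- which is why the paper fills them in afterwards
    using depth maps from upscaled RGB-D renders.
    """
    return [[[vol[x // factor][y // factor][z // factor]
              for z in range(len(vol[0][0]) * factor)]
             for y in range(len(vol[0]) * factor)]
            for x in range(len(vol) * factor)]

coarse = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]   # 2x2x2 coarse volume
fine = upsample_voxels(coarse, 2)               # 4x4x4 pseudo high-res volume
```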
Syed Mahbubuz Zaman, M. Hasan, Redwan Islam Sakline, Dipto Das, Md. Ashraful Alam
{"title":"A Comparative Analysis of Optimizers in Recurrent Neural Networks for Text Classification","authors":"Syed Mahbubuz Zaman, M. Hasan, Redwan Islam Sakline, Dipto Das, Md. Ashraful Alam","doi":"10.1109/CSDE53843.2021.9718394","DOIUrl":"https://doi.org/10.1109/CSDE53843.2021.9718394","url":null,"abstract":"The performance of any deep learning model depends heavily on the choice of optimizers and their corresponding hyper-parameters. For any given problem, researchers struggle to select the best possible optimizer from a myriad of optimizers proposed in existing literature. Currently, the process of optimizer selection in practice is anecdotal at best, whereby practitioners either randomly select an optimizer or rely on best practices or online recommendations not grounded in empirical evidence. In our paper, we delve into this problem of picking the right optimizer for text-based datasets and linguistic classification problems by benchmarking ten optimizers on three different RNN models (Bi-GRU, Bi-LSTM and BRNN) on three spam-email benchmark datasets. We analyse the performance of models employing these optimizers using train accuracy, train loss, validation accuracy, validation loss, test accuracy, test loss and ROC-AUC score as metrics. 
The results show that adaptive optimization methods (RMSprop, Adam, Adam with weight decay, and Nadam) with default hyper-parameters outperform the other optimizers across all three datasets and RNN model variations.","PeriodicalId":166950,"journal":{"name":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127204023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
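For reference, the Adam update the benchmark singles out (with its usual defaults, beta1 = 0.9, beta2 = 0.999) can be written in a few lines. This is a generic sketch on a toy 1-D objective, not the paper's RNN training loop:

```python
import math

def adam_minimize(grad, x0, steps=1000, lr=0.1,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimal Adam loop with the default hyper-parameters."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2 * (x - 3)
x_min = adam_minimize(lambda x: 2 * (x - 3), x0=0.0)
```

The adaptive part is the division by `sqrt(v_hat)`: each parameter's step size is scaled by its own gradient history, which is what the non-adaptive optimizers in the comparison lack.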
Md. Zakir Hossain, Md Bashir Uddin, Yan Yang, K. A. Ahmed
{"title":"CovidEnvelope: An Automated Fast Approach to Diagnose COVID-19 from Cough Signals","authors":"Md. Zakir Hossain, Md Bashir Uddin, Yan Yang, K. A. Ahmed","doi":"10.1109/CSDE53843.2021.9718501","DOIUrl":"https://doi.org/10.1109/CSDE53843.2021.9718501","url":null,"abstract":"The COVID-19 pandemic has had a devastating impact on human health and well-being. Numerous biological tools have been utilised for COVID-19 detection, but most of these tools are costly, time-intensive and need personnel with domain expertise. A cost-effective classifier could solve this problem, and cough audio signals have shown potential for COVID-19 screening. Recent ML approaches to cough-based COVID-19 detection require costly deep learning algorithms or sophisticated methods to extract informative features. In this paper, we propose a low-cost and efficient envelope approach, called CovidEnvelope, which can classify COVID-19 positive and negative cases from raw data while avoiding the above disadvantages. This automated approach can select the correct audio signal (cough) from background noises, generate an envelope around the informative audio signal, and finally provide outcomes by computing the area enclosed by the envelope. Reliable datasets are also important for achieving high performance; our approach shows that human verbal confirmation is not a reliable source of information. The approach reaches a highest sensitivity, specificity, accuracy, and AUC of 0.96, 0.92, 0.94, and 0.94 respectively in detecting COVID-19 coughs. Our approach outperformed other existing models on data pre-processing and inference times, and achieved an accuracy and specificity of 0.91 and 0.99 respectively in distinguishing COVID-19 coughs from coughs caused by other respiratory diseases. The automated approach takes only 1.8 to 3.9 minutes to compute these results. 
Overall, our approach is fast and sensitive in diagnosing people living with COVID-19, regardless of whether they have COVID-19-related symptoms. The model can be implemented easily in mobile devices or web-based applications, and countries with poor health facilities would benefit greatly from it for COVID-19 diagnosis and prognostication.","PeriodicalId":166950,"journal":{"name":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132034348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
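The core idea of the abstract — envelope the cough signal, then use the enclosed area as the classification feature — can be sketched as follows. The sliding-window-maximum envelope and the window size are assumptions made for illustration; the paper's exact CovidEnvelope computation is not given in this record:

```python
def envelope_area(signal, window=4):
    """Upper amplitude envelope via a sliding-window maximum of |signal|,
    and the area under it (approximated as the sum of envelope samples).

    A crude, hypothetical stand-in for the CovidEnvelope feature: a cough
    with strong bursts encloses a larger area than low-level background noise.
    """
    n = len(signal)
    env = [max(abs(signal[j])
               for j in range(max(0, i - window), min(n, i + window + 1)))
           for i in range(n)]
    return env, sum(env)

env, area = envelope_area([0.0, 0.5, -1.0, 0.25, 0.0, 0.0], window=1)
```

Classifying then reduces to thresholding `area` (or a normalized version of it), which explains why the approach can run in minutes without a deep network.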
Kerem Buyukozdemir, Kazim Yildiz, Anil Bas, B. Uslu
{"title":"A Review of Heuristic Approaches to Vehicle Routing Problems","authors":"Kerem Buyukozdemir, Kazim Yildiz, Anil Bas, B. Uslu","doi":"10.1109/CSDE53843.2021.9718378","DOIUrl":"https://doi.org/10.1109/CSDE53843.2021.9718378","url":null,"abstract":"This study reviews approaches that are based on heuristic methods and include processed GPS data in their solutions for vehicle routing problems. Vehicle routing problems are challenging given their constraints and complexity. Due to the nature of vehicle routing, an appropriate solution must be found within a reasonable timeframe. However, it is not possible to scan the entire solution space and find the optimum solution within the ideal timeframe when the problem's complexity increases because of the number of points and the constraints. In such situations, the aim is to find the optimal solution, or a solution as close to it as possible. This motivates scenarios where heuristic and meta-heuristic algorithms are used to solve routing problems. GPS data is included in the solution algorithms to increase the performance of heuristic algorithms and routing solutions. As a result of the processed data, environmental factors such as congestion points, average speed on the route and traffic density according to hours are also taken into account. 
In this way, more consistent solutions are developed for real-life applications.","PeriodicalId":166950,"journal":{"name":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121575419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
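As a concrete example of the construction heuristics such reviews cover, the classic nearest-neighbour heuristic builds a route greedily: always visit the closest unvisited point next. It is fast but makes no optimality guarantee, which is exactly the trade-off heuristic routing accepts. The coordinates below are made up for illustration:

```python
import math

def nearest_neighbour_route(points, start=0):
    """Greedy nearest-neighbour route construction for a single vehicle.

    `points` is a list of (x, y) coordinates; returns the visiting order
    as a list of indices starting from `start`.
    """
    unvisited = set(range(len(points))) - {start}
    route = [start]
    while unvisited:
        cur = points[route[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(cur, points[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

stops = [(0, 0), (5, 5), (1, 0), (1, 1)]
route = nearest_neighbour_route(stops)   # greedily hops to the closest stop
```

In GPS-informed variants, the Euclidean `math.dist` would be replaced by a travel-time estimate that accounts for congestion and time-of-day traffic density.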
{"title":"An Efficient Mesoscopic Modeling Method for Large Volume Traffic Flow Using Process Mining Techniques","authors":"K. Uehara, K. Hiraishi","doi":"10.1109/CSDE53843.2021.9718441","DOIUrl":"https://doi.org/10.1109/CSDE53843.2021.9718441","url":null,"abstract":"With the development of computing power and the widespread use of sensor technologies, highly accurate and frequent large-volume traffic flow data has become readily available. Models created from these traffic flow data can serve various purposes, but handling large-volume traffic flow data requires huge computing power and a great deal of work. To mitigate this problem, we study mesoscopic models in which continuous values are replaced with statistical information derived from reduced data by discretization, while retaining the model abstraction level that allows for bottleneck verification and identification of stagnation. In addition, we propose a novel model creation method that reduces the workload by applying process mining techniques. Furthermore, using airport traffic flow data as an example, we create an actual model and show that process mining techniques are quite useful in the modeling process.","PeriodicalId":166950,"journal":{"name":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124940995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
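The mesoscopic reduction described above — replacing continuous measurements with per-bin statistics via discretization — can be sketched as below. The bin width and the speed values are illustrative assumptions, not data from the paper:

```python
from statistics import mean

def discretize(values, bin_width):
    """Group continuous measurements into fixed-width bins and keep only
    summary statistics per bin, discarding the raw samples.

    This is the basic mesoscopic idea: far less data to store and process,
    while bin-level statistics still reveal bottlenecks and stagnation.
    """
    bins = {}
    for v in values:
        bins.setdefault(int(v // bin_width), []).append(v)
    return {b: {"count": len(vs), "mean": mean(vs)}
            for b, vs in sorted(bins.items())}

speeds = [12.1, 14.8, 31.0, 33.3, 34.9]     # e.g. vehicle speeds in km/h
summary = discretize(speeds, bin_width=10)  # bins 1 (10-20) and 3 (30-40)
```

A cluster of samples in a low-speed bin (here bin 1) is the kind of statistical signal that would flag stagnation in the reduced model.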
{"title":"Using Score-based Structure Learning to Computationally Learn Direct Influence between Hierarchical Dynamic Bayesian Networks","authors":"Ritesh Ajoodha, Benjamin Rosman","doi":"10.1109/CSDE53843.2021.9718401","DOIUrl":"https://doi.org/10.1109/CSDE53843.2021.9718401","url":null,"abstract":"Numerous fields of science have investigated stochastic processes which are partially observable. However, the interactions between several of these processes, and their influence upon each other, have not been probed extensively. This paper uses probabilistic structure learning in an attempt to learn influence relationships between stochastic processes that are partially observed. These processes are represented by hierarchical dynamic Bayesian networks (HDBNs). To track the direct influence between these processes, we provide an algorithm that extends the BIC structure score as well as the cumbersome (greedy hill-climbing) local search procedure. Our method leverages the temporal nature of the HDBN through the use of assembles, thereby surpassing the standard approach that treats each process as a single variable. The derived BIC-score for HDBN families is clearly shown to be theoretically decomposable and empirically consistent.","PeriodicalId":166950,"journal":{"name":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122447437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
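For context, the standard (scalar) form of the BIC structure score that the paper extends to HDBN families trades log-likelihood against a model-complexity penalty. This sketch uses made-up likelihood values and does not reflect the paper's derivation for HDBNs:

```python
import math

def bic_score(log_likelihood, n_params, n_samples):
    """Bayesian Information Criterion in score form (higher is better):
    log-likelihood minus a complexity penalty of (k/2) * log(N)."""
    return log_likelihood - 0.5 * n_params * math.log(n_samples)

# A richer structure must gain enough likelihood to justify its parameters.
# Hypothetical numbers: the complex model fits slightly better but pays
# a much larger penalty, so the simple structure wins.
simple = bic_score(log_likelihood=-120.0, n_params=3, n_samples=100)
complex_ = bic_score(log_likelihood=-118.5, n_params=10, n_samples=100)
```

In score-based structure learning, a local search (e.g. greedy hill-climbing) proposes edge changes and keeps the one with the best such score; decomposability means only the scores of families touched by a change need recomputing.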
Behfar Behzad, Dhanya Therese Jose, Antorweep Chakravorty, Chunming Rong
{"title":"TOTEM SDK: an open toolset for token controlled computation managed by blockchain","authors":"Behfar Behzad, Dhanya Therese Jose, Antorweep Chakravorty, Chunming Rong","doi":"10.1109/CSDE53843.2021.9718489","DOIUrl":"https://doi.org/10.1109/CSDE53843.2021.9718489","url":null,"abstract":"With Internet use surging over the past decade, a wide range of data has been generated at a breakneck pace from various sources such as social media, banking sectors and governments. Many organizations have changed their work culture and adopted Big Data analytics to gain various benefits from the data being produced. Nevertheless, sharing such big data has always been a tremendous challenge for scientists and engineers, as it involves large volumes of sensitive data that cannot be handled by conventional data analysis. TOTEM (Token for Controlled Computation) is an innovative concept that integrates both blockchain technologies and big data systems and uses their advantages to present a better, more secure and more cost-effective solution for both data owners and data consumers. The TOTEM architecture (US Patent No.: US11,121,874 B2) aims to overcome security and privacy breaches and prevent moving large data sets across the network for analysis. A totem is an entity used in the TOTEM architecture to put constraints on computational operations. Authorised users in the network are allowed to write their own code in a specific format through a TOTEM-defined SDK for analysing the data provided by the data owner. The SDK, along with the deployed smart contracts in the network, forms a pre-monitoring system that keeps track of the totem value associated with each user’s submitted code using an estimator table. 
In this article, we focus on the layers of this TOTEM-defined SDK and explain how the SDK interacts with submitted code, analyzes it within the layers, and finally responds to it.","PeriodicalId":166950,"journal":{"name":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122728477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
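The estimator-table idea — pricing each operation in a user's submitted code and admitting the job only if the totem balance covers the estimate — can be sketched as below. The operation names and per-operation costs are hypothetical; the actual TOTEM estimator table is not given in this excerpt:

```python
# Hypothetical per-operation totem costs (the real estimator table would be
# maintained by the SDK and the deployed smart contracts).
COST_TABLE = {"read_row": 1, "map": 2, "reduce": 5}

def estimate_cost(operations):
    """Sum the totem cost of the operations in a submitted code.

    Raises KeyError for an unknown operation: if it cannot be priced,
    the pre-monitoring system cannot admit it.
    """
    return sum(COST_TABLE[op] for op in operations)

def admit(operations, totem_balance):
    """Pre-monitoring check: run the computation only if the user's
    totem balance covers the estimated cost."""
    return estimate_cost(operations) <= totem_balance

ok = admit(["read_row", "map", "reduce"], totem_balance=10)   # cost 8 <= 10
```

In the real architecture this check would happen on-chain via smart contracts before the data owner's infrastructure executes anything, so data never leaves the owner's side for an unpriced or unaffordable computation.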
{"title":"Performance Evaluation of the Weakly Hard Real-Time Tasks for Global Multiprocessor Scheduling Approach","authors":"Habibah Ismail, D. Jawawi, I. Ahmedy, M. A. Isa","doi":"10.1109/CSDE53843.2021.9718499","DOIUrl":"https://doi.org/10.1109/CSDE53843.2021.9718499","url":null,"abstract":"Real-time systems can be classified into three categories based on the “seriousness” of deadline misses: hard, soft, or weakly hard real-time tasks. The consequences of a deadline miss cannot be tolerated for a hard real-time task, because a failure can affect system performance, whereas some deadline misses can be tolerated for soft real-time tasks. Meanwhile, in a weakly hard real-time task, the distribution of its met and missed deadlines is stated and specified precisely. Due to the complexity and significantly increased functionality in system computation, attention has been given to multiprocessor scheduling. Studies have shown that current multiprocessor scheduling of weakly hard real-time tasks uses an imprecise computation model based on iterative algorithms. This algorithm decomposes each task into two parts, mandatory and optional; unfortunately, the analysis result is precise only if both parts are executed. Moreover, the use of a hierarchical scheduling algorithm, such as two-level scheduling under the PFair algorithm, may cause high overhead due to frequent preemptions and migrations, and incurs significant run-time overhead due to its quantum-based scheduling. To address these limitations, an alternative multiprocessor scheduling approach, called global scheduling, is proposed. The proposed scheduling approach aims to improve the probability of deadline satisfaction as much as possible while achieving higher utilization of the task sets with fewer task migrations. 
Thus, in this paper, performance measurement parameters are used to evaluate the proposed scheduling approach.","PeriodicalId":166950,"journal":{"name":"2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE)","volume":"13 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134333647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
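A weakly hard task's precisely specified deadline distribution is commonly written as an (m, k) constraint: in any window of k consecutive jobs, at least m must meet their deadlines. A minimal checker for such a constraint (an illustrative sketch, not the paper's evaluation code):

```python
def satisfies_mk(met, m, k):
    """Weakly hard (m, k) check: every window of k consecutive jobs must
    contain at least m met deadlines. `met` is a per-job list of booleans
    (True = deadline met), in release order."""
    return all(sum(met[i:i + k]) >= m for i in range(len(met) - k + 1))

# A hypothetical execution history: two isolated misses
history = [True, True, False, True, True, False, True, True]
ok_2_3 = satisfies_mk(history, m=2, k=3)   # every 3-window has >= 2 met
ok_3_3 = satisfies_mk(history, m=3, k=3)   # fails on windows with a miss
```

Metrics like the probability of deadline satisfaction mentioned above can then be read off the same history, e.g. as the fraction of windows (or jobs) that satisfy the constraint.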