2016 International Workshop on Big Data and Information Security (IWBIS): Latest Publications

Generalized learning vector quantization particle swarm optimization (GLVQ-PSO) FPGA implementation for real-time electrocardiogram
2016 International Workshop on Big Data and Information Security (IWBIS), Pub Date: 2016-10-01, DOI: 10.1109/IWBIS.2016.7872897
Yulistiyan Wardhana, W. Jatmiko, M. F. Rachmadi
Abstract: The cardiovascular system is a vital part of the human body, distributing oxygen and carrying away the body's wastes. More than 60,000 miles of blood vessels take part in this task, and a clog in any of them can cause serious problems. Unfortunately, clogged blood vessels and other cardiovascular malfunctions cannot be detected by plain sight. We therefore propose the design of a wearable device that can detect these conditions. The device is equipped with a new neural network algorithm, GLVQ-PSO, which recommends a heart status based on learned data. In our experiments, the algorithm produced better accuracy than LVQ, GLVQ, and FNGLVQ in the high-level language implementation, although GLVQ-PSO still performs relatively worse in its FPGA implementation.
Citations: 1
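For context on the base classifier, the following is a minimal NumPy sketch of a single GLVQ prototype update in the style of Sato and Yamada's generalized LVQ; it is not the authors' implementation: the PSO component that GLVQ-PSO adds and the FPGA mapping are omitted, and the learning rate and sigmoid steepness are illustrative assumptions.

```python
import numpy as np

def glvq_step(x, y, prototypes, proto_labels, lr=0.01, steep=1.0):
    """One GLVQ update for sample x with label y, squared Euclidean distance."""
    d = np.sum((prototypes - x) ** 2, axis=1)
    same = proto_labels == y
    i1 = np.argmin(np.where(same, d, np.inf))    # nearest prototype of the same class
    i2 = np.argmin(np.where(~same, d, np.inf))   # nearest prototype of another class
    d1, d2 = d[i1], d[i2]
    mu = (d1 - d2) / (d1 + d2)                   # relative distance difference in [-1, 1]
    s = 1.0 / (1.0 + np.exp(-steep * mu))        # sigmoid cost
    f_prime = steep * s * (1.0 - s)              # its derivative w.r.t. mu
    denom = (d1 + d2) ** 2
    prototypes[i1] += lr * f_prime * (d2 / denom) * (x - prototypes[i1])  # attract correct
    prototypes[i2] -= lr * f_prime * (d1 / denom) * (x - prototypes[i2])  # repel incorrect
    return prototypes

# Tiny usage example with one prototype per class in 2-D.
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
protos = glvq_step(np.array([0.2, 0.1]), 0, protos, labels)
```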
The power of big data and algorithms for advertising and customer communication
2016 International Workshop on Big Data and Information Security (IWBIS), Pub Date: 2016-10-01, DOI: 10.1109/IWBIS.2016.7872882
Nico Neumann
Abstract: Leveraging customer data at scale, and often in real time, has led to a new field called programmatic commerce: the use of data, automation, and analytics to improve customer experiences and company performance. Programmatic applications have become especially popular in advertising and marketing because they allow personalization and micro-targeting as well as easier media planning thanks to the rise of automated buying processes. In this review study, we discuss the development of this new field around advertising and marketing technology and summarize present research efforts. In addition, some industry case studies are shared to illustrate the power of the latest big-data and machine-learning applications for driving business outcomes.
Citations: 4
Overview of research center for information technology innovation in Taiwan Academia Sinica
2016 International Workshop on Big Data and Information Security (IWBIS), Pub Date: 2016-10-01, DOI: 10.1109/IWBIS.2016.7872881
Yennun Huang, Szu-Chuang Li
Abstract: Founded in February 2007, the Research Center for Information Technology Innovation (CITI) at Academia Sinica aims to integrate the research and development efforts in information technology of the various organizations within Academia Sinica and to facilitate and leverage IT-related multidisciplinary research. As an integral part of CITI, the Taiwan Information Security Center (TWISC) conducts security research with funding support from the Ministry of Science and Technology. TWISC serves as a platform for security experts from universities, research institutes, and the private sector to share information and explore opportunities for collaboration. Its aim is to boost research and development activities and to promote public awareness of information security, and its research topics cover data, software, hardware, and network security as well as security management. TWISC has become the hub of security research in Taiwan and has had a significant impact through its publications and toolkits. Recently, privacy has also become one of TWISC's main focuses: the research team at CITI, Academia Sinica, has been working on a viable way to assess the disclosure risk of synthetic datasets, and preliminary results are presented in this paper.
Citations: 0
Design and implementation of merchant acquirer data warehouse at PT. XYZ
2016 International Workshop on Big Data and Information Security (IWBIS), Pub Date: 2016-10-01, DOI: 10.1109/IWBIS.2016.7872888
Y. Ruldeviyani, Bofandra Mohammad
Abstract: Merchant acquiring is the business of accepting debit card, credit card, and prepaid card transactions through EDC (electronic data capture) terminals at merchants. It is one of the top business priority areas at PT. XYZ, within retail payments and deposits, because it increases fee-based income, cheap funds, and high-yield loans. To improve its business performance, PT. XYZ needs the best strategy, and a good strategic decision requires adequate and useful information. Currently, the information provided by reporting staff involves many manual steps; as a consequence, the data cannot be delivered quickly and has complexity limitations. To solve this problem, PT. XYZ needs a data warehouse for its merchant acquirer business. This research focuses on the design and implementation of the data warehouse solution using the methodology developed by Ralph Kimball. The resulting data warehouse is suited to the needs of PT. XYZ's merchant acquirer business.
Citations: 2
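As an illustration of the Kimball-style dimensional modelling the paper follows, here is a hypothetical star schema for merchant acquiring sketched with SQLite; the actual PT. XYZ schema is not described in the abstract, so every table and column name below is an assumption.

```python
import sqlite3

ddl = """
CREATE TABLE dim_merchant (
    merchant_key  INTEGER PRIMARY KEY,
    merchant_name TEXT,
    category      TEXT,
    city          TEXT
);
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,   -- e.g. 20161001
    full_date TEXT,
    month     INTEGER,
    year      INTEGER
);
-- Fact table: one row per EDC card transaction, linked to the dimensions.
CREATE TABLE fact_edc_transaction (
    date_key     INTEGER REFERENCES dim_date(date_key),
    merchant_key INTEGER REFERENCES dim_merchant(merchant_key),
    card_type    TEXT,               -- debit / credit / prepaid
    amount       REAL,
    fee          REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
print([r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")])
```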
Design DDoS attack detector using NTOPNG
2016 International Workshop on Big Data and Information Security (IWBIS), Pub Date: 2016-10-01, DOI: 10.1109/IWBIS.2016.7872903
G. Jati, Budi Hartadi, A. Putra, Fahri Nurul, M. Iqbal, S. Yazid
Abstract: A Distributed Denial of Service (DDoS) attack is carried out with multiple computers: the attacker acts as a fake service requester that drains the target computer's resources so that it can no longer serve genuine requests. We therefore developed a DDoS detector system consisting of traffic capture, a packet analyzer, and a packet displayer, with Ntopng as the main traffic analyzer. The detector has to meet good standards of accuracy, sensitivity, and reliability. We evaluated the system against Slowloris, a well-known and dangerous DDoS tool. The system detects attacks, alerts the user, and processes all incoming packets with a small margin of error (0.76%).
Citations: 3
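As a rough illustration of the kind of rule such a detector can apply, the sketch below flags clients that hold an unusually large number of simultaneous open connections, which is the typical symptom of a Slowloris attack; it does not use the Ntopng API, and the threshold is an arbitrary assumption rather than a value from the paper.

```python
from collections import Counter

def detect_slowloris(open_connections, threshold=200):
    """open_connections: iterable of (src_ip, dst_port) pairs for currently open connections."""
    per_source = Counter(src for src, _ in open_connections)
    # Flag any source holding at least `threshold` connections at once.
    return [ip for ip, count in per_source.items() if count >= threshold]

# Example: a single source holding 500 open connections to port 80 is flagged.
conns = [("10.0.0.5", 80)] * 500 + [("10.0.0.7", 80)] * 3
print(detect_slowloris(conns))   # ['10.0.0.5']
```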
Enhanced tele ECG system using Hadoop framework to deal with big data processing
2016 International Workshop on Big Data and Information Security (IWBIS), Pub Date: 2016-10-01, DOI: 10.1109/IWBIS.2016.7872900
M. A. Ma'sum, W. Jatmiko, H. Suhartanto
Abstract: Indonesia has high mortality caused by cardiovascular diseases. To reduce this mortality, we built a tele-ECG system for the early detection and monitoring of heart diseases. In this research, the tele-ECG system was enhanced with the Hadoop framework in order to handle big data processing. The system was built on a computer cluster with 4 nodes, and the server is able to handle 60 requests at the same time. The system classifies ECG data using a decision tree and a random forest, with accuracies of 97.14% and 98.92%, respectively. Training is faster with the random forest, while testing is faster with the decision tree.
Citations: 7
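The decision tree versus random forest comparison reported above can be reproduced in spirit with scikit-learn, as in the sketch below; synthetic data stands in for the paper's ECG features, and the Hadoop-based serving layer is not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic multi-class data as a placeholder for extracted ECG beat features.
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("random forest", RandomForestClassifier(n_estimators=100,
                                                           random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```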
Big sensor-generated data streaming using Kafka and Impala for data storage in Wireless Sensor Network for CO2 monitoring
2016 International Workshop on Big Data and Information Security (IWBIS), Pub Date: 2016-10-01, DOI: 10.1109/IWBIS.2016.7872896
Rindra Wiska, Novian Habibie, A. Wibisono, W. S. Nugroho, P. Mursanto
Abstract: A Wireless Sensor Network (WSN) is a system capable of data acquisition and monitoring over a wide sampling area for a long time. Because of this large-scale monitoring, the amount of data accumulated from a WSN is very large, and a conventional database system may not be able to handle it; a big data approach is therefore used as an alternative for data storage and analysis. This research developed a WSN system for CO2 monitoring that uses Kafka and Impala to distribute the large volume of data: sensor nodes gather readings into temporary storage, which are then streamed via the Kafka platform and stored in an Impala database. The system was tested with data gathered from our own sensor nodes and showed good performance.
Citations: 13
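A minimal sketch of the sensor-to-Kafka leg of such a pipeline is shown below, assuming the kafka-python package and a broker at localhost:9092; the topic name and message fields are invented for illustration, and the Impala sink that the paper loads from the Kafka stream is not shown.

```python
import json
import time

from kafka import KafkaProducer  # kafka-python package (assumed available)

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_reading(node_id, co2_ppm):
    """Send one CO2 reading from a sensor node to the (hypothetical) 'co2-readings' topic."""
    producer.send("co2-readings", {"node": node_id,
                                   "co2_ppm": co2_ppm,
                                   "ts": time.time()})

publish_reading("node-01", 412.5)
producer.flush()   # make sure buffered messages reach the broker
```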
Dimensionality reduction using deep belief network in big data case study: Hyperspectral image classification
2016 International Workshop on Big Data and Information Security (IWBIS), Pub Date: 2016-10-01, DOI: 10.1109/IWBIS.2016.7872892
D. M. S. Arsa, G. Jati, Aprinaldi Jasa Mantau, Ito Wasito
Abstract: The high dimensionality of big data demands heavy computation during analysis. This research proposes dimensionality reduction using a deep belief network (DBN), with hyperspectral images, which are high-dimensional, as the case study. Previous work has reduced hyperspectral image dimensionality with methods such as LDA and PCA in spectral-spatial hyperspectral image classification. In the proposed framework we use two DBNs: the first reduces the dimensionality of the spectral bands, and the second extracts spectral-spatial features and acts as the classifier. We used the Indian Pines data set, which consists of 16 classes, and compared the performance of the DBN with PCA. The results indicate that using a DBN for dimensionality reduction performs better than PCA in hyperspectral image classification.
Citations: 15
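A rough stand-in for the DBN-versus-PCA comparison can be sketched with scikit-learn, where stacked BernoulliRBM layers approximate only the unsupervised pre-training of a DBN (there is no supervised fine-tuning as in a real DBN) and random data replaces the Indian Pines cube; layer sizes and class counts below are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.RandomState(0)
X = rng.rand(500, 200)             # 500 pixels x 200 spectral bands (synthetic)
y = rng.randint(0, 16, size=500)   # 16 classes, as in Indian Pines
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

rbm_based = make_pipeline(MinMaxScaler(),
                          BernoulliRBM(n_components=64, random_state=0),
                          BernoulliRBM(n_components=32, random_state=0),
                          LogisticRegression(max_iter=1000))
pca_based = make_pipeline(PCA(n_components=32),
                          LogisticRegression(max_iter=1000))

# On real hyperspectral data the two pipelines would be compared on accuracy;
# with random labels both scores stay near chance level.
for name, model in [("stacked RBMs", rbm_based), ("PCA", pca_based)]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", model.score(X_te, y_te))
```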
Spatial data mining for predicting of unobserved zinc pollutant using ordinary point Kriging
2016 International Workshop on Big Data and Information Security (IWBIS), Pub Date: 2016-10-01, DOI: 10.1109/IWBIS.2016.7872894
A. A. Gunawan, A. N. Falah, Alfensi Faruk, D. S. Lutero, B. N. Ruchjana, A. S. Abdullah
Abstract: Due to pollution over many years, large amounts of heavy metal pollutants can accumulate in rivers. In this research, we predict the dangerous regions around a river, using as a case study the Meuse river floodplains, which are contaminated with zinc (Zn). Large zinc concentrations can cause many health problems, for example vomiting, skin irritation, stomach cramps, and anaemia. However, only a few samples of the zinc concentration along the Meuse are available, so values for the unobserved regions need to be generated. The aim of this research is to study and apply spatial data mining to predict the unobserved zinc pollutant using ordinary point Kriging. The variability pattern of zinc is captured by means of a semivariogram, and the fitted model is then used by the Kriging interpolation to predict the unknown regions. In our experiments, we apply ordinary point Kriging with several semivariogram models: Gaussian, exponential, and spherical. The experimental results show that (i) based on the minimum error sum of squares, the best-fitting theoretical semivariogram is the exponential model, and (ii) the accuracy of the predictions can be confirmed visually by projecting the results onto the map.
Citations: 7
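Ordinary point Kriging with an exponential semivariogram can be sketched with the pykrige package as below; the sample coordinates and zinc values are synthetic placeholders rather than the Meuse measurements used in the paper.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Synthetic sampled locations (x, y) and zinc concentrations (ppm).
x = np.array([0.0, 120.0, 250.0, 310.0, 480.0, 600.0])
y = np.array([0.0, 200.0, 90.0, 400.0, 260.0, 500.0])
zinc = np.array([1022.0, 640.0, 875.0, 257.0, 430.0, 198.0])

ok = OrdinaryKriging(x, y, zinc, variogram_model="exponential")

# Predict on a regular grid covering the sampled area; execute() returns the
# Kriging estimates and the Kriging variance at every grid node.
gridx = np.linspace(x.min(), x.max(), 20)
gridy = np.linspace(y.min(), y.max(), 20)
z_pred, z_var = ok.execute("grid", gridx, gridy)
print(z_pred.shape)   # (20, 20)
```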
The application of big data using MongoDB: Case study with SCeLE Fasilkom UI forum data
2016 International Workshop on Big Data and Information Security (IWBIS), Pub Date: 2016-10-01, DOI: 10.1109/IWBIS.2016.7872889
Argianto Rahartomo, R. F. Aji, Y. Ruldeviyani
Abstract: Big data refers to a condition in which the data in a database is so large that it becomes difficult to manage. An e-learning application such as SCeLE Fasilkom UI (scele.cs.ui.ac.id) also holds very large data: SCeLE has hundreds of forums, each forum has at least 4,000 discussion threads, and a thread can contain dozens or hundreds of posts. It may therefore experience data growth that is difficult to handle with an RDBMS such as MySQL, which is currently used. To solve this problem, this research applies a big data approach to SCeLE Fasilkom UI with the aim of improving SCeLE's data management performance. The implementation uses MongoDB as the system's DBMS. The results show that MongoDB performs better than MySQL on the SCeLE Fasilkom UI forum data in terms of speed.
Citations: 1
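A minimal pymongo sketch of the document model implied by the case study is shown below, with one document per forum post keyed by forum and thread identifiers; the connection string, database name, and field names are assumptions, and no MySQL comparison is run.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
posts = client["scele"]["forum_posts"]   # hypothetical database and collection names

# Store forum posts as flat documents, one per post.
posts.insert_many([
    {"forum_id": 12, "thread_id": 4001, "author": "student_a", "body": "First post"},
    {"forum_id": 12, "thread_id": 4001, "author": "student_b", "body": "A reply"},
])

# Retrieve every post in one discussion thread.
for doc in posts.find({"forum_id": 12, "thread_id": 4001}):
    print(doc["author"], ":", doc["body"])
```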