{"title":"Data mining and integration for environmental scenarios","authors":"V. Tran, L. Hluchý, O. Habala","doi":"10.1145/1852611.1852622","DOIUrl":"https://doi.org/10.1145/1852611.1852622","url":null,"abstract":"In this paper we describe our work on the framework for integration and mining of environmental data. We present a suite of selected scenarios which are created within a data mining and integration framework being developed in the project ADMIRE. The scenarios have been chosen for their suitability for data mining by environmental experts which deal with meteorological and hydrological problems, and apply the chosen solutions to pilot areas within Slovakia. The main challenge is that the environmental data required by scenarios are maintained and provided by different organizations and are often in different formats. We present our approach to the specification and execution of data integration tasks, which deals with the distributed nature and heterogeneity of required data resources.","PeriodicalId":388053,"journal":{"name":"Proceedings of the 1st Symposium on Information and Communication Technology","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132316349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Constraint-based local search for solving non-simple paths problems on graphs: application to the routing for network covering problem","authors":"Pham Quang Dung, Phan-Thuan Do, Y. Deville, Hô Tuòng Vinh","doi":"10.1145/1852611.1852613","DOIUrl":"https://doi.org/10.1145/1852611.1852613","url":null,"abstract":"Routing problems have been considered as central problems in the fields of transportation, distribution and logistics. LS(Graph) is a generic framework allowing to model and solve constrained optimum paths problems on graphs by local search where paths are known to be elementary (i.e., edges, vertices cannot be repeated on paths). In many real-world situations, the paths to be determined are not known to be neither simple nor elementary. In this paper, we extend the LS(Graph) framework by designing and implementing abstractions that allow to model and solve constrained paths problem where edges, vertices can be repeated on paths (call non-simple paths). We also propose an instance of such problem class: the routing for network covering (RNC) problem which arises in the context of rescue after a natural disaster in which we have to route a fleet of identical vehicles with limited capacity on a transportation network in order to collect the informations of the disaster. Given an undirected weighted graph G = (V, E) representing a transportation network and a vertex v0 ∈ V representing the depot, the RNC problem consists of routing a fleet of unlimited number of identical vehicles with limited capacity that cannot perform a path of length > L such that each vehicle starts from and teminates at the depot and all the edges of a given set S (S ⊆ E) must be visited. The objective of the routing plan is to minimize the number of vehicles used. This paper discusses the challenge around this problem and applies the constructed framework to the resolution of this problem. 
The proposed model is generic; it allows to solve some variants of the problem where side constraints are required to be added.","PeriodicalId":388053,"journal":{"name":"Proceedings of the 1st Symposium on Information and Communication Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130209089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MemMON: run-time off-chip detection for memory access violation in embedded systems","authors":"Nam Ho, Anh-Vu Dinh-Duc","doi":"10.1145/1852611.1852634","DOIUrl":"https://doi.org/10.1145/1852611.1852634","url":null,"abstract":"To deploy a memory protection mechanism, it requires CPU support hardware components like Memory Management Unit (MMU) or Memory Protection Unit (MPU). However, in embedded system, most of microcontrollers lack to be equipped these features because they cause the system incurred hardware cost and performance penalty. In this paper, a method to detect memory corruption at run-time without incurring hardware cost is proposed. Embedded system processor does not require having MMU or MPU. Off-chip detection based on FPGA by hooking on memory bus to monitor memory access for multitasking Realtime Operating System (RTOS) application is explored. Our solution, called MemMON, by combining hardware/software can detect memory access error such as task's stack overflow, task's reading/writing to code/data segments of the other tasks or memory access violation to OS kernel efficiently. In experimental evaluation, a comparison of realtime schedulability is carried out for both using and not using MemMON. Using our MemMON causes realtime schedulability of the system dropped-off about 3 times.","PeriodicalId":388053,"journal":{"name":"Proceedings of the 1st Symposium on Information and Communication Technology","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122060980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Name entity recognition using inductive logic programming","authors":"H. T. Le, Thien Huu Nguyen","doi":"10.1145/1852611.1852626","DOIUrl":"https://doi.org/10.1145/1852611.1852626","url":null,"abstract":"Named entity recognition (NER) is the process of seeking to locate atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, and percentages. It is useful in applying NER to other natural language tasks such as question-answering, text summarization, building semantic web, etc. This paper presents a system, called BKIE, that uses SRV -- an inductive logic program - to extract name entities in Vietnamese text. New predicates and features are added to SRV to deal with characteristics of Vietnamese language. Also, several strategies are proposed in this paper to improve the efficiency of the SRV algorithm. The data set using in experiments is 80 homepages of scientists in Vietnamese language that were tagged manually. The experiments give us the best F-score of 83% for extracting the \"name\" entity. It shows that SRV is an efficient NER algorithm given its advantages of generality and flexibility. 
In order to increase the system's performance, our future work includes (i) building a larger set of training data to improve system's performance; (ii) implementing BKIE using parallel programming to increase system efficiency; and (iii) testing BKIE with other application domains to get a more accurate evaluation of the system.","PeriodicalId":388053,"journal":{"name":"Proceedings of the 1st Symposium on Information and Communication Technology","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124042077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating qualified summarization answers using fuzzy concept hierarchies","authors":"Ngo Tuan Phong, N. Phuong, N. K. Anh","doi":"10.1145/1852611.1852620","DOIUrl":"https://doi.org/10.1145/1852611.1852620","url":null,"abstract":"In this paper, we introduce a partially automated method to generate qualified answers at multiple abstraction levels for database queries. We examine the issues involving data summarization by Attribute-Oriented Induction (AOI) on large databases using fuzzy concept hierarchies. Because a node may have many abstracts, the fuzzy hierarchies become more complex and vaguer than crisp ones. Therefore, we cannot use exactly the original AOI algorithm with crisp hierarchies, applied for fuzzy hierarchies, to get interesting answers. The main contribution of this paper is that we propose a new approach to refine fuzzy hierarchies and evaluate tuple-terminal conditions to reduce noisy tuples. The foundations of our approach are the generalization hierarchy and a new method to estimate tuple quality. We implemented the algorithm in our knowledge discovery system and the experimental results show that the approach is efficient and suitable for knowledge discovery in large databases.","PeriodicalId":388053,"journal":{"name":"Proceedings of the 1st Symposium on Information and Communication Technology","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124808205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Particle-based simulation of blood flow and vessel wall interactions in virtual surgery","authors":"J. Qin, Wai-Man Pang, Binh P. Nguyen, Dong Ni, C. Chui","doi":"10.1145/1852611.1852636","DOIUrl":"https://doi.org/10.1145/1852611.1852636","url":null,"abstract":"We propose a particle-based solution to simulate the interactions between blood flow and vessel wall for virtual surgery. By coupling two particle-based techniques, the smoothed particle hydrodynamics (SPH) and mass-spring model (MSM), we can simulate the blood flow and deformation of vessel seamlessly. At the vessel wall, particles are considered as both boundary particles for SPH solver and mass points for the MSM solver. We implement an improved repulsive boundary condition to simulate the interactions. The computation of blood flow dynamics and vessel wall deformations are performed in an alternating fashion in every time step. To ensure realism, parameters of both SPH and MSM are carefully configured. Experimental results demonstrate the potential of the proposed method in providing real-time and realistic interactions for virtual vascular surgery systems.","PeriodicalId":388053,"journal":{"name":"Proceedings of the 1st Symposium on Information and Communication Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132834901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The LogP and MLogP models for parallel image processing with multi-core microprocessor","authors":"C. Chui","doi":"10.1145/1852611.1852616","DOIUrl":"https://doi.org/10.1145/1852611.1852616","url":null,"abstract":"Despite the advancement and availability of the multiple core microprocessors, it remains an issue on how to fully utilize this relatively new computing platform to achieve optimal performance for a parallel algorithm. There are limitations to the existing theoretical model in analyzing parallel algorithms for multi-core microprocessor systems. The proposed Multi-core LogP (MLogP) model is a more realistic model for parallel computing with multi-core microprocessor. The MLogP model is a variant of the popular LogP model for parallel computation. Experiment with parallel image processing algorithms were used to determine the abilities of LogP and MLogP models in predicting the performance of parallel image processing algorithms on a Intel Core2 Quad 2.44 GHz microprocessor.","PeriodicalId":388053,"journal":{"name":"Proceedings of the 1st Symposium on Information and Communication Technology","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132178459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparative analysis of transliteration techniques based on statistical machine translation and joint-sequence model","authors":"Nam Cao, Nhut M. Pham, Q. Vu","doi":"10.1145/1852611.1852624","DOIUrl":"https://doi.org/10.1145/1852611.1852624","url":null,"abstract":"The inability to deal with words in foreign languages imposes difficulties to both Vietnamese speech recognition and text-to-speech systems. A common solution is to look up a dictionary, but the number of available entries is finite and therefore not flexible because speech recognition and text-to-speech systems are expected to handle arbitrary words. Alternatively, data-driven approaches can be employed to transliterate a foreign word into its Vietnamese pronunciation by learning samples and predicting unseen words. This paper presents a comparative analysis between two data-driven approaches based on statistical machine translation and joint-sequence model. Two systems based on these approaches are developed and tested using the same experimental protocol and a dataset consisting of 8050 English words. Results show that joint-sequence model outperforms statistical machine translation in English-to-Vietnamese transliteration.","PeriodicalId":388053,"journal":{"name":"Proceedings of the 1st Symposium on Information and Communication Technology","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122006844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction-based directional search for fast block-matching motion estimation","authors":"Binh P. Nguyen, T. Do, C. Chui, S. Ong","doi":"10.1145/1852611.1852629","DOIUrl":"https://doi.org/10.1145/1852611.1852629","url":null,"abstract":"This paper proposes an efficient block-matching motion estimation algorithm known as prediction-based directional search (PDS). This new algorithm is applicable to a wide range of video processing applications. The algorithm uses the motion vectors in two neighboring blocks to predict a starting search point for the current block. The subsequent refining search relies on the hypothesis of monotonic block distortion surface and the center-biased characteristic of motion vector probability distribution. The cross pattern in a step and one of four possible directional rectangle search patterns in the next step are iteratively used to find the motion vector. Experiments on eleven video sequences with different characteristics shows that PDS can achieve a faster computation speed with similar or even better distortion performance compared to some existing well-known algorithms.","PeriodicalId":388053,"journal":{"name":"Proceedings of the 1st Symposium on Information and Communication Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116207382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Password recovery for encrypted ZIP archives using GPUs","authors":"P. Phong, Phan Duc Dung, Duong Nhat Tan, N. Duc, N. T. Thuy","doi":"10.1145/1852611.1852617","DOIUrl":"https://doi.org/10.1145/1852611.1852617","url":null,"abstract":"Protecting data by passwords in documents such as DOC, PDF or RAR, ZIP archives has been demonstrated to be weak under dictionary attacks. Time for recovering the passwords of such documents mainly depends on two factors: the size of the password search space and the computing power of the underline system. In this paper, we present an approach using modern multi-core graphic processing units (GPUs) as computing devices for finding lost passwords of ZIP archives. The combination of GPU's extremely high computing power and the state-of-the-art password structure analysis methods would bring us a feasible solution for recovering ZIP file password. We first apply password generation rules[9] in generating a reasonable password space, and then use GPUs for exhaustively verifying every password in the space. The experimental results have shown that the password verification speed increases about from 48 to 170 times (depends on the number of GPUs) compared to sequential execution on the Intel Core 2 Quad Q8400 2.66 Ghz. 
These results have demonstrated the potential applicability of GPUs in this cryptanalysis field.","PeriodicalId":388053,"journal":{"name":"Proceedings of the 1st Symposium on Information and Communication Technology","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123855839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}