{"title":"Fusion of bidirectional image matrices and 2D-LDA: an efficient approach for face recognition","authors":"Hung Phuoc Truong, T. Le","doi":"10.1145/2350716.2350738","DOIUrl":"https://doi.org/10.1145/2350716.2350738","url":null,"abstract":"Although 2D-PCA and 2D-LDA algorithms obtain high recognition accuracy, drawback of these is that they need huge feature matrices for the task of face recognition. Besides, structure information between row and column direction cannot be uncovered simultaneously. To overcome these problems, this paper presents an efficient approach for face image feature extraction - a novel two-stage discrimination approach: preprocess original images to get two new image matrices and represent these images matrices using bidirectional 2D-LDA techniques. This approach directly extracts the optimal projective vectors from two new 2D image matrices by simultaneously considering row-direction 2D-LDA and column direction 2D-LDA. With this proposal, we can utilize the idea of local block features and global 2D images structures so it can preserve the 2D local facial features. Experimental results on ORL and Yale face database demonstrate that the proposed method obtains good recognition accuracy despite having less number of coefficient and few training samples (about two samples for each class).","PeriodicalId":208300,"journal":{"name":"Proceedings of the 3rd Symposium on Information and Communication Technology","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115394551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient label propagation for classification on information networks","authors":"N. K. Anh, V. Thanh, Ngo Van Linh","doi":"10.1145/2350716.2350725","DOIUrl":"https://doi.org/10.1145/2350716.2350725","url":null,"abstract":"Classification on networked data plays an important role in many problems such as web page categorization, classification of bibliographic information network, etc... Most classification algorithms on information networks work by iteratively propagating information through network graphs. One important issue concerning iterative classifiers is that false inferences made at some point in iteration might propagate further causing an \"avalanche\". To address this problem, we propose an efficient label propagation learning algorithm based on the graph-based regularization framework with adjusting network structure iteratively to improve the accuracy of classification algorithm for noisy data. We show empirically that this adjusting network structure improves significantly the performance of the algorithm for web page classification. In particular, we demonstrate that the proposed algorithm achieves good classification accuracy even for relatively large overlap across the classes.","PeriodicalId":208300,"journal":{"name":"Proceedings of the 3rd Symposium on Information and Communication Technology","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127609206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intrusion detection under covariate shift using modified support vector machine and modified backpropagation","authors":"Tran Dinh Cuong, Nguyen Linh Giang","doi":"10.1145/2350716.2350756","DOIUrl":"https://doi.org/10.1145/2350716.2350756","url":null,"abstract":"In this paper, we address the dataset shift problem in building intrusion detection systems by assuming that network traffic variants follow the covariate shift model. Based on two recent works on direct density ratio estimation which are kernel mean matching and unconstrained least squares importance fitting, we propose to modify two well-known classification techniques: neural networks with back propagation and support vector machine in order to make these techniques work better under covariate shift effect. We evaluated the modified techniques on a benchmark intrusion detection dataset, the KDD Cup 1999, and got higher results on predication accuracy of network behaviors compared with the original techniques.","PeriodicalId":208300,"journal":{"name":"Proceedings of the 3rd Symposium on Information and Communication Technology","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125565137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An information content based partitioning method for the anatomical ontology matching task","authors":"Dac-Thanh Tran, Duy-Hoa Ngo, Phan-Thuan Do","doi":"10.1145/2350716.2350757","DOIUrl":"https://doi.org/10.1145/2350716.2350757","url":null,"abstract":"Anatomy ontology matching has been attracting a lot of interest and attention of researchers, especially, biologists, medics and geneticists. This is a very difficult task due to the huge size of anatomy ontologies. Despite the fact that many ontology matching tools have been proposed so far, most of them achieve good results only for small size ontologies. In a recent survey [22], the authors pointed out that the large scale ontology matching problem still presents a real challenge because it is a time consuming and memory intensive process. According to state of the art works, the authors also state that partitioning large scale ontology is a promising solution to deal with this issue. Therefore, in this paper, we propose a partitioning approach to break up the large matching problem into smaller matching subproblems. At first, we propose a method to semantically split anatomy ontology into groups called clusters. It relies on a specific method for computing semantic similarities between concepts based on both their information content on anatomy ontology, and a scalable agglomerative hierarchical clustering algorithm. We then propose a filtering method to select the possible similar partitions in order to reduce the computation time. The experimental analysis demonstrates that our approach is capable of solving the scalability ontology matching problem and encourages us to the future works.","PeriodicalId":208300,"journal":{"name":"Proceedings of the 3rd Symposium on Information and Communication Technology","volume":"438 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132686543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Time series discord discovery using WAT algorithm and iSAX representation","authors":"Nguyen Kim Khanh, D. T. Anh","doi":"10.1145/2350716.2350748","DOIUrl":"https://doi.org/10.1145/2350716.2350748","url":null,"abstract":"Among several existing algorithms proposed to solve the problem of time series discord discovery, HOT SAX and WAT are two widely used algorithms. Especially, WAT can make use of the multi-resolution property in Haar wavelet transform. In this work, we employ state-of-the-art iSAX representation rather than SAX representation in WAT algorithm. To apply iSAX in WAT algorithm, we have to devise two new auxiliary functions and also modify iSAX index structure to adapt Haar transform that is used in WAT algorithm. We empirically evaluate our algorithm with a set of experiments. Experimental results show that WATiSAX algorithm is more effective than original WAT algorithm.","PeriodicalId":208300,"journal":{"name":"Proceedings of the 3rd Symposium on Information and Communication Technology","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134487991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast point quadrupling on elliptic curves","authors":"Duc-Phong Le, B. Nguyen","doi":"10.1145/2350716.2350750","DOIUrl":"https://doi.org/10.1145/2350716.2350750","url":null,"abstract":"Ciet et al. (2006) proposed an elegant method for trading inversions for multiplications when computing [2]P+Q from two given points P and Q on elliptic curves of Weierstrass form. Motivated by their work, this paper proposes a fast algorithm for computing [4]P with only one inversion in affine coordinates. Our algorithm that requires 1I + 8S + 8M, is faster than two repeated doublings whenever the cost of one field inversion is more expensive than the cost of four field multiplications plus four field squarings (i.e. I > 4M + 4S). It saves one field multiplication and one field squaring in comparison with the Sakai-Sakurai method (2001). Even better, for special curves that allow \"a = 0\" (or \"b = 0\") speedup, we obtain [4]P in affine coordinates using just 1I + 5S + 9M (or 1I + 5S + 6M, respectively).","PeriodicalId":208300,"journal":{"name":"Proceedings of the 3rd Symposium on Information and Communication Technology","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134585246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mesh connection with RBF local interpolation and wavelet transform","authors":"Anh-Cang Phan, Romain Raffin, M. Daniel","doi":"10.1145/2350716.2350731","DOIUrl":"https://doi.org/10.1145/2350716.2350731","url":null,"abstract":"We introduce a connection method between two mesh areas at different resolutions. The connecting mesh is based on a local interpolation with radial basis functions and a Lifted B-spline wavelet transform. This ensures that the \"continuity\" between these mesh areas is preserved and the connecting mesh is changed gradually in resolution between coarse and fine areas. This method could be extented to applications related to filling holes, pasting subdivision meshes and joining 3D objects.","PeriodicalId":208300,"journal":{"name":"Proceedings of the 3rd Symposium on Information and Communication Technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132313698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A bisimulation-based method of concept learning for knowledge bases in description logics","authors":"Quang-Thuy Ha, Thi-Lan-Giao Hoang, Linh Anh Nguyen, H. Nguyen, A. Szałas, Thanh-Luong Tran","doi":"10.1145/2350716.2350753","DOIUrl":"https://doi.org/10.1145/2350716.2350753","url":null,"abstract":"We develop the first bisimulation-based method of concept learning, called BBCL, for knowledge bases in description logics (DLs). Our method is formulated for a large class of useful DLs, with well-known DLs like ALC, SHIQ, SHOIQ, SROIQ. As bisimulation is the notion for characterizing indis-cernibility of objects in DLs, our method is natural and very promising.","PeriodicalId":208300,"journal":{"name":"Proceedings of the 3rd Symposium on Information and Communication Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114102477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust speech recognition based on binaural speech enhancement system as a preprocessing step","authors":"Cuong Nguyen Quoc, Dung Tran Tien, K. Dang, Binh Nguyen Huu","doi":"10.1145/2350716.2350732","DOIUrl":"https://doi.org/10.1145/2350716.2350732","url":null,"abstract":"In this paper, we present a robust speech recognition based on binaural speech enhancement system as a preprocessing step. This system uses an existing dereverberation technique followed by a spatial masking-based noise removal algorithm where only signals coming from the desired directions are retained by using a threshold angle. While state-of-the art approaches fix the threshold angle heuristically over all time frames, in this paper, we propose to consider an adaptive computation where this threshold angle is first learned in several noise-only frames and then updated frame by frame. Speech recognition results in real environment show the effectiveness of the proposed speech enhancement approach.","PeriodicalId":208300,"journal":{"name":"Proceedings of the 3rd Symposium on Information and Communication Technology","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117077269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Critical systems development methodology using formal techniques","authors":"D. Méry, N. Singh","doi":"10.1145/2350716.2350720","DOIUrl":"https://doi.org/10.1145/2350716.2350720","url":null,"abstract":"Formal methods have emerged as an alternative approach to ensuring the quality and correctness of the high confidence critical systems, overcoming limitations of the traditional validation techniques such as simulation and testing. This paper presents a methodology for developing critical systems from requirement analysis to automatic code generation with standard safety assessment approach. This methodology combines the refinement approach with various tools including verification tool, model checker tool, real-time animator and finally, produces the source code into many languages using automatic code generation tools. This approach is intended to contribute to further the use of formal techniques for developing critical systems with high integrity and to verify complex properties, which help to discover potential problems. Assessment of the proposed methodology is given through developing a standard case study: the cardiac pacemaker.","PeriodicalId":208300,"journal":{"name":"Proceedings of the 3rd Symposium on Information and Communication Technology","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122615118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}