{"title":"Research on Combined Rough Sets with Fuzzy Sets","authors":"Haishan Chen, Meihong Wang, Feng Qian, Q. Jiang","doi":"10.1109/ISIP.2008.12","DOIUrl":"https://doi.org/10.1109/ISIP.2008.12","url":null,"abstract":"Fuzzy set theory and rough set theory are useful mathematical tools for dealing with complex information in many real-world applications. In this paper we describe three aspects of this field: theoretical research into the properties of fuzzy sets and rough sets, research on the efficient implementation of the theory (attribute reduction, rule generation), and finally the development of hybrid systems that combine fuzzy sets or rough sets with other soft computing techniques such as neural networks and genetic algorithms. Hybrid algorithms can greatly improve the quality of the reconstructed system, bringing a much simpler and better solution to many practical applications.","PeriodicalId":103284,"journal":{"name":"2008 International Symposiums on Information Processing","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114441985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Cryptosystem Based on Multiple Chaotic Maps","authors":"Jin-mei Liu, S. Qiu, Fei Xiang, Huijuan Xiao","doi":"10.1109/ISIP.2008.99","DOIUrl":"https://doi.org/10.1109/ISIP.2008.99","url":null,"abstract":"A cryptosystem based on multiple chaotic maps is proposed. It is composed of a function f(·), a 2D chaotic map and two cascaded chaotic subsystems. The iteration times of the cascaded subsystems, which are constructed from simple chaotic maps, are varied with the outputs of the 2D chaotic map. Plaintext and the outputs of the cascaded subsystems are processed by the function f(·) to generate ciphertext. Simulation tests and security analyses indicate that the proposed cryptosystem features a large key space, high sensitivity to the key, and resistance to statistical and differential attacks.","PeriodicalId":103284,"journal":{"name":"2008 International Symposiums on Information Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125377803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Query-Sensitive Graph-Based Sentence Ranking Algorithm for Query-Oriented Multi-document Summarization","authors":"Furu Wei, Yanxiang He, Wenjie Li, Q. Lu","doi":"10.1109/ISIP.2008.21","DOIUrl":"https://doi.org/10.1109/ISIP.2008.21","url":null,"abstract":"Graph-based models and ranking algorithms have drawn considerable attention from the document summarization community in recent years. However, with regard to query-oriented summarization, the influence of the query has been limited to the sentence nodes in previous graph models. We argue that, beyond the sentence nodes, the sentence-sentence edges should also be measured in accordance with the given query. In this paper, we develop a query-sensitive similarity measure that incorporates the query influence into the evaluation of sentence-sentence edges for graph-based query-oriented summarization. Furthermore, in order to cope with the multi-document summarization task, we explicitly distinguish inter-document sentence relations from intra-document sentence relations and emphasize the influence of global information from the document set on local sentence evaluation. Experimental results on the DUC 2005 dataset are quite promising and motivate us to investigate query-sensitive similarity measures further.","PeriodicalId":103284,"journal":{"name":"2008 International Symposiums on Information Processing","volume":"339 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122754675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Process Information-Driven 3D Working Procedure Model Construction","authors":"Yunfei Shi, Shusheng Zhang, Haitao Fan, Julu Cao, Yan Yang","doi":"10.1109/ISIP.2008.103","DOIUrl":"https://doi.org/10.1109/ISIP.2008.103","url":null,"abstract":"The working procedure model (WPM), which is composed of a set of models, describes the process by which a part is made from roughcast to finished product, and it plays a key role during production. The process information comprises process drawings and process steps, and it reflects the sequential, step-by-step course by which a part is made. A 3D model cannot be constructed automatically by existing methods of parameterized design. Focusing on process sheets, this paper studies how to apply natural language understanding to 3D reconstruction. A method of asymptotic approximation to the product is proposed, which constructs the 3D process model automatically and intelligently. Compared with traditional 3D model reconstruction based on orthographic projections or engineering drawings, process information offers the following advantages. On the one hand, the reconstruction object is translated from a complicated engineering drawing into a series of simpler process drawings. With plentiful process information added for reconstruction, disturbances such as irrelevant graphs, symbols and labels are avoided. Moreover, the change in form between neighboring process drawings is so small that interpreting the drawings presents no difficulty; in addition, abnormal solutions and multiple solutions can be avoided during reconstruction, and the problem of applicability to a wider range of objects is ultimately solved. A practical method for 3D model reconstruction therefore becomes possible. On the other hand, the WPM not only includes information about part characteristics, but also delivers design, process and engineering information downstream.","PeriodicalId":103284,"journal":{"name":"2008 International Symposiums on Information Processing","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122121469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Set-Pair Analysis Method Based on Grey System Theory","authors":"Sun Jinzhong","doi":"10.1109/ISIP.2008.48","DOIUrl":"https://doi.org/10.1109/ISIP.2008.48","url":null,"abstract":"Uncertainty takes various forms in real life. Research on and processing of uncertainty is an important problem that many disciplines currently face and that must be resolved. Among the numerous methods, grey system theory and set-pair analysis are newly developed and have acquired broad application in many fields. This paper utilizes grey system theory to handle the poor-information problems that arise in prediction with set-pair analysis. The innovations of this paper are: 1) an entropy transform method is proposed; 2) a whitening method based on fuzzy set-valued statistics is put forward; 3) the AGO method is improved and an increasing accumulated generating operation is provided. The method is applied to predicting human resource performance and shows good results.","PeriodicalId":103284,"journal":{"name":"2008 International Symposiums on Information Processing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122196342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Novel Naive Bayesian Text Classifier","authors":"Wang Ding, Songnian Yu, Qianfeng Wang, Jiaqi Yu, Qiang Guo","doi":"10.1109/ISIP.2008.54","DOIUrl":"https://doi.org/10.1109/ISIP.2008.54","url":null,"abstract":"The naive Bayesian (NB) classifier is one of the simplest yet most efficient and stable classification methods. The great efficiency of NB is mainly due to the conditional independence assumption among the attributes, which is problematic in practice, especially when the attributes are strongly correlated. In this paper, we propose a novel NB text classifier, the package and combined naive Bayesian text classifier (PC-NB), which relaxes the independence assumption. The main aim of PC-NB is to make the naive Bayesian classifier more accurate without reducing efficiency. A set of experiments was performed, and the analysis and experimental results indicate that the proposed classifier is more accurate and powerful when the attributes of an instance are strongly correlated.","PeriodicalId":103284,"journal":{"name":"2008 International Symposiums on Information Processing","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129974965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature n-gram Set Based Software Zero-Watermarking","authors":"Bin Lu, Fenlin Liu, Xin Ge, Ping Wang","doi":"10.1109/ISIP.2008.104","DOIUrl":"https://doi.org/10.1109/ISIP.2008.104","url":null,"abstract":"To settle the conflict between stealth and resilience, this paper proposes the concept of software zero-watermarking by introducing image zero-watermarking into software watermarking. The key point is to choose a proper birthmark of the software, and the feature n-gram set is presented as this birthmark. A software zero-watermarking scheme based on the feature n-gram set is then introduced, incorporating the idea of Shamir's secret sharing. Experiments show that the zero-watermarking scheme provides higher credibility and better resilience.","PeriodicalId":103284,"journal":{"name":"2008 International Symposiums on Information Processing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128859529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Development of Search Engine in China and its Problems Revealed in Net Information Retrieval","authors":"Mingzhang Zuo, Xia Zhang, Qiang Liu","doi":"10.1109/ISIP.2008.74","DOIUrl":"https://doi.org/10.1109/ISIP.2008.74","url":null,"abstract":"Nowadays, the Internet reaches deep into people's daily routines, brings mass information to human beings, and has become an increasingly important channel for obtaining information. Facing such a tremendous amount of information, finding a rapid and efficient method of hunting for information is vital. This paper revolves around the search engine, the most efficient tool for information organization and retrieval on the Internet. Methods of retrieving information on the Internet have improved greatly in recent years because of the application of search engines. Nevertheless, for various reasons, search engines do not provide Internet users with perfect retrieval service and often return unsatisfactory results. Based on authoritative statistical data, this paper concentrates on the current state of development of search engines in China and the problems found in their application, aiming to provide search engine developers with some references.","PeriodicalId":103284,"journal":{"name":"2008 International Symposiums on Information Processing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134357290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Precise Information Extraction from Text Based on Two-Level Concept Lattice","authors":"Z. Zhong, Zongtian Liu, Yan Guan","doi":"10.1109/ISIP.2008.40","DOIUrl":"https://doi.org/10.1109/ISIP.2008.40","url":null,"abstract":"Aiming at the problems of high time cost and low precision of information extraction (IE) in concept-lattice-based IE systems, a novel two-level concept-lattice-based IE mechanism is put forward. The structure concept lattice serves as the logical storage of the semantic structure of documents, and the content concept lattice stores the content information of documents. The paper gives formal descriptions of the structure and content concept lattices and analyses their time complexity. The results show that our method has obvious advantages in time consumption and precision of content extraction compared with existing IE methods.","PeriodicalId":103284,"journal":{"name":"2008 International Symposiums on Information Processing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132984424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Trusted Computing Model Based on Code Authorization","authors":"Guoheng Wei, Xueguang Zhou, Huanguo Zhang","doi":"10.1109/ISIP.2008.77","DOIUrl":"https://doi.org/10.1109/ISIP.2008.77","url":null,"abstract":"The capabilities that trusted computing provides have the potential to radically improve the security and robustness of present systems. By combining present models for trusted computing with the idea of code authorization, we put forward a code-authorization-based operating system model for trusted computing. This model solves fundamental security problems in the original model by creating a trusted chain from a core root of trust to all the Virtual Security Units (VSUs). The Trusted Platform Module (TPM) provides various security services, such as integrity checking and sealed storage, for all the VSUs and Authorization Describing Tables (ADTs). Moreover, the robustness of the standard part in NGSCB is enforced through the security protection provided by code authorization. The idea of code authorization can also be applied to most present models that adopt box partitioning for trusted computing, improving their security to some extent.","PeriodicalId":103284,"journal":{"name":"2008 International Symposiums on Information Processing","volume":"33 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131806243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}