2015 12th Web Information System and Application Conference (WISA): Latest Publications

A Strategy of Inferring Containment Relationship Based on Multiple Readers' Three-State Model
2015 12th Web Information System and Application Conference (WISA) Pub Date: 2015-09-11 DOI: 10.1109/WISA.2015.34
Tiancheng Zhang, Baojun Wang, Fan Kai, Chunliang Zhang
{"title":"A Strategy of Inferring Containment Relationship Based on Multiple Readers' Three-State Model","authors":"Tiancheng Zhang, Baojun Wang, Fan Kai, Chunliang Zhang","doi":"10.1109/WISA.2015.34","DOIUrl":"https://doi.org/10.1109/WISA.2015.34","url":null,"abstract":"RFID has a natural advantage of querying the containment relationship for its penetrating characteristics. This paper proposes a new containment relationship detecting algorithm-THS-TVGPMI_INFER based on RFID data characteristics, application restricted conditions and a prior knowledge of the deployment environment. Firstly, we use the 3-state detection model which has been proved to be optimal to collect the RFID data. Secondly, we obtain the probable location sets of objects based on Bayesian Inference. And we adopt the time-varying graph model to indicate the possible containment relationship. Then the historical information of the vector and the pointwise mutual information between the objects are calculated based on the time-varying graph model. Finally, we infer the possible containment relationship between objects. Experiments on a large size of simulated data are conducted. The results show that our algorithm has significantly improved the accuracy and efficiency of the containment relationship query.","PeriodicalId":198938,"journal":{"name":"2015 12th Web Information System and Application Conference (WISA)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116631447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
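The abstract above relies on pointwise mutual information (PMI) between objects observed on a time-varying graph. As a minimal sketch of that one building block only (not the paper's THS-TVGPMI_INFER algorithm; the snapshot format and object names are hypothetical), PMI can be estimated from co-occurrence counts of objects across reader snapshots:

```python
import math
from collections import Counter
from itertools import combinations

def pmi_scores(snapshots):
    """Estimate pointwise mutual information between object pairs.

    `snapshots` is a list of sets; each set holds the object IDs seen
    together in one time slice (a hypothetical stand-in for the paper's
    time-varying graph snapshots).
    """
    n = len(snapshots)
    single = Counter()  # how many snapshots each object appears in
    pair = Counter()    # how many snapshots each unordered pair co-occurs in
    for snap in snapshots:
        single.update(snap)
        pair.update(combinations(sorted(snap), 2))

    scores = {}
    for (a, b), c_ab in pair.items():
        p_ab = c_ab / n
        p_a, p_b = single[a] / n, single[b] / n
        scores[(a, b)] = math.log(p_ab / (p_a * p_b))
    return scores

# Toy example: a tag that always co-occurs with its container scores high.
snaps = [{"box1", "item7"}, {"box1", "item7", "item9"},
         {"box1", "item7"}, {"box2", "item9"}]
print(pmi_scores(snaps)[("box1", "item7")])
```

A pair whose PMI stays high across snapshots is a natural candidate for a containment edge.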
Towards Model Based Approach to Hadoop Deployment and Configuration
2015 12th Web Information System and Application Conference (WISA) Pub Date: 2015-09-11 DOI: 10.1109/WISA.2015.65
Yicheng Huang, X. Lan, Xing Chen, Wenzhong Guo
{"title":"Towards Model Based Approach to Hadoop Deployment and Configuration","authors":"Yicheng Huang, X. Lan, Xing Chen, Wenzhong Guo","doi":"10.1109/WISA.2015.65","DOIUrl":"https://doi.org/10.1109/WISA.2015.65","url":null,"abstract":"Hadoop is an open source software framework of distributed processing of big data. There are many kinds of services in Hadoop ecosystem, such as HDFS, Map-Reduce, HBase, Hive, Yarn, Flume, Spark, Storm, Zookeeper, and so on, which increase the complexity of deployment and configuration. It takes plenty of time to construct a Hadoop cluster. Although there are some management tools which help administrators deploy and configure Hadoop clusters automatically, they usually provide a fixed solution. So administrators couldn't construct their Hadoop clusters according to different management requirements by the tools. Software architecture acts as a bridge between requirements and implementations. It has been used to reduce the complexity and cost mainly resulted from the difficulties faced by understanding the large-scale and complex software system. This paper proposes a model based approach to Hadoop deployment and configuration which help administrators construct Hadoop clusters in a simple but powerful enough manner. First, we provide the unified models of Hadoop software architecture, according to the domain knowledge of current Hadoop deployment and configuration. Second, we provide a framework with a set of definable rules for domain experts to describe their solutions to deploy and configure Hadoop clusters. Thus, administrators can use various custom solutions to automatically deploy and configure their Hadoop clusters according to different management requirements. In addition, a real-world experiment demonstrates the feasibility, effectiveness and benefits of the new approach to Hadoop deployment and configuration.","PeriodicalId":198938,"journal":{"name":"2015 12th Web Information System and Application Conference (WISA)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127691218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 14
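The entry above turns deployment knowledge into a cluster model plus definable rules. The sketch below illustrates only that general shape, under assumptions: the model fields and the single rule are invented for illustration, and the emitted properties are merely examples of the kind of Hadoop settings such a rule might produce, not the paper's metamodel.

```python
# Sketch: a declarative cluster model plus a simple rule that expands it into
# per-file configuration properties. Field names and the rule are illustrative only.
cluster_model = {
    "nodes": {
        "node1": ["namenode", "resourcemanager"],
        "node2": ["datanode", "nodemanager"],
        "node3": ["datanode", "nodemanager"],
    },
    "params": {"replication": 2},
}

def rule_hdfs(model):
    """Derive (assumed) HDFS settings from the abstract cluster model."""
    namenode = next(n for n, roles in model["nodes"].items() if "namenode" in roles)
    return {
        "core-site.xml": {"fs.defaultFS": f"hdfs://{namenode}:9000"},
        "hdfs-site.xml": {"dfs.replication": str(model["params"]["replication"])},
    }

def generate_configs(model, rules):
    """Apply every rule to the model and merge the resulting property sets."""
    configs = {}
    for rule in rules:
        for filename, props in rule(model).items():
            configs.setdefault(filename, {}).update(props)
    return configs

print(generate_configs(cluster_model, [rule_hdfs]))
```

Because each rule only reads the model, swapping the model (different node counts, different roles) regenerates matching configuration without touching the rules.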
Computing Terms Semantic Relatedness by Knowledge in Wikipedia
2015 12th Web Information System and Application Conference (WISA) Pub Date: 2015-09-11 DOI: 10.1109/WISA.2015.41
Dexin Zhao, Liangliang Qin, Pengjie Liu, Zhen Ma, Yukun Li
{"title":"Computing Terms Semantic Relatedness by Knowledge in Wikipedia","authors":"Dexin Zhao, Liangliang Qin, Pengjie Liu, Zhen Ma, Yukun Li","doi":"10.1109/WISA.2015.41","DOIUrl":"https://doi.org/10.1109/WISA.2015.41","url":null,"abstract":"Many researchers have recognized Wikipedia as a resource of huge dynamic knowledge base in recent years. This paper provides a new approach for obtaining measures of terms semantic relatedness, which maps terms to relevant Wikipedia articles as the background information for analyzing. The proposed algorithm WLA focuses on the hyperlink structure and summary paragraph extracted from the topic pages to compute two terms similarity. Comparing with other similar techniques, the approach is less computationally intensive, because only the first paragraph is analyzed, not the entire text. Our method achieves good performance on the widely used test set WS-353.","PeriodicalId":198938,"journal":{"name":"2015 12th Web Information System and Application Conference (WISA)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126336571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
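WLA, as summarized above, draws on the hyperlink structure of the mapped articles. A rough sketch of how link structure alone can yield a relatedness score is the Jaccard overlap of two articles' link sets; this is a generic link-overlap measure under assumed inputs, not the paper's exact formula, which also uses the summary paragraph.

```python
def link_relatedness(links_a, links_b):
    """Jaccard overlap of two articles' link sets (outgoing or incoming)."""
    a, b = set(links_a), set(links_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical link sets for the articles mapped from two terms.
car = {"Vehicle", "Engine", "Road", "Transport"}
truck = {"Vehicle", "Engine", "Cargo", "Road"}
print(link_relatedness(car, truck))  # 0.6
```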
Recommending Join Queries Based on Path Frequency
2015 12th Web Information System and Application Conference (WISA) Pub Date: 2015-09-11 DOI: 10.1109/WISA.2015.52
Man Yu, Shupeng Han, Yale Chai, Y. Zhang, Yanlong Wen
{"title":"Recommending Join Queries Based on Path Frequency","authors":"Man Yu, Shupeng Han, Yale Chai, Y. Zhang, Yanlong Wen","doi":"10.1109/WISA.2015.52","DOIUrl":"https://doi.org/10.1109/WISA.2015.52","url":null,"abstract":"Real databases often consist of hundreds of innerlinked tables, which makes posing a complex join query a really hard task for common users. Join query recommendation is an effective technique to help users formulate better join queries and explore their information demand. In this paper, we propose a novel approach to automatically create join query recommendations based on path frequency. Our approach generates recommendations by analyzing the database schema and underlying data. First, we exploit join queries which are likely to be queried by considering both the importance and the connectivity of tables. Second, we provide users two recommendation forms. One needs no input information and the other allows users to input incomplete information. Users can choose one according to their knowledge. Extensive evaluations demonstrate the effectiveness of our approach and show that our method is helpful to formulate good join queries in practice.","PeriodicalId":198938,"journal":{"name":"2015 12th Web Information System and Application Conference (WISA)","volume":"220 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115836691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
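The recommendation idea above amounts to enumerating join paths over the schema's foreign-key graph and ranking them by how important and well-connected the involved tables are. A minimal sketch under assumed inputs follows; the scoring function (average table weight) is an illustrative placeholder, not the paper's path-frequency measure.

```python
from collections import defaultdict

def enumerate_paths(fk_edges, start, max_len=3):
    """Enumerate simple join paths of up to max_len tables over a FK graph."""
    graph = defaultdict(set)
    for a, b in fk_edges:
        graph[a].add(b)
        graph[b].add(a)
    paths, stack = [], [[start]]
    while stack:
        path = stack.pop()
        if len(path) > 1:
            paths.append(path)
        if len(path) < max_len:
            for nxt in graph[path[-1]]:
                if nxt not in path:
                    stack.append(path + [nxt])
    return paths

def rank_paths(paths, table_weight):
    """Rank paths by the average weight (e.g. usage frequency) of their tables."""
    return sorted(paths, key=lambda p: -sum(table_weight.get(t, 0) for t in p) / len(p))

# Hypothetical schema: orders joins customers and line_items; line_items joins products.
edges = [("orders", "customers"), ("orders", "line_items"), ("line_items", "products")]
weights = {"orders": 10, "customers": 8, "line_items": 5, "products": 3}
for path in rank_paths(enumerate_paths(edges, "orders"), weights)[:3]:
    print(" JOIN ".join(path))
```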
A Method to Discover Truth with Two Source Quality Metrics
2015 12th Web Information System and Application Conference (WISA) Pub Date: 2015-09-11 DOI: 10.1109/WISA.2015.76
Dong Yu, Derong Shen, Mingdong Zhu, Tiezheng Nie, Yue Kou, Ge Yu
{"title":"A Method to Discover Truth with Two Source Quality Metrics","authors":"Dong Yu, Derong Shen, Mingdong Zhu, Tiezheng Nie, Yue Kou, Ge Yu","doi":"10.1109/WISA.2015.76","DOIUrl":"https://doi.org/10.1109/WISA.2015.76","url":null,"abstract":"In many web integration applications, there are usually some sources that depict the same entity object with different descriptions, which leads to lots of conflicts. Resolving conflicts and finding the truth can be used to improve the quality of integration or to build a high-quality knowledge base, etc. In the single-truth data conflicting scenario, existing methods have limitations to distinguish false negative, also named as data missing, and false positive. So their source quality measurements are inadequate. Therefore, in this paper, we use recall and false positive rate to measure source quality and present a method to discover truth. The experimental results on three real-word data sets show that the proposed algorithm can effectively distinguish the data missing and false positive and improve the precision of truth discovery.","PeriodicalId":198938,"journal":{"name":"2015 12th Web Information System and Application Conference (WISA)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116958934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
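The method above characterizes each source by two quality metrics, recall and false positive rate. A hedged sketch of how the two can be combined to score a candidate value is a naive-Bayes-style update under a uniform prior (not necessarily the paper's exact estimator): a source that claims the value contributes its recall if the value is true and its false positive rate if it is false, while a silent source contributes the complements, so silence is treated as possible data missing rather than as outright denial.

```python
def truth_probability(claiming, silent, prior=0.5):
    """Posterior that a value is true, given per-source (recall, fpr) pairs.

    `claiming`: (recall, fpr) of sources that provide the value.
    `silent`:   (recall, fpr) of sources that could have provided it but did not.
    """
    p_true, p_false = prior, 1.0 - prior
    for recall, fpr in claiming:
        p_true *= recall        # P(source reports value | value true)
        p_false *= fpr          # P(source reports value | value false)
    for recall, fpr in silent:
        p_true *= (1.0 - recall)   # a miss: data missing, weak evidence only
        p_false *= (1.0 - fpr)
    return p_true / (p_true + p_false)

# Two reliable sources claim the value; one mediocre source is silent.
print(truth_probability(claiming=[(0.9, 0.05), (0.8, 0.1)],
                        silent=[(0.5, 0.2)]))
```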
Efficient Top-k Skyline Computation in MapReduce
2015 12th Web Information System and Application Conference (WISA) Pub Date: 2015-09-11 DOI: 10.1109/WISA.2015.57
Baoyan Song, Aili Liu, Linlin Ding
{"title":"Efficient Top-k Skyline Computation in MapReduce","authors":"Baoyan Song, Aili Liu, Linlin Ding","doi":"10.1109/WISA.2015.57","DOIUrl":"https://doi.org/10.1109/WISA.2015.57","url":null,"abstract":"Skyline is widely used in multi-objective decisionmaking, data visualization and other fields. With the rapid increasing of data volume, skyline of big data has also attracted more and more attention. However, skyline of big data has its own shortcomings. When the dimension increases, skyline results will be numerous, and we would like to select k points from the result sets. In this paper, we propose the top-k skyline of big data. It is a Distributed Top-k Skyline Method in MapReduce, called MR-DTKS. Firstly, we convert the multidimensional data to a single value to determine the dominance relationship of two data points. Secondly, we calculate the score by using the converted values to filter out most of unwanted data objects. Finally, we choose k data objects having the strongest dominating capacity. A large number of experiments show that our method is effective, and has good flexibility and scalability on real data sets as well as synthetic data sets.","PeriodicalId":198938,"journal":{"name":"2015 12th Web Information System and Application Conference (WISA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130093660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
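MR-DTKS, as summarized above, rests on two primitives: a dominance test between points and a notion of "dominating capacity" used to keep the final k points. A minimal single-machine sketch of those primitives follows; the actual method distributes the work over MapReduce, and its value-conversion and scoring functions may differ.

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly better
    in at least one (assuming smaller values are better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def top_k_skyline(points, k):
    """Compute the skyline, then keep the k points that dominate the most others."""
    skyline = [p for p in points
               if not any(dominates(q, p) for q in points if q is not p)]
    dominated_count = {p: sum(dominates(p, q) for q in points) for p in skyline}
    return sorted(skyline, key=lambda p: -dominated_count[p])[:k]

pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
print(top_k_skyline(pts, 2))  # (2, 2) dominates the most points and is kept first
```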
Multi-relational Clustering Based on Relational Distance
2015 12th Web Information System and Application Conference (WISA) Pub Date: 2015-09-11 DOI: 10.1109/WISA.2015.30
Liting Wei, Yun Li
{"title":"Multi-relational Clustering Based on Relational Distance","authors":"Liting Wei, Yun Li","doi":"10.1109/WISA.2015.30","DOIUrl":"https://doi.org/10.1109/WISA.2015.30","url":null,"abstract":"When clustering the tuples in the target table which is in a relational database, the prior task is to exactly and effectively calculate the relational distance between tuples. A lot of methods are used today, such as the relational distance measuring based on RIBL2. However, all these methods fail to consider the differences of similarity between the objects in both non-target table and target table, which stopped them from getting a high clustering accuracy. Using canonical correlation analysis in this paper and setting a weight for each table in the relational database, the weight indicated its role in the calculation of the distance among target tables. In addition, when calculating the distance between the two clusters to find the center of each cluster, turn the calculation of the distance between clusters into a distance between center points. Experiments show that this method ensures clustering efficiency and improves clustering accuracy.","PeriodicalId":198938,"journal":{"name":"2015 12th Web Information System and Application Conference (WISA)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130734476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
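Setting aside how the table weights are obtained (the paper derives them via canonical correlation analysis), a stripped-down sketch of the two distance ideas in the abstract, a table-weighted relational distance and a center-to-center cluster distance, might look like this; the table names and weights are hypothetical.

```python
def weighted_relational_distance(dist_by_table, weights):
    """Combine per-table distances between two target tuples using table weights."""
    total_w = sum(weights.values())
    return sum(weights[t] * d for t, d in dist_by_table.items()) / total_w

def cluster_distance(cluster_a, cluster_b):
    """Distance between clusters measured between their center points."""
    def center(cluster):
        return tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
    ca, cb = center(cluster_a), center(cluster_b)
    return sum((x - y) ** 2 for x, y in zip(ca, cb)) ** 0.5

# Distances contributed by the target table and two linked tables, with their weights.
print(weighted_relational_distance({"customer": 0.2, "orders": 0.6, "visits": 0.1},
                                   {"customer": 0.5, "orders": 0.3, "visits": 0.2}))
print(cluster_distance([(0, 0), (2, 0)], [(4, 0), (6, 0)]))  # 4.0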
A Credit Scoring Model Based on Bayesian Network and Mutual Information
2015 12th Web Information System and Application Conference (WISA) Pub Date: 2015-09-11 DOI: 10.1109/WISA.2015.31
Yuanhang Zhuang, Zhuoming Xu, Yan Tang
{"title":"A Credit Scoring Model Based on Bayesian Network and Mutual Information","authors":"Yuanhang Zhuang, Zhuoming Xu, Yan Tang","doi":"10.1109/WISA.2015.31","DOIUrl":"https://doi.org/10.1109/WISA.2015.31","url":null,"abstract":"Credit scoring profiles the client relationships of empirical attributes (variables) and leverages a scoring model to draw client's credibility. However, empirical attributes often contains a certain degree of uncertainty and requires feature selection. Bayesian network (BN) is an important tool for dealing with uncertain problems and information. Mutual information (MI) measures dependencies between random variables and is therefore a suitable feature selection technique for evaluating the relationship between variables in a complex classification tasks. Using Bayesian network as a statistical model, this study leverages mutual information to build a credit scoring model called BNMI. The learned Bayesian network structure is adaptively adjusted according to mutual information. Empirical study compared the results of BNMI with three existing baseline models. The results show that the proposed model outperforms the baseline models in terms of receiver operating characteristic (ROC), indicating promising application of our BNMI in the credit scoring area.","PeriodicalId":198938,"journal":{"name":"2015 12th Web Information System and Application Conference (WISA)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126718806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
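The BNMI model above uses mutual information between attributes and the class to drive feature selection and structure adjustment. A minimal sketch of estimating mutual information between a discrete attribute and the label from paired samples follows (plain empirical MI; the paper's adaptive adjustment of the Bayesian network structure is not reproduced here, and the toy attributes are invented).

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical MI (in nats) between two discrete variables given paired samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy credit data: 'employment' is informative about default, 'zodiac' is not.
employment = ["stable", "stable", "none", "none", "stable", "none"]
zodiac     = ["aries", "leo", "aries", "leo", "aries", "leo"]
default    = ["no", "no", "yes", "yes", "no", "yes"]
print(mutual_information(employment, default))  # high: keep this attribute
print(mutual_information(zodiac, default))      # low: a candidate to drop
```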
Analyzing the Evolution of Ontology Versioning Using Metrics
2015 12th Web Information System and Application Conference (WISA) Pub Date: 2015-09-11 DOI: 10.1109/WISA.2015.70
Zhiyuan Li, Zhiyong Feng, Xin Wang, Yuan-Fang Li, Guozheng Rao
{"title":"Analyzing the Evolution of Ontology Versioning Using Metrics","authors":"Zhiyuan Li, Zhiyong Feng, Xin Wang, Yuan-Fang Li, Guozheng Rao","doi":"10.1109/WISA.2015.70","DOIUrl":"https://doi.org/10.1109/WISA.2015.70","url":null,"abstract":"The large-scale ontologies are more and more widely used in various fields, some ontologies even have evolved through a number of versions. There is an increasing need for finding a set of metrics to analyze ontology systematically. In this paper, inspired by the concept of ontology metrics at the ontology-level and the class-level, we expand ontology metrics to the property-level and choose well-known large-scale ontologies OpenGALEN and OpenCyc that have different versions as our datasets, then we develop a tool to calculate 12 ontology metrics. Finally, we analyze the experimental results to exhibit the usefulness of ontology metrics and point out some important characteristics of ontology evolution in different ontologies.","PeriodicalId":198938,"journal":{"name":"2015 12th Web Information System and Application Conference (WISA)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134315241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
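The paper's 12 metrics are not listed in the abstract, but the counting style involved can be sketched: simple class and property counts from raw RDF triples, compared across versions. The triple list and prefixes below are illustrative; this is not the paper's tool or metric set.

```python
from collections import Counter

RDF_TYPE = "rdf:type"
OWL = {"classes": "owl:Class",
       "object_properties": "owl:ObjectProperty",
       "datatype_properties": "owl:DatatypeProperty"}

def basic_ontology_metrics(triples):
    """Count classes and properties in a list of (subject, predicate, object) triples."""
    typed = Counter(o for s, p, o in triples if p == RDF_TYPE)
    return {name: typed[iri] for name, iri in OWL.items()}

version_1 = [("ex:Person", RDF_TYPE, "owl:Class"),
             ("ex:Organ", RDF_TYPE, "owl:Class"),
             ("ex:hasPart", RDF_TYPE, "owl:ObjectProperty")]
version_2 = version_1 + [("ex:age", RDF_TYPE, "owl:DatatypeProperty")]

# Comparing the same metrics across versions exposes how the ontology evolved.
print(basic_ontology_metrics(version_1))
print(basic_ontology_metrics(version_2))
```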
Distributed Storage and Analysis of Massive Urban Road Traffic Flow Data Based on Hadoop
2015 12th Web Information System and Application Conference (WISA) Pub Date: 2015-09-11 DOI: 10.1109/WISA.2015.29
Li Zhu, Yun Li
{"title":"Distributed Storage and Analysis of Massive Urban Road Traffic Flow Data Based on Hadoop","authors":"Li Zhu, Yun Li","doi":"10.1109/WISA.2015.29","DOIUrl":"https://doi.org/10.1109/WISA.2015.29","url":null,"abstract":"Because of the traditional methods failing to solve the efficient storage and analyze the problems with rapid growth of the massive traffic flow data, This paper adopts the distributed database HBase of Hadoop to store huge amounts of the urban road traffic flow data. By applying the distributed computing framework of MapReduce, statistical analysis of the traffic flow data is carried out. The experimental results validate the ability of Hadoop cluster, whose efficient storage, computing, scalability can deal with the problem of storing and processing the massive traffic flow data.","PeriodicalId":198938,"journal":{"name":"2015 12th Web Information System and Application Conference (WISA)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114834186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
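The entry above aggregates traffic flow records with MapReduce. The map/reduce shape of such a statistic (e.g. total vehicle count per road segment) can be sketched in a few lines; this is a local, single-process imitation of the programming model, not actual Hadoop code, and the record fields are hypothetical.

```python
from collections import defaultdict

def map_phase(record):
    """Emit (road_segment, vehicle_count) for one traffic flow record."""
    yield record["segment"], record["vehicles"]

def reduce_phase(key, values):
    """Sum the vehicle counts collected for one road segment."""
    return key, sum(values)

def run_job(records):
    groups = defaultdict(list)            # the shuffle: group map output by key
    for rec in records:
        for k, v in map_phase(rec):
            groups[k].append(v)
    return dict(reduce_phase(k, vs) for k, vs in groups.items())

records = [{"segment": "ring-road-3", "vehicles": 120},
           {"segment": "ring-road-3", "vehicles": 95},
           {"segment": "main-street", "vehicles": 40}]
print(run_job(records))  # {'ring-road-3': 215, 'main-street': 40}
```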