{"title":"The Democratization of Semantic Properties: An Analysis of Semantic Wikis","authors":"Y. Gil, Angela Knight, Kevin Zhang, Larry Zhang, V. Ratnakar, Ricky J. Sethi","doi":"10.1109/ICSC.2013.44","DOIUrl":"https://doi.org/10.1109/ICSC.2013.44","url":null,"abstract":"Semantic wikis augment wikis with semantic properties that can be used to aggregate and query data through reasoning. Semantic wikis are used by many communities, for widely varying purposes such as organizing genomic knowledge, coding software, and tracking environmental data. Although wikis have been analyzed extensively, there has been no published analysis of the use of semantic wikis. In this paper, we analyze twenty semantic wikis selected for their diverse characteristics and content. We analyze property edits and compare them to the total number of edits in each wiki. We also show how semantic properties are created over the lifetime of the wiki.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115139595","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Large-Scale RDF Dataset Slicing","authors":"Edgard Marx, Saeedeh Shekarpour, S. Auer, A. N. Ngomo","doi":"10.1109/ICSC.2013.47","DOIUrl":"https://doi.org/10.1109/ICSC.2013.47","url":null,"abstract":"In recent years, an increasing amount of structured data has been published on the Web as Linked Open Data (LOD). Despite recent advances, consuming and using Linked Open Data within an organization is still a substantial challenge. Many LOD datasets are quite large, and despite progress in RDF data management, loading and querying them within a triple store is extremely time-consuming and resource-demanding. To overcome this consumption obstacle, we propose a process inspired by the classical Extract-Transform-Load (ETL) paradigm. In this article, we focus particularly on the selection and extraction steps of this process. We devise a fragment of SPARQL dubbed SliceSPARQL, which enables the selection of well-defined slices of datasets fulfilling typical information needs. SliceSPARQL supports graph patterns for which each connected subgraph pattern involves at most one variable or IRI in its join conditions. This restriction guarantees efficient processing of the query against a sequential dataset dump stream. Our evaluation shows that dataset slices can be generated an order of magnitude faster than with the conventional approach of loading the whole dataset into a triple store and retrieving the slice by executing the query against the triple store's SPARQL endpoint.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117264375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
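The streaming restriction described in the SliceSPARQL abstract can be illustrated with a toy single-pass filter over an N-Triples dump. This is an illustrative sketch only, not SliceSPARQL itself: when a pattern's join condition fixes a single IRI, each line of the dump can be tested independently, so no triple store or cross-line state is needed.

```python
def slice_stream(lines, subject_iri):
    """Single-pass slice of an N-Triples dump: keep triples whose
    subject is a fixed IRI. Each line is decided independently,
    which is what makes sequential stream processing possible."""
    result = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        subject = line.split(" ", 1)[0]
        if subject == subject_iri:
            result.append(line)
    return result
```

Because the filter never needs to see two triples at once, memory use stays constant regardless of dump size, which is the property that makes slicing an order of magnitude faster than loading everything into a triple store first.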
{"title":"On the Semantic Security of Secret Image Sharing Methods","authors":"Shreelatha Bhadravati, M. Khabbazian, P. Atrey","doi":"10.1109/ICSC.2013.58","DOIUrl":"https://doi.org/10.1109/ICSC.2013.58","url":null,"abstract":"In this work, we analyze some of the existing secret image sharing methods and show that they do not possess semantic security, a property of many secure systems. We propose a new method based on the threshold secret sharing scheme for images in the compressed and uncompressed domains. Our method generates minimal share sizes with similar computational cost to previous methods, yet it is computationally secure and satisfies the semantic security property.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127461357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
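The (k, n) threshold scheme that the secret-sharing abstract builds on can be illustrated with classical Shamir secret sharing over a prime field. This is a generic sketch of the threshold primitive, not the paper's method, which additionally addresses share size and semantic security for images: any k of the n shares reconstruct the secret, while fewer reveal nothing.

```python
import random

PRIME = 257  # smallest prime > 255, so any byte value fits in the field

def make_shares(secret, k, n, rng=random):
    """Split a secret (0..255) into n shares; any k reconstruct it.
    Uses a random polynomial of degree k-1 with constant term = secret."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]

    def eval_poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule mod PRIME
            acc = (acc * x + c) % PRIME
        return acc

    return [(x, eval_poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME-2, PRIME) is the modular inverse (Fermat)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Since every subset of fewer than k shares is consistent with every possible secret, the basic scheme is information-theoretically hiding; the paper's contribution concerns making image-sharing variants of such schemes satisfy semantic security at low cost.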
{"title":"A Framework for Composition and Reuse on the Linked Open Data","authors":"Cristiano E. Ribeiro, A. Vivacqua","doi":"10.1109/ICSC.2013.14","DOIUrl":"https://doi.org/10.1109/ICSC.2013.14","url":null,"abstract":"In recent years, many linked open datasets have been published, enabling data access and interoperability at a new scale. However, reusing rules, queries and processes is still difficult: applications are usually developed from the ground up, reinventing queries, inferences and operations that others might have created before. To address this issue, we introduce reusable inference modules, created following Semantic Web standards, which make it easier to reuse inferences and calculations based on these data. These modules act simultaneously as consumers and publishers, consuming data from one or more sources and publishing results as new, derived datasets. Their internal logic is encapsulated to simplify application development and developers need only configure rules and queries.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124861428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Efficient Implementation of Equivalence Relations in OWL via Rule and Query Rewriting","authors":"Hans-Ulrich Krieger","doi":"10.1109/ICSC.2013.51","DOIUrl":"https://doi.org/10.1109/ICSC.2013.51","url":null,"abstract":"This paper presents an implementation of the three equivalence relations in the language specification of OWL. The approach described here has been realized in the forward chaining engine HFC, which we have developed over the last several years and which is comparable to popular engines such as OWLIM or Jena. The proposed technique obviates the combinatorial explosion attributed to equivalence relations in a semantic repository during materialization, when applying the OWL entailment rules from ter Horst (2005) or when using one's own custom rules. Although the approach requires a little work when (i) starting up a repository (cleaning up data, rewriting rules) and (ii) querying its content (replacing individuals by their proxies, and vice versa), it pays off in the end: our measurements show a smaller memory footprint and faster inferences than the standard brute-force approach, which multiplies out everything.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116845921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Similarity Search System Based on the Hamming Distance of Social Profiles","authors":"R. Villaça, L. B. D. Paula, R. Pasquini, M. Magalhães","doi":"10.1109/ICSC.2013.24","DOIUrl":"https://doi.org/10.1109/ICSC.2013.24","url":null,"abstract":"The goal of a similarity search system is to allow users to retrieve data that meets a required similarity level within a certain dataset. Such datasets arise, for example, in the social media scenario, where huge amounts of data represent users in a social network. This paper uses a Vector Space Model (VSM) to represent users' profiles and the Random Hyperplane Hashing (RHH) function to create indexes for them. Together, VSM and RHH offer an alternative for addressing the challenge of performing similarity searches over the huge amounts of data in the social media scenario: Hamming similarity. To evaluate the effectiveness of our proposal, this paper presents examples of reference profiles used for performing queries, along with results on the correlation between cosine and Hamming similarity and on the frequency distribution of Hamming distances among identifiers of users' profiles. In short, the results indicate that Hamming similarity can be useful for developing similarity search systems for social media.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128543323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
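The hashing scheme named in the abstract above can be sketched generically (this is the standard random-hyperplane construction, not the authors' implementation): each profile vector gets one bit per random hyperplane, set by the sign of the dot product. Since the probability that a bit differs between two vectors is proportional to the angle between them, Hamming distance between signatures tracks cosine similarity, which is the correlation the paper measures.

```python
import numpy as np

def rhh_signatures(vectors, n_bits=64, seed=0):
    """Random Hyperplane Hashing: one random hyperplane per bit.
    Bit i of a vector's signature is 1 iff the vector lies on the
    positive side of hyperplane i (sign of the dot product)."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, vectors.shape[1]))
    return vectors @ planes.T > 0  # boolean matrix, one row per vector

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return int(np.sum(a != b))
```

For Gaussian hyperplanes, P(bit differs) = theta/pi, where theta is the angle between the two vectors, so identical vectors hash to identical signatures and opposite vectors differ in every bit.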
{"title":"Keyword Query Expansion on Linked Data Using Linguistic and Semantic Features","authors":"Saeedeh Shekarpour, Konrad Höffner, Jens Lehmann, S. Auer","doi":"10.1109/ICSC.2013.41","DOIUrl":"https://doi.org/10.1109/ICSC.2013.41","url":null,"abstract":"Effective search in structured information based on textual user input is of high importance in thousands of applications. Query expansion methods augment the original query of a user with alternative query elements with similar meaning to increase the chance of retrieving appropriate resources. In this work, we introduce a number of new query expansion features based on semantic and linguistic inferencing over Linked Open Data. We evaluate the effectiveness of each feature individually as well as their combinations employing several machine learning approaches. The evaluation is carried out on a training dataset extracted from the QALD question answering benchmark. Furthermore, we propose an optimized linear combination of linguistic and lightweight semantic features in order to predict the usefulness of each expansion candidate. Our experimental study shows a considerable improvement in precision and recall over baseline approaches.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"53 39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124663641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi Topic Distribution Model for Topic Discovery in Twitter","authors":"Lei Zheng, K. Han","doi":"10.1109/ICSC.2013.81","DOIUrl":"https://doi.org/10.1109/ICSC.2013.81","url":null,"abstract":"Microblogging websites such as Twitter are a new and increasingly popular form of social media. Compared with traditional media such as The New York Times, tweets are more structured and much shorter. Although traditional topic modeling algorithms have been studied extensively, few are specifically designed to mine Twitter data according to its own features. In this paper, exploiting the structure of Twitter data, we introduce the Multi Topic Distribution Model to mine topics. In addition, we have observed that a tweet mostly discusses either public issues or personal life. Former studies treat all tweets equally and fail to discover the interests of each individual. By dividing topics into two semantic types based on the features of Twitter data, our model not only discovers topics efficiently but can also indicate which topics interest a given user and which are hot issues in the Twitter community. Using Gibbs sampling for approximate inference, we conduct experiments on the TREC 2011 dataset and compare our model with Latent Dirichlet Allocation and the Author Topic Model. We also give examples of topics that interest the whole community and individual users.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123192607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Design of a Hybrid Feature Detector for Adult Images","authors":"Min-Jen Tsai, Hsuan-Shao Chang","doi":"10.1109/ICSC.2013.73","DOIUrl":"https://doi.org/10.1109/ICSC.2013.73","url":null,"abstract":"Adult image detection has become an important issue due to parental-control needs. Skin-color detection is a typical solution for adult image detection, but its performance is limited by the diversity of color photos. Bag-of-Visual-Words (BoVW) approaches are another scheme for this problem, yet they are hard to make efficient. In this paper, a hybrid feature detector for adult images is proposed. The method normalizes a local feature and a global feature into a hybrid feature that combines the benefits of BoVW and skin-color detection. Experimental results demonstrate that its computation time is shorter than that of the BoVW technique and almost equal to that of the skin-color detector. Most importantly, the hybrid feature detector achieves a better accuracy ratio than either the BoVW or skin-color-detection techniques alone.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127910360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can We Create Better Links by Playing Games?","authors":"Jens Lehmann, Tri Quan Nguyen, Timofey Ermilov","doi":"10.1109/ICSC.2013.62","DOIUrl":"https://doi.org/10.1109/ICSC.2013.62","url":null,"abstract":"Just like links are the backbone of the traditional World Wide Web, they are an equally important element in the Data Web. There exist a variety of automated tools, which are able to create a high number of links between RDF resources by using heuristics. However, without manual verification of the created links, it is difficult to ensure high precision and recall. In this article, we investigate whether game based approaches can be used to improve this manual verification stage. Based on the VeriLinks game platform, which we developed, we describe experiments using a survey and statistics collected within a specific interlinking game. Using three different link tasks as examples, we present an analysis of the strengths and limitations of game based link verification.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121621022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}