International Conference on Information Integration and Web-based Applications & Services, 2013-12-02

Formal Linked Data Visualization Model
Josep Maria Brunetti, S. Auer, Roberto García, Jakub Klímek, M. Nečaský
DOI: https://doi.org/10.1145/2539150.2539162
Abstract: Recently, the amount of semantic data available on the Web has increased dramatically. The potential of this vast amount of data is enormous, but in most cases it is difficult for users to explore and use, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM) that allows data to be connected dynamically with visualizations. We report on our implementation of the LDVM, comprising a library of generic visualizations that enable both users and data analysts to obtain an overview of, visualize, and explore the Data Web, and to perform detailed analyses on Linked Data.
Multi-Criteria Recommender Systems based on Multi-Attribute Decision Making
Ferdaous Hdioud, B. Frikh, B. Ouhbi
DOI: https://doi.org/10.1145/2539150.2539176
Abstract: Multi-criteria recommender systems remain an interesting and challenging problem. In this paper we propose an approach for selecting relevant items in a recommender system based on multi-criteria ratings, together with a method for computing criteria weights taken from Multi-Criteria Decision Making (MCDM). The method determines the weight of each criterion with an integrated approach combining correlation coefficients and standard deviations. We evaluated the proposed method on a movie recommendation example and compared it to other metrics used in the information-theoretic approach to illustrate its potential applications.
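The abstract above does not give the exact weighting formula, so the following is only a simplified sketch of the general idea of a correlation-and-standard-deviation integrated weighting scheme: criteria that vary widely across items but correlate weakly with the overall rating receive higher weight. The combination `sigma * sqrt(1 - r)` and all names are illustrative assumptions, not the paper's method.

```python
import statistics


def criteria_weights(ratings, overall):
    """ratings: dict criterion -> list of per-item scores;
    overall: list of overall scores for the same items.
    Returns weights (summing to 1) that combine each criterion's
    standard deviation with its correlation to the overall rating."""

    def pearson(xs, ys):
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    raw = {}
    for crit, scores in ratings.items():
        sigma = statistics.pstdev(scores)
        r = pearson(scores, overall)
        # high spread + weak correlation = more discriminating information
        raw[crit] = sigma * max(1.0 - r, 0.0) ** 0.5
    total = sum(raw.values()) or 1.0
    return {c: v / total for c, v in raw.items()}
```

The normalization step guarantees the weights form a convex combination, so they can plug directly into a weighted multi-criteria rating predictor.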
How Semantic Knowledge can Enhance the Access to PA Online Services
A. Goy, Diego Magro, Matteo Casu, V. D. Tomaso
DOI: https://doi.org/10.1145/2539150.2539218
Abstract: Current e-government initiatives offer a huge amount of information about available services, which tends to overload citizens, who often feel "lost". Intelligent Web-based portals are a possible solution to this problem, providing effective and user-friendly access to online information and service descriptions. This paper proposes an approach based on formal ontologies and shows how they can provide a substantial enhancement in this direction. Formal semantic knowledge enables reasoning mechanisms that understand users' goals and provide them with information and services satisfying their needs. The proposed approach has been tested on a set of services provided by a local Italian Public Administration, with encouraging results.
Behavior Analysis of Microblog Users Based on Transitions in Posting Activities
Y. Yamaguchi, Shuhei Yamamoto, T. Satoh
DOI: https://doi.org/10.1145/2539150.2539209
Abstract: In recent years, microblogs such as Twitter have spread widely across the world. Twitter, which enables instant text communication among users, was launched in 2006; by 2012 its Japanese users exceeded 29.9 million. Functions related to posting a tweet include reply, retweet, and hashtag, which users employ to communicate with others and spread information. In this paper, we model user behavior as transitions between clusters that represent particular posting activities. Under this model, every user belongs to a cluster characterized by several features in each time slot and moves among clusters over time. The features include the number of posts and retweets/replies, the time when tweets were posted, and the number of characters per tweet. We reveal the temporal transitions of these clusters from the time users created their accounts, and propose a longitudinal analysis method to clarify the relation between transitions in posting activity and users' Twitter lifetime. The method consists of two steps: creating clusters that represent particular posting activities, and drawing a state transition diagram with transition probabilities among the clusters. From analysis of actual Japanese tweets over a one-year period, we conclude that the proposed method can express changes in users' posting activities, and that use of Twitter's functions, e.g., replies and retweets (RT), is one difference between users who continue using Twitter for a long time and those who quit relatively soon.
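The second step of the method above — a state transition diagram with transition probabilities among clusters — reduces to counting consecutive cluster pairs in each user's trajectory. A minimal sketch, assuming trajectories are lists of cluster labels (one per time slot):

```python
from collections import Counter, defaultdict


def transition_probabilities(trajectories):
    """trajectories: list of per-user cluster sequences, one cluster
    label per time slot.  Returns {src: {dst: P(dst | src)}}, i.e. the
    edge weights of the state transition diagram."""
    counts = defaultdict(Counter)
    for seq in trajectories:
        # count each consecutive (src, dst) pair in the trajectory
        for src, dst in zip(seq, seq[1:]):
            counts[src][dst] += 1
    return {
        src: {dst: n / sum(dsts.values()) for dst, n in dsts.items()}
        for src, dsts in counts.items()
    }
```

Each row of the result sums to 1, so it can be read directly as the outgoing-edge probabilities of one cluster node in the diagram.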
DaemonX: Design, Adaptation, Evolution, and Management of Native XML (and More Other) Formats
Marek Polák, M. Nečaský, I. Holubová
DOI: https://doi.org/10.1145/2539150.2539159
Abstract: Information systems are among the most common applications in today's IT world, and the problems of their design and implementation have been sufficiently solved. The real problems occur once an IS is deployed and user requirements change. Currently this situation requires a skilled IT expert who knows all system components and can therefore identify and modify all affected parts. Such an expert is not always available, and for complex systems this is a hard and error-prone task. In this paper we introduce DaemonX, an evolution-management framework that manages the evolution of complex applications efficiently and correctly. Using a plug-in architecture, it can model almost any kind of data format (currently XML, UML, ER, and BPMN). Since it preserves relationships among the modeled constructs, it naturally supports propagation of changes to all related affected parts. We describe the general proposal of the framework, followed by its architecture and implementation.
Routing Cost and Latency in the VillageTelco Wireless Mesh Network
M. Adeyeye, A. V. Gelder, S. Ojo
DOI: https://doi.org/10.1145/2539150.2539267
Abstract: Afrimesh is the Network Management System (NMS) used in the VillageTelco (VT) project. The NMS uses both the Simple Network Management Protocol (SNMP) and the Internet Control Message Protocol (ICMP) as its network management protocols. It provides a thorough report on network health, covering interference, noise level, and latency. In addition, it displays the current network topology and can run on a node, making it efficient at identifying compromised or isolated nodes. This article presents the performance of a typical mesh network; the metrics are the signalling overhead and network latency of a small-scale mesh network with few nodes. Experiments showed that each additional hop increases network latency by roughly 35 ms on average. The signalling overhead of a VT network also grows over time at every node; however, the footprint is small and would not impede the performance of a network in a large-scale deployment.
On Providing DDL Support for a Relational Layer over a Document NoSQL Database
G. Ferreira, A. Calil, R. Mello
DOI: https://doi.org/10.1145/2539150.2539196
Abstract: NoSQL databases are gaining ground due to the need of several applications to manipulate large volumes of data without worrying about database system tuning and scaling. However, many applications still use relational databases and do not want to replace their access methods in order to manipulate their data in the cloud with a NoSQL technology. To deal with this problem, a relational-to-cloud mapping strategy, covering both the data model and data operations, can provide a relational view of NoSQL data, eliminating the need for adjustments in the application's data-management interface. SimpleSQL addresses this problem: it is a relational layer for Amazon SimpleDB, one of the most popular document NoSQL databases. Although SimpleSQL has proved to be a promising approach in terms of performance, its current version maps only SQL DML operations. This paper presents an extension of SimpleSQL that also supports DDL operations, allowing the database schema to be created and manipulated from the application (client) side and abstracting away any knowledge of data definition in SimpleDB. Preliminary experiments show that the solution remains feasible, since the overhead of DDL operations through SimpleSQL is not prohibitive.
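The core idea of DDL support over a schemaless document store can be illustrated with a toy sketch: parse a minimal CREATE TABLE statement and record the schema as metadata in the store, since the store itself has no native notion of columns or types. Everything here — the grammar subset, the `DocumentStore` class, the `_schema` document — is a hypothetical stand-in, not SimpleSQL's or SimpleDB's actual API.

```python
import re


def parse_create_table(ddl):
    """Parse a minimal CREATE TABLE statement into (table, {col: type}).
    Illustrative only: real SQL DDL is far richer than this grammar."""
    m = re.match(r"\s*CREATE\s+TABLE\s+(\w+)\s*\((.*)\)\s*;?\s*$",
                 ddl, re.IGNORECASE | re.DOTALL)
    if not m:
        raise ValueError("unsupported DDL: " + ddl)
    table, body = m.group(1), m.group(2)
    columns = {}
    for col_def in body.split(","):
        name, col_type = col_def.split(None, 1)
        columns[name] = col_type.strip()
    return table, columns


class DocumentStore:
    """In-memory stand-in for a document database with one 'domain'
    per table.  The relational layer stores the parsed schema as a
    metadata document so later DML calls can validate against it."""

    def __init__(self):
        self.domains = {}

    def create_table(self, ddl):
        table, columns = parse_create_table(ddl)
        self.domains[table] = {"_schema": columns, "items": {}}
        return table
```

The design point this mirrors is that the schema lives on the client/layer side as ordinary data, which is what lets a schemaless back end present a relational view.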
Metadata Extraction from Books with Facts about Austria
Petra Korica-Pehserl, H. Maurer
DOI: https://doi.org/10.1145/2539150.2539228
Abstract: Digitized fact books are valuable sources of knowledge, and full-text search is a powerful tool for accessing it. However, full-text search often delivers too many results for general queries. We therefore propose an approach for finding relevant data by extracting the metadata relevant to each page, allowing pages to be searched by their metadata as an alternative to full-text search. Given the size of the scanned data (high-quality image scans), this extraction clearly cannot be done manually. As it turns out, although there are some common aspects, different books often need to be treated differently. In particular, we distinguish two kinds of books: lexicons (dictionaries), where items are arranged alphabetically, and books that describe various topics in a more narrative style. In this paper we describe in detail the approach we used on different fact books and share the lessons learned.
Semi-Automated Software Composition Through Generated Components
F. Mohr, H. K. Büning
DOI: https://doi.org/10.1145/2539150.2539235
Abstract: Software composition has been studied as a subject of state-based planning for decades. Existing composition approaches that are efficient enough for practical use are limited to sequential arrangements of software components. This restriction dramatically reduces the number of composition problems that can be solved, yet many such problems could be solved by existing approaches if they could combine components in even simple non-sequential ways. To this end, we present an approach that arranges not only basic components but also composite components, which enrich the structure of the composition with conditional control flows. Composite components are generated automatically, using expert-written algorithms, before the composition process starts. Our approach is therefore not a substitute for existing composition algorithms but complements them with a preprocessing step. We verified the validity of the approach by implementing the presented algorithms.
Identifying the Truth: Aggregation of Named Entity Extraction Results
Katja Pfeifer, J. Meinecke
DOI: https://doi.org/10.1145/2539150.2539160
Abstract: Huge amounts of textual information relevant to market analysis, trend detection, or product monitoring can be found on the Web. To exploit this knowledge, a number of extraction services have been proposed that extract and categorize entities from a given text. Prior work showed that combining individual extractors can increase quality. So far, however, no system exists that can reasonably combine real-world extraction services that differ substantially in the entity types they extract and the schemata they use. In this paper, we propose an aggregation system and a corresponding aggregation process for such services. We present a number of novel aggregation techniques that incorporate schema information as well as characteristics specific to entity extraction into the aggregation process. The aggregation system is evaluated broadly on six real-world named entity recognition services and compared to state-of-the-art approaches.
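As a baseline for the kind of aggregation the last abstract describes, the simplest technique is voting: keep an entity mention if enough services report it, and resolve its type by majority. This sketch is a simplified stand-in for the paper's techniques — in particular it ignores the schema mapping between differing type systems that the paper addresses.

```python
from collections import Counter


def aggregate_entities(extractor_outputs, min_votes=2):
    """extractor_outputs: one list of (surface_form, entity_type)
    pairs per extraction service.  Keeps a mention if at least
    `min_votes` services report it; resolves its type by majority
    vote over the types the services proposed."""
    votes = Counter()   # how many services found each surface form
    type_votes = {}     # surface form -> Counter of proposed types
    for output in extractor_outputs:
        seen = set()
        for surface, etype in output:
            if surface not in seen:   # one presence vote per service
                votes[surface] += 1
                seen.add(surface)
            type_votes.setdefault(surface, Counter())[etype] += 1
    return {
        surface: type_votes[surface].most_common(1)[0][0]
        for surface, n in votes.items() if n >= min_votes
    }
```

Raising `min_votes` trades recall for precision, which is the basic dial any truth-finding aggregation over noisy extractors has to expose.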