{"title":"The dynamic aspects of product derivation in DSPL: A systematic literature review","authors":"J. R. F. D. Silva, Francisco Airton Pereira da Silva, L. Nascimento, Dhiego A. O. Martins, V. Garcia","doi":"10.1109/IRI.2013.6642507","DOIUrl":"https://doi.org/10.1109/IRI.2013.6642507","url":null,"abstract":"Dynamic Software Product Lines (DSPL) have gained significant attention in academic community by involving aspects of product lines and runtime adaptable systems development. Managing dynamic variations demands is a challenge addressed by DSPL paradigm. In this context, this paper introduces the results of a systematic literature review that involved 2,084 studies, with the objective of understanding how the dynamic derivation in DSPL is made. The contributions of this study are: a) an embracing analysis and classification of scientific literature in DSPL area, b) the definition of inputs that are needed to perform the dynamic derivation, c) the description of what composes these inputs, and d) the understanding of the process to perform the dynamic derivation. We conclude that due to the lack of maturity in the dynamic derivation field, there are many open research opportunities still available.","PeriodicalId":418492,"journal":{"name":"2013 IEEE 14th International Conference on Information Reuse & Integration (IRI)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132223827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FIEP: An initial design of a firewall information exchange protocol","authors":"Sandeep Reddy Pedditi, Du Zhang, Chung-E Wang","doi":"10.1109/IRI.2013.6642446","DOIUrl":"https://doi.org/10.1109/IRI.2013.6642446","url":null,"abstract":"FIEP, which stands for Firewall Information Exchange Protocol, is a mechanism that enables firewalls to communicate with each other and to form firewall groups in a network. Currently, there is no protocol that allows firewalls to communicate with each other and to exchange information. Until recently, not much thought was given to the need for firewalls to talk to each other. A firewalled network is isolated from the rest of the networks and is considered to be secure. But a firewalled network is not totally secure as it is still prone to distributed attacks. The objective of developing FIEP is to further strengthen the network security through firewalls sharing information with each other and keeping each other informed of any status changes. Like the Border Gateway Protocol (BGP) that enables routers to exchange routing information and keeps them updated, FIEP allows the firewalls to update each other on changes of access control rules, to form groups and notify members in a group of new and removed firewalls, and to alert each other in the network about attacks. FIEP is a TCP/IP based protocol that provides a communication mechanism for two or more firewalls to communicate with each other, and sanctions both static and dynamic configurations of firewalls. The initial design of FIEP includes three modes of communication among firewalls and has a set of seven states. The initial simulation results are promising.","PeriodicalId":418492,"journal":{"name":"2013 IEEE 14th International Conference on Information Reuse & Integration (IRI)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121034156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An automated approach to create, store, and analyze large-scale experimental data in clouds","authors":"D. Jayasinghe, Joshua Kimball, Siddharth Choudhary, T. Zhu, C. Pu","doi":"10.1109/IRI.2013.6642493","DOIUrl":"https://doi.org/10.1109/IRI.2013.6642493","url":null,"abstract":"The flexibility and scalability of computing clouds make them an attractive application migration target; yet, the cloud remains a black-box for the most part. In particular, their opacity impedes the efficient but necessary testing and tuning prior to moving new applications into the cloud. A natural and presumably unbiased approach to reveal the cloud's complexity is to collect significant performance data by conducting more experimental studies. However, conducting large-scale system experiments is particularly challenging because of the practical difficulties that arise during experimental deployment, configuration, execution and data processing. In this paper we address some of these challenges through Expertus - a flexible automation framework we have developed to create, store and analyze large-scale experimental measurement data. We create performance data by automating the measurement processes for large-scale experimentation, including: the application deployment, configuration, workload execution and data collection processes. We have automated the processing of heterogeneous data as well as the storage of it in a data warehouse, which we have specifically designed for housing measurement data. Finally, we have developed a rich Web portal to navigate, statistically analyze and visualize the collected data. Expertus combines template-driven code generation techniques with aspect-oriented programming concepts to generate the necessary resources to fully automate the experiment measurement process. In Expertus, a researcher provides only the high-level description about the experiment, and the framework does everything else. At the end, the researcher can graphically navigate and process the data in the Web portal.","PeriodicalId":418492,"journal":{"name":"2013 IEEE 14th International Conference on Information Reuse & Integration (IRI)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129416248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interest analysis using social interaction content with sentiments","authors":"Chung-Chi Huang, Lun-Wei Ku","doi":"10.1109/IRI.2013.6642457","DOIUrl":"https://doi.org/10.1109/IRI.2013.6642457","url":null,"abstract":"We introduce a method for learning to predict reader interest. In our approach, interest analysis bases on PageRank and social interaction content (e.g., reader feedback in social media). The method involves automatically estimating topical interest preferences and determining the sentiment for social content. In interest prediction, different content sources of articles and reader feedback representing readers' viewpoints are weighted accordingly and transformed into content-word weighted word graph. Then, PageRank suggests reader interest with the help of word interestingness scores. We present the prototype system, InterestFinder, that applies the method to interest analysis. Experimental evaluation shows that content source and content word weighting, and scores of interest preferences for words inferred across articles are quite helpful. Our system benefits more from subjective social interaction content than objective one in covering general readers' interest spans.","PeriodicalId":418492,"journal":{"name":"2013 IEEE 14th International Conference on Information Reuse & Integration (IRI)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126075210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mining features from the object-oriented source code of software variants by combining lexical and structural similarity","authors":"R AL-msie'Deen, A. Seriai, M. Huchard, C. Urtado, S. Vauttier","doi":"10.1109/IRI.2013.6642522","DOIUrl":"https://doi.org/10.1109/IRI.2013.6642522","url":null,"abstract":"Migrating software product variants which are deemed similar into a product line is a challenging task with main impact in software reengineering. To exploit existing software variants to build a software product line (SPL), the first step is to mine the feature model of this SPL which involves extracting common and optional features. Thus, we propose, in this paper, a new approach to mine features from the object-oriented source code of software variants by using lexical and structural similarity. To validate our approach, we applied it on ArgoUML, Health Watcher and Mobile Media software. The results of this evaluation showed that most of the features were identified1.","PeriodicalId":418492,"journal":{"name":"2013 IEEE 14th International Conference on Information Reuse & Integration (IRI)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117027746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Web-based metadata retrieval tool for fine art and games artwork","authors":"A. Aboutaleb, R. Khalil, D. Stockton, S. Jackson","doi":"10.1109/IRI.2013.6642465","DOIUrl":"https://doi.org/10.1109/IRI.2013.6642465","url":null,"abstract":"Developers in art sector are faced with many problems during development of a new product (e.g. game). Resourcing of existing artworks, planning of the product development and commercialisation of their artworks are sources of wastes in the product development processes. Developers usually use multiple software and tools to perform these tasks; however, integration between these tools represents a major issue. In this paper, a web-based product improvement tool is presented. The tool provides developers with one-stop shop for them to resource their artworks, using Metadata, from a centralized web-based server; plan and optimise their production development using Lean principles; advertise and commercialise their artworks to potential customers and other developers.","PeriodicalId":418492,"journal":{"name":"2013 IEEE 14th International Conference on Information Reuse & Integration (IRI)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116883199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Verifying SystemC with predicate abstraction: A component based approach","authors":"N. Harrath, B. Monsuez, Kamel Barkaoui","doi":"10.1109/IRI.2013.6642516","DOIUrl":"https://doi.org/10.1109/IRI.2013.6642516","url":null,"abstract":"The SystemC waiting-state automaton is a compositional formal model for verifying properties of SystemC at the transaction level within a delta-cycle: the smallest simulation unit time in SystemC. In this paper, we first propose how to extract automata for SystemC components where we distinguish between threads and methods in SystemC. Then, we propose an approach based on a combination of symbolic execution and computing fixed points via predicate abstraction to infer relations between predicates generated during symbolic execution.","PeriodicalId":418492,"journal":{"name":"2013 IEEE 14th International Conference on Information Reuse & Integration (IRI)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131188143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From software product lines to system of systems: Analysis of an evolution path","authors":"M. A. Ramos, P. Masiero, R. Braga, R. Penteado","doi":"10.1109/IRI.2013.6642498","DOIUrl":"https://doi.org/10.1109/IRI.2013.6642498","url":null,"abstract":"Software Product Line (SPL) refers to engineering methods and techniques able to deliver families of related products designed to meet domain-specific requirements and increase software reuse. System of Systems (SoS) is a composition of independent and interoperable systems intended to achieve unique goals collectively. In this paper, we discuss about conventional SPL approaches and the reasons why they commonly are not a natural choice for delivering SoS members. Then, we propose an evolution path for SPL Engineering (SPLE) to narrow the gap between SPL and SoS. Finally, we present a case study in which we designed a sample SoS by composing systems instantiated from SPL.","PeriodicalId":418492,"journal":{"name":"2013 IEEE 14th International Conference on Information Reuse & Integration (IRI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123420941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Big data analysis using Apache Hadoop","authors":"J. Nandimath, Ekata Banerjee, Ankur Patil, Pratima Kakade, S. Vaidya","doi":"10.1109/IRI.2013.6642536","DOIUrl":"https://doi.org/10.1109/IRI.2013.6642536","url":null,"abstract":"The paradigm of processing huge datasets has been shifted from centralized architecture to distributed architecture. As the enterprises faced issues of gathering large chunks of data they found that the data cannot be processed using any of the existing centralized architecture solutions. Apart from time constraints, the enterprises faced issues of efficiency, performance and elevated infrastructure cost with the data processing in the centralized environment. With the help of distributed architecture these large organizations were able to overcome the problems of extracting relevant information from a huge data dump. One of the best open source tools used in the market to harness the distributed architecture in order to solve the data processing problems is Apache Hadoop. Using Apache Hadoop's various components such as data clusters, map-reduce algorithms and distributed processing, we will resolve various location-based complex data problems and provide the relevant information back into the system, thereby increasing the user experience.","PeriodicalId":418492,"journal":{"name":"2013 IEEE 14th International Conference on Information Reuse & Integration (IRI)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131409193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The possibilities of using Petri Nets for realization of a universal model of educational process","authors":"Z. Balogh, M. Turčáni, M. Magdin","doi":"10.1109/IRI.2013.6642468","DOIUrl":"https://doi.org/10.1109/IRI.2013.6642468","url":null,"abstract":"This paper summarizes the results obtained from the design, creation and implementation of the Universal Model of Educational Process with the Support of Petri Nets. Based on the created educational models in Petri nets we are able to verify and simulate individual processes, which are carried out during the passage of student through the e-course in LMS. Universal model was gradually implemented at the Department of Informatics (Faculty of Natural Sciences, Constantine the Philosopher University in Nitra, Slovakia) from the academic year 2009/2010. The module is used in technically oriented subjects, specifically in the field of applied informatics. The results obtained from the use of the module are listed in the debate and point to the fact that in this way we can thus find certain rules of behavior of applicants in the e-course by means of the usage analysis and compare them with the process models created by us. By means of dry comparisons we shall be able to eliminate all interfering elements from process models of e-courses, thus making the other created e-courses more effective and more attractive. Universal Model of Educational Process with the Support of Petri Nets so contributes to the development of skills and abilities of students without they are aware of this fact at all.","PeriodicalId":418492,"journal":{"name":"2013 IEEE 14th International Conference on Information Reuse & Integration (IRI)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134537639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}