{"title":"Proposed Testing Infrastructure for Automation of the GPU Chip Validation: Leading to Painless Driver Development","authors":"Akash Kulkarni","doi":"10.1109/ICSC.2013.77","DOIUrl":"https://doi.org/10.1109/ICSC.2013.77","url":null,"abstract":"Graphics Processing Units (GPUs) have become an integral part of high-end applications. The paper proposes a solution that helps GPU driver developers identify regressions when upgrading driver features, together with an automatic testing infrastructure to identify compatibility problems.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132521676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extracting Cybersecurity Related Linked Data from Text","authors":"Arnav Joshi, R. Lal, Timothy W. Finin, A. Joshi","doi":"10.1109/ICSC.2013.50","DOIUrl":"https://doi.org/10.1109/ICSC.2013.50","url":null,"abstract":"The Web is typically our first source of information about new software vulnerabilities, exploits and cyber-attacks. Information is found in semi-structured vulnerability databases as well as in text from security bulletins, news reports, cyber security blogs and Internet chat rooms. It can be useful to cyber security systems if there is a way to recognize and extract relevant information and represent it as easily shared and integrated semantic data. We describe such an automatic framework that generates and publishes an RDF linked data representation of cyber security concepts and vulnerability descriptions extracted from the National Vulnerability Database and from text sources. A CRF-based system is used to identify cybersecurity-related entities, concepts and relations in text, which are then represented using custom ontologies for the cyber security domain and also mapped to objects in the DBpedia knowledge base. The resulting cyber security linked data collection can be used for many purposes, including automating early vulnerability identification, mitigation and prevention efforts.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"173 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132545268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mapping Natural Language Questions to SPARQL Queries for Job Search","authors":"Naila Karim, K. Latif, N. Ahmed, Mishal Fatima, Atif Mumtaz","doi":"10.1109/ICSC.2013.35","DOIUrl":"https://doi.org/10.1109/ICSC.2013.35","url":null,"abstract":"Sem-QAS, a technique for enabling end users to explore semantically annotated data in the job search domain, is presented. It translates a natural language text query into SPARQL by semantically identifying the distinct atomic filtering constraints and their semantic associations present in the input query. Sem-QAS dynamically forms complex SPARQL queries by combining the triple patterns generated for atomic filtering constraints. The system maintains high recall and precision by paying special attention to the processing of scope modifiers and association operators. The efficacy and correctness of Sem-QAS are evaluated using the Mooney Job data set and queries collected from a real job search engine.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132671222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic Computing and Drug Discovery - A Preliminary Report","authors":"Charles C. N. Wang, D. Hecht, P. Sheu, J. Tsai","doi":"10.1109/ICSC.2013.86","DOIUrl":"https://doi.org/10.1109/ICSC.2013.86","url":null,"abstract":"Computer-aided drug design methodologies have proven to be very effective, greatly enhancing the efficiency of drug discovery and development processes. In this paper we describe how to integrate complex drug discovery problems and computational solutions via a semantic interface. In particular we describe a Structured Natural Language approach to chemical similarity searches, quantitative structure activity relationship (QSAR) modeling and in silico protein-ligand docking.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133192486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic Entity Search Diversification","authors":"Tuukka Ruotsalo, Matias Frosterus","doi":"10.1109/ICSC.2013.16","DOIUrl":"https://doi.org/10.1109/ICSC.2013.16","url":null,"abstract":"We present an approach to diversify entity search by utilizing semantics present and inferred from the initial entity search results. Our approach makes use of ontologies and independent component analysis of the entity descriptions to reveal direct and latent semantic connections between the entities present in the initial search results. The semantic connections are then used to sample a set of diverse entities. We empirically demonstrate the performance of our approach through retrieval experiments that use a real-world dataset composed from four entity databases. The results indicate that our approach significantly improves both diversity and effectiveness of entity search.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"140 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124427570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic Medical Prescriptions -- Towards Intelligent and Interoperable Medical Prescriptions","authors":"A. Khalili, B. Sedaghati","doi":"10.1109/ICSC.2013.66","DOIUrl":"https://doi.org/10.1109/ICSC.2013.66","url":null,"abstract":"Medication errors are the most common type of medical errors in the health-care domain. The use of electronic prescribing (e-prescribing) systems has resulted in significant reductions in such errors. However, dealing with the heterogeneity of available information sources is still one of the main challenges of e-prescription systems. There already exist different sources of information addressing different aspects of pharmaceutical research (e.g. chemical, pharmacological and pharmaceutical drug data, clinical trials, approved prescription drugs, drug activity against drug targets, etc.). Handling these dynamic pieces of information within current e-prescription systems without bridging the existing pharmaceutical information islands is a cumbersome task. In this paper we present semantic medical prescriptions: intelligent e-prescription documents enriched with dynamic drug-related metadata, and thereby aware of their content and possible interactions. Semantic prescriptions provide an interoperable interface which helps patients, physicians, pharmacists, researchers, pharmaceutical and insurance companies to collaboratively improve the quality of pharmaceutical services by facilitating the process of shared decision making. In order to showcase the applicability of semantic prescriptions we present an application called Pharmer. Pharmer employs datasets such as DBpedia, Drug Bank, Daily Med and RxNorm to automatically detect the drugs in prescriptions and to collect multidimensional data on them. We evaluate the feasibility of Pharmer by conducting a usability evaluation and report on the quantitative and qualitative results of our survey.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125074544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Open Information Extraction via Contextual Sentence Decomposition","authors":"H. Bast, Elmar Haussmann","doi":"10.1109/ICSC.2013.36","DOIUrl":"https://doi.org/10.1109/ICSC.2013.36","url":null,"abstract":"We show how contextual sentence decomposition (CSD), a technique originally developed for high-precision semantic search, can be used for open information extraction (OIE). Intuitively, CSD decomposes a sentence into the parts that semantically \"belong together\". By identifying the (implicit or explicit) verb in each such part, we obtain facts like in OIE. We compare our system, called CSD-IE, to three state-of-the-art OIE systems: ReVerb, OLLIE, and ClausIE. We consider the following aspects: accuracy (does the extracted triple express a meaningful fact, which is also expressed in the original sentence), minimality (can the extracted triple be further decomposed into smaller meaningful triples), coverage (percentage of text contained in at least one extracted triple), and number of facts extracted. We show how CSD-IE clearly outperforms ReVerb and OLLIE in terms of coverage and recall, but at comparable accuracy and minimality, and how CSD-IE achieves precision and recall comparable to ClausIE, but at significantly better minimality.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127245421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Getting Creative with Semantic Similarity","authors":"Ching-Yun Chang, S. Clark, Brian Harrington","doi":"10.1109/ICSC.2013.63","DOIUrl":"https://doi.org/10.1109/ICSC.2013.63","url":null,"abstract":"This paper investigates how graph-based representations of entities and concepts can be used to infer semantic similarity and relatedness, and, more speculatively, how these can be used to infer novel associations as part of a creative process. We show how personalised PageRank on a co-occurrence graph can obtain competitive scores on a standard semantic similarity task, as well as being used to discover interesting and surprising links between entities. We hypothesise that such links could form the first stage in a creative ideation process.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126228658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic Retrieval for Videos in Non-static Background Using Motion Saliency and Global Features","authors":"Dianting Liu, M. Shyu","doi":"10.1109/ICSC.2013.57","DOIUrl":"https://doi.org/10.1109/ICSC.2013.57","url":null,"abstract":"In this paper, a video semantic retrieval framework is proposed based on a novel unsupervised motion region detection algorithm which works reasonably well with dynamic backgrounds and camera motion. The proposed framework is inspired by biological mechanisms of human vision that make motion saliency (defined as attention due to motion) more \"attractive\" to people than other low-level visual features while watching videos. Based on this biological observation, motion vectors in frame sequences are calculated using the optical flow algorithm to estimate the movement of a block from one frame to another. Next, a center-surround coherency evaluation model is proposed to compute the local motion saliency in a completely unsupervised manner. The integral density algorithm is employed to search for the globally optimal solution of the minimum coherency region as the motion region, which is then integrated into the video semantic retrieval framework to enhance the performance of video semantic analysis and understanding. Our proposed framework is evaluated using video sequences with non-static backgrounds, and the promising experimental results reveal that semantic retrieval performance can be improved by integrating global texture and local motion information.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"198 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114279563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of Big Data Technologies and Method - Query Large Web Public RDF Datasets on Amazon Cloud Using Hadoop and Open Source Parsers","authors":"Ted Garcia, Taehyung Wang","doi":"10.1109/ICSC.2013.49","DOIUrl":"https://doi.org/10.1109/ICSC.2013.49","url":null,"abstract":"Extremely large datasets found in Big Data projects are difficult to work with using conventional databases, statistical software, and visualization tools. Massively parallel software, such as Hadoop, running on tens, hundreds, or even thousands of servers is more suitable for Big Data challenges. Additionally, in order to achieve the highest performance when querying large datasets, it is necessary to work with these datasets at rest, without preprocessing or moving them into a repository. Therefore, this work analyzes tools and techniques for working with large datasets at rest. Parsing and querying are done on the raw dataset - the untouched Web Data Commons RDF files. Web Data Commons comprises five billion web pages crawled from the Internet. This work analyzes available tools and appropriate methods to assist the Big Data developer in working with these extremely large, semantic RDF datasets. Hadoop, open source parsers, and Amazon Cloud services are used to data mine these files. In order to assist in further discovery, recommendations for future research are included.","PeriodicalId":189682,"journal":{"name":"2013 IEEE Seventh International Conference on Semantic Computing","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124625938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}