Exploring Methods for Linked Data Model Evaluation in Practice
R. Desmeules, Clara Turp, Andrew Senior
Journal of Library Metadata, vol. 136(1), pp. 65–89, published January 2, 2020
DOI: 10.1080/19386389.2020.1742434
Abstract: Ontology development and data modeling are core components of any linked data project. Through our own experiments building a linked data ontology for our collections, we wondered: how are our peers in the linked data community evaluating their ontologies? Are participants engaging in ontology evaluation? What methodologies and evaluation criteria are they using? Are they documenting and sharing their processes? In this paper, we present findings from a survey conducted in the fall of 2018, aimed at professionals from libraries, archives, and museums (LAM) who were part of the data modeling team on linked data projects. The purpose of this survey was to better understand the reality of ontology evaluation in the context of a linked data project. We found that our colleagues were engaging in data modeling as part of linked data projects in a variety of tasks and roles. There was some ambiguity with respect to evaluation, possibly due in part to the iterative nature of the modeling process. Evaluation is carried out iteratively and informally through use cases, competency questions, and testing of the data in the application, but on the whole these processes are not shared widely outside of a project. The identified barriers to evaluating their models included a lack of knowledge, resources, and documentation.