Empirically evaluating modeling language ontologies: the Peira framework

Impact Factor: 2.0 | CAS Tier 3 (Computer Science) | JCR Q3 (Computer Science, Software Engineering)
Sotirios Liaskos, Saba Zarbaf, John Mylopoulos, Shakil M. Khan
{"title":"建模语言本体的经验评估:Peira 框架","authors":"Sotirios Liaskos, Saba Zarbaf, John Mylopoulos, Shakil M. Khan","doi":"10.1007/s10270-023-01147-9","DOIUrl":null,"url":null,"abstract":"<p>Conceptual modeling plays a central role in planning, designing, developing and maintaining software-intensive systems. One of the goals of conceptual modeling is to enable clear communication among stakeholders involved in said activities. To achieve effective communication, conceptual models must be understood by different people in the same way. To support such shared understanding, conceptual modeling languages are defined, which introduce rules and constraints on how individual models can be built and how they are to be understood. A key component of a modeling language is an ontology, i.e., a set of concepts that modelers must use to describe world phenomena. Once the concepts are chosen, a visual and/or textual vocabulary is adopted for representing the concepts. However, the choices both of the concepts and of the vocabulary used to represent them may affect the quality of the language under consideration: some choices may promote shared understanding better than other choices. To allow evaluation and comparison of alternative choices, we present Peira, a framework for empirically measuring the domain and comprehensibility appropriateness of conceptual modeling language ontologies. Given a language ontology to be evaluated, the framework is based on observing how prospective language users classify domain content under the concepts put forth by said ontology. A set of metrics is then used to analyze the observations and identify and characterize possible issues that the choice of concepts or the way they are represented may have. The metrics are abstract in that they can be operationalized into concrete implementations tailored to specific data collection instruments or study objectives. We evaluate the framework by applying it to compare an existing language against an artificial one that is manufactured to exhibit specific issues. We then test if the metrics indeed detect these issues. We find that the framework does offer the expected indications, but that it also requires good understanding of the metrics prior to committing to interpretations of the observations.</p>","PeriodicalId":49507,"journal":{"name":"Software and Systems Modeling","volume":"102 1","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Empirically evaluating modeling language ontologies: the Peira framework\",\"authors\":\"Sotirios Liaskos, Saba Zarbaf, John Mylopoulos, Shakil M. Khan\",\"doi\":\"10.1007/s10270-023-01147-9\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Conceptual modeling plays a central role in planning, designing, developing and maintaining software-intensive systems. One of the goals of conceptual modeling is to enable clear communication among stakeholders involved in said activities. To achieve effective communication, conceptual models must be understood by different people in the same way. To support such shared understanding, conceptual modeling languages are defined, which introduce rules and constraints on how individual models can be built and how they are to be understood. A key component of a modeling language is an ontology, i.e., a set of concepts that modelers must use to describe world phenomena. 
Once the concepts are chosen, a visual and/or textual vocabulary is adopted for representing the concepts. However, the choices both of the concepts and of the vocabulary used to represent them may affect the quality of the language under consideration: some choices may promote shared understanding better than other choices. To allow evaluation and comparison of alternative choices, we present Peira, a framework for empirically measuring the domain and comprehensibility appropriateness of conceptual modeling language ontologies. Given a language ontology to be evaluated, the framework is based on observing how prospective language users classify domain content under the concepts put forth by said ontology. A set of metrics is then used to analyze the observations and identify and characterize possible issues that the choice of concepts or the way they are represented may have. The metrics are abstract in that they can be operationalized into concrete implementations tailored to specific data collection instruments or study objectives. We evaluate the framework by applying it to compare an existing language against an artificial one that is manufactured to exhibit specific issues. We then test if the metrics indeed detect these issues. We find that the framework does offer the expected indications, but that it also requires good understanding of the metrics prior to committing to interpretations of the observations.</p>\",\"PeriodicalId\":49507,\"journal\":{\"name\":\"Software and Systems Modeling\",\"volume\":\"102 1\",\"pages\":\"\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2024-04-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Software and Systems Modeling\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s10270-023-01147-9\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software and Systems Modeling","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10270-023-01147-9","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0

Abstract

Conceptual modeling plays a central role in planning, designing, developing and maintaining software-intensive systems. One of the goals of conceptual modeling is to enable clear communication among stakeholders involved in said activities. To achieve effective communication, conceptual models must be understood by different people in the same way. To support such shared understanding, conceptual modeling languages are defined, which introduce rules and constraints on how individual models can be built and how they are to be understood. A key component of a modeling language is an ontology, i.e., a set of concepts that modelers must use to describe world phenomena. Once the concepts are chosen, a visual and/or textual vocabulary is adopted for representing the concepts. However, the choices both of the concepts and of the vocabulary used to represent them may affect the quality of the language under consideration: some choices may promote shared understanding better than other choices. To allow evaluation and comparison of alternative choices, we present Peira, a framework for empirically measuring the domain and comprehensibility appropriateness of conceptual modeling language ontologies. Given a language ontology to be evaluated, the framework is based on observing how prospective language users classify domain content under the concepts put forth by said ontology. A set of metrics is then used to analyze the observations and identify and characterize possible issues that the choice of concepts or the way they are represented may have. The metrics are abstract in that they can be operationalized into concrete implementations tailored to specific data collection instruments or study objectives. We evaluate the framework by applying it to compare an existing language against an artificial one that is manufactured to exhibit specific issues. We then test if the metrics indeed detect these issues. We find that the framework does offer the expected indications, but that it also requires good understanding of the metrics prior to committing to interpretations of the observations.
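To make the framework's data-analysis step concrete, here is a minimal sketch, not the paper's actual metrics: it assumes each participant assigns one ontology concept to each domain item, and flags items whose label distributions show high disagreement, using normalized Shannon entropy as a stand-in measure. The item texts, the concept names ("Task", "Goal", "Quality"), and the 0.5 threshold are all hypothetical.

```python
from collections import Counter
import math

# Illustrative sketch only: the entropy-based measure below is an
# assumption for demonstration, not one of the actual Peira metrics.

def normalized_entropy(labels):
    """Shannon entropy of a label distribution, normalized to [0, 1]."""
    counts = Counter(labels)
    if len(counts) < 2:
        return 0.0  # perfect agreement: everyone chose the same concept
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(counts))

# observations[item] = concept labels assigned by different participants
# (item texts and concept names are made up for the example)
observations = {
    "customer places an order": ["Task", "Task", "Task", "Task"],
    "the system must be secure": ["Goal", "Quality", "Quality", "Goal"],
}

for item, labels in observations.items():
    h = normalized_entropy(labels)
    verdict = "possible comprehensibility issue" if h > 0.5 else "shared understanding"
    print(f"{item!r}: disagreement={h:.2f} -> {verdict}")
```

High per-item disagreement of this kind is one way an evaluator might surface the comprehensibility issues the abstract describes; Peira's own metrics are abstract and are operationalized per study, as the authors note.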

Source journal: Software and Systems Modeling (Engineering & Technology, Computer Science: Software Engineering)
CiteScore: 6.00
Self-citation rate: 20.00%
Publications: 104
Review time: >12 weeks
Journal description: We invite authors to submit papers that discuss and analyze research challenges and experiences pertaining to software and system modeling languages, techniques, tools, practices and other facets. The following are some of the topic areas that are of special interest, but the journal publishes on a wide range of software and systems modeling concerns:
- Domain-specific models and modeling standards
- Model-based testing techniques
- Model-based simulation techniques
- Formal syntax and semantics of modeling languages such as the UML
- Rigorous model-based analysis
- Model composition, refinement and transformation
- Software Language Engineering
- Modeling Languages in Science and Engineering
- Language Adaptation and Composition
- Metamodeling techniques
- Measuring quality of models and languages
- Ontological approaches to model engineering
- Generating test and code artifacts from models
- Model synthesis
- Methodology
- Model development tool environments
- Modeling Cyberphysical Systems
- Data intensive modeling
- Derivation of explicit models from data
- Case studies and experience reports with significant modeling lessons learned
- Comparative analyses of modeling languages and techniques
- Scientific assessment of modeling practices