Empirically evaluating modeling language ontologies: the Peira framework
Sotirios Liaskos, Saba Zarbaf, John Mylopoulos, Shakil M. Khan
Software and Systems Modeling (published 2024-04-16)
DOI: https://doi.org/10.1007/s10270-023-01147-9
Citation count: 0
Abstract
Conceptual modeling plays a central role in planning, designing, developing and maintaining software-intensive systems. One of the goals of conceptual modeling is to enable clear communication among stakeholders involved in said activities. To achieve effective communication, conceptual models must be understood by different people in the same way. To support such shared understanding, conceptual modeling languages are defined, which introduce rules and constraints on how individual models can be built and how they are to be understood. A key component of a modeling language is an ontology, i.e., a set of concepts that modelers must use to describe world phenomena. Once the concepts are chosen, a visual and/or textual vocabulary is adopted for representing the concepts. However, the choices both of the concepts and of the vocabulary used to represent them may affect the quality of the language under consideration: some choices may promote shared understanding better than other choices. To allow evaluation and comparison of alternative choices, we present Peira, a framework for empirically measuring the domain and comprehensibility appropriateness of conceptual modeling language ontologies. Given a language ontology to be evaluated, the framework is based on observing how prospective language users classify domain content under the concepts put forth by said ontology. A set of metrics is then used to analyze the observations and identify and characterize possible issues that the choice of concepts or the way they are represented may have. The metrics are abstract in that they can be operationalized into concrete implementations tailored to specific data collection instruments or study objectives. We evaluate the framework by applying it to compare an existing language against an artificial one that is manufactured to exhibit specific issues. We then test if the metrics indeed detect these issues. 
We find that the framework does offer the expected indications, but that it also requires good understanding of the metrics prior to committing to interpretations of the observations.
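The abstract describes observing how prospective users classify domain content under an ontology's concepts and then computing metrics over those observations. The paper's actual metrics are not given here, so the following is only an illustrative sketch of the general idea: tally participants' classifications per domain item and compute a simple modal-agreement score, where values near 1.0 indicate shared understanding and low values flag items (or concepts) that confuse users. All names and data below are hypothetical.

```python
from collections import Counter

def classification_matrix(responses):
    """Tally how often each domain item is classified under each concept.

    responses: list of (participant, item, concept) triples, as might be
    collected in a classification or card-sorting study.
    Returns {item: Counter({concept: count})}.
    """
    matrix = {}
    for _participant, item, concept in responses:
        matrix.setdefault(item, Counter())[concept] += 1
    return matrix

def modal_agreement(matrix):
    """For each item, the share of participants who chose its most
    frequent (modal) concept: 1.0 means everyone classified the item
    the same way; values near chance suggest the item is ambiguous
    under the ontology's concepts."""
    return {
        item: max(counts.values()) / sum(counts.values())
        for item, counts in matrix.items()
    }

# Hypothetical observations: three participants classifying two
# domain items under goal-model-style concepts.
responses = [
    ("p1", "deadline", "Goal"), ("p2", "deadline", "Constraint"),
    ("p3", "deadline", "Constraint"), ("p1", "login", "Task"),
    ("p2", "login", "Task"), ("p3", "login", "Task"),
]
scores = modal_agreement(classification_matrix(responses))
print(scores)  # "deadline" splits participants; "login" is unanimous
```

A real operationalization, as the abstract notes, would be tailored to the specific data collection instrument and study objectives; this sketch only shows the shape of the observation-to-metric pipeline.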
Journal description:
We invite authors to submit papers that discuss and analyze research challenges and experiences pertaining to software and system modeling languages, techniques, tools, practices and other facets. The following are some of the topic areas that are of special interest, but the journal publishes on a wide range of software and systems modeling concerns:
Domain-specific models and modeling standards;
Model-based testing techniques;
Model-based simulation techniques;
Formal syntax and semantics of modeling languages such as the UML;
Rigorous model-based analysis;
Model composition, refinement and transformation;
Software Language Engineering;
Modeling Languages in Science and Engineering;
Language Adaptation and Composition;
Metamodeling techniques;
Measuring quality of models and languages;
Ontological approaches to model engineering;
Generating test and code artifacts from models;
Model synthesis;
Methodology;
Model development tool environments;
Modeling Cyberphysical Systems;
Data intensive modeling;
Derivation of explicit models from data;
Case studies and experience reports with significant modeling lessons learned;
Comparative analyses of modeling languages and techniques;
Scientific assessment of modeling practices