Dynamic System Models and their Simulation in the Semantic Web

Tracking #: 3038-4252

Moritz Stüber
Georg Frey

Responsible editor: 
Guest Editors SW for Industrial Engineering 2022

Submission type: 
Full Paper
Abstract:
Modelling and simulation (M&S) are core tools for designing, analyzing and operating today’s industrial systems. They often also represent both a valuable asset and a significant investment. Typically, their use is constrained to a software environment intended to be used by engineers on a single computer. However, the knowledge relevant to a task involving modelling and simulation is in general distributed in nature, even across organizational boundaries, and may be large in volume. Therefore, it is desirable to increase the FAIRness (Findability, Accessibility, Interoperability, and Reuse) of M&S capabilities; to enable their use in loosely coupled systems of systems; and to support their use by intelligent software agents. In this contribution, the suitability of Semantic Web technologies to achieve these goals is investigated and an open-source proof-of-concept implementation based on the Functional Mock-up Interface (FMI) standard is presented. Specifically, models, model instances, and simulation results are exposed through a hypermedia API and an implementation of the Pragmatic Proof Algorithm (PPA) is used to successfully demonstrate the API’s use by a generic software agent. The solution shows an increased degree of FAIRness and fully supports its use in loosely coupled systems. The FAIRness could be further improved by providing more “rich” (meta)data.

Decision: Major Revision

Solicited Reviews:
Review #1
Anonymous submitted on 09/Mar/2022
Minor Revision
Review Comment:

Dear Authors,
I read your manuscript and I found it interesting and original.
The developed hypermedia API results in higher FAIRness and better support for loose coupling, which are relevant both for research and for practice, as the good application examples reported also show.
The paper is very well written: it is a pleasure to read and the English is used correctly.
The paper refers to GitHub resources, where we can find the developed ontologies, interfaces, Python code and the README files. To my understanding, the README files are clear and complete enough to allow for replication of the experiments (also thanks to the level of detail the authors provide in the manuscript).
The paper is also well aligned with the Call for Papers for the Special Issue on Semantic Web for Industrial Engineering: Research and Applications.

Here are my comments for improving the paper.

1. Section 1 "introduction", page 1, line 46: "The approximation of the system of DAEs by means of numerical integration algorithms is called simulation; it is the computational process by which a trajectory of values over time is retrieved as the result." I think this sentence is not accurate: there are types of "static" simulation that do not fall within the definition of a "trajectory of values over time". Think, for example, of Monte Carlo simulation. Please make this sentence more accurate.

2. Section 1.1, page 2, lines 13 - 44: please consider including references here. This is a Journal article and needs to be properly framed against the research background & context starting from the Introductory sections.

3. Section 1.2: page 4, lines 23 - 25:
"Programmers then construct requests specific to a certain version of the API at design-time, which is neither RESTful (even though such APIs often denote themselves as such), nor fully supports loose coupling." I have the feeling this sentence should be better motivated. Why is it so?

4. Section 1.3:
The RQs and hypotheses are clearly defined, and I appreciate that you evaluate and discuss them one by one, devoting a section to each point in Section 5.
However, I would suggest that you also include in Section 1.3 a recap of the gaps that you are addressing. At the moment, they are scattered around the text, and the reader needs to search for them and compare them with the RQs. I suggest you make a clear list of gaps that is linearly connected to the RQs.

5. Section 2.1, page 6, line 32: "With respect to what is modelled by an ontology, methodological and referential ontologies can be distinguished." you need a reference here.

6. I would appreciate some more details on the methodology you followed to evaluate the proposed models. For example: the results of the evaluation are shown in Figures 4 and 5, but there is not much detail on how and by whom this evaluation was done. You provide the meaning of the four values and the reason (in Tables 3 and 4), but you never state whether each score was assigned by one person or discussed among several people; how do we ensure there are no biases?

7. I would suggest that you present some possibilities for future work in the Conclusions rather than in the Discussion section.

Additional and very minor comments:
i) Section 1, page 2, lines 4 - 6: this sentence is very long (three lines before arriving at the verb!), please consider cutting or reformulating it.

ii) Section 2.1, page 7, lines 29 - 32: the sentence is very long and contains too many commas; please consider cutting or reformulating it.

iii) Section 3.2.3, page 16, line 17 "inconvinient" should be "inconvenient".

iv) Section 4, page 18, line 24: I would suggest avoiding starting the section with "in the abstract".

v) Section 5.1.3, page 23, line 2: "results" should be "result"; lines 5 - 6: there is a double "and".

vi) Section 6: page 24, line 28: "adress" should be "address".

Thank you!

Review #2
Anonymous submitted on 11/Apr/2022
Major Revision
Review Comment:

The paper proposes the use of semantic technologies to describe services that expose M&S capabilities and to facilitate their automatic composition without the need to explicitly program the required requests. The authors show the use of RDF graph-based knowledge instances to realize hypermedia APIs, and how these better support the FAIR principles compared to other technologies (FMU and HTTP APIs).

Although interesting, the contribution of the paper is mainly technical. The authors do not sufficiently emphasize and discuss the novelty and scientific contribution of the presented work. A better discussion of the novel aspects of the proposed approach is especially needed with respect to the defined and used ontological models. In this regard, Section 3 of the paper requires major revisions.

- The ontology model seems quite simple and syntactic, not really leveraging the full stack of Semantic Web technologies. The authors explain that they use the RDF format, which simply provides a graph-based syntactic structure for knowledge, but they do not really explain the semantics they define to characterize the relevant information. In this regard, it is not clear whether they use a simple information schema through RDFS or whether they define more complex concepts and properties using OWL.

- The level of detail of the represented information seems to support only quite generic models with input/output parameters. A finer-grained representation of FMI and FMUs seems necessary to better support reasoning and thus characterize the capabilities of different models, their properties, and their simulations. It would, for example, be necessary to support the retrieval of models that fit the properties a user or application needs.

- The composition of services and simulations through the PPA and RESTdesc seems only partially supported by the associated semantic models. A reliable handling of failures, e.g. sending requests to alternative M&S services with the same capabilities, would be possible by reasoning on a finer-grained model of known/available services. The actual composition seems to rely on the manually specified RESTdesc rules only. In this regard, the specification of such rules seems to require a high level of expertise in the syntactic features of RDF and the models used. It would be interesting to discuss if and how the semantic model used may facilitate and support the engineering and definition of such rules, abstracting from the underlying (less intuitive) syntactic features.

- The PPA seems quite similar to a generic AI planning algorithm. In this regard, a related work that seems quite relevant and close to the paper is [1], where HTN planning is used to automatically compose web services. The authors should take this work into account and discuss the novelty of the proposed approach with respect to [1].

- The formalization of the PPA and the addressed composition problem is poor. The authors do not clearly define what goals and initial states are and how they are represented. Furthermore, it is not clear whether the background knowledge is general or domain-specific (or both), and whether it is actually used within the composition algorithm.

- It is not clear why the Triple Pattern Fragments (TPF) interface is really necessary, since the kind of query it supports can easily be implemented through SPARQL. Examples clearly showing why TPFs are necessary would make it easier to understand their advantages with respect to “raw” SPARQL queries.
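For readers unfamiliar with the interface in question: a TPF server answers only single-triple-pattern requests over HTTP, and a client-side query engine decomposes a SPARQL query into such requests. Below is a minimal sketch of how one such request is formed (endpoint URL and vocabulary are invented for illustration; this is not the authors' implementation):

```python
from urllib.parse import urlencode

def tpf_request_url(endpoint, subject=None, predicate=None, obj=None):
    """Build the URL for a single Triple Pattern Fragment request.
    Unbound positions are simply omitted; the server returns all
    matching triples plus hypermedia (paging) controls."""
    pattern = {"subject": subject, "predicate": predicate, "object": obj}
    params = {k: v for k, v in pattern.items() if v is not None}
    return endpoint + "?" + urlencode(params)

# A client-side engine would evaluate the SPARQL query
#   SELECT ?m WHERE { ?m a <http://example.org/vocab#Model> }
# by issuing one TPF request for its single triple pattern:
url = tpf_request_url(
    "http://example.org/tpf",  # hypothetical TPF endpoint
    predicate="http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
    obj="http://example.org/vocab#Model",
)
```

The question raised above is whether this deliberately restricted interface (as opposed to a full SPARQL endpoint) pays off; the advantages cited in the paper are reduced server load and better use of HTTP caching.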

[1] Evren Sirin, Bijan Parsia, Dan Wu, James Hendler, Dana Nau, “HTN planning for Web Service composition using SHOP2”, Journal of Web Semantics, Volume 1, Issue 4, 2004, Pages 377-396, ISSN 1570-8268, https://doi.org/10.1016/j.websem.2004.06.005.
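To make the rule format discussed in this review concrete: a RESTdesc description is a Notation3 implication whose antecedent states a precondition and whose consequent describes an HTTP request and its postcondition. The following is a hypothetical minimal example (prefixes and resource names invented for illustration, not taken from the paper), held in a Python string:

```python
# A hypothetical RESTdesc description: "if ?model is a model, then a
# POST request to it yields a model instance".
RESTDESC_RULE = """
@prefix http: <http://www.w3.org/2011/http#>.
@prefix ex:   <http://example.org/vocab#>.

{ ?model a ex:Model. }
=>
{
  _:request http:methodName "POST";
            http:requestURI ?model;
            http:resp [ http:body ?instance ].
  ?instance a ex:ModelInstance;
            ex:instanceOf ?model.
}.
"""

# A generic client obtains such descriptions (e.g. via an OPTIONS *
# request) and feeds them, together with its goal, to an N3 reasoner;
# no API-specific client code is written by hand.
antecedent, consequent = RESTDESC_RULE.split("=>")
```

Writing such rules indeed requires familiarity with N3 syntax, which supports the review's point about the expertise needed to author them.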

Review #3
Anonymous submitted on 23/Jun/2022
Major Revision
Review Comment:

In this study, the authors aim to solve the integration of various computational models (optimization and simulation) exposed as services using web-based technologies. They address the automatic discoverability, orchestration, and invocation of such computational models as services as part of distributed semantic queries. Regarding discoverability, the authors use a hypermedia API to increase FAIRness; for orchestration, they adopt the Pragmatic Proof Algorithm; and for invocation, they use various web tools for federation. In pursuit of a lofty goal, the authors have put a considerable amount of effort into providing a proof of concept with a substantial volume of comparative analysis and discussion of various concerns related to their proposed solution. Most impressive is the FAIRness and loose-coupling analysis (although done without any consensus from the community). However, the structure and writing style make the article hard to read. Below, I point to ways in which the readability of the paper may be improved:
I. The paper uses many web tools, protocols, and principles in its implementation and therefore frequently refers to them by abbreviations throughout the paper. It would be easier to read the paper if a glossary of these terms were given at the beginning.
II. The authors combine the methodology and the implementation in a single section (3). As a result, the technical solutions hinder the understanding of the overall methodology and system architecture. A separate methodology section with generic system diagrams (e.g., functional modules, activity diagram, flowchart) in standard UML (or SysML or BPMN, if preferred) would help in understanding the overall system before going into the technologies for realizing it.
III. Where not necessary, it is better to use the generic term for a component instead of the name of a particular technology that implements it, e.g., "service metadata" instead of RESTdesc, "service" instead of hypermedia API. These technologies should be contained in a separate implementation section instead of being scattered throughout the paper.
IV. Most importantly, the authors aim to tackle multiple problems in a single system, including the FAIRness of M&S, automatic discovery and orchestration, as well as federated service/query execution to ensure loose coupling. The paper, however, narrates the system in a way that makes it difficult to identify which component does what.
Furthermore, the following list mentions many issues which require resolution.
1) In abstract: “systems of systems” -> “system of systems”
2) “Therefore, it is desirable to increase the FAIRness (Findability, Accessibility, Interoperability, and Reuse) of M&S capabilities; to enable their use in loosely coupled systems of systems; and to support their use by intelligent software agents.” – The abstract does not state the goal completely, as increasing FAIRness is only one of the authors' goals; they also include service composition and loose coupling in the research questions.
3) “models allow inferring new information based on what is already known by means of reasoning” – the paper refers to different entities as a "model", e.g., a model of data (e.g., an ontology), a computational model (e.g., a simulation), a system (e.g., the proposed solution). These need to be distinguished.
4) “Ontologies encode concepts, roles, and their interrelations based on Description Logics (DL); computational reasoning is the process by which satisfiability, classification, axiom entailment, instance retrieval et cetera are computed.” – the paper starts with ontologies, whereas an ontology is only one of the tools used in the larger proposed system; ontology is not the goal of the paper. Also, not all ontologies are based on DL.
5) “As a consequence of this formalization, a limit in scope and expressivity and therefore a limit on the class of problems that can be solved using a certain language, including its ecosystem such as model libraries, Integrated Development Environments (IDEs) and expert communities, is imposed.” – it is not understood how formalization may limit scope and expressivity. Such limits may be caused by the developers' choice of language, but cannot be blamed on formalization itself. For example, an ontology written in full first-order logic is more expressive than OWL.
6) “…this raises the question of whether two distinct modelling approaches and the corresponding ecosystems can be meaningfully combined in order to enlarge the class of problems that can be investigated” – If the authors mean that the Web Ontology Language (OWL) for ontologies and Modelica for DAEs are two distinct modelling languages, then it should be noted that they serve different purposes; they are not different approaches to the same purpose. Also, it is not clear what the authors mean by "class of problems": a complexity class? Problems from different domains? Problem types?
7) “we focus on dynamic models for the time-varying behaviour of quantities in technical, multi-domain systems that can be represented as a system of DAEs … Other uses of the terms Modelling and Simulation are equally valid in their respective contexts, but out of scope for this work.” – It is not clearly understood how the methodology and application are specific to this type of model and not others.
8) “a set of conceptual resources such as “today’s air temperature on campus” – What is the resource here: the air temperature or the campus? If the air temperature, why is it a resource? If "resource" is used here as a concept from the REST community, this needs to be mentioned.
9) Please provide a table comparing HTTP, REST-API, and Hypermedia-API
10) “…which is neither RESTful (even though such APIs often denote themselves as such)…” – please justify this opinion.
11) “For realizing software that exposes M&S capabilities FAIRly, it is desirable to support loose coupling” – it is not explained what the relationship or dependence between FAIRness and loose coupling is.
12) “What are generic or intelligent software agents, though?” --> “What are generic or intelligent software agents?”
13) “Therefore, the PPA can be seen as an intelligent software agent.” – it is not fully justified why the PPA is selected for service composition.
14) “We see hypermedia APIs, as an exemplary specific interface to RDF data,…” – Why is a hypermedia API an exemplary specific interface to RDF data? What is the relation?
15) “…machine-actionability of capabilities…” – this phrase is ambiguous and needs clarification.
16) H3 and H4: “Researchers and software engineers can use…” – These hypotheses cannot be evaluated without a survey. No such survey is presented in the paper.
17) “…using any RESTdesc-enabled hypermedia API…” – RESTdesc is not defined before this point.
18) “…methodological and referential ontologies…” – this is not a standard distinction; it comes from Hoffman's conceptualization. I think methodological ontologies are upper-level and referential ontologies are domain-level. This needs to be clarified.
19) “OSLC has a strong focus on human end-users, as for example shown through the ‘resource preview’ [18] and ‘delegated dialogs’ [19] features” – why is this relevant to the topic of discussion?
20) “The creation of the links that encode this trace, facilitated through delegated dialogs and resource preview, is identified as the functionality implemented most [20, tbl. 7, p. 25].” – incomplete sentence. Please rephrase for clarity.
21) “Second, these descriptions should be generated automatically as far as possible, starting from the FMUs used” – It is not clear how they are generated automatically. Do the authors mean translating the model description into RDF format?
22) “…developed ontologies in combination with established ones needs to be implemented” – What are the established ones?
23) Ontology-based metadata for services, such as the Web Service Modeling Ontology (WSMO), the Semantic Web Service Ontology (SWSO), and OWL-S, need to be included as part of the SoTA. The developed ontology may reuse many parts of them.
24) There are many problems with the proposed ontology models. E.g., if a System is a ClassOfSystem, then every specific System also represents the entire group of similar systems; my bicycle does not represent all bicycles.
25) “Fig. 2. The implementation is structured in distinct API- and worker components which exchange data via queues; a reverse proxy provides a HTTPS connection to users. An instance of https://github.com/LinkedDataFragments/Server.js enables querying (proxied through the API)” – a technology-neutral system architecture will explain the proposed system better (please see II).
26) “…an <#about>-graph is created that is explicitly linked…” – what is <#about>-graph?
27) “…of the Hydra core vocabulary [37] in lines 30 to 32.” – Why is the Hydra vocabulary used? Is it a standard for hypermedia APIs?
28) “Several advantages of the TPF interface have been observed [39, p. 203]: a reduced load on the server; better support for more clients sending requests simultaneously; and increased potential for benefiting from HTTP caches…. Moreover, TPFs are compliant with REST and thus well suited for integration into a hypermedia API.” – Such justifications for the choice of a particular technology are useful and needed for all technological choices. However, instead of being scattered in the text, these justifications should be presented in a separate section under implementation.
29) “The PPA is visualized in Figure 3. It can be summarized as follows:” – It is not necessary to describe the algorithm if it is adopted directly from Verborgh et al. [14, p. 34]; referring to the original description is enough. Only the improvements need to be mentioned.
30) It is necessary to justify the choice of the PPA over the many similar algorithms from Answer Set Programming, which also need to be included in the SoTA.
31) Goal state, Initial state, and resolution paths need to be described more rigorously.
32) “goal state g shown in Listing 3” – Listing 3 is a rule with the same body and head. How can it describe a state (preferably a fact)? Why does the rule have the same body and head, given that such a rule will not change anything in the KG?
33) “…the RESTdesc descriptions can be obtained through an OPTIONS * request, this means that only knowledge of RESTdesc, RDF and HTTP are assumed and any hypermedia API using these technologies can be used for achieving goals without programming.” – It needs to be clarified that the RESTdesc descriptions need to be coded by somebody, especially the shapes in the OPTIONS response.
34) “…precondition in Listing 2 is now met and, as a consequence, the request fully specified.” – Listing 2 is part of the methodology/implementation. This is another example of the structure being mixed up. Similar to the suggestion in II that the methodology be separated from the implementation, the application also needs to be separated from the implementation.
35) “Furthermore, no static service interface description is needed” – What do the authors refer to by static service interface?
36) “Since the M&S hypermedia API only exposes a TPF endpoint…” – this is the authors' implementation choice; the API does not necessarily have to expose only a TPF endpoint.
37) “The ability to execute SPARQL requests against the M&S hypermedia API makes it a source of linked data which can be used in applications or to build KGs. However, these applications should consider that resources do not necessarily exist forever, either because they expire or because they are deleted by a user. These considerations are out of scope here, though.” – This paragraph is not well written; please improve its clarity.
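As an aside on the PPA referred to in points 13 and 29 - 32 above: its control loop, as described by Verborgh et al., can be sketched as follows. All helper functions are passed in as placeholders; this is a toy sketch for illustration, not the authors' implementation.

```python
def pragmatic_proof(goal, knowledge, descriptions, reason, requests_in, execute):
    """Toy sketch of the Pragmatic Proof Algorithm: generate a (pre-)proof
    that the goal is reachable given the current knowledge plus the
    RESTdesc descriptions; if the proof still relies on HTTP requests,
    execute the first one, add the response to the knowledge base, and
    re-plan; stop when the goal is entailed without further requests."""
    while True:
        proof = reason(knowledge | descriptions, goal)
        if proof is None:
            return None                    # goal unreachable
        pending = requests_in(proof)       # HTTP requests the proof relies on
        if not pending:
            return proof                   # goal achieved with current knowledge
        knowledge = knowledge | {execute(pending[0])}
```

In the paper's setting, `reason` would be an N3 reasoner such as EYE and `execute` an HTTP client; here they are deliberately abstract, which is precisely why a more rigorous formalization of goals, states, and background knowledge (point 31) would help.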