Assembly Line for conceptual models

Tracking #: 3235-4449

Authors: 
Karel Klima
Petr Kremen
Martin Ledvinka
Michal Med
Alice Binder
Miroslav Blasko
Martin Necasky

Responsible editor: 
Guest Editors Tools Systems 2022

Submission type: 
Tool/System Report
Abstract: 
Assembly Line is an open-source web platform for collaborative development of ontologies. The platform is generic and domain independent by design, and we demonstrate its capabilities within the domain of Czech public administration -- in this case Assembly Line facilitates building a common Czech eGovernment ontology. In our previous work we introduced the semantic government vocabulary (SGoV), an ontologically founded layered vocabulary reflecting the diversity of public administration domains and the legal concepts important in these domains. We presented SGoV as a rich conceptual model for describing the meaning of open government data published by different public authorities using shared vocabularies. Assembly Line is a natural continuation of that work: a platform designed to create, manage, and publish SGoV. The platform includes tools that enable various offices within public administration to manage their own vocabularies while ensuring the semantic integrity of SGoV as a whole. We demonstrate the impact of Assembly Line on two data publication scenarios in the Czech public administration domain: the registry of rights and obligations and the electronic code of laws.
Tags: 
Reviewed

Decision/Status: 
Reject

Solicited Reviews:
Review #1
Anonymous submitted on 29/Oct/2022
Suggestion:
Reject
Review Comment:

I am providing my review following the guidelines for a 'Tools and Systems Report'.

Criterion 1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided).
-------------------------
The paper describes a tool (Assembly Line, AL for short), a platform designed to create, manage, and publish SGoV, an ontologically founded layered vocabulary reflecting the diversity of public administration domains and the legal concepts important in these domains.

The paper describes a tool that is clearly useful for the specific task at hand. Nonetheless, it takes for granted that a tailored tool for managing one single vocabulary is important and relevant from a scientific point of view. Apart from a limited comparison with other ontology/vocabulary editing tools at the end of the paper, the authors make no attempt to motivate why such a tool can have general value for vocabulary management beyond the fact that it is the tool that supports SGoV. This significantly hampers the impact of the tool, both in terms of presentation and in terms of actual content.

In fact, the authors start from requirements (functional requirements in section 3.1 and qualitative requirements in section 3.2) that are rather general and would apply to many different vocabulary management scenarios. However, they then do not reflect at all on how AL satisfies these requirements in detail, on which of AL's features, functionalities, or architectural components could benefit vocabulary management tools in general, or on what makes AL an advanced vocabulary management tool beyond the fact that it handles SGoV.

I suggest the authors reflect on and discuss the distinctive points of the tool from the point of view of vocabulary management. This could take the form of a deeper (i.e., less shallow) comparison with other tools, but it could also start from the question: what could AL bring to someone who wants to build a general vocabulary management tool covering diverse domains? Would the tool be easy to modify? Would that be possible at all? Which features are good ones to adopt, and which should be avoided?

Assessing quality was almost impossible for me. The paper provides a reasonable description of the architecture in terms of (1) modular components and (2) technical backend support for collaboration (based on GitHub). It does not provide any real description of the frontend functionality of the tool; instead, it chooses to illustrate the workflow of operations supported by the tool (sections 4.1-4.3), without giving a real grasp of the tool itself. I did try to install it using the Docker setup provided at the long-term stable URL for resources (and asked some more technical colleagues to do the same), but we were unsuccessful. As such, the backend architecture appears reasonable and solid, but I cannot really say much about the quality of the tool from a functional and frontend point of view.

Criterion 2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.
-----------------------------
The paper suffers from the lack of a general, structured description of the presented tool. If the authors choose to start with requirements, I would suggest they continue the paper by illustrating how those requirements are met at both the frontend and the backend level. Moreover, it is very difficult to distinguish and evaluate in the paper which AL features are specific to the SGoV domain and which are general and valid for any ontologically founded layered vocabulary reflecting the diversity of a complex domain or domains. Capabilities and limitations are described in very general terms that give no idea of how they are reflected in the actual tool.

Just to give an example, the authors claim the tool enables domain experts and modeling experts to collaborate. How, in practice? How is this collaboration aspect specific to AL, and how is it general (so that it may be compared in detail with Collaborative Protege or MoKi -- assuming you want to compare with something that may be discontinued, I believe)? In my opinion, this kind of illustration and discussion should be made for all the important features of the tool.

Criterion 3) Please also assess the data file provided by the authors under “Long-term stable URL for resources”. In particular, assess [... see instructions for reviewers]

After battling for a few hours, I gave up on the effort to install the tool. Here is what I suggest:

- A README in English would avoid clumsy Google Translate output (the current READMEs are mostly in Czech).

- Running "./gen_env.sh local" on macOS leads to an error because "base64 -w 0" is not compatible with macOS (I suggest the authors check for cross-platform compatibility, or at least clearly indicate which platforms/versions a user should use).

- It is not clear which values should be set for the "TERMIT_SERVER_KEYCLOAK_CREDENTIALS_SECRET", "KEYCLOAK_REALMKEY", "SGOV_SERVER_REPOSITORY_GITHUBUSERTOKEN", and "SGOV_SERVER_KEYCLOAK_CREDENTIALS_SECRET" environment variables.

- Running "docker-compose --env-file .env.local up" fails with: Error response from daemon: Head "https://docker.pkg.github.com/v2/opendata-mvcr/ontographer/al-ontographe...": no basic auth credentials.

In general, providing a self-contained container on Docker Hub that does not require setting these variables would be helpful.
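The "base64 -w 0" incompatibility noted above can be worked around without the GNU-specific flag. A minimal sketch of a portable alternative (the function name "b64_nowrap" is hypothetical, not taken from gen_env.sh):

```shell
#!/bin/sh
# GNU coreutils' "base64 -w 0" disables output line wrapping, but the
# BSD/macOS base64 has no -w flag. Stripping newlines with tr achieves
# the same single-line output on both platforms.
b64_nowrap() {
  base64 | tr -d '\n'
}

printf 'secret' | b64_nowrap
```

Running the last line prints the unwrapped base64 encoding of the input on either a GNU/Linux or a macOS system.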

Review #2
Anonymous submitted on 10/Feb/2023
Suggestion:
Reject
Review Comment:

The article describes a tool called "Assembly Line" that addresses the collaborative development of ontologies. This is an interesting and relevant problem for the readers of SWJ.

In this review, I will evaluate the tool based on the Tools and Systems criteria, as follows:

> These reports should be brief and pointed, indicating clearly the capabilities of the described tool or system.

Topics such as SKOS (2.1) and OWL (2.3) probably do not need to be explained for SWJ readers.

Section 3 presents the requirements in detail, which can be considered the capabilities of the system. Section 4 similarly details the process.

The paper then jumps into architecture and user evaluations, without actually detailing to the reader what the tool looks like and how to use it. I.e., the paper itself does not enable readers to fully understand how the tool will help them.

> It is strongly encouraged, that the described tools or systems are free, open, and accessible on the Web.

While the tool is available as open source, I have been unable to run it. There is no demo video or other convincing material to give me an impression of the tool. This makes assessing the tool difficult, and as such I feel that I cannot advise positively at this point in time.

> (1) Quality, importance, and impact of the described tool or system (convincing evidence must be provided).

As per the above, this is hard to assess. Many different kinds of tools could conform to the text written in the paper. While the user testing aims to provide evidence, this is not a research paper; I need to be able to test the tool first-hand.

> (2) Clarity, illustration, and readability of the describing paper, which shall convey to the reader both the capabilities and the limitations of the tool.

As per the above, I think that some sections could be removed in favor of providing a better impression of the experience of the tool.