Review Comment:
### High-level impression
I have a general feeling that the writing of this paper did not receive enough attention.
Although I can surely sympathise with the goal and the work, the lack of argumentation and attention to detail
puts me in no position to accept this paper.
I am currently not at all convinced that this paper can reach a state in which it can be accepted,
but with more thorough argumentation, a broader scope, and some generalizable results (see the sections hereafter),
there might be a possibility.
The long-term URL is very well structured and feels very complete.
#### Originality
The research is not original: an existing library was repurposed for a different language.
I hoped to read a more detailed analysis of what kind of features are relevant to a (visual/textual) editor in general
and to ShEx editing in particular (which I would consider valuable research), but no more than a comparison with existing editors was given.
Also, a better classification of the type of visual editor could be interesting: ShexAuthor feels like a 'form-like' editor,
whilst more graphical 'drag-n-drop' types of editors also exist, and an argument for why this form-like editor was pursued would be interesting to me.
Sadly, the lack of argumentation and references to similar work in the introduction, and the very narrow scope of the related work section cannot convince me
that the currently presented work is original.
To be seen as research, I would expect more generalizable results.
#### Significance of the results
As the evaluation shows, the results are not statistically significant.
The only significant results are rather because a textual and visual editor are compared,
and have nothing really to do with the presented work.
However, the impact of YASHE is clear and also clearly represented in the paper.
Given that, I would actually expect a resource paper around YASHE to be more relevant than the results of this paper.
The latter would require a different kind of evaluation.
#### Quality of writing
The writing feels very hasty: many typos and ill-phrased sentences (details below).
### Detailed review
#### Abstract
- The need is not convincing. Why is this work needed?
- There is no conclusion.
- What do you mean by 'better results'?
#### Introduction
- Context and Need are neither clear nor well argued, giving the impression that this work is not that valuable.
- Rephrase: "[One editor] was the one incorporated by YASGUI[2]. Later extracted as an independent module": The sentence would be clearer if you stated "[One editor] is YASQE, an independent module extracted from YASGUI".
- Personally, I find it annoying that you don't state what YASHE stands for.
- It's not clear why exactly the listed features were incorporated, and not others.
- Typo: "On the other hand": there's no 'On the one hand' anywhere. This also points to a lack of structure in the Introduction section. I would expect an introductory paragraph overviewing the two types of tools (textual and graphical), detailing their differences, and only then diving into the details.
- Rephrase: "One of the problems faced by domain experts [...] is the fact that they don't need to be accustomed to the use of computer languages": I would assume that 'the fact' is not the problem but rather "One of the problems faced by domain experts [...] is that they are not accustomed to the use of computer languages"
- There is very little argumentation or references to back up the claims in this introduction paragraph ('they may find a GUI more comfortable'-> do they?)
- "a shapes graphical assistant": I don't understand what this means, what is a shapes assistant, do you mean a user interface?
- Rephrase: "This tool integrates YASHE into its system to visualize the shapes created from the wizard": I'm confused, I thought YASHE was a textual editor, this text makes it seem as if it's a visualizer? Is it both? that's not clear. Maybe it points to ShexAuthor?
- You have twice 'In this paper, we present'. That points to a lack of structure: if you present 2 things, introduce them both together
#### Related work
- It's not clear what the selection criteria were for these ShEx tools. Is it ad hoc? How can you know that this is a relevant list then? Why are other editors for shape-validation-like languages, such as JSON Schema editors, SHACL editors, etc., not taken into account?
- https://www.semantic-web-journal.net/system/files/swj2834.pdf happens to have evaluated a graphical language for shape editing (not tied to SHACL, but the tooling currently only supports SHACL import/export).
- Why is RDFShape not part of this SOTA? Yes, it incorporates YASHE, but it should at least be mentioned IMO.
- Aha, the listed features in the introduction were found through the SOTA review. Maybe it makes more sense not to specify those features in the introduction, as readers don't know where they came from.
- It's weird that the functional comparison between YASHE and the related SOTA is done within the Related Work section. For me it's fine to add that in the same table, but the actual discussion should not be in the Related Work section (people should be able to skip the SOTA section and still understand your paper IMO).
- Typos: in the table: Ghrapic and Pritner
- You state "ShEx2- Simple Online Validator 1 (to abbreviate ShEx2 from now on)", but later in the text you keep using "ShEx2-Simple Online Validator". Either abbreviate or not, please.
- "the possibility to edit and create EntitySchemas. Wikidata offers a plain text editor to perform this task. This editor only offers 2 of the 17 features defined in Table1". Multiple things wrong with this statement:
- In the table, "Save entity schema" is stated; that doesn't sound the same as 'edit and create'.
- "2 of the 17", while I see 3 features offered in the table
- "defined in Table1": Table1 doesn't define anything, it provides some labels but the alignment between the discussed features and the labels is not always clear.
- Typo: "Table1": add a space.
- "This could facilitate the appearance of grammatical errors in the EntitySchemas" I don't understand this statement. How does 'being able to edit and create' facilitate the 'appearance of grammatical errors'? Or is specific validation in place?
- "Grammatical error detection": Is this "Error Checking" in the Table? This is not clear
#### Description
- Apart from the title being very vague, the ToC stated that 'only' the architecture would be described; this section contains much more.
- "server-side": don't you mean client-side?
- "It takes care through the defined prefixes": what prefixes are you talking about? Prefix.cc prefixes? Q and P?
- Why is the 'tooltips' feature not part of the list?
- Also, drop the list and make use of paragraphs, much more readable :)
- Rephrase: "over which we pass our mouse" Isn't 'hover' a more conventional way to describe this?
- Considering the limitations: how important/impactful are those limitations? Are these marginal cases or very impactful limitations?
#### Methodology
- Where does the questionnaire come from? How are you sure you are asking the right questions?
- The percentages are not clear: what do 16%, 13%, and 71% mean? Something like '16% of all items to be generated were prefixes'?
- The discussion about the ratio is overly long; the same formula is just repeated 3 times. This could be solved more elegantly.
- CPun is not used in the formula at the end of Section 4. Is this a typo in the formula, or is CPu 'the fastest' user? Also, shouldn't the formula use 'precision{x} (precision of user x)', where the top part of the division is Tux? Also, you mention both precision and accuracy: which is it?
- It seems no qualitative responses were gathered from the test audience. This is quite a big limitation of your experiment IMO.
#### Results
- Statistical analysis: you are comparing textual with visual editors. It feels very logical to me that a visual editor uses fewer keystrokes and more mouse clicks on average. What value does your statistical analysis in that regard give, except for the naive conclusion of my previous sentence? Is this relevant to ShEx editing? Why didn't you compare textual editors with textual editors and visual editors with visual editors? --> It turns out you specify this in the discussion section, but it should be clear in the results that you also compare textual editors solely with textual editors.
- It is not clear why the Tukey and Scheffé tests are both used. Please argue why (because now it is hard to compare, and it feels as if they were chosen opportunistically).
- "Despite this, YASHE and ShExAuthor obtain lower values": I assume you mean 'for Time consumed', but it is not stated as such.
- How is it possible that A11 and A12 don't have any results for ShexAuthor, Wikidata, and (for A11) ShEx2? Please explain.
#### Conclusion and Future Work
- I am missing generalizable conclusions.
- Future work is severely lacking. It feels as if the tool is perfect, and only the experiment should be repeated at a larger scale. What features were deemed most relevant? What features were lacking?