Review Comment:
Here are my comments on the paper:
I found the paper a good, interesting read. It proposes an innovative approach to combining LLMs and KGs and offers a thorough evaluation.
I could not find information about the availability of data, code, or evaluation results. Without these, I cannot reach a final decision on the paper. Please provide access to these artefacts (for the reviewers, but also for the readers of the paper).
Beyond that, I do have a few rather minor suggestions for improvement:
p. 2, l. 7: Can you describe how LLM token limits compare to KG sizes, i.e., how many tokens would be needed to serialize your entire KG?
p. 2, l. 34: I am not sure GraphQL is the most relevant reference here. What about, e.g., Cypher?
p. 3, l. 1: I do not see a clear difference between RQs 1 and 2. Please provide more detail or merge them.
p. 4, l. 13: What about the SPARQL extension of SHACL?
p. 4, l. 37: Is multi-modality exploited in any way, or do you focus on text only?
p. 5, Fig. 1: The master-slave terminology is outdated and contested. I suggest changing the example.
p. 7, l. 26ff.: Do you have evidence for this claim?
p. 9, l. 15: Typo in the figure ("informaiton").
p. 9, l. 38 (The label...): I do not understand that sentence; please rephrase.
p. 10, l. 26: How do you prune?
p. 12, Algo 1: The algorithm assumes that the shortest path represents the connection between the entities the user is interested in. That is not necessarily the case, especially when user queries are terse and the user's intent is hard to infer. Can you deal with that?
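To make this concern concrete, here is a minimal sketch (entity and relation names are hypothetical, not taken from the paper): a plain BFS shortest path between two query entities can run through a generic hub node and miss the path that actually answers the user's question.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Plain BFS shortest path over an undirected adjacency list."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

# Hypothetical toy KG: the user asks how Aspirin relates to Willow.
# The informative path is Aspirin - SalicylicAcid - Salicin - Willow,
# but both entities also hang off a generic hub node ("Thing").
graph = {
    "Aspirin": ["Thing", "SalicylicAcid"],
    "Willow": ["Thing", "Salicin"],
    "Thing": ["Aspirin", "Willow"],
    "SalicylicAcid": ["Aspirin", "Salicin"],
    "Salicin": ["SalicylicAcid", "Willow"],
}

# BFS returns the 2-hop hub path, which carries no useful semantics:
print(shortest_path(graph, "Aspirin", "Willow"))  # -> ['Aspirin', 'Thing', 'Willow']
```

Penalizing high-degree hub nodes or considering the k shortest paths might mitigate this; it would be good to hear the authors' view.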
p. 12, l. 38: How do you prioritize?
p. 16, l. 2: Where can I find the rules?
p. 16, Table 5: In addition to the overall results, it would be interesting to see metrics for the individual questions.
Comment to the editors: I did not find the "Long-term stable URL for resources", but maybe I simply did not know where to look.
This manuscript was submitted as 'full paper' and should be reviewed along the usual dimensions for research contributions which include (1) originality, (2) significance of the results, and (3) quality of writing. Please also assess the data file provided by the authors under "Long-term stable URL for resources". In particular, assess (A) whether the data file is well organized and in particular contains a README file which makes it easy for you to assess the data, (B) whether the provided resources appear to be complete for replication of experiments, and if not, why, (C) whether the chosen repository, if it is not GitHub, Figshare or Zenodo, is appropriate for long-term repository discoverability, and (D) whether the provided data artifacts are complete. Please refer to the reviewer instructions and the FAQ for further information.