ImageSchemaNet: Formalizing embodied commonsense knowledge providing an image-schematic layer to Framester

Tracking #: 3084-4298

Authors: 
Stefano De Giorgis
Aldo Gangemi
Dagmar Gromann

Responsible editor: 
Guest Editors Commonsense 2021

Submission type: 
Ontology Description
Abstract: 
Commonsense knowledge is a broad and challenging area of research which investigates our understanding of the world as well as human assumptions about reality. Deriving directly from the subjective perception of the external world, it is intrinsically intertwined with embodied cognition. Commonsense reasoning is linked to human sense-making, pattern recognition and knowledge framing abilities. This work presents a new resource that formalizes the cognitive theory of image schemas. Image schemas are dynamic conceptual building blocks originating from our sensorimotor interactions with the physical world; they enable our sense-making cognitive activity to assign coherence and structure to the entities, events and situations we experience every day. ImageSchemaNet is an ontology that aligns pre-existing resources, such as FrameNet, VerbNet, WordNet and MetaNet from the Framester hub, to image schema theory. This article describes an empirical application of ImageSchemaNet, combined with semantic parsers, on the task of annotating natural language sentences with image schemas.
Tags: 
Reviewed

Decision/Status: 
Accept

Solicited Reviews:
Review #1
Anonymous submitted on 27/Apr/2022
Suggestion:
Accept
Review Comment:

1 Quality and Relevance
=======================
This paper contributes an ontology for image schemas, ImageSchemaNet,
which provides an image-schematic layer within the Framester hub.
ImageSchemaNet links to existing resources such as FrameNet, WordNet, and
VerbNet. The authors describe ImageSchemaNet and its vocabulary,
and perform an evaluation using OpenSesame and FRED (frame-based
parsers).

The revised version contains many notable additions (see below) that
improve clarity and readability. The edited version still lacks a README
file, although the authors provide links to an endpoint that can be
tested. In response to other reviewer feedback about the completeness of
the resource, the authors added a query to retrieve all image schemas
(IS) and spatial primitives (SP) in Appendix B.

Questions and concerns were adequately addressed in the resubmission
response.

2 Illustration, clarity and readability
=======================================
In this new version, the authors added more figures for clarity, as
suggested: a flow diagram in Appendix E and an example about the
"shape puzzles" game in a footnote in Section 3.1. The authors also
updated Figure 1 and Figure 2 to make them more readable.

The authors added an example explanation in Appendix C. Figure 6 in
Appendix E was added as a step-by-step schema that explains both the
components of ImageSchemaNet and the process of building the resource.
The full knowledge graph and the activated image schemas are shown in
Appendix C.

With these additions, I am convinced that the paper is ready for
acceptance.

Review #2
Anonymous submitted on 08/May/2022
Suggestion:
Accept
Review Comment:

This manuscript was submitted as 'Ontology Description' and should be reviewed along the following dimensions: (1) Quality and relevance of the described ontology (convincing evidence must be provided).

This paper provides an ontology for image schemas built on FrameNet semantics. The ontology can be queried from Framester's endpoint. Image schemas are a common and important cognitive instrument for sense-making and for creating language expressions. Hence, this newly created ontology can be highly useful and relevant to the readers of this journal.
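
For instance, a minimal sketch of such a query issued from Python could look as follows. Note that the endpoint URL, the image-schema namespace IRI, and the class name are placeholders of my own, not taken from the paper; the actual IRIs should be taken from the repository README and the Framester hub documentation.

    # Minimal sketch of querying the ontology over SPARQL from Python.
    # The endpoint URL and the image-schema namespace below are placeholders,
    # not taken from the paper; substitute the IRIs documented in the
    # repository README / Framester hub.
    import requests

    ENDPOINT = "https://example.org/framester/sparql"  # placeholder endpoint URL

    QUERY = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX isch: <https://example.org/imageschema/>   # hypothetical namespace

    SELECT ?schema ?label WHERE {
      ?schema a isch:ImageSchema ;                    # hypothetical class name
              rdfs:label ?label .
    }
    LIMIT 50
    """

    # Standard SPARQL protocol: GET with the query as a parameter and a
    # JSON results Accept header.
    response = requests.get(
        ENDPOINT,
        params={"query": QUERY},
        headers={"Accept": "application/sparql-results+json"},
        timeout=30,
    )
    response.raise_for_status()

    for binding in response.json()["results"]["bindings"]:
        print(binding["schema"]["value"], binding["label"]["value"])

The same query could of course be pasted directly into the endpoint's web form; the Python wrapper is shown only because it makes the JSON results easy to post-process.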

(2) Illustration, clarity and readability of the describing paper, which shall convey to the reader the key aspects of the described ontology.

The revised version of this paper is clear and well written.

Please also assess the data file provided by the authors under "Long-term stable URL for resources". In particular, assess (A) whether the data file is well organized and in particular contains a README file which makes it easy for you to assess the data,
There is a README file in the GitHub repo with example queries. No data file is provided, but the entire ontology is accessible through Framester's hub.

(B) whether the provided resources appear to be complete for replication of experiments, and if not, why,
It is sufficient.

(C) whether the chosen repository, if it is not GitHub, Figshare or Zenodo, is appropriate for long-term repository discoverability,
Yes.

and (D) whether the provided data artifacts are complete. Please refer to the reviewer instructions and the FAQ for further information.

The current work includes only six types of image schemas. While there are other types of image schemas, at least we have a clear idea of what is covered.