Handling Wikidata Qualifiers in Reasoning

Tracking #: 3375-4589

Authors: 
Sahar Aljalbout
Gilles Falquet
Didier Buchs

Responsible editor: 
Guest Editors Wikidata 2022

Submission type: 
Full Paper
Abstract: 
Wikidata is a knowledge graph increasingly adopted by many communities for diverse applications. Wikidata statements are annotated with qualifier-value pairs that are used to depict information such as the validity context of the statement, its causality, provenances, etc. Handling the qualifiers in reasoning is a challenging problem. When defining inference rules (in particular, rules on ontological properties (x subclass of y, z instance of x, etc.)), one must consider the qualifiers, as most of them participate in the semantics of the statements. This poses a complex problem because a) there is a massive number of qualifiers, and b) the qualifiers of the inferred statement are often a combination of the qualifiers in the rule condition. In this work, we propose to address this problem by a) defining a categorization of the qualifiers and b) formalizing the Wikidata model with a many-sorted logical language; the sorts of this language are the qualifier categories. We couple this logic with an algebraic specification that provides a means for effectively handling qualifiers in inference rules. The work supports the expression of all current Wikidata ontological properties. Finally, we discuss the methodology for practically implementing the work and present a prototype implementation.
Tags: 
Reviewed

Decision/Status: 
Major Revision

Solicited Reviews:
Review #1
By Daniel Hernandez submitted on 21/Mar/2023
Suggestion:
Major Revision
Review Comment:

Summary

In this paper, the authors propose using a many-sorted logic to formalize the semantics of qualifiers in Wikidata. The problem motivation exemplifies inferences that could be drawn from the qualifiers. Some inferences require information from the different sorts of contexts that can be defined with qualifiers. The authors define a reduced set of sorts, and claim that these sorts can be used to provide general inferences over the data. They describe some inference rules over these sorts, and some constructors that define operations over contexts (e.g., the intersection between temporal contexts).

Originality

The originality is low. Patel-Schneider (Contextualization via Qualifiers. Emerging Topics in Semantic Technologies. ISWC 2018 Satellite Events) suggested taking a logic approach for the qualifiers and developed some logical rules in [10]. The authors of this paper state that they differ in that (1) the values are not opaque but can be accessed by functions (e.g., to obtain the intersection of two temporal intervals) and (2) qualifiers are organized in sorts (see page 15, lines 13-18). By organizing qualifiers in sorts, several qualifiers can be abstracted into a single sort. For instance, Wikidata has many qualifiers to describe temporal context, and in this paper, they represent all these temporal qualifiers as instants or intervals. However, this manipulation of time is not new, since it is also present in works that represent contexts with lattices. The authors claim that their method can make inferences that semi-lattices cannot (page 15, lines 27-28). The rules in the proposed work that do not follow lattices are the rule on page 11 (lines 22-29), the rule on page 12 (lines 9-10), and the rule on page 12 (lines 33-38). All these rules are related to sequences (which do not combine well with lattices) and annotations (which have no clear semantics). I would not define these two sorts (and the causality) as context but as additive information, as suggested by Patel-Schneider in Contextualization via Qualifiers. In general, the cited works using lattices provide a conceptual model from which rules can be inferred. On the other hand, the rules presented in this work are explained individually (sometimes wrongly, as pointed out in I6). I wonder whether some conceptualization can be done, so that the proposed rules can be derived from more fundamental principles.

Relevance

Reasoning over qualifiers can improve our capacity to make inferences over Wikidata, or to validate current data. Hence, it is a relevant problem to solve. The authors provide some new rules that can be applied to make inferences using Wikidata qualifiers. However, I am not convinced of the comprehensiveness of the proposed rules. There may be many other rules that could be added to this paper. I agree that specifying the semantics of each qualifier via rules is out of the question, and that a more abstract view is needed, but some of the proposed rules are very narrow. For instance, the rule on page 13 (lines 17-22) is about a particular relation "spouse", a particular qualifier "date of death", and a particular end cause "death." This rule does not work if we replace "spouse" with "parent."

Quality

The paper is, in general, easy to follow. However, I think there are several issues that should be addressed. The main ones are: clarifying the comprehensiveness of the proposal, explaining how incomplete information is handled (I5), and distinguishing between context and additional information (e.g., I6). I also miss some more general principles behind the proposed rules.

Issues and questions

I1. The paper states that Wikidata has more than 9 thousand qualifiers, of which 200 appear in more than 10 thousand statements, and "due to the massive number of qualifiers, specifying the behavior of each qualifier in each rule is out of the question." Then it is specified that qualifiers are organized in the categories that are in Table 1 (and the annotations' category). The file http://ke.unige.ch/wikidata/Statistics/qualifier-prominence.csv contains information about qualifiers and categories. However, not all qualifiers have a category. How can I know that the proposed categories are representative of all the qualifiers in Wikidata? Are these categories defined over the 200 qualifiers that appear in more than 10 thousand statements?
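
For illustration, prominence counts like these could in principle be recomputed with a query along the following lines. This is my own sketch, not part of the paper's material, and it would likely need a local copy of the dump or a very generous timeout rather than the public query service:

    # Count, per qualifier property, in how many statements it appears.
    # In the Wikidata RDF representation, each property entity is linked
    # to its qualifier predicate via wikibase:qualifier.
    PREFIX wikibase: <http://wikiba.se/ontology#>
    SELECT ?qualifierProperty (COUNT(DISTINCT ?stmt) AS ?statements) WHERE {
      ?qualifierProperty wikibase:qualifier ?pq .
      ?stmt ?pq ?value .
    }
    GROUP BY ?qualifierProperty
    ORDER BY DESC(?statements)

Reporting such counts per category would make it easier to judge whether the proposed categories cover the qualifiers that actually matter in practice.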

I2. The rules the authors define to make inferences over the qualifiers have very few constructors to encode contexts. For example, for the temporal context, these functions are timeValidity(), interval(), instant(), and undefined. On the other hand, there are several qualifiers that can be related to time. I am uncertain whether all temporal contexts can be defined in terms of intervals or instants. For instance, the Christian Lent, the Islamic Ramadan, or the Germanic Yule are recurrent periods that have been repeating for a long time. The Wikidata property "day in the year for periodic occurrence" can be used for these recurrent contexts (although it may be uncommon as a qualifier). I would like to know what percentage of the qualifiers can be expressed in terms of the constructors presented in Table 2.

I3. Why not separate the temporal validity context from the spatial context? It seems that temporal intersections do not affect spatial intersections.

I4. The authors say that the validity domain has several dimensions: time, space, work, taxon, etc. However, Table 2 shows constructors for only 2 dimensions: time and space. Are other dimensions considered in the inference rules?

I5. I have concerns about how the ontological rules deal with the lack of information. For example, the rule for the "subclass of" property (page 11, lines 25-28) evaluates testIntersect(V1, V2). What happens if V2 does not provide temporal qualifiers? Does the testIntersect function then return unknown because the temporal scope is considered undefined, or does it return true because V2 is considered not temporally constrained? For instance, the statement that cats are animals may have an empty validity context to encode that the fact is always true. The same happens with intervals where one of the components is undefined, as happens in line 37, page 12. Does this undefined mean unconstrained, or unknown? Depending on the chosen semantics, the semantics of intersect may lead to uncertain answers, or to fewer inferences than one might expect.
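
To make the difference concrete, the two readings lead to different filters in a compiled query. The following sketch (my own, over the standard p:/ps:/pq: reification predicates, not the authors' compiled rule) implements the "unconstrained" reading for the instance-of/subclass-of case; removing the BOUND guards would give the "unknown" reading, where a missing bound blocks the inference:

    PREFIX p:  <http://www.wikidata.org/prop/>
    PREFIX ps: <http://www.wikidata.org/prop/statement/>
    PREFIX pq: <http://www.wikidata.org/prop/qualifier/>
    SELECT ?x ?c2 WHERE {
      ?x  p:P31  ?s1 . ?s1 ps:P31  ?c1 .    # ?x instance of ?c1
      ?c1 p:P279 ?s2 . ?s2 ps:P279 ?c2 .    # ?c1 subclass of ?c2
      OPTIONAL { ?s1 pq:P580 ?start1 }  OPTIONAL { ?s1 pq:P582 ?end1 }
      OPTIONAL { ?s2 pq:P580 ?start2 }  OPTIONAL { ?s2 pq:P582 ?end2 }
      # "Unconstrained" reading: a missing start/end never blocks the overlap
      # test. Without the BOUND guards, a comparison on an unbound variable
      # errors out, the FILTER fails, and the inference is not produced.
      FILTER(
        (!BOUND(?end1) || !BOUND(?start2) || ?end1 >= ?start2) &&
        (!BOUND(?end2) || !BOUND(?start1) || ?end2 >= ?start1)
      )
    }

The paper should state explicitly which of these two behaviours testIntersect is intended to have.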

I6. The rule for symmetric properties seems to be wrong (page 12, lines 9-12). If the marriage of Alice with Bob has endCause=death, this means that the death of Bob was the cause of the end of the marriage. If we use the rule, then we get that the marriage of Bob with Alice has endCause=death. This implies that the death of Alice was the cause of the end of the marriage, which is not true. It should be noted that causality is not a context but additional information (as I mentioned previously), so it has the same issues with symmetric properties as annotations and sequences.

I7. In Definition 1 the term S is defined twice, as a set of sorts, and as a predicate symbol.

I8. Definition 1 is too long. There are many concepts involved in this definition. It should be divided into several smaller definitions.

I9. In Definition 1 it is said that F is a set of function symbols, but then (in line 18) it is said that F contains functions instead of function symbols.

I10. In Definition 1 it is not clear what the theory Spec is. If it is, e.g., a first order theory, then it should be stated in the definition.

I11. It is not recommended to make an implication in a Definition (lines 15-16, page 7).

I12. The space before "where" in line 9, page 8 should be removed.

I13. In lines 29-30, page 8, several dimensions are mentioned for the validity context. However, in the paper I only see two: time and space. Does the paper consider more dimensions? This sentence suggests that the paper is more comprehensive than what I can see about the encoding of the validity sort in Table 2.

I14. In lines 34-38, page 8, it is indicated that provenance includes "sources, techniques, etc." Again, the "etc." suggests that the paper is more comprehensive than what I can see in Table 2.

I15. The explanation in lines 39-38, page 11, about why the resulting causality sort of the rule is the union of the causalities is not convincing. I think that this rule needs an example where statements with predicates "instance of" and "subclass of" have causalities.
(In general, some additional examples would help the reader understand the rules.)

I16. I suggest not using the word "complex" in line 41, page 14, because it can be interpreted as algorithmic complexity (it also occurs together with time + space). Instead, it should be said that "a mapping needs to take into account several considerations."

I17. In lines 44-46, page 14, it is said that several SPARQL queries are executed iteratively to obtain all the inferred statements. You should report how many iterations were executed, how long they take, how many inferences each rule produces, how many different properties and qualifiers are involved, and their proportion relative to the qualifiers that are not involved in inferences. These statistics can be useful to understand how comprehensive the proposal is.

Review #2
By Maximilian Marx submitted on 22/Mar/2023
Suggestion:
Major Revision
Review Comment:

The article proposes an approach for context-aware reasoning over
Wikidata, where the context is given by qualifiers on Wikidata
statements. The approach groups qualifiers into several different
categories and considers a many-sorted first-order logic setting,
where each qualifier category corresponds to a sort and is accompanied
by a background theory, specified in CASL. Building on this, the article
presents several inference rules over qualified statements. Finally, a
possible implementation is outlined where rules are compiled to SPARQL
construct queries, with the background theory implemented as
Javascript functions in the triple store.
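
For concreteness, a compiled rule could look roughly like the
following CONSTRUCT query (my own sketch over the p:/ps:/pq:
reification predicates, not the authors' actual compiler output; it
covers only the symmetric "spouse" property and copies only the two
temporal validity qualifiers):

    PREFIX p:  <http://www.wikidata.org/prop/>
    PREFIX ps: <http://www.wikidata.org/prop/statement/>
    PREFIX pq: <http://www.wikidata.org/prop/qualifier/>
    CONSTRUCT {
      ?y p:P26 _:s .              # inferred inverse "spouse" statement
      _:s ps:P26 ?x ;
          pq:P580 ?start ;        # copied start time, if present
          pq:P582 ?end .          # copied end time, if present
    }
    WHERE {
      ?x p:P26 ?stmt .
      ?stmt ps:P26 ?y .
      OPTIONAL { ?stmt pq:P580 ?start }
      OPTIONAL { ?stmt pq:P582 ?end }
    }

Template triples whose variables remain unbound are simply dropped,
so statements without temporal qualifiers still yield an inverse
statement. The article would benefit from showing at least one such
compiled query and explaining how the background-theory functions are
invoked from it.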

I am not particularly happy with the choice to base the approach on
the RDF dumps of Wikidata, rather than on the Wikibase data model
itself. This gives rise to some semantic inaccuracies, such as
calling, e.g., “prov:wasDerivedFrom”, “wikibase:rank”, or “rdf:type”
qualifiers (they are not, but rather encode references (which can,
indeed, have their own qualifiers!), statement ranks, and a variety of
other things in the RDF representation). Furthermore, it excludes,
e.g., the special “no value” value (used, e.g., in conjunction with a
“follows” qualifier to signal the start of a sequence) from some of
the inference rules, as these don't correspond to nodes in the RDF
dump, but rather lead to “stmt a wdno:P…” triples. This RDF influence
is also reflected in Definition 1, where the first three sorts of the
single predicate symbol are all “resource”. However, resource
explicitly includes only Wikidata items (by virtue of mentioning
Q16222597) and “literal denotations” occurring as “subject, object, or
qualifier value in a statement”. In particular, this does not include
other Wikidata entities, such as properties (yet any statement in
Wikidata always has a property in the predicate position, and
properties can also appear as subjects or objects), but also excludes,
e.g., IRIs as objects, while still allowing for, e.g., literals as
subjects (which is neither allowed in Wikidata statements nor in RDF
triples).
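
To illustrate the distinction, this is roughly how a single qualified
statement looks in the RDF dumps (an illustrative pattern, with
entity and property IDs chosen for the example): qualifiers appear as
pq: triples on the statement node, whereas rank, references, and
"no value" snaks use different mechanisms:

    PREFIX wd:       <http://www.wikidata.org/entity/>
    PREFIX p:        <http://www.wikidata.org/prop/>
    PREFIX ps:       <http://www.wikidata.org/prop/statement/>
    PREFIX pq:       <http://www.wikidata.org/prop/qualifier/>
    PREFIX prov:     <http://www.w3.org/ns/prov#>
    PREFIX wikibase: <http://wikiba.se/ontology#>
    SELECT ?stmt ?start ?rank ?reference WHERE {
      wd:Q76 p:P39 ?stmt .                    # Obama, "position held"
      ?stmt ps:P39 wd:Q11696 ;                # President of the United States
            pq:P580 ?start ;                  # qualifier: start time
            pq:P1365 wd:Q207 ;                # qualifier: replaces G. W. Bush
            wikibase:rank ?rank ;             # rank, not a qualifier
            prov:wasDerivedFrom ?reference .  # reference node, not a qualifier
      # A "no value" qualifier snak produces no pq: triple at all; it is
      # encoded as a type on the statement node, e.g. ?stmt a wdno:P1534 .
    }

A formalization based on the Wikibase data model itself, rather than
on this RDF encoding, would avoid conflating these different kinds of
annotations.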

Some of the proposed inference rules don't seem to be well-typed
(I might be misreading the CASL specification, but in that case some
explanation in the article would clearly be required), e.g.,
the marriage and death rule passes the date (of sort resource) to the
interval function, which takes either an instantTime or a duration as
its second argument. Another example is given by the sequence rules, which
call startTime/endTime (expecting sort timeInterval) on values
returned from extractTime (returning sort time).

Other inference rules don't seem to be particularly useful:
“equivalent property” and “equivalent class” are used on Wikidata
exclusively for alignment with external ontologies, i.e., they should
(barring modelling errors) never have Wikidata entities as
objects. Thus, e.g., the second “equivalent property” rule will only
ever match if the conclusion is already present in Wikidata, whereas
the first and third rules cannot infer a legal statement.

Yet other inference rules lack explanations for the design choices
made: E.g., why does the “subclass of” rule check that the validity
contexts intersect, but the “subproperty of” rule does not? More
generally, a lot of the material in the article is presented as-is,
without any discussion of the choices and possible trade-offs made. As
another example, the authors mention that the “country” qualifier is
only sometimes used to signal validity, which might suggest that
assigning each qualifier to a single sort is too restrictive (and it
is not immediately obvious to me why having a single sort for all
qualifiers could not work).

I am also wondering what exactly can (and cannot) be expressed in the
proposed formalism. For example, consider the sequence rules and the
Obama example: Suppose we have a statement saying George W. Bush is
replaced by Barack Obama (with series ordinal 43), and a statement
saying that Donald Trump replaced Barack Obama (with series ordinal
45), could we have a rule inferring the statement from Figure 1?
Furthermore, would it be possible to _only_ infer this statement, and
not also two statements with just “replaces” and “replaced by”,
respectively (or even just not infer these two statements if the one
shown is already present)?

Some of the code, data files, and statistics are provided, but no
explicit long-term link to these resources is given. It seems that
the data is hosted with the authors' university, but it is non-obvious
whether this is suitable for long-term storage. While, e.g., the
results of compiling some of the rules to SPARQL construct queries
(and the compiler itself) are included, I have not been able to find
the rules themselves, and the syntax used is not documented (judging
from the grammar rules in the compiler, it is definitely not the
FOL-based syntax used throughout the article). I would also have
expected to find the CASL specification among the data (although it is
also printed in the appendix, having it available as a file would
simplify its use), but have not managed to find it.

I have also not been able to figure out how the provided
ModuleFunctions correspond to the CASL specification: some of the
functions specified are not present (e.g., timeValidity), others (such
as endTime) don't seem to match the specified behaviour on undefined
input. Unfortunately, the article contains no details on this.

The writing is easy to follow, though I have found some issues (see
below for a – possibly still incomplete – list). Two things that
should be standardised, however, are (i) spaces/no spaces in front of
footnote markers (compare, e.g., footnotes 1 and 2) and (ii) the
format used for dates (e.g., 9 June 1732, 25/11/1991,
16-02-2022). Also kindly note that the correct capitalisation is
“Wikidata”, not “wikidata”.

While reasoning about qualifiers is a very interesting topic, I cannot
recommend the article in its current form for acceptance. I can
imagine several distinct directions that could eventually lead to a
nice article: A comprehensive exploration of what kind of inference
rules can and cannot be expressed in the multi-sorted formalism is
certainly one such direction. A rigorous, formal specification of the
Wikibase data model and a well-designed background theory might be
interesting in their own right. Alternatively, a working pipeline as
outlined in the “Implementation” section, where I could load a
Wikidata dump, write some inference rules, and compute the inferred
statements would be a welcome contribution.

p1, l41: drop the space before “)”
p1, l44: “Where” doesn't begin a new sentence here, so should not be capitalised.
p2, l32: “If it is relatively” ~> “While it is relatively”
p3, l12: “intersection of statement” ~> “intersection of statements”
p3, l38: “Georges” ~> “George”
p3, l47: Note that properties with a property usage example always occur in a qualifier position (as this is how usage examples are modelled), but that does not make them qualifiers (indeed, usage as qualifiers might be forbidden by a property scope constraint).
p4, l9: “statement ;” ~> “statement;”
p4, l31: “2008” ~> “2009”
p4, l50: “are :” ~> “are:”
p6, l6: “Divorce being the value of the end cause qualifier” is an incomplete sentence
p6, l32: “constraint(P2302)” ~> “constraint (P2302)”
p7, l1: “an knowledge” ~> “a knowledge”
p7, l9/l13: this should be the same predicate symbol in both places
p7, l26: “)[” ~> “) [”
p8, l37: “provenances is” ~> “provenances are”
p10, l1: “Prominance” ~> “Prominence”
p10, l38: “level( i.e.” ~> “level (i.e.,”
p10, l42: “value of sorts” ~> “values of sorts”
p10, l43: “rules takes” ~> “rules take”
p10, l44: “atoms ψ contains” ~> “atoms and ψ is”
p11, l10: “qualifiers categories” ~> “qualifier categories”
p11, l47: “P_2) .” ~> “P_2).”
p12, l5: “describe more” ~> “further describe”
p12, l23: “dismiss also” ~> “also dismiss”
p12, l31: “Inspired from” ~> “Inspired by”
p13, l2: “qualifier” ~> “qualifier value”
p13, l9: “For example,” is an incomplete sentence
p13, l24: “dateofdeath” ~> “date of death”
p14, l39: the triples need spaces between the components
p15, l26: “lattices structures” ~> “lattice structures”
p16, l36: “subproperty property” ~> “subproperty of”?
p17, l25: “Type constraint” ~> “Subject type constraint”
p17, l50: “contaisn” ~> “contains”