DWRank: Learning Concept Ranking for Ontology Search

Tracking #: 883-2093

Authors: 
Anila Sahar Butt
Armin Haller
Lexing Xie

Responsible editor: 
Guest Editors EKAW 2014: Stefan Schlobach and Krzysztof Janowicz

Submission type: 
Full Paper
Abstract: 
With the recent growth of Linked Data on the Web there is an increased need for knowledge engineers to find ontologies to describe their data. Only limited work exists that addresses the problem of searching and ranking ontologies based on a given query term. In this paper we introduce DWRank, a two-stage bi-directional graph-walk ranking algorithm for concepts in ontologies. DWRank characterises two features of a concept in an ontology to determine its rank in a corpus: the centrality of the concept within the ontology in which it is defined (HubScore) and the authoritativeness of the ontology in which it is defined (AuthorityScore). It then uses a Learning to Rank approach to learn the feature weights for the two ranking strategies in DWRank. We compare DWRank with state-of-the-art ontology ranking models and traditional information retrieval algorithms. This evaluation shows that DWRank significantly outperforms the best ranking models on a benchmark ontology collection for the majority of the sample queries defined in the benchmark. In addition, we compare the effectiveness of the HubScore part of our algorithm with the state-of-the-art ranking model for determining concept centrality and show the improved performance of DWRank in this aspect. Finally, we evaluate the effectiveness of the FindRel part of the AuthorityScore method in DWRank for finding missing inter-ontology links and present a graph-based analysis of the ontology corpus that shows its increased connectivity after extraction of the implicit inter-ontology links with FindRel.

Decision/Status: 
Accept

Solicited Reviews:
Review #1
By Laurens Rietveld submitted on 21/Jan/2015
Suggestion:
Minor Revision
Review Comment:

The original EKAW paper was also reviewed by me. That review is copied at the bottom of this review for reference (it is also available, as reviewer #2, at the following URL: http://semantic-web-journal.net/content/relationship-based-top-k-concept...).

The authors developed a method for searching and ranking ontologies for a given textual query.
An (offline) index is built using hub scores of concepts within ontologies and authority scores of the ontologies themselves. A learning-to-rank algorithm is applied, using hub/authority and text-relevancy features, to learn a ranking model.
Queries are answered using two filtering strategies, text similarity calculations, and ranking based on the offline indexes.
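For concreteness, the scoring step could look roughly like the following sketch; the feature names and the simple linear combination are my own assumptions for illustration, not necessarily the paper's implementation.

    # Illustrative sketch only: a learned linear combination of HubScore,
    # AuthorityScore and text relevancy, roughly in the spirit of DWRank.
    # Feature names and the linear form are assumptions, not the paper's code.

    def text_relevancy(label: str, query: str) -> float:
        """Trivial token-overlap stand-in for the paper's text similarity measure."""
        label_tokens = set(label.lower().split())
        query_tokens = set(query.lower().split())
        return len(label_tokens & query_tokens) / max(len(query_tokens), 1)

    def dwrank_score(concept: dict, query: str, weights: dict) -> float:
        """Score a candidate concept; `weights` would be learnt offline by the
        learning-to-rank step rather than set by hand."""
        return (weights['hub'] * concept['hub_score']           # centrality within its ontology
                + weights['auth'] * concept['authority_score']  # authoritativeness of the ontology
                + weights['text'] * text_relevancy(concept['label'], query))

    # Example: rank candidates returned by the filtering step.
    candidates = [
        {'label': 'Person', 'hub_score': 0.8, 'authority_score': 0.9},
        {'label': 'Agent', 'hub_score': 0.5, 'authority_score': 0.4},
    ]
    weights = {'hub': 0.4, 'auth': 0.3, 'text': 0.3}  # illustrative values, not learnt ones
    ranked = sorted(candidates, key=lambda c: dwrank_score(c, 'person', weights), reverse=True)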

Just as the conference paper, this paper is well thought out and well written.
Below, I will focus on the delta w.r.t. the previous paper.

The delta between both papers is good, and most of the feedback on the conference paper has been taken into account.
The original offline procedure is extended with a learning-to-rank algorithm, and the experiment setup and results sections benefit greatly from this extended journal version.
The experiment setup now better details aspects such as the queries used in the analysis or the effects of adding implicit ontology links.
The results section now better illustrates the impact of several important design decisions.

There are still some comments though:
I'm missing more explanation/discussion of the results, such as how your results impact your use case, or how these results provide insights that are interesting for other use cases.
Could you, e.g., discuss how this approach deals with newly emerged but well-defined ontologies? (This is discussed in the context of related work on page 2.)

Minor:
- page 2, left col, 1st par: 'also dedicated' -> 'dedicated'
- Make more explicit that this paper is a continuation of the EKAW paper (e.g. \thanks note in the title)

In general, this paper is a good improvement of the already well written EKAW paper.

####EKAW REVIEW####

Overall evaluation: 2 (accept)
Reviewer's confidence: 4 (high)
Interest to the Knowledge Engineering and Knowledge Management Community: 4 (good)
Novelty: 4 (good)
Technical quality: 4 (good)
Evaluation: 3 (fair)
Clarity and presentation: 4 (good)

Review
The authors developed a method for searching and ranking ontologies for a given textual query. An (offline) index is built using hub scores of concepts within ontologies and authority scores of the ontologies themselves.
Queries are answered using text similarity calculations, ranking based on the offline indexes, and two strategies for filtering.
The paper is interesting and well written. Below are my main comments:
- Why was a PageRank-like algorithm used for the HubScore calculation? When measuring the centrality of a node in a network, other network analysis algorithms such as betweenness centrality may work even better (see the sketch after these comments).
Explaining the results was a bit lacking:
- I understand the argumentation behind using artificial ontology concepts for data-type relations in the HubScore calculations. However, I would be interested in a better analysis of this. What happens to the performance with and without this particular trick, and for which types of ontologies is it more suitable?
- Other than Tables 1 and 2, I miss more statistics on the ontology benchmark. How connected is the ontology network? What is the average/median degree (see the sketch after these comments)? Depending on such figures, we might better understand the influence of the ontologies' authority calculation on the ranking.
- Why is a detailed description of the precision/recall of the filter step out of the scope of this paper? I would have particularly liked to see this part.
- I like the graphs and tables on increased performance. However, I would have easily traded the space of one of these graphs and tables for more insight. E.g., which concepts are incorrectly ranked (based on your evaluation outcome), and why? Is it the filtering step or the offline step? Is there a particular property of the concept or ontology that causes this?
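Both the centrality question and the connectivity statistics above could be checked quickly with standard graph tooling; the following is a rough sketch, assuming the concept and corpus graphs are available as networkx graphs (constructing them from the benchmark is not shown and is an assumption).

    import networkx as nx

    # Illustrative sketch: compare a PageRank-style score with betweenness
    # centrality on a concept graph, and compute basic connectivity statistics
    # for the ontology corpus graph.

    def centrality_comparison(concept_graph: nx.DiGraph):
        pagerank = nx.pagerank(concept_graph, alpha=0.85)        # 0.85 = damping factor
        betweenness = nx.betweenness_centrality(concept_graph)
        return pagerank, betweenness

    def corpus_stats(ontology_graph: nx.DiGraph) -> dict:
        degrees = sorted(d for _, d in ontology_graph.degree())
        n = len(degrees)
        median = degrees[n // 2] if n % 2 else (degrees[n // 2 - 1] + degrees[n // 2]) / 2
        return {
            'nodes': n,
            'edges': ontology_graph.number_of_edges(),
            'mean_degree': sum(degrees) / n,
            'median_degree': median,
            'weakly_connected_components': nx.number_weakly_connected_components(ontology_graph),
        }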
Minor comments:
- 'dumping factor' (page 4) => 'damping factor'
- For reproducibility reasons, I miss a link to the source code.
In summary, this is a good paper. There is still room for improvement, particularly in explaining the results (possibly in a journal version).

Review #2
By Yingjie Hu submitted on 20/May/2015
Suggestion:
Minor Revision
Review Comment:

This paper aims at providing a better ranking of ontologies given a query concept. The authors present methods to calculate the centrality of concepts and the authority of ontologies. They include these two features, in addition to the traditional textual similarity, in the ranking algorithm called DWRank. A ground-truth dataset has been used to train DWRank and tune the weights of the features. The authors evaluate DWRank by comparing it with existing benchmark approaches. In addition, the authors compare the performance of DWRank with fixed weights and with learned weights. The evaluation result shows that DWRank outperforms existing approaches and can produce a more meaningful ranking of the important concepts. This paper is well organized and provides a detailed, step-by-step description of the method and experiment design. The evaluation is concrete and comprehensive. I would recommend this paper for publication if the authors can address the minor issues listed below:

Page 2, paragraph 2: "In this paper we propose a new ontology concept retrieval framework..." In this paragraph, the authors give an introductory overview of the framework, which is fine. However, the content of this paragraph seems to overlap with Section 2.2. Thus, I would suggest that the authors shorten this paragraph and give only a brief introduction here.

Page 8, Section 4.1: a query string Q = {q1, q2, q3, ...}. I assume each q represents a single word in the input query. What if the query concept consists of multiple words? For example, the user might want to find an ontology for the concept of "natural disaster"; would this search term be divided into "natural" and "disaster"? What would be the potential consequences for matching, since ontologies typically combine two words into one to represent a concept, such as "NaturalDisaster"?
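One plausible way to handle this would be to split camel-case identifiers into word tokens before matching; the following is a minimal sketch of such a normalisation step, assuming (the paper does not say) that identifiers are tokenised this way.

    import re

    def split_camel_case(identifier: str) -> list:
        """Split an identifier such as 'NaturalDisaster' into lower-cased word tokens."""
        spaced = re.sub(r'(?<!^)(?=[A-Z])', ' ', identifier)  # space before internal capitals
        return spaced.lower().split()

    def matches(query: str, concept_identifier: str) -> bool:
        """True if every query word occurs among the concept's tokens."""
        return set(query.lower().split()) <= set(split_camel_case(concept_identifier))

    # 'natural disaster' would then match the concept 'NaturalDisaster'.
    assert matches('natural disaster', 'NaturalDisaster')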

Figure 1: it is difficult to visually differentiate the rectangles (representing offline learning) from the index boxes with rounded edges. Maybe the authors can use ellipses to represent the indexes.

The index for Equation 3 (immediately below Equation 2) is missing, and the following equation indexes are accordingly misnumbered.

Algorithm 1: line numbers are missing.

The very short sentence below Section 5.2 is unnecessary and can be removed.

Page 10, Section 5.2.1, Experiment 1, point 2: there is a typo in "a LTR algorithm, a ranking model is learnt *form* the hub score, the authority score and the text relevancy ...", which should be "from".

Table 4: there is a duplicated "51" for the SSN ontology.

Page 12, right text column at the top: "This can be seen in Table 7 that presents the top 5 concepts of the FOAF ontology ranked by HubScore and CARRank." "Table 7" should be "Table 6". All the indexes, including those of figures, equations, and tables, should be double-checked.

Page 15, the future-work paragraph: the authors so far only mention the efficiency issue; some more discussion could be added. For example, is it possible to include an online learning process in the ranking model? Such a model may constantly improve its performance based on user feedback (instead of using only a pre-trained offline model).
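As one concrete possibility, the learned feature weights could be nudged after each user interaction with a simple pairwise update; the sketch below is only an assumed illustration of such an online step, not something the paper describes.

    def online_update(weights: dict, clicked: dict, skipped: dict, learning_rate: float = 0.01) -> dict:
        """One perceptron-style pairwise update: if a result the user skipped
        currently scores at least as high as the one they clicked, move the
        weights towards the clicked result's feature values."""
        score = lambda features: sum(weights[k] * features[k] for k in weights)
        if score(clicked) <= score(skipped):
            for k in weights:
                weights[k] += learning_rate * (clicked[k] - skipped[k])
        return weights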