Semantic Web User Interfaces: A Systematic Mapping Study and Review

Paper Title: 
Semantic Web User Interfaces: A Systematic Mapping Study and Review
Authors: 
Ghislain Hachey, Dragan Gasevic
Abstract: 
Context: Exploration of user interfaces has attracted a considerable attention in Semantic Web research in the last several years. However, the overall impact of these research efforts and the level of the maturity of their results are still unclear. Objective: This study aims at surveying existing research on Semantic Web user interfaces in order to synthesise their results, assess the level of the maturity of the research results, and identify needs for future research. Method: This paper reports the results of a systematic literature review and mapping study on Semantic Web user interfaces published between 2003 and 2011. After collecting a corpus of 87 papers, the collected papers are analysed through four main dimensions: general information such as sources, types and time of publication; covered research topics; quality of evaluations; and finally, used research methods and reported rigour of their use. Results: Based on only the obtained data, the paper provides a high level overview of the works reviewed, a detailed quality assessment of papers involving human subjects and a discussion of the identified gaps in the analysed papers and draws promising venues for future research. Conclusion: We arrive at two main conclusions: important gaps exist which hinder a wide adoption of Semantic Web technologies (e.g. lack user interfaces based on semantic technologies for social and mobile applications) and experimental research hardly used any proven theories and instruments specifically designed for investigating technology acceptance.
Submission type: 
Survey Article
Responsible editor: 
Natasha Noy
Decision/Status: 
Reject and Resubmit
Reviews: 

Solicited review by Lora Aroyo:

The paper's goal of giving a good overview of the literature on semantic web user interfaces is a welcome one. This goal should be stressed more explicitly in the abstract, where it could also be clarified that the focus is on the (published) science more than on the technology of semantic web user interfaces itself. This emphasis is relevant, since one could debate to what extent the technological evolution is captured in, and driven by, the scientific literature and its considerations.

In the abstract: 'lack' -> 'lack of'; reconsider the word 'exploration'.

Section 1 repeats the same ambition as in the abstract and could also benefit from being more precise: the topic is not just the state of the art of semantic web user interface technology (that is more or less implicit), but rather the state of scientific approaches to that subject area.

In 2.0: missing ')', 'listed item' -> 'listed items'. In 2.1, where the design is presented, it would be best not to include outcomes already, e.g. 'reveals gaps': just the design is sufficient at this point. In 2.1, when the question is asked about software engineers, why are other relevant roles, like interaction designers, not considered? It feels as if the inspiration from the software engineering studies is taken too far here. In connection with the numbers in Figure 2, it would also be interesting to give the numbers of publications from which these were selected (the total sizes). Since the last exclusion criterion is perhaps controversial (with regard to numbers and measuring impact), a consideration of its implications (and its justification) should be added. The explanation of the third main variable (p. 5) could be improved. In 2.4 something is said about who conducted the study, but this was not done earlier, where it could be relevant for understanding the design and coding-book decisions. Including the quality assessment of the studies (not the papers) is a good element of the whole study, but perhaps the role and aim of the assessment could be put in a broader perspective: a difference between the quality as assessed in this step and the quality perceived when the papers were accepted could mean and imply many things. It is worth objectively reporting potential differences (and interpreting them afterwards). Give the number of papers after which a 90% coding agreement was reached.

In the third section, where findings are reported, it would be good to separate the objective reporting of findings from the more subjective interpretation and explanation. Some of the interpretations could be challenged, while the reported findings would still be a good read. It is also wise to differentiate between the semantic web and its evolution and adoption in the literature on the one hand, and semantic web user interface research on the other: it is natural that a focus on UIs, for example, comes later in time. In 3.1 the actual main publication venues could be mentioned (to see where the focus on UIs is best served); for example, it would be interesting to see how SWUI shows up in UI venues compared to SW venues. It would also be interesting to know how the corpus relates to the often applied principle of 'first conference, then journal'. A section like 3.2.1 could perhaps start or end with a short demarcation of the aspect: an aspect like a UI for SW data can be perceived from several angles, and it would be good to consolidate how the aspect is considered, and perhaps defined, in this study. When discussing querying and search with UIs (3.2.3), it appears that the authors' focus is on the underlying (?) mechanisms for querying and searching, whereas the UI would perhaps aim to abstract from them: it is not clear how this aspect is defined (which makes it hard to judge whether all relevant papers are included properly). Overall, in 3.2 it would be good to describe and justify the authors' framework (of study) before mapping the papers onto it. When opening the discussion in 3.3 on human-involved evaluation, it would be good to indicate which papers could and should have such an evaluation, and which papers were actually proposing a new UI (design) and could seek justification in validating their design through an implementation (when the ambition is to have model-driven UI code generation, for example): it would be good to see (perhaps at the end) whether the different approaches were justified adequately by their authors. The presentation of the grades in 3.4 would benefit from an explanation of the grading scheme (to understand a remark about passing). In 3.5 it is very confusing that concepts like users (of a system or UI) and readers/audience of a paper or scientific contribution are mixed. Combined with the criticism of the studies, this could also lead to the observation that at this stage of the research (!) authors are still targeting each other, to collectively make some steps forward (towards the 'real' good studies).

Section 4 shows how the authors of this paper have connected rigour to human-involved evaluation: that is a choice that can be challenged (as indicated above), and the authors should be careful with it. Research on UIs can also be targeted at the software implementation process, and then different justifications would be acceptable. In 4.1, after the facts are given in the figure, the bullets that follow are too much of an ad-hoc discussion, enumerating questions. Where a structured approach was advocated and used in the previous parts of the paper, it is missing here. It is telling that most suggestions made in these bullets are not associated with references. When discussing software engineering in 4.2, it is relevant to remark to what extent software plays a role in the vision of the papers studied: browsing typically puts software in a generic position, and with semantic web browsing one could aim for a similar position, so how much 'software engineering' is relevant can differ. Here too, a rough definition of the authors' take on this aspect could help the reader understand the subsequent subjective interpretations of it. Most sections in 4 combine a detailed report of findings (which a reader wants to interpret and whose quality a reader wants to gauge) with a subjective discussion by the authors of what could be done: it feels that these should be split. Reporting only the main findings, looking backwards, and leaving most of the detailed numbers to the appendix would also shorten the paper and highlight the big picture better. A more speculative, forward-looking discussion of the findings can then follow. Regarding the limitation mentioned in 4.7 on the classification schema, it would also be interesting to see to what extent the authors of the papers considered would agree with the classification themselves. That could also shed light on their ambitions in terms of method and rigour (and in the long run perhaps even serve the implicit ambition of this paper, by authors paying more attention to some of the lessons learned from this study).

In the conclusion some main lessons are given, but these do not seem to follow directly from the earlier sections. It would be good to build the foundation for these lessons in the Discussion section; the conclusion can then simply report the main elements of these suggestions in a 'take home' manner.

The whole paper is to be praised for its ambition to add what is now (more or less) missing. The outcome is nice, although the presentation could be improved, mainly by separating (1) the identification of the need to do this, (2) the identification of the instruments to be used, (3) the actual outcomes of applying the instruments, and (4) some subjective interpretation of the outcomes. The first three are without doubt nice contributions for many in the field to read.

Solicited review by Valentina Presutti:

The paper proposes a survey on Semantic Web user interfaces, with the aim of assessing the state of the art in the field and distilling the main requirements for future research.

The methodology used by the authors for developing this contribution appears to be well designed and based on established practices (mainly in software engineering): a systematic literature review and mapping study is performed, and the authors provide references to scientific work that supports the chosen methodology. My impression is that this is the only strength of the paper in its current state. In fact, such a detailed description of the adopted methodology raises the reader's expectations (at least this reader's), which are unfortunately disappointed in the rest of the paper. The authors fail to provide a good introduction to, and overview of, the topic, as well as a clear description of, and motivation for, the requirements that have emerged from the analysis of the state of the art in the field. The authors focus on judging how good, on average, the papers on Semantic Web user interfaces published so far are, instead of concentrating on introducing basic concepts, pointing to established practices in the field, and analyzing the types of existing approaches, their strengths and weaknesses, and open issues.

In other words, after reading this paper I personally have not reached a better understanding of what (if any) differences and challenges Semantic Web technologies have brought to the design and development of user interfaces, nor of what the open issues and the available technologies are. However, all this information is sparsely and superficially reported in the paper, which makes me think that the authors should be able to adequately revise the presentation of their work and resubmit a better version that fulfills these requirements, by including examples and describing the most promising, and possibly most useful, technologies available so far.

As far as comprehensiveness is concerned, the paper can be improved. I find it surprising that Fresnel and related widgets are missing, as are ontology editor plugins for visualizing ontologies, and the authors may want to analyse the contributions collected by the IUI workshop series on "Visual Interfaces to the Social and Semantic Web", which perfectly fit the topic and are completely missing.
In fact, the paper selection has been based on systematic querying of academic databases, and the papers have then been classified based on their provenance, e.g. journals, conferences, etc. This approach is debatable, and it has produced at least some gaps, such as the ones mentioned above. A minor comment in this regard is that the provenance of papers could be reported in more detail, by providing information about the conferences and journals that turned out to be the main sources of contributions in the field. A comparative analysis of existing frameworks and tools would also be desirable.

Another aspect deserving attention is readability. Besides the need to revise the English, the structure of the paper needs improvement. Many concepts that drive the analysis criteria are introduced very early in the paper, all at once and out of context, and are then used many pages later to present the results of the review, making it very hard for the reader to follow the narrative. I had to go back many times and search for definitions or descriptions in previous sections in order to understand, e.g., the content of a table. For example, the quality assessment criteria are described on page 7 and then used on page 14. A main problem as far as presentation is concerned (maybe also affecting the implementation of the methodology in this case) is that the authors introduce a number of classifications that drive the analysis and quality assessment criteria without motivating these choices or pointing to other sources supporting them. This issue makes the whole analysis and discussion confusing and sometimes naive. Most of the tables are wrongly referenced in the text, and in most cases they are not explained and commented on exhaustively. Section 3.2 is supposed to contain a description of the existing approaches and technologies. It is instead a list of references to state-of-the-art papers with a short mention of their main topics. This section is useless in its current form; I would have expected examples with screenshots and some details on the analysed approaches (when available) and on their adoption and perception by users. Also in this case the authors propose a classification of works without explaining its rationale, and they do not answer questions such as: what are the most advanced and stable works in a category? What are its open issues? What are the most adopted technologies, and in what domains? Etc.

Some detailed comments:

The opening sentence of the paper claims that researching novel user interfaces is essential for the success of the Semantic Web vision. Furthermore, the authors state that this is especially important if the added value of the Semantic Web is to be shown to users. Although I agree on the importance of studying novel user interfaces (or empowering existing ones) in order to exploit the potential benefits of semantic web technologies, I disagree that this is important because users have to "understand" such benefits; they just have to benefit from them! Also, why do the authors think that novel types of interfaces are needed? What are the limits of existing interfaces? Could it be that semantic web technologies can be used to support better interaction in existing interface types?

Most of the observations and interpretations of the figures and tables are not accompanied by an adequate explanation of the rationale behind them. For example, on page 15 the authors deduce from Fig. 3 that "most of the studies targeting scientists and engineers were mainly done in-house". I cannot deduce this from that figure; the authors should make an effort to explain the rationale of their deductions.

The main quality assessment criterion applied by the authors for analyzing the selected papers refers to the evaluation setting that the papers report. Although this aspect is an important one, on its own it does not give an idea of what actually is available and what is missing in the field of user interfaces for the semantic web. Furthermore, given this strong focus, one would expect an extensive summary of the most appropriate evaluation methods and shared good practices for validating user interfaces for the semantic web. Instead there are only a few references to important works, accompanied by very short comments.

Section 4 is mainly characterized by naive discussions and very vague suggestions that are not clearly associated with the tables included in the paper. Most of the tables (as mentioned above) are not explained in detail, nor are their dimensions clarified and motivated.

Minor comment:
- the number of papers selected for the quality assessment is not consistent across the sections that mention it; sometimes it is 16, sometimes 17, and sometimes 14.

Solicited review by Tom Heath:

This paper describes a systematic review and mapping of literature related to UIs and Semantic Web technology. The topic overall is highly relevant and important as the community at large seeks to understand how these technologies can best deliver value to end users. There is a distinct lack of primary and review literature in this area; therefore, coupled with the authors' efforts at a rigorous analysis, this paper is to be welcomed.

That said, I have a number of issues with the paper that cause me to recommend major revisions. The most significant of these is the quality and depth of the analysis itself. This isn't a comment on the methodology (see below) but simply reflects that I'm not sure what we actually learn from the analysis in the paper. There is plenty of material presented but this isn't effectively rolled up into key take-home messages. As a reader I was left with the feeling "so what?". Related to this is a structural issue; I would place all the bubble charts and their descriptive text in the "Results" section and use Section 4 to actually discuss what can be learned from these.

My second major issue (which may actually be an underlying cause for the former) is that I'm not sure how revealing many of the bubble charts actually are. Currently they seem to largely serve as a more attractive way to present summary statistics. Specific issues with these:

- It's very hard for the reader to distill concrete lessons from the plots (maybe group them onto one or two consecutive pages for side-by-side comparison, at the expense of being contiguous with the text that describes them). In Figure 10 the axes appear reversed, which doesn't help.

- It's not always clear what the basis is for the categories used on the axes of these plots; these are sometimes referred to in the text, but I found it hard to form a mental model of each of them and how they were derived. They seem too ad-hoc, and too much of the methodology around this issue is "glossed over", making it hard to assess their validity and limiting the ability to reproduce the analysis. As an aside, you may find my "A Taskonomy for the Semantic Web" paper in SWJ interesting as a way to unify the search/browse/communication/etc. applications dimension, or as a basis for future analysis.

- Is there any basis to the ordering of the items on the axes? If not then adding some may help, e.g. putting all "experiments" sequentially to help visually distinguish them from others (e.g. simulation).

- The colouring in the plots would not translate well to greyscale and is also non-continuous, meaning the reader has to constantly refer to the key to determine levels of rigour reflected in each bubble. Increasing shades of grey would be a better option for colouring.

Returning to the issue of the conclusions that can be drawn from the analysis, in many cases the numbers of studies assigned to each bubble are small, yet the authors still draw conclusions from these (e.g. "studies of x were more rigorous" or similar) when N may be in single digits. It doesn't appear to me that such conclusions can be drawn. Yes, this can be seen as a product of the volume of literature available, but that simply means that conclusions need to be more modest/conservative/tentative. There is rather too much speculation/conjecture in Section 4.

Regarding the literature available, I'm surprised that the CEUR workshop proceedings or DBLP weren't included, as these would have thrown up more early-stage work (of which there is a lot on SW UIs). I also couldn't find specific references to the other three review studies mentioned in the paper; more upfront critique of these would be useful.

Overall, I'm left with the feeling that some greater synthesis of the analysis is required. Some higher-order visualisation/analysis (e.g. MDS, to pick an example out of the air) would seem to be more fitting/useful than the bubble plots.
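To make that suggestion slightly more concrete, the sketch below shows one way such an MDS projection of the reviewed papers could be produced. This is only an illustrative sketch in the spirit of the suggestion above: the paper-by-attribute matrix, the number of attributes, and the choice of Jaccard distance are my assumptions, not taken from the paper, and the data is a random stand-in for the authors' actual coding data.

# Illustrative sketch only: project the reviewed papers into 2-D with MDS
# so that papers with similar codings land close together. X is a random
# stand-in for the authors' paper-by-attribute coding data (87 papers,
# here 12 hypothetical binary codes).
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(87, 12))  # binary paper-by-attribute codings

# Jaccard distance is a reasonable dissimilarity measure for binary codings.
D = squareform(pdist(X, metric="jaccard"))

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

# Greyscale markers, so the figure also survives black-and-white printing.
plt.scatter(coords[:, 0], coords[:, 1], c="0.4")
plt.title("Papers positioned by similarity of their codings (MDS)")
plt.show()

Clusters in such a plot would then suggest natural groupings of the literature that the per-dimension bubble charts cannot show.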

Regarding language, the paper is fairly readable (with some minor typos etc) but too informal and chatty in places (e.g. "There really is little to gain in keeping things secret..." on page 26). Some references are incomplete.

Taking a step back from the specifics of the paper, there is an unresolved issue related to this topic, namely "in what way, if any, will interfaces and interaction over Semantic Web/Linked Data differ from that over conventional data sources/methods?". In that context this paper is a reasonable attempt at mapping the space of existing literature, but may end up missing the bigger "elephant in the room" that is the true barrier to progress in this area. I'd like to see the authors consider these broader issues as part of a deeper discussion at the end of the paper.

In summary, there are some significant issues with the paper that I feel need more work, but if they can be resolved the paper would make a positive contribution to the literature. I'd certainly encourage the authors to pursue the topic further.

Tom Heath.
