Affective Graphs: The Visual Appeal of Linked Data

Tracking #: 491-1687

Suvodeep Mazumdar
Daniela Petrelli
Khadija Elbedweihy
Vitaveska Lanfranchi
Fabio Ciravegna

Responsible editor: 
Guest editors Semantic Web Interfaces

Submission type: 
Full Paper
The essence and value of Linked Data lies in the ability of humans and machines to query, access and reason upon highly structured and formalised data. Ontology structures provide an unambiguous description of the structure and content of data. While a multitude of software applications and visualization systems have been developed over the past years for Linked Data, a significant gap still exists between applications that consume Linked Data and interfaces designed with a significant focus on aesthetics. Though the importance of aesthetics in affecting the usability, effectiveness and acceptability of user interfaces has long been recognised, little or no explicit attention has been paid to the aesthetics of Linked Data applications. In this paper, we introduce a formalised approach to developing aesthetically pleasing semantic web interfaces by following aesthetic principles and guidelines identified from the literature. We apply these principles to design and develop a generic approach that uses visualizations to support exploration of Linked Data, in an interface that is pleasing to users. This provides users with means to browse ontology structures, enriched with statistics of the underlying data, facilitating exploratory activities and enabling visual querying for highly precise information needs. We evaluated our approach in three ways: an initial objective evaluation comparing our approach with other well-known interfaces for the semantic web, and two user evaluations with semantic web researchers.
Solicited Reviews:
Review #1
By Ghislain Hachey submitted on 21/Jun/2013
Minor Revision
Review Comment:

This paper introduces a formalised approach to developing aesthetically pleasing semantic web interfaces based on sound research. Considering that there has been no thorough study of the usefulness and potential benefits of more aesthetically designed interfaces, I think this paper clearly scores on originality and would be a welcome addition to the literature on Semantic Web interfaces.

A useful set of guidelines for building Linked Data interfaces was extracted from the literature and presented in tables. This will definitely be very useful. My only comment is that the tables do not read very well (which is a little ironic considering the aim of the research). The tables could be produced much more clearly with good LaTeX formatting. If this paper is to be distributed in a stand-alone form, I would recommend re-doing the tables to be more aesthetically pleasing and readable. I do take notice that typography was not part of this study and that it will be explored in future work, which is comforting.

Regarding the design decision to use colour in pie charts (for graph nodes), I do not think this would scale to a large graph with many nodes (containing many concepts): the more colors you get, the harder it gets (for me anyway) to make any sense of what's on the screen (e.g. some colors might become too close, leading to correlation confusion and overall clutter, which is against your own design principle 7 of Table 1). However, I think that when focusing (or zooming), these color design decisions are beneficial (as shown in Fig 5). At a large scale, one idea to explore could be to add the ability to choose (say, by clicking or check-marking nodes) which nodes to "colorize", providing a quick glimpse of regions of interest.

Looking at many of the interesting comments from users in the evaluation, I was left to believe that users were fully aware that the goal of the experiment was to evaluate the Affective Graph approach. Now, maybe those comments were included "because" the paper is about Affective Graph and they were not necessarily representative of all comments, but this was not clear to me. However, if users were aware, this in itself will result in bias; it has been demonstrated by several people that participants have a tendency to forge the results (so to speak) in favour of the people doing the hard research work (response bias). I'm not saying this was the case, just that this is the impression I was left with. A simple line like "users were completely unaware of which system was being compared to others" would help, or, even better, introduce several control groups (e.g. some led to believe they were evaluating NLP interfaces, others the graphical approaches). I take notice that it was a test leader who ran the experiment, but that in itself is not enough to rule out all types of biases.

Another evaluation that would be interesting is re-doing the second evaluation (described in 10.2) after the learnability evaluation (described in 10.3). In other words, it would be nice to know whether the results of the usability of Affective Graph in comparison to other approaches would change once users have more experience with the systems. Of course, to be fair, the same amount of experience should be given to the other querying approaches as well. Just a thought.

Presentation and quality of writing were excellent. My only (minor) comments are the following:

- In 10.1 a cross reference link is missing (see Section ??)
- In 10.2.2, "graphical highly approach" would read better as "highly graphical approach".
- Tables 1 and 2 could be clearer. Proper use of LaTeX would do a better job.
- The dash symbols do not seem to be real dashes but look like minus or hyphen symbols.

If this paper had been available when we wrote [1], it would have scored very high on the quality assessment criteria evaluation. The quality assessment criteria are based on the Gold Standard for Evidence-Based Medicine, adapted to UI evaluation in software engineering. Following are a few points that could be further strengthened:

- Provide alternative research (experiment) designs and justify why the chosen method is better to address the aim.
- Explain why the chosen sample of participants was the most appropriate to extract the information sought.
- Is the sample size large enough? If yes, clearly justify it.
- No explicit control group (if there was one, I could not see it).
- Maybe take more contradictory data into account (if any exists).
- Include statistical quality control data (e.g. Kappa, ICC, Cronbach, etc.)
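As an illustration of the kind of statistical quality-control data the last point refers to (this sketch is mine, not the paper's; the function names are hypothetical), two of the named reliability measures can be computed from raw ratings in a few lines:

```python
# Illustrative only: Cohen's kappa (inter-rater agreement beyond chance)
# and Cronbach's alpha (internal consistency of questionnaire items).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

def cronbachs_alpha(item_scores):
    """item_scores: one list of per-participant scores per questionnaire item."""
    k = len(item_scores)            # number of items
    n = len(item_scores[0])         # number of participants

    def var(xs):                    # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in item_scores) / var(totals))
```

Values near 1 indicate strong agreement or consistency; reporting such figures alongside the user-study results would address this point.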

In summary, this is without a doubt an original paper with a high-quality presentation. The significance of the results, in my opinion, is extremely important to the semantic web community and beyond. There are some areas that could be strengthened, but I would recommend including this paper in the Semantic Web Interfaces Special Issue, preferably with minor changes, but even without changes; the paper certainly helps raise the standard.

[1] Hachey, G., Gasevic, D.: Semantic Web User Interfaces: A Systematic Mapping Study. Semantic Web Journal, July 2012.

Review #2
By Lloyd Rutledge submitted on 08/Jul/2013
Review Comment:

This paper is interesting, engaging and pleasant to read. It would make an important contribution to this journal special issue. For the most part, the paper is a pleasant and engaging story of how one should go about applying usability issues in multiple phases of building a semantic web system. The paper does so by setting a good example. Experts have criticized the lack of attention to usability in Semantic Web research. This paper could be an important part of fulfilling that need.

As mentioned, this paper reads like a case study in applying usability issues in designing and making a system. Part of this process is setting up a broadly applicable Semantic Web interface assessment method. The paper derives this method well from a large body of related work in terms of what aspects to measure, how to measure them and what other systems to compare with.

The paper then describes its application of a design and development cycle that applies usability techniques. The paper provides a good example of applying these techniques, which other Semantic Web researchers should follow. This part of the paper may be redundant with related work. However, insights gained from this process receive good description here and are applicable elsewhere. In addition, it may serve as a relatively unique case of applying these methods in the context of Semantic Web systems.

In a field where better application of usability methods is important, this paper raises the bar for applying usability methods in Semantic Web research. That does not mean, however, that this paper is an ideal execution of usability methods. This paper's most significant shortcoming is that it tends too much toward carrying out a general-purpose competition between its system and other systems, as if the reader were at the store trying to decide which whole system to buy and use exclusively. Instead, I would prefer assessments of how well individual focused system features perform given tasks in given contexts. Then the reader could decide if and how to build each feature into a given system for its given purpose. However, while I find this focus a bit skewed, the paper does at times provide metrics and insight into the usability of individual features. In addition, the paper is straightforward about where its system performs poorly and, in these cases, provides insightful discussion about why.

The paper describes its technical implementation in detail. Some of the discussion is interesting, but it may be at a higher level of detail than is necessary.

The bottom of page one states "Semantic Web and Linked Data Interfaces have traditionally been designed and evaluated for usability, performance and reliability." Really? Unlike adjacent sentences, this has no citation. I would love to believe this statement, as I feel it is an important goal. However, some bemoan the lack of usability assessment in Semantic Web interface research. It would be easier to find citations bemoaning the lack thereof than supporting the quoted sentence; the ESWC 2013 keynote by David Karger, for example. In addition, acknowledging the importance but absence of this would benefit the paper by providing motivation and justifying the impact of its contribution. The paper does so on page two, paragraph three.

Here are some minor errors: Some citations lack a preceding space, such as "websites[48]" on page two and "guidelines[73]" on page four. Perhaps a regexp search would find other instances of this. On page two, "some of ways" should be "some of the ways". Reference 54 has two periods at the end. On page 18, section 9.3, paragraph 2, there should be no comma in "It is to be noted, that …". Additionally, in this case, "It should be noted that" is perhaps better. Page 18, section 10.1, paragraph one has a dangling section reference ("see Section ??"). Page 24 has a margin intrusion.

Review #3
By Ian Dickinson submitted on 29/Jul/2013
Review Comment:

This is, on the whole, a very well written paper. The aims are clear, the context of prior work is well established, it is clear what problem the research is intended to address, and the evaluation is reasonably convincing. It is clear to me that the authors have achieved a marked improvement over prior tools in the usability of a directed-graph-centric graphical interface for exploring linked data. This is long overdue: very few of the extant tools for exploring linked or semantic web data have had any serious consideration of usability and user needs.

I was less convinced by the argument that the usability improvements were wholly or primarily due to the aesthetics of the interface. I agree that aesthetics in design are very important. However, the user-centric design process followed by the researchers, with a clear focus on user needs, on measurement, and on iterative design cycles that take into account feedback from prior prototypes, is *by itself* sufficient to ensure a distinct improvement over the state of the art. What the researchers did not test was the impact of aesthetics by itself. For example, given an interface whose functionality is the result of iterative user-centric improvement, one could look at the impact of colour, shape, etc. by varying just those variables. That would make a more convincing argument for the impact of aesthetics per se. I was also surprised that the metrics used only focus on a subset of easily evaluable factors; in general, the choice of overall colour palette, typography and iconography has a very large impact on aesthetic appeal, so it is hard to see what the authors are measuring when these factors are omitted.

While the evaluation experiment in section 10.2 was clearly very thorough, the motivation could have been better explained. In particular, it was not obvious to me why a question answering task would be a natural fit for a graph-based interface. It was reassuring that the users reported subjectively that they enjoyed using the interface, but the objective task performance comparison seems to suffer from a disparity in the suitability of the tools to the experimental task.

On the whole, the paper is very well written, with few errors that I noticed. Several mentions are made of owl:subClassOf, which should be rdfs:subClassOf. Section 10.1 references "Section ??". The very first sentence reads "The human response to aesthetic" instead of "aesthetics". On page 22, the paragraph that begins "Users appreciated ..." contains duplicated content starting "The query generation is intuitive...".

In summary, this is a well written paper and a good piece of research. The focus on aesthetics as part of the user experience is commendable, but I was not convinced that it was demonstrated that the positive results were due solely to that focus, as opposed to the overall design process.