User Experience Benchmarking and Evaluation to Guide the Development of a Semantic Knowledge Graph Exploration Tool

Tracking #: 3316-4530

This paper is currently under review
Authors: 
Roberto García
Juan-Miguel López-Gil
Rosa Gil

Responsible editor: 
Guest Editors Interactive SW 2022

Submission type: 
Full Paper
Abstract: 
Despite the increasing amount of semantic data available, user-facing applications based on semantic technologies still see little adoption, especially those geared towards the exploration of disparate semantic datasets. Benchmarks have been identified as drivers of progress in many domains, yet until recently there was no benchmark for semantic data exploration tools. Building on the Benchmark for End-User Structured Data User Interfaces (BESDUI), we explore how it can guide the development of a new tool for semantic knowledge graph exploration, RhizomerEye. At its current stage of development, RhizomerEye scores better on the benchmark than its predecessor. However, there is a risk of overfitting the tool to the benchmark, overloading the user interface to maximise benchmark scores while producing an unusable UI. To rule this out, an evaluation with real users has also been conducted, using the same dataset and tasks provided by the benchmark, but measuring the User Experience with actual participants instead of deriving the UX metrics analytically. Moreover, the evaluation has been complemented with the user satisfaction dimension, which the benchmark cannot measure. Overall, the results are promising and comparable to those of the benchmark, especially for users with knowledge of semantic technologies. In addition, the evaluation with real users has made it possible to identify potential improvements to RhizomerEye, also taking user satisfaction into account, and ways to make BESDUI better suited for evaluations with real users.
Tags: 
Under Review