An Ontology-based approach for making Machine Learning systems Accountable

Tracking #: 2715-3929

This paper is currently under review
Authors: 
Iker Esnaola-Gonzalez

Responsible editor: 
Guest Editors ST 4 Data and Algorithmic Governance 2020

Submission type: 
Full Paper
Abstract: 
Although Artificial Intelligence technologies are nowadays rather mature, their adoption, deployment and application are not as widespread as might be expected, mainly due to users' lack of trust in Artificial Intelligence systems. Explainable Artificial Intelligence (XAI) has emerged as a way of addressing this lack of trust. However, the explainability of a system is necessary but far from sufficient for such a goal. Accountability is another relevant factor for advancing in this regard, as it enables discovering the causes that led to a given decision or suggestion made by an Artificial Intelligence system. In this article, the use of ontologies is conceived as the way to make Machine Learning systems accountable, as they offer conceptual modelling capabilities to describe a domain of interest, as well as formality and reasoning capabilities. The feasibility of the proposed approach has been demonstrated in a real-world scenario, and it is expected to pave the way towards unlocking the full potential of Semantic Technologies for achieving trustworthy AI systems.
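As a rough illustration of the idea (a minimal sketch, not the paper's actual implementation), the Python snippet below records a hypothetical Machine Learning prediction as RDF triples with rdflib, using the W3C PROV-O vocabulary to capture which activity produced the prediction and which dataset that activity used. All identifiers such as ex:prediction-42, run-7 and dataset-v3 are invented for the example.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/ml-accountability#")  # invented example namespace
PROV = Namespace("http://www.w3.org/ns/prov#")           # W3C PROV-O vocabulary

g = Graph()
g.bind("ex", EX)
g.bind("prov", PROV)

# Record one (hypothetical) prediction as a PROV-O entity that was
# generated by an activity which used a specific dataset.
g.add((EX["prediction-42"], RDF.type, PROV.Entity))
g.add((EX["prediction-42"], PROV.wasGeneratedBy, EX["run-7"]))
g.add((EX["prediction-42"], EX.predictedValue, Literal(0.87, datatype=XSD.float)))
g.add((EX["run-7"], RDF.type, PROV.Activity))
g.add((EX["run-7"], PROV.used, EX["dataset-v3"]))

# Accountability question: which dataset lies behind prediction-42?
query = """
SELECT ?dataset WHERE {
    ex:prediction-42 prov:wasGeneratedBy ?run .
    ?run prov:used ?dataset .
}
"""
for row in g.query(query, initNs={"ex": EX, "prov": PROV}):
    print(row.dataset)  # -> http://example.org/ml-accountability#dataset-v3

Because the provenance is expressed against a formal ontology, a SPARQL query or a reasoner can trace a decision back to the inputs that caused it, which is the accountability property the abstract refers to.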
Tags: 
Under Review