An Ontology for Ethical AI Principles

Tracking #: 2713-3927

Authors: 
Andrew Harrison
Dayana Spagnuelo
Ilaria Tiddi

Responsible editor: 
Guest Editors ST 4 Data and Algorithmic Governance 2020

Submission type: 
Ontology Description
Abstract: 
The initial trickle of organisations releasing Artificial Intelligence (AI) principles documents has turned into a flood, termed the proliferation of principles, with current counts exceeding 300 such documents. This has led researchers to apply traditional systematic review techniques to the growing corpus of knowledge. Aims vary from meta-analytic accounts of country of origin, gender of authors, and type of organisations, to mapping principles across documents, to attempts to consolidate the vast number of principles down to a set of core authoritative principles, to authors selecting principle documents to support a research hypothesis. The commonality underlying all these efforts is traditional research techniques, which are arguably inefficient, and create static artefacts with low reusability. The Semantic Web offers a different way: an avenue to examine this proliferating body of knowledge, creating dynamic knowledge graphs, richly and more objectively connecting principles as concepts, providing enhanced semantic querying, and incorporating the existing resources from the Linked Open Data cloud. In order to achieve this, an ontology for AI principles is first required. This work presents the first ontology for Ethical AI principles (AIPO), leveraging ontology vocabularies including Dublin Core, SKOS, FOAF and DCAT2 among others, and shows its applicability through a use-case based on the OECD's AI principles set. We further discuss the benefits of AIPO, including the facilitation of systematic studies and its impact on the AI principle sets landscape.
Tags: 
Reviewed

Decision/Status: 
Reject

Solicited Reviews:
Review #1
By Christopher Brewster submitted on 28/Mar/2021
Suggestion:
Major Revision
Review Comment:

# Ontology for Ethical AI principles

# General Comments

This paper presents a detailed analysis of a number of meta-analyses of AI ethical principles and builds an ontology, following SW principles, based on this analysis. The paper is mostly very well written BUT it is not clear what it is trying to do.

Given that the authors try to follow standard ontology engineering methodologies, it appears to be a significant shortcoming that no mention is made of competency questions. Explaining what competency questions the ontology seeks to satisfy would be key to justifying its development; cf. Uschold and Gruniger 1996; Wiśniewski et al. 2019.
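To make the request concrete: a competency question such as "Which organisations have published a principle set containing a concept labelled accountability?" could be operationalised as a query over the ontology. The sketch below assumes rdflib and a local copy of the ontology file; the aipo: namespace and the terms aipo:PrincipleSet and aipo:containsPrinciple are hypothetical placeholders, not vocabulary confirmed to exist in AIPO.

```python
# Minimal sketch: running a competency question as SPARQL over an
# AIPO-style graph. The aipo: terms below are hypothetical placeholders.
from rdflib import Graph

g = Graph()
g.parse("AIPO.owl", format="xml")  # assumed local copy of the ontology file

# CQ: which organisations have published a principle set that contains
# a concept labelled "accountability"?
cq = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX dct:  <http://purl.org/dc/terms/>
PREFIX aipo: <https://w3id.org/aipo#>
SELECT DISTINCT ?publisher WHERE {
    ?set a aipo:PrincipleSet ;
         dct:publisher ?publisher ;
         aipo:containsPrinciple ?p .
    ?p skos:prefLabel ?label .
    FILTER(LCASE(STR(?label)) = "accountability")
}
"""
for row in g.query(cq):
    print(row.publisher)
```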

The authors would like the ontology they propose to enable AI principles to be part of the LOD cloud. However, in effect the ontology enables *documents* about AI principles to be part of the LOD cloud. This is a reasonable objective but is less grand than offering an ontology covering AI principles. Thus much of the introduction, with its implicit ambitions about encoding ethical principles in an ontology, is a distraction from what is finally offered.

In conclusion, the paper reflects a category confusion. The paper actually presents a generic ontology suitable for the analysis of any controversial topic that generates many responses. It does not enable the encoding of the philosophical or social principles that should form the foundation of an ontology of AI principles. Nor can the ontology form the basis of a normative approach to the control or limitation of AI systems (given the extensive use of SKOS rather than OWL). Thus in several places the claims the authors make for the significance of their work rather over-egg the pudding.

# Specific observations

## Introduction

* The introduction observes the explosion of AI principles published by multiple organisations and then critiques the development of meta-analyses for a) involving a "significant amount of manpower", b) being poorly reusable, and c) becoming outdated rather quickly. The authors go on to note that there is a shift to web-native research outputs.

A priori, this does not stack up logically. AI principles should reflect epistemological and societal principles and preferences - this is not a "body of knowledge" that changes continuously but (should be) a reflection of the self-awareness and introspective depth of promoters and detractors of AI and its social, environmental and other implications. We are not analysing data about the behaviour and spread of a virus here, which would be much more of a "body of knowledge".

Secondly, the argument that an excess of manpower is needed to analyse these principles is problematic. While the use of NLP and other computational techniques is frequently quite useful, the act of "human analysis ... and coding" reflects the reality that in the humanities (of which philosophy is one) exegesis, hermeneutics and philosophical analysis in general cannot and should *not be automated*, because they are core human-centred activities.

* The authors state that "As mentioned, the principle sets need to be re-engineered to make them human consumable in volume and format, with traditional meta-study methods an attempt to do this, but the creation of an ontology and use of Semantic Web tools and technology it is argued, a better way." However, they present no arguments for this preference for a Semantic Web approach. Why do the principles need to be re-engineered? Does "re-engineering" of ethical principles make any sense at all? How can an ontology be a method to make such principles more easily "human consumable"? (Ontologies are mostly opaque to people who are not ontology engineers.)

* The authors state "Additionally, semantic technologies and Linked Data can make the principle sets machine readable, serving both to assist human comprehension through use of data mining, and becoming directly implementable into AI entities themselves." This reviewer given his experience of ontologies and the challenge of implementing them "directly into AI entities" finds this idea that one could use the ontology to implement philosophical ethical principles very far fetched. A worked example here would make the statement more convincing. For example how could an AI principle such as "AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle" be implemented via an ontology into an AI system? The implementation of AI principles using such an ontology would seem to go against a frequently repeated principle of "human control of technology". This would lead in fact to ontology based control of technology - is this what the authors desire?

## Related work

* No explanation is given as to why these 8 papers were chosen. Was this accidental, a convenience sample, the most cited papers, or were there other criteria?
* "This richer exploratory functionality is based solely off semantically linking topic words and keywords, but is enough to evidence the value of the Semantic Web approach." -- The description does not explain the "evidence of the value", it just asserts it as though it is obvious. What does the LAIP allow a user to do using SW technology that a simple IR engine would not?
* More generally it seems to be assumed that a given word (e.g. "fair") means the same to all people involved in AI. Given the huge difference in both the semantics associated with a word and the cultural (and therefore ethical) baggage they carry, these systems that link up different principles using keywords seem rather naive.
* The section on ontologies in artificial intelligence is disappointing in that only some of the referenced articles reflect real examples of ontologies being used for AI. Most examples reflect far narrower concerns such as querying databases or undertaking systemic literature reviews -- this is not AI by any stretch of the imagination. There are of course good examples where ontologies are being used in sophisticated ways for real AI problems but these are not mentioned.
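Returning to the question in the second bullet: one concrete capability a linked representation adds over keyword retrieval is traversal of typed links, e.g. finding every concept transitively reachable from "fairness" via skos:related, across documents that never share a keyword. A minimal sketch, assuming a merged principles graph in a hypothetical file principles.ttl; as noted above, such traversal is of course only as meaningful as the asserted links themselves.

```python
# Minimal sketch: a SPARQL 1.1 property path following skos:related links
# transitively -- a typed-graph traversal that keyword retrieval cannot
# perform. The file name refers to a hypothetical merged principles graph.
from rdflib import Graph

g = Graph()
g.parse("principles.ttl", format="turtle")  # assumed merged graph

query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT DISTINCT ?concept ?label WHERE {
    ?fair skos:prefLabel "fairness"@en .
    ?fair skos:related+ ?concept .
    ?concept skos:prefLabel ?label .
}
"""
for row in g.query(query):
    print(row.label)
```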

## Methodology

* see comment above about absence of competency questions

## AI Principles Ontology

* A careful look at the ontology, both the figure presented and the tabular view, appears to indicate that just one property makes this ontology distinctively about AI (dct:subject modsci:ArtificalIntelligence). Otherwise this ontology could be used for any political or other subject (e.g. environmental principles). It appears to be a reasonably well constructed ontology, but NOT one particularly capturing AI or the ethical issues around AI.
* The use of skos:Concept to cover all the substantive content (i.e. the philosophical and ethical content of any AI principles) means that there can be no use of reasoning (SKOS is only suitable for showing associations, not for reasoning) to demonstrate (for example) contradictions between the principles asserted by a set of AI principles. This completely negates the assertion in the introduction that the ontology could be used in software to implement specific AI principles.
* The authors state "The example shows strengths of the ontology, such as capturing that accountability (the only principle fully shown) is related
to the other four principles (only one is partially shown) via them needing to be implemented as a necessary condition for accountability." There is nothing in the structure of the ontology which indicates that any principle is a *necessary condition* (in a logical sense) for accountability - skos:related cannot be interpreted as *necessary*.
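The distinction can be made explicit. In OWL, a necessary condition is an axiom (e.g. a subclass-of-restriction); skos:related is a mere associative link carrying no such entailment. The sketch below contrasts the two, using hypothetical class and property names:

```python
# skos:related is a symmetric associative link with no logical force; a
# *necessary condition* would require an OWL axiom. Both variants below
# use hypothetical names for illustration only.
from rdflib import Graph, Namespace, BNode, RDF, RDFS
from rdflib.namespace import SKOS, OWL

EX = Namespace("http://example.org/aipo#")  # hypothetical namespace
g = Graph()

# What AIPO asserts (roughly): a loose association, nothing more.
g.add((EX.Accountability, SKOS.related, EX.Transparency))

# What "transparency is a necessary condition for accountability" would
# look like in OWL: every implementation of Accountability requires an
# implementation of Transparency.
r = BNode()
g.add((r, RDF.type, OWL.Restriction))
g.add((r, OWL.onProperty, EX.requires))
g.add((r, OWL.someValuesFrom, EX.TransparencyImplementation))
g.add((EX.AccountabilityImplementation, RDFS.subClassOf, r))

print(g.serialize(format="turtle"))
```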

## Benefits of AIPO

* The authors appear to believe their ontology will remove the need for coding and recoding of documents presenting AI principles. They seem to miss the point that coding a document (as done in social science research) is part of the analytical or interpretive process. A further assumption here is that one could create a coherent knowledge graph that would actually be useful. The only kind of knowledge graph that makes sense is one describing the *documents*, not the *principles*, and that difference is significant. One should note that the potential knowledge graph is not actually offered as yet.
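To underline the difference: what such a graph can coherently describe is bibliographic, as in the minimal sketch below (example IRIs hypothetical); the principles themselves surface only as subject keywords.

```python
# Sketch of the document-centred graph the review argues is the only
# coherent option: metadata *about* a principles document, not the
# principles themselves. Example IRIs are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import DCTERMS, FOAF

EX = Namespace("http://example.org/docs#")
g = Graph()

g.add((EX.oecd2019, RDF.type, FOAF.Document))
g.add((EX.oecd2019, DCTERMS.title,
       Literal("Recommendation of the Council on Artificial Intelligence")))
g.add((EX.oecd2019, DCTERMS.publisher, Literal("OECD")))
g.add((EX.oecd2019, DCTERMS.issued, Literal("2019")))
# The nearest the graph gets to "principles": a subject keyword.
g.add((EX.oecd2019, DCTERMS.subject, Literal("accountability")))

print(g.serialize(format="turtle"))
```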

Review #2
By Pompeu Casanovas submitted on 04/Jul/2021
Suggestion:
Reject
Review Comment:

This article (i) introduces an ontology of AI ethical principles (AIPO) to allow the creation of a dynamic knowledge graph for user-defined queries on this subject, and (ii) promotes the integration of Ethics as a body of knowledge into the Web of Linked Data. This is a timely and quite appropriate research subject, and ontologies can certainly play a key role in ordering, managing, and handling ethical principles for AI.
In this review, I will focus on philosophical grounds and the knowledge acquisition process.

Summary: (1) Quality and relevance of the described ontology: (i) the ontology is relevant, (ii) quality could be improved, (iii) biases should be explicitly addressed, (iv) tests and walkthroughs are insufficient and should be carried out, because a partial proof of concept on the OECD principles is not enough to validate the ontology, (v) metrics should have been described and provided; (2) Illustration, clarity and readability of the describing paper: (i) the introduction, first section, methodology, and conclusions should be rewritten, (ii) the theoretical assumptions should be explicitly assessed and explained, including their intentionality and usages; (3) other dimensions will be put aside in this review. What can be found at https://github.com/AndrewHarrison/AIPO shows promising preliminary work. It lacks development, evaluation, and testing. I would have liked to discuss the ethical core concepts, but I could not find them, because the ontology is more document-based than content-based. It is a bit surprising that no ethicist has been involved in the ontology building process. Ethics engineering is an emerging field, and the authors could have benefited from a previous conceptual analysis that is missing in this paper.

The paper contains good ideas and a promising start, but the knowledge acquisition process should be better explained, and the ontology should be validated more extensively. I encourage the authors to pursue their work and to resubmit after completing it. Another suggestion would be submitting and presenting it first at a major SW conference.

Some observations follow:

1. The authors write: “These principle (ethics) sets serve as non-legislative policy instruments also known as soft-law”. p. 1.

Ethics is not a subpart of soft law but stands on its own as a separate field of research, embedded into all regulatory systems. Ethical instruments and soft law instruments (standards, protocols, agreements, recommendations...) are not the same. 'Soft law' is a term that originated thirty years ago in International (Customary) and transnational law to refer to agreements, commitments and relationships that are deemed to be horizontal and 'non-binding', contrary to the instruments covered by 'hard law', related to jurisdictions and the vertical power of the nation state. Ethics might (or might not) be infused into both soft and hard law. When ethics is not taken into account by public powers (Parliaments, Courts, Administration), i.e. when there is no good governance, legal and socio-legal scholars tend to talk about 'state law' or the 'unrule of law', as ethics is taken as an essential component of the broad political concept of the rule of law (contrary to tyranny and dictatorship as political forms). There is a certain amount of literature on this doctrinal construct, e.g. Shaffer, G.C. and Pollack, M.A., 2009. 'Hard vs. soft law: Alternatives, complements, and antagonists in international governance'. Minn. L. Rev., 94, p. 706.

2. […] “the principles set need to be re-engineered to make them human consumable in volume and format”. What does it mean to ‘consume’ ethical principles? I understand that this refers to semantics, opposing machine and human processing, i.e. consumable by machines. It is similar to making rules ‘consumable’, i.e. available and ready to be used. However, at a more basic level, making documents accessible does not mean making their content consumable (by humans). Some more precision would be needed, because there is a long tradition in practical philosophy against identifying Ethics “with one single human concern or with one single set of concepts”. Endorsing Dewey’s perspective on Ethics, Putnam asserted that “the primary aim of the ethicist should not be to produce a ‘system’ but to contribute to the solution of practical problems—as indeed, Aristotle already knew. Although we can often be guided by universal principles (at least they are typically stated as if they were universal and exceptionless) in the solution of practical problems, few real problems can be solved as treating them as mere instances of a universal generalisation, and few practical problems are such that when we have resolved them—and Dewey held that the solution to a problem is always provisional and fallible—we are rarely able to express what we learned in the course of our encounter with a ‘problematic situation’ in the form of a universal generalisation that can be unproblematically applied to other situations.” Ethics without ontology (2004, 5). I could have quoted other practical philosophies coming from a different tradition, but with the same prevention against the ‘consumable’ usability and reusability of ethical principles as such, e.g. Leszek Kolakowski’s Ethics without a moral code (1971). Strictly speaking, Ethics cannot be completely ‘known’ but is produced and enhanced through collective agency (human or artificial).

3. “The consumability and use of these AI principles is important to ensure public accessibility and accountability, shared understanding between actors to prevent AI arms races, assistance in the design and deployment of AI systems, and finally to help drive and shape the presumably forthcoming hard-law and regulation of AI.”

There are aims of very different natures mixed pêle-mêle here (functional, technical, political, legal…), and this does not help to elucidate what ‘consumability’ of ethical principles means. In this field, it is worth mentioning that ontology building has not only a technological but also a moral and political dimension, and this should be explicitly acknowledged and clarified from the beginning. Several authors have noticed that the intermediate activity of ontology engineering is an inherently moral activity, i.e. it produces moral effects. Cf. Anticoli, L. and Toppano, E., 2013. ‘Technological mediation of ontologies: the need for tools to help designers in materializing ethics’. International Journal of Philosophy Study, 1(3), pp. 23-31. I would recommend as well having a look at the two reports on AI ethical and legal governance carried out by AI4People (Atomium Foundation): ‘AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (https://link.springer.com/article/10.1007/s11023-018-9482-5) and the ‘AI4People Report on Good AI Governance’, https://www.eismd.eu/wp-content/uploads/2019/11/AI4Peoples-Report-on-Goo...

4. In their first research question, the authors equate conceptual meaning with ethical knowledge. In the second research question, they link this ‘consumable’ ontology to its use and impact. They assume that extracting ethical concepts from texts and turning them into a readable, ‘consumable’ format will help to make them more effective, or at least more suitable for regulatory purposes. This position is not that clear. Information retrieval should be differentiated from regulatory usages. Ethics has been (and still is) embedded into legal texts in many ways, with the aid of specific vocabularies with a huge variability of meanings and philosophical roots (e.g. le bon père de famille in the Napoléon Civil Code, extended to the Italian, French, German… Codes in the 19th c.). In the Common law tradition, the implementation of legal schemes rests on constitutional checks-and-balances and proportionality principles that are not always codified (e.g. the UK). Does the ethical ontology also cover these vocabularies and ‘fundamental legal concepts’ expressing ethical stances that pervade all regulatory bodies (including policies and ISOs)? In the article, Ethics is not understood as behaviour or instances of agency but as a set of separate concepts expressing principles and values for the use of AI. This is a limited understanding of the field, and I am afraid that this limitation is reflected in AIPO, the ontology for AI principles. Its philosophical grounds and possible usages and impact should be better specified. This goes back to the knowledge acquisition process used to build the ontology, which shows an expert-driven rather than data-driven methodology.

5. The methodology is divided into three phases: “Using the principle of a life-cycle from software engineering, ontology engineering can be used and broken into the phases of requirements analysis, ontology creation, and ontology assurance”. The authors did not start with competency questions (which would have provided the backbone for the ontology) but directly with the requirements, using primary sources: “The primary source of AI principle documents is not academic articles, but rather documents produced by a range of different actors, then published on websites directly or as PDFs available for download from websites. Hence, we did not use academic databases as the predominant source to retrieve the documents, but rather search engines and news articles, along with principle sets previously collected by the authors during prior research, as well as the principle sets that were referred to in the secondary systematic studies.” But no further details, data or metrics are offered about the performance and iterative cycles of the knowledge acquisition process (which documents, how many, sources, metrics applied, etc.). They built the ontology from scratch, which means in practice that they were driven by the analysis of the seven articles on AI principles that they had chosen as representative (summarised in the previous section as ‘related work’). These seven academic papers are also quite different from each other in scope, intention, and assumptions. Valuable as they are, I would not assume that they can provide the grounds for an ontology on ethics and AI. They were written with a different purpose in mind.

6. The authors acknowledge that “ontology mapping and integration was done manually by the authors without the use of automated tools.” Why not? A normal way of working out the production and integration of knowledge in these phases is to combine qualitative and quantitative methods (clustering, probabilistic topic models, latent semantic analysis, etc.). If primary sources are used, the lack of quantitative analyses must be justified. Otherwise, the authors rely on their own reading and experience, and this does not minimise the risk of being biased by their own values (the so-called ‘ideological’ or ‘academic’ bias).
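As one illustration of the kind of quantitative triangulation suggested here, a probabilistic topic model over the collected principle documents would yield term clusters that could be compared against the manually derived concepts. A minimal sketch with scikit-learn; the corpus file names and parameters are placeholders, not the authors' data:

```python
# Illustrative only: a probabilistic topic model over principle documents,
# whose topics could be compared against the manually derived concepts.
# File names and parameters are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [open(p, encoding="utf-8").read()
        for p in ["oecd.txt", "eu_hleg.txt", "asilomar.txt"]]  # placeholder corpus

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=5, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-8:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```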

7. The last sections of the article, especially its conclusions, do not match what is normally expected from ‘conclusions and further work’. There is no need to convince SWJ readers of the feasibility and benefits of a knowledge graph, nor to add information not previously provided. These sections usually summarise the findings (against the advanced hypotheses) and the results of the validation tests.

Review #3
Anonymous submitted on 03/Aug/2021
Suggestion:
Major Revision
Review Comment:

The paper presents a new ontology to model AI principles. The idea is then to employ this ontology to "ensure" a better design of AI solutions, with more attention to ethical principles. The AI Principles Ontology (AIPO) is created by passing through a number of steps, namely requirements analysis, the creation of the AIPO ontology based on the 8 papers reviewed by the authors, and finally the assurance of the AIPO ontology.

The paper is well written, the goal of the paper is clear and well motivated. The discussion of the related literature is sufficient, as well as the required preliminaries to understand the main contribution of the paper.

From the technical point of view, the description of the ontology, its core concepts and structure, as well as the proof of concept of AIPO using the OECD principles, are well detailed and to some extent justified. The discussion around the necessity of an ontology of ethical AI principles is insightful.
However, there are some drawbacks which need to be fixed before acceptance for publication:
- [Relevance and utility of this ontology]: despite the interesting discussion around ethical principles in AI, which, I agree, has been one of the main issues in the area in recent years, the justification of why and how the provided ontology can help remains unclear and unconvincing. Whilst I can agree that this ontology is useful for structuring in a clearer way the different proposals around the ethical principles of AI (notice that there is no full agreement on the list of values to be retained, nor on their respective priorities!), this seems to be the only concrete use of this ontology. It will not be used to check whether a new AI is compliant with the values "imposed" on AI artifacts, nor to "enforce" any kind of compliance of these systems. Just wishing that it will be used by AI researchers to build their own KGs for their AI artifacts seems like wishful thinking. The authors should realistically discuss this issue in the paper.
- [Completeness of the list of values]: the authors described the 8 papers they reviewed to build the AIPO ontology. It is hard to assess why the authors considered only 8 papers, whilst during the last few years several papers have been published around this issue. The choice of only 8 papers may bias the ontology modeling, taking into account only some values and excluding others. I suggest the authors also consider the results published by several national committees for digital ethics, in Europe, Canada and Australia in particular. The new AI regulation of the EU should also be investigated. In addition, I wonder whether this "huge" general ontology would benefit from being linked to specialized sub-ontologies for different AI fields, e.g., autonomous vehicles, face recognition in public spaces, etc. (see the sketch after this list). The values and principles may not be the same in all of these sub-fields.
- [Evaluation of the ontology]: a full evaluation of the ontology is missing. The only evaluation of the ontology is through the proof of concept. However, this is not enough. A complete evaluation should take into account the following dimensions: Accuracy, Completeness, Conciseness, Adaptability, Clarity, Computational efficiency and Consistency.
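Regarding the sub-ontology suggestion above, a minimal sketch of such a modular structure: a field-specific module imports the general ontology and specialises one of its concepts. All IRIs here are hypothetical, including the aipo: namespace and the aipo:Safety concept.

```python
# Sketch of a modular structure: an autonomous-vehicles sub-ontology
# importing the general AIPO ontology and narrowing one of its concepts.
# All IRIs are hypothetical placeholders.
from rdflib import Graph, Namespace, URIRef, RDF
from rdflib.namespace import OWL, SKOS

AIPO = Namespace("https://w3id.org/aipo#")   # assumed AIPO namespace
AV = Namespace("https://w3id.org/aipo/av#")  # hypothetical sub-ontology

g = Graph()
av_ont = URIRef("https://w3id.org/aipo/av")  # hypothetical ontology IRI
g.add((av_ont, RDF.type, OWL.Ontology))
g.add((av_ont, OWL.imports, URIRef("https://w3id.org/aipo")))

# A field-specific principle, narrower than a general "safety" concept.
g.add((AV.MinimalRiskManoeuvre, RDF.type, SKOS.Concept))
g.add((AV.MinimalRiskManoeuvre, SKOS.broader, AIPO.Safety))

print(g.serialize(format="turtle"))
```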

(A) the data file is well organized and in particular contains a README file which makes it easy for you to assess the data,
YES

(B) the provided resources appear to be complete for replication of experiments, and if not, why,
The ontology is provided, the 8 articles are now provided. The PoC is available.

(C) the chosen repository, if it is not GitHub, Figshare or Zenodo, is appropriate for long-term repository discoverability, and
YES, GITHUB

(D) the provided data artifacts are complete.
YES