Article | REF: H5030 V1

Explainability in Artificial Intelligence: Towards Responsible AI

Authors: Daniel RACOCEANU, Mehdi OUNISSI, Yannick L. KERGOSIEN

Publication date: December 10, 2022, Review date: February 29, 2024


1. Motivations for explainable artificial intelligence

Over the past decade, a clear conceptual trend has emerged in the artificial intelligence literature: a latent need for interpretable AI models, which is intuitive, since interpretability is a requirement in many fields. However, it was not until 2017-2018 that interest in techniques for explaining AI models became widespread in the scientific and R&D community (figure 1).
