AI systems apply sophisticated algorithms to personal data in order to build more and more decision-making applications that directly impact humans. Both for social acceptability and for ethical reasons, it is of utmost importance to make the decisions of AI systems interpretable by humans and to provide guarantees of privacy protection. In this talk, I will present ongoing work conducted in the Grenoble MIAI chair "Explainable and Responsible AI" on building interpretable explanations for AI algorithms. In the first part, I will summarize experimental results we have obtained on building local and global explanations for predictions of microcredit default learned by black-box models from a tabular dataset. In the second part, I will present our ongoing work on explaining privacy risks detected by a graph-based reasoning algorithm used to check the incompatibility between privacy and utility policies expressed as queries. In this setting, queries are interpreted as logical formulas over a common schema, and the explanation is based on the construction of a small synthetic graph illustrating a possible entailment between graph patterns.
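The abstract does not detail how the synthetic graph is built, but the general idea of witnessing an entailment between graph patterns can be sketched as follows. The code below is a minimal, hypothetical illustration and not the speaker's actual algorithm: the pattern encoding, the `freeze` and `find_homomorphism` helpers, and the toy policies are all assumptions made for the example. It freezes the utility pattern into a small synthetic graph and checks whether the privacy pattern maps into it by homomorphism; when it does, the synthetic graph and the variable mapping together illustrate the privacy risk.

```python
# Minimal sketch (illustrative assumption, not the speaker's algorithm).
# A conjunctive graph pattern is a list of (subject, predicate, object) triples;
# strings starting with '?' are variables.

def freeze(pattern):
    """Turn a pattern into a small synthetic graph by replacing each
    variable with a fresh constant (a 'canonical' instance of the pattern)."""
    return [tuple(t if not t.startswith('?') else f"_{t[1:]}" for t in triple)
            for triple in pattern]

def find_homomorphism(pattern, graph, mapping=None):
    """Backtracking search for a mapping of the pattern's variables into the
    graph's constants such that every pattern triple occurs in the graph."""
    mapping = dict(mapping or {})
    if not pattern:
        return mapping
    (s, p, o), rest = pattern[0], pattern[1:]
    for (gs, gp, go) in graph:
        if p != gp:
            continue
        new = dict(mapping)
        ok = True
        for term, gterm in ((s, gs), (o, go)):
            if term.startswith('?'):
                if new.setdefault(term, gterm) != gterm:
                    ok = False
                    break
            elif term != gterm:
                ok = False
                break
        if ok:
            result = find_homomorphism(rest, graph, new)
            if result is not None:
                return result
    return None

# Toy policies (hypothetical): the utility query exposes which drug a patient
# takes and what it treats; the privacy policy forbids linking a patient to a disease.
utility_pattern = [("?doc", "prescribes", "?drug"),
                   ("?drug", "treats", "?disease"),
                   ("?patient", "takesDrug", "?drug")]
privacy_pattern = [("?patient", "takesDrug", "?d"),
                   ("?d", "treats", "?disease")]

synthetic_graph = freeze(utility_pattern)        # small synthetic graph data
witness = find_homomorphism(privacy_pattern, synthetic_graph)
if witness is not None:
    print("Privacy risk: the utility pattern entails the privacy pattern.")
    print("Witness graph:", synthetic_graph)
    print("Variable mapping:", witness)
```

In this sketch the frozen utility pattern itself serves as the explanation: it is a concrete, human-readable piece of graph data on which the privacy pattern is satisfied, making the incompatibility between the two policies visible.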
Seminar DKM: Some ongoing work on building interpretable explanations for AI algorithms on tabular or graph data
Seminar
Location: IRISA Rennes
Room: Online
Speaker: Marie-Christine Rousset