Chair of Artificial Intelligence and Machine Learning

Can I trust my Explainable AI algorithm? Evaluating and Benchmarking xAI Algorithms (Ba/Ma)

Topic for a bachelor's or master's thesis

Short Description:

xAI algorithms are a family of approaches for understanding black-box models. In recent years, however, xAI algorithms have themselves come into question, as many fail to explain even simple transparent models. Moreover, given the large number of evaluation metrics for xAI (fidelity, fragility, stability, etc.), it has become difficult for data scientists to evaluate each xAI method accurately and to keep up with its evolution. This issue has a clearly visible symptom, known as the illusion of explanatory depth in interpreting xAI results [1], and it is by now well documented that data scientists are prone to misusing interpretability tools [2].
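
To make the failure mode concrete, here is a minimal sketch (not part of the official topic description) of the kind of sanity check meant above: an explainer applied to a transparent model, whose ground-truth feature importances are known, should at least recover the correct feature ranking. The synthetic data and the choice of permutation importance as the explainer are illustrative assumptions.

```python
# Hedged sketch: sanity-checking an explainer against a transparent model.
# The data, the model, and the choice of explainer are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
true_coefs = np.array([3.0, 1.0, 0.0, -2.0])   # known ground-truth importances
y = X @ true_coefs + rng.normal(scale=0.1, size=1000)

model = LinearRegression().fit(X, y)
explanation = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A trustworthy explainer should rank the features the way |true_coefs| does.
print("explainer ranking:", np.argsort(-explanation.importances_mean))
print("ground truth     :", np.argsort(-np.abs(true_coefs)))
```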

The goal of the thesis is to define an evaluation method for a specific type of xAI of your choice (feature importance, feature interaction, counterfactuals, etc.) and to identify common limitations and misuses of existing algorithms. A solution that mitigates the identified issues could be elaborated in the second part of the thesis.
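
As one possible starting point for such an evaluation method, a deletion-style fidelity score could look like the sketch below: perturb the features an explainer marks as most important and measure how much the model's prediction changes. The function name, the mean-imputation masking, and the per-instance importance matrix are assumptions for illustration, not a fixed standard.

```python
import numpy as np

def deletion_fidelity(model, X, importances, k=2):
    """Hedged sketch of a deletion-style fidelity metric (name and masking
    strategy are illustrative): mask each instance's top-k most important
    features with the column mean and report the mean absolute change in
    the model's prediction. A larger change suggests a more faithful
    explanation.

    importances: array of shape (n_samples, n_features), one attribution
    vector per instance, as produced by any feature-importance explainer.
    """
    X_masked = X.copy()
    col_means = X.mean(axis=0)
    top_k = np.argsort(-np.abs(importances), axis=1)[:, :k]
    for i, cols in enumerate(top_k):
        X_masked[i, cols] = col_means[cols]
    return np.mean(np.abs(model.predict(X) - model.predict(X_masked)))
```

Note that mean imputation pushes the masked inputs off the data distribution, which is itself a known pitfall of deletion-based metrics; comparing masking strategies would already be a useful part of the evaluation.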

Prerequisites

Good background in supervised learning, explainable AI, and Python. Front-end programming skills (e.g., JavaScript) are a plus.

Contact

Karim Belaid

References

  • [1] Michael Chromik, Malin Eiband, Felicitas Buchner, Adrian Krüger, and Andreas Butz. I Think I Get Your Point, AI! The Illusion of Explanatory Depth in Explainable AI. In Proceedings of the 26th International Conference on Intelligent User Interfaces (IUI), 2021.
  • [2] Harmanpreet Kaur, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna Wallach, and Jennifer Wortman Vaughan. Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020.