Artificial Intelligence: Why Explanations Matter
10-18, 09:10–09:30 (Europe/Zurich), Aula 4.101

In the rapidly evolving field of Artificial Intelligence (AI), the
importance of understanding model decisions is becoming increasingly
vital. This talk explores why explanations are crucial for both
technical and ethical reasons. We begin by examining the necessity of
explainability in AI systems, particularly for mitigating unexpected
model behavior and biases and for addressing ethical concerns. The discussion
then transitions into Explainable AI (XAI), highlighting the
differences between interpretability and explainability, and
showcasing methods for enhancing model transparency. Real-world
examples will demonstrate how these concepts can be applied in
practice to improve model performance. The talk concludes with
reflections on the challenges and future directions in XAI.
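
To give a concrete flavour of the kind of transparency methods the talk refers to, the short sketch below (an illustrative example, not material from the presentation) uses permutation feature importance, a simple model-agnostic technique, to show which input features a trained classifier relies on. The dataset, model choice, and scikit-learn calls are assumptions made here for brevity.

    # Illustrative sketch: permutation feature importance, a model-agnostic
    # way to inspect which inputs drive a black-box model's predictions.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Train an opaque classifier on a standard benchmark dataset.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in test accuracy:
    # large drops indicate features the model depends on most.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")

Techniques of this sort only indicate which features influence predictions; interpreting why a model uses them, and whether that use is appropriate, is part of the broader XAI discussion addressed in the talk.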

See also: Presentation slides (2.4 MB)

Albert Weichselbraun is a Professor of Information Science at the Swiss Institute for Information Research at the University of Applied Sciences of the Grisons in Chur, and co-founder and Chief Scientist at webLyzard technology. He has authored over 90 peer-reviewed research publications and has been a member of the expert group on communication science of the Swiss Academies of Arts and Sciences.