Computing and evaluating visual explanations
Prof. Dr. Sc. Simone Schaub-Meyer, Visual Inference Lab, TU Darmstadt
Abstract
Recent developments in deep learning have led to significant advances in many areas of computer vision. However, especially in safety-critical scenarios, we are not only interested in task-specific performance; there is also a critical need to explain the decision process of a deep neural network despite its complexity. Visual explanations can help to demystify the inner workings of these models, providing insights into their decision-making processes. In my talk, I will first discuss how we can obtain visual explanations efficiently and effectively in the case of image classification. In the second part, I will discuss potential metrics and frameworks for assessing the quality of visual explanations, a challenging task due to the difficulty of obtaining ground-truth explanations for evaluation.
Speaker Bio
Simone Schaub-Meyer is an assistant professor at the Technical University of Darmstadt and is affiliated with the Hessian Center for Artificial Intelligence (hessian.AI). Her research focuses on developing efficient, robust, and understandable methods and algorithms for image and video analysis. She was recently awarded the renowned Emmy Noether Programme (ENP) grant of the German Research Foundation (DFG), supporting her research on interpretable neural networks for dense image and video analysis. Before starting her own group, she was a postdoctoral researcher in the Visual Inference Lab of Prof. Stefan Roth. Prior to joining TU Darmstadt, she was a postdoctoral researcher at the Media Technology Lab at ETH Zurich, working on augmented reality. She obtained her doctoral degree from ETH Zurich in 2019, for which she was awarded the ETH Medal; during her doctorate she developed novel methods for motion representation and video frame interpolation in collaboration with Disney Research Zurich.
Time & Place
Thursday, July 24, 2025
10:30 – 11:30
Reisensburg Castle