Invited Talks
KEMAI regularly invites researchers to give talks related to the field of medical AI. Below is a list of our invited speakers and their presentations.
Date: 2025-07-24
Location: Reisensburg Castle
Prof. Dr. Sc. Simone Schaub-Meyer, Visual Inference Lab, TU Darmstadt
Abstract
Recent developments in deep learning have led to significant advances in many areas of computer vision. However, especially in safety-critical scenarios, we are not only interested in task-specific performance; there is also a critical need to explain the decision process of a deep neural network despite its complexity. Visual explanations can help to demystify the inner workings of these models, providing insights into their decision-making processes. In the first part of my talk, I will discuss how we can obtain visual explanations efficiently and effectively in the case of image classification. In the second part, I will turn to potential metrics and frameworks for assessing the quality of visual explanations, a challenging task due to the difficulty of obtaining ground-truth explanations for evaluation.
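For readers outside computer vision, the sketch below illustrates one common baseline for visual explanations of an image classifier: vanilla gradient saliency. The use of a torchvision ResNet-18 and the file path in the usage note are illustrative assumptions, not the speaker's method.

```python
# Minimal gradient-saliency sketch for image classification (illustrative only).
import torch
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

def saliency_map(pil_image):
    """Return an (H, W) map of |d top-class score / d input pixel|."""
    x = preprocess(pil_image).unsqueeze(0)   # (1, 3, 224, 224)
    x.requires_grad_(True)
    logits = model(x)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()          # gradient of the top score w.r.t. the input
    return x.grad.abs().max(dim=1).values.squeeze(0)

# Usage (hypothetical image file):
# from PIL import Image
# heatmap = saliency_map(Image.open("example.png").convert("RGB"))
```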
Speaker Bio
Simone Schaub-Meyer is an assistant professor at the Technical University of Darmstadt and is affiliated with the Hessian Center for Artificial Intelligence (hessian.AI). Her research focuses on developing efficient, robust, and understandable methods and algorithms for image and video analysis. She was recently awarded the renowned Emmy Noether Programme (ENP) grant of the German Research Foundation (DFG), supporting her research on interpretable neural networks for dense image and video analysis. Before starting her own group, she was a postdoctoral researcher in the Visual Inference Lab of Prof. Stefan Roth. Prior to joining TU Darmstadt, she was a postdoctoral researcher at the Media Technology Lab at ETH Zurich, working on augmented reality. She obtained her doctoral degree from ETH Zurich in 2019, where she developed novel methods for motion representation and video frame interpolation in collaboration with Disney Research Zurich, and was awarded the ETH Medal for her dissertation.
Time & Place
Thursday, July 24, 2025
10:30 – 11:30
Reisensburg Castle
Date: 2025-07-23
Location: Reisensburg Castle
Prof. Dr. Lena Kästner, University of Bayreuth
Abstract
As AI technology becomes increasingly used in the public sphere, including in such vulnerable settings as courts and hospitals, questions about the societal demands of deploying AI are becoming ever more relevant. General calls to make the use of AI "responsible", viz. that the systems in question should be safe, trustworthy, fair, privacy-respecting, etc., are echoed by researchers, legal institutions, NGOs, and consumer protection services alike. But how best to achieve these feats remains a matter of heated academic, political, and legal debate. One concept that has taken center stage over the past few years is explainability, or explainable AI (XAI). In a nutshell, the idea is that if we can render opaque AI systems explainable with XAI methods, this will, in one way or another, help us ensure their safety, trustworthiness, fairness, and so forth. This reasoning has led to a veritable XAI hype. In this talk, I take a critical look at contemporary XAI, highlight some of its limitations, and sketch avenues for research addressing these.
Speaker Bio
Lena Kästner is professor of philosophy, computer science, and AI at the University of Bayreuth. She has a background in Cognitive Science and Cognitive Neuroscience and received her PhD in philosophy from Ruhr-University Bochum. Prof. Kästner's research focuses on explanations, intelligence, and causation. Currently, she is also lead PI of the projects "Explainable Intelligent Systems (EIS)" and "For the Greater Good? Deepfakes in Criminal Prosecution (FoGG)". She is also vice president of the German Society for Philosophy of Science (GWP), vice-director of Bayreuth's "Research Center for AI in Science and Society" (RAIS2), and coordinator of the interdisciplinary Master's program "Philosophy & Computer Science" in Bayreuth.
Time & Place
Wednesday, July 23, 2025
11:15 – 12:00
Reisensburg Castle
Date: 2025-07-23
Location: Reisensburg Castle
PD Dr. med. Judith Herrmann, University Hospital Tübingen
Abstract
Artificial intelligence (AI) is increasingly being integrated into radiological workflows, offering significant potential for improving both efficiency and image quality. Its applications are diverse, ranging from automated image acquisition and interpretation to workflow optimization and predictive analytics. A particularly promising area lies in AI-based reconstruction for magnetic resonance imaging (MRI). Deep learning (DL) algorithms enable the reconstruction of high-quality images from highly undersampled raw data, thereby substantially reducing scan times. This acceleration not only enhances patient comfort and increases scanner throughput but also contributes to a reduction in energy consumption per examination. As such, AI-driven MRI reconstruction represents a concrete example of how technological innovation can simultaneously advance diagnostic performance and promote environmental sustainability in medical imaging. This presentation will place particular emphasis on this application, examining its potential as a key driver of energy efficiency and sustainable radiology practice.
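The acceleration argument can be made concrete with a toy example: retrospectively undersampling the k-space of an image and inspecting the naive zero-filled reconstruction that a deep-learning model is trained to improve. The NumPy sketch below uses a random stand-in image and an assumed sampling pattern; it is not a description of any clinical reconstruction pipeline.

```python
# Toy illustration of accelerated MRI: undersample k-space, then compare the
# zero-filled baseline that a DL reconstruction network would improve upon.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))                 # stand-in for a fully sampled MR slice

kspace = np.fft.fftshift(np.fft.fft2(image))   # "raw data" in frequency space

# Keep every 4th phase-encoding line plus a fully sampled low-frequency band.
mask = np.zeros_like(kspace, dtype=bool)
mask[::4, :] = True
mask[128 - 16:128 + 16, :] = True              # 32 central (autocalibration) lines

undersampled = np.where(mask, kspace, 0)
zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

acceleration = mask.size / mask.sum()
nrmse = np.linalg.norm(zero_filled - image) / np.linalg.norm(image)
print(f"acceleration ~{acceleration:.1f}x, zero-filled NRMSE {nrmse:.3f}")
# A DL reconstruction model takes (undersampled, mask) as input and is trained
# to recover something close to `image`, which is what shortens the scan.
```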
Speaker Bio
PD Dr. med. Judith Herrmann is a board-certified radiologist at the University Hospital in Tübingen, Germany, where she has been working since 2019. She completed her medical studies at the University of Tübingen and began her career in radiology under the mentorship of Prof. Nikolaou. She is a board member of the working group on information technology and of the Young Radiologist Forum within the German Radiological Society. Her research primarily focuses on the application of artificial intelligence in MRI image reconstruction, with a particular interest in its potential to improve the efficiency and sustainability of MRI examinations.
Time & Place
Wednesday, July 23, 2025
13:00 – 13:45
Reisensburg Castle
Date: 2025-07-23
Location: Reisensburg Castle
Prof. Veronika Cheplygina PhD, IT University of Copenhagen
Abstract
It may seem intuitive that we need high-quality datasets to ensure robust algorithms for medical image classification. With the introduction of openly available, larger datasets, it might seem that the problem has been solved. However, this is far from being the case: it turns out that even these datasets suffer from issues like label noise and shortcuts or confounders. Furthermore, there are behaviours in our research community that threaten the validity of published findings. In this talk I will discuss both types of issues with examples from recent papers.
Relevant Papers:
- Copycats: the many lives of a publicly available medical imaging dataset
- Data usage and citation practices in medical imaging conferences
- Augmenting Chest X-ray Datasets with Non-Expert Annotations
- Machine learning for medical imaging: methodological failures and recommendations for the future
Speaker Bio
Prof. Veronika Cheplygina's research focuses on meta-research in the fields of machine learning and medical image analysis. She received her Ph.D. from Delft University of Technology in 2015. After a postdoc at the Erasmus Medical Center, she started as an assistant professor at Eindhoven University of Technology in 2017. In 2020, failing to achieve various metrics, she left the tenure track in search of the next step where she could contribute to open and inclusive science. In 2021 she started as an associate professor at the IT University of Copenhagen and was recently appointed full professor at the same university. Next to research and teaching, Veronika blogs about academic life at https://www.veronikach.com. She also loves cats, which you will often encounter in her work.
Time & Place
Wednesday, July 23, 2025
13:45 – 14:30
Reisensburg Castle
Date: 2025-07-23
Location: Reisensburg Castle
Dr.-Ing. Renate Schmidt, University of Manchester
Abstract
SNOMED CT is an established AI technology in health care, where it provides the basis for medical terminology services used to support consistent data capture, easy data sharing, and convenient analysis of data. SNOMED CT is a large knowledge base (ontology) of definitions of medical codes used by clinicians in health care sectors worldwide. After a brief introduction to medical ontologies and their benefits, this talk will review subontologies, a bespoke technique for producing concise extracts of SNOMED CT: their key features, use cases, and successful results, as well as their development in a successful collaboration with industry.
Speaker Bio
Renate Schmidt is University Reader in Computer Science and Leader of the Formal Methods Research Group in the Department of Computer Science at the University of Manchester. She served as Chair of the PGR Degrees Panel in the Faculty of Science and Engineering and was a member of the FSE Doctoral Academy Academic Leadership Team and the Faculty Graduate Committee (2021-2024). She is Associate Editor or Editorial Board Member of the Artificial Intelligence Journal, the Journal of Artificial Intelligence Research, the Journal of Automated Reasoning, and the Journal of Applied Non-Classical Logics. Her research involves the development of both theoretical results and implemented systems for knowledge representation, automated symbolic reasoning, and formal methods. Her current research is driven by the aim of developing improved automated support for knowledge representation, ontology extraction, knowledge re-engineering, information hiding/obfuscation, abductive learning, and query answering in the context of ontologies.
Time & Place
Wednesday, July 23, 2025
10:30 – 12:00
Reisensburg Castle
Date: 2025-07-02
Location: Building O27, Room 122
KEMAI Research Training Group GRK 3012/1
Andrew Lee Hufton, Editor-in-Chief of Patterns, Cell Press
Abstract
Andrew Hufton, Editor-in-Chief of Patterns, will share how his journal incorporates FAIR and open science principles into its editorial evaluation process, and why open science plays a key role in ensuring reproducibility and promoting public trust in science. He will describe how submitting open, reproducible papers can aid consideration at any journal and provide tips on how to draft and submit such papers in a manner that is friendly to peer reviewers. Advice on how to share data, code, and models effectively will be included. In addition, Andrew will discuss new issues and pitfalls that have arisen with generative AI technologies and provide advice on their ethical use in research and manuscript writing.
Patterns is an interdisciplinary, open-access journal from Cell Press that publishes data science in the broadest sense, including a wide range of computational and data-rich research, as well as associated topics in ethics, philosophy, and science policy.
Host
Prof. Dr. Hans Kestler, Faculty of Engineering, Computer Science and Psychology
Time & Place
Wednesday, July 2, 2025
13:00 – 14:00
Building O27, Room 122
Date: 2025-05-21
Location: Building O27, Room 441
Hendrik Möller & Robert Graf, TU Munich
Abstract
We present recent advances in automatic image processing pipelines designed for large-scale cohort studies, focusing on applications in the NAKO (German National Cohort) and back pain research. We first highlight our work on image-to-image translation methods, including denoising diffusion models and Pix2Pix networks, that generate missing sequences, correct reconstruction errors in water-fat imaging (MAGO-SP), and perform MRI-to-CT translation for accurate bone segmentation. The second part will focus on segmentation, showcasing models such as SPINEPS and TotalVibeSegmentator and our approach to generating new ground truths for training. We then turn to the analysis of anomalies in the spine (e.g., variations in vertebral morphology, stump ribs, enumeration anomalies) and present initial findings on spine morphology and proton density fat fraction (PDFF) statistics across large population datasets.
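For orientation, the sketch below shows the bare structure of a Pix2Pix-style training step for paired image-to-image translation (for example, MR to synthetic CT). The tiny networks, random tensors, and loss weighting are illustrative assumptions and do not reflect the speakers' actual architectures, data, or pipeline.

```python
# Minimal Pix2Pix-style training step for paired image-to-image translation.
import torch
import torch.nn as nn

gen = nn.Sequential(                      # stand-in generator (real work uses a U-Net)
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
disc = nn.Sequential(                     # stand-in PatchGAN discriminator on (input, output) pairs
    nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

mr, ct = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)   # fake paired batch

# Discriminator step: real pairs vs. generated pairs.
fake_ct = gen(mr)
d_real = disc(torch.cat([mr, ct], dim=1))
d_fake = disc(torch.cat([mr, fake_ct.detach()], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the target CT.
d_fake = disc(torch.cat([mr, fake_ct], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake_ct, ct)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```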
Speaker Bios
Hendrik Möller is a doctoral student in computer science at the University Hospital rechts der Isar at TUM (Department of Diagnostic and Interventional Neuroradiology). He works in an EU-funded project centered on the NAKO (German National Cohort) dataset, a large MRI cohort representing the German population, developing machine learning methods to label and segment these scans with a practical orientation. Before that, he completed his Master's in Robotics, Cognition, and Intelligence at the Technical University of Munich.
Robert Graf is a PhD student affiliated with the Institute of Artificial Intelligence in Medicine at the Technical University of Munich (TUM) and the Deep-Spine group. He works on spine image translation, super-resolution, registration, and analysis.
Time & Place
Wednesday, May 21, 2025
13:00 – 14:00
Building O27, Room 441