PhD Candidate
I joined AI-Med as a PhD student in June 2019. My research focuses on developing machine learning methods for medical image analysis, in particular segmentation.
Previously, I completed my master's degree in Biomedical Computing at TUM and wrote my master's thesis at AI-Med.
E-mail: annemarie.rickmann [at] gmail.com, Annemarie.Rickmann [at] med.uni-muenchen.de
Research Interests:
- Medical Image Segmentation
- 3D Convolutional Neural Networks
If you are a student interested in writing a master's thesis on medical image segmentation, please see this open project.
Publications:
Wolf, Tom Nuno; Bongratz, Fabian; Rickmann, Anne-Marie; Pölsterl, Sebastian; Wachinger, Christian: Keep the Faith: Faithful Explanations in Convolutional Neural Networks for Case-Based Reasoning. In: AAAI Conference on Artificial Intelligence, 2024, forthcoming.
Abstract: Explaining predictions of black-box neural networks is crucial when they are applied to decision-critical tasks. Attribution maps are commonly used to identify important image regions, despite prior work showing that humans prefer explanations based on similar examples. To this end, ProtoPNet learns a set of class-representative feature vectors (prototypes) for case-based reasoning. During inference, similarities of latent features to the prototypes are linearly classified to form predictions, and attribution maps are provided to explain the similarities. In this work, we evaluate whether architectures for case-based reasoning fulfill established axioms required for faithful explanations, using ProtoPNet as an example. We show that such architectures allow the extraction of faithful explanations. However, we prove that the attribution maps used to explain the similarities violate the axioms. We propose a new procedure to extract explanations for trained ProtoPNets, named ProtoPFaith. Conceptually, these explanations are Shapley values, calculated on the similarity scores of each prototype. They make it possible to faithfully answer which prototypes are present in an unseen image and to quantify each pixel's contribution to that presence, thereby complying with all axioms. The theoretical violations of ProtoPNet manifest in our experiments on three datasets (CUB-200-2011, Stanford Dogs, RSNA) and five architectures (ConvNet, ResNet, ResNet50, WideResNet50, ResNeXt50). Our experiments show a qualitative difference between the explanations given by ProtoPNet and ProtoPFaith. Additionally, we quantify the explanations with the Area Over the Perturbation Curve, on which ProtoPFaith outperforms ProtoPNet in all experiments by a factor of more than 10³.
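The ProtoPFaith explanations above are Shapley values computed on prototype similarity scores. As a minimal, self-contained illustration of the underlying concept (not the paper's implementation, which approximates these values for high-dimensional inputs), here is the exact Shapley-value formula evaluated for a toy game; the players, contributions, and the additive value function are invented for this sketch:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a characteristic function `value`
    defined on frozensets of `players`. Exponential in len(players),
    so this is only practical for tiny toy games."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Weight of this coalition in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of p to the coalition.
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Toy "similarity score": an additive game, so the Shapley values
# recover each player's own contribution exactly.
contrib = {"a": 1.0, "b": 2.0, "c": 3.0}
v = lambda s: sum(contrib[p] for p in s)
print(shapley_values(list(contrib), v))  # {'a': 1.0, 'b': 2.0, 'c': 3.0}
```

The additive example also demonstrates the efficiency axiom: the values sum to the worth of the full coalition.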
Bongratz, Fabian; Rickmann, Anne-Marie; Wachinger, Christian: Neural deformation fields for template-based reconstruction of cortical surfaces from MRI. In: Medical Image Analysis, vol. 93, art. no. 103093, 2024, ISSN: 1361-8415.
Abstract: The reconstruction of cortical surfaces is a prerequisite for quantitative analyses of the cerebral cortex in magnetic resonance imaging (MRI). Existing segmentation-based methods separate the surface registration from the surface extraction, which is computationally inefficient and prone to distortions. We introduce Vox2Cortex-Flow (V2C-Flow), a deep mesh-deformation technique that learns a deformation field from a brain template to the cortical surfaces of an MRI scan. To this end, we present a geometric neural network that models the deformation-describing ordinary differential equation in a continuous manner. The network architecture comprises convolutional and graph-convolutional layers, which allows it to work with images and meshes at the same time. V2C-Flow is not only very fast, requiring less than two seconds to infer all four cortical surfaces, but also establishes vertex-wise correspondences to the template during reconstruction. In addition, V2C-Flow is the first approach for cortex reconstruction that models white matter and pial surfaces jointly, therefore avoiding intersections between them. Our comprehensive experiments on internal and external test data demonstrate that V2C-Flow results in cortical surfaces that are state-of-the-art in terms of accuracy. Moreover, we show that the established correspondences are more consistent than in FreeSurfer and that they can directly be utilized for cortex parcellation and group analyses of cortical thickness.
Bongratz, Fabian; Rickmann, Anne-Marie; Wachinger, Christian: Abdominal organ segmentation via deep diffeomorphic mesh deformations. In: Scientific Reports, vol. 13, no. 1, 2023.
Abstract: Abdominal organ segmentation from CT and MRI is an essential prerequisite for surgical planning and computer-aided navigation systems. It is challenging due to the high variability in the shape, size, and position of abdominal organs. Three-dimensional numeric representations of abdominal shapes with point-wise correspondence to a template are further important for quantitative and statistical analyses thereof. Recently, template-based surface extraction methods have shown promising advances for direct mesh reconstruction from volumetric scans. However, the generalization of these deep learning-based approaches to different organs and datasets, a crucial property for deployment in clinical environments, has not yet been assessed. We close this gap and employ template-based mesh reconstruction methods for joint liver, kidney, pancreas, and spleen segmentation. Our experiments on manually annotated CT and MRI data reveal limited generalization capabilities of previous methods to organs of different geometry and weak performance on small datasets. We alleviate these issues with a novel deep diffeomorphic mesh-deformation architecture and an improved training scheme. The resulting method, UNetFlow, generalizes well to all four organs and can be easily fine-tuned on new data. Moreover, we propose a simple registration-based post-processing that aligns voxel and mesh outputs to boost segmentation accuracy.
Rickmann, Anne-Marie; Bongratz, Fabian; Wachinger, Christian: Vertex Correspondence in Cortical Surface Reconstruction. In: Medical Image Computing and Computer Assisted Intervention -- MICCAI 2023, vol. 14227, Springer Nature Switzerland, Cham, 2023, ISBN: 978-3-031-43993-3.
Abstract: Mesh-based cortical surface reconstruction is a fundamental task in neuroimaging that enables highly accurate measurements of brain morphology. Vertex correspondence between a patient's cortical mesh and a group template is necessary for comparing cortical thickness and other measures at the vertex level. However, post-processing methods for generating vertex correspondence are time-consuming and involve registering and remeshing a patient's surfaces to an atlas. Recent deep learning methods for cortex reconstruction have neither been optimized for generating vertex correspondence nor have they analyzed the quality of such correspondence. In this work, we propose to learn vertex correspondence by optimizing an L1 loss on registered surfaces instead of the commonly used Chamfer loss. This results in improved inter- and intra-subject correspondence suitable for direct group comparison and atlas-based parcellation. We demonstrate that state-of-the-art methods provide insufficient correspondence for mapping parcellations, highlighting the importance of optimizing for accurate vertex correspondence.
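The key distinction in the paper above, between the Chamfer loss and a per-vertex L1 loss, can be illustrated with a tiny sketch. The 1-D "meshes" below are invented toy data: the Chamfer distance only matches nearest neighbours and is therefore blind to vertex identity, while the L1 loss compares vertices index by index and penalises broken correspondence:

```python
def chamfer(a, b):
    """Symmetric Chamfer distance between two point sets (here: 1-D
    points). Each point is matched to its nearest neighbour in the
    other set, ignoring vertex identity entirely."""
    d_ab = sum(min(abs(x - y) for y in b) for x in a) / len(a)
    d_ba = sum(min(abs(x - y) for y in a) for x in b) / len(b)
    return d_ab + d_ba

def l1(a, b):
    """Per-vertex L1 loss: compares vertices with the same index,
    so it is sensitive to correspondence."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

pred = [0.0, 1.0, 2.0]
target = [2.0, 1.0, 0.0]   # same point set, permuted vertex order

print(chamfer(pred, target))  # 0.0 -> Chamfer cannot tell them apart
print(l1(pred, target))       # > 0 -> L1 penalises the permutation
```

Optimizing Chamfer alone can therefore produce accurate surfaces with scrambled vertex identities, which is exactly the failure mode the paper addresses.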
Rickmann, Anne-Marie; Xu, Murong; Wolf, Tom Nuno; Kovalenko, Oksana; Wachinger, Christian: HALOS: Hallucination-free Organ Segmentation after Organ Resection Surgery. In: Information Processing in Medical Imaging (IPMI), 2023.
Abstract: The wide range of research in deep learning-based medical image segmentation has pushed the boundaries in a multitude of applications. A clinically relevant problem that has received less attention is the handling of scans with irregular anatomy, e.g., after organ resection. State-of-the-art segmentation models often produce organ hallucinations, i.e., false-positive predictions of organs, which cannot be alleviated by oversampling or post-processing. Motivated by the increasing need to develop robust deep learning models, we propose HALOS for abdominal organ segmentation in MR images that handles cases after organ resection surgery. To this end, we combine missing-organ classification and multi-organ segmentation into a multi-task model, yielding a classification-assisted segmentation pipeline. The segmentation network learns to incorporate knowledge about organ existence via feature fusion modules. Extensive experiments on a small labeled test set and large-scale UK Biobank data demonstrate the effectiveness of our approach in terms of higher segmentation Dice scores and a near-zero false-positive prediction rate.
Rickmann, Anne-Marie; Bongratz, Fabian; Pölsterl, Sebastian; Sarasua, Ignacio; Wachinger, Christian: Joint Reconstruction and Parcellation of Cortical Surfaces. In: International Workshop on Machine Learning in Clinical Neuroimaging, pp. 3–12, Springer, 2022.
Rickmann, Anne-Marie; Senapati, Jyotirmay; Kovalenko, Oksana; Peters, Annette; Bamberg, Fabian; Wachinger, Christian: AbdomenNet: deep neural network for abdominal organ segmentation in epidemiologic imaging studies. In: BMC Medical Imaging, vol. 22, no. 168, 2022.
Abstract: Background: Whole-body imaging has recently been added to large-scale epidemiological studies, providing novel opportunities for investigating abdominal organs. However, these organs must first be segmented, which is time-consuming, particularly at such a large scale. Methods: We introduce AbdomenNet, a deep neural network for the automated segmentation of abdominal organs on two-point Dixon MRI scans. A pre-processing pipeline makes it possible to process MRI scans from different imaging studies, namely the German National Cohort, UK Biobank, and Kohorte im Raum Augsburg. We chose a total of 61 MRI scans across the three studies to train an ensemble of segmentation networks, which segment eight abdominal organs. Our network presents a novel combination of octave convolutions and squeeze-and-excitation layers, as well as training with stochastic weight averaging. Results: Our experiments demonstrate that it is beneficial to combine data from different imaging studies to train deep neural networks, in contrast to training separate networks. Combining the water and opposed-phase contrasts of the Dixon sequence as input channels yields the highest segmentation accuracy, compared to single-contrast inputs. The mean Dice similarity coefficient is above 0.9 for the larger organs (liver, spleen, and kidneys), and 0.71 and 0.74 for the gallbladder and pancreas, respectively. Conclusions: Our fully automated pipeline provides high-quality segmentations of abdominal organs across population studies. In contrast, a network that is trained on only a single dataset does not generalize well to other datasets.
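Several of the abstracts above report the Dice similarity coefficient as their segmentation metric. For reference, here is a minimal sketch of that metric on flat binary masks; the two example masks are invented toy data:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks given as
    flat sequences of 0/1: 2|A ∩ B| / (|A| + |B|). By convention,
    two empty masks are treated as a perfect match."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

mask_a = [1, 1, 0, 0, 1]
mask_b = [1, 0, 0, 1, 1]
print(dice(mask_a, mask_b))  # 2*2 / (3+3) = 0.666...
```

A Dice of 1.0 means a perfect overlap, 0.0 none; scores above 0.9, as reported for the larger organs, indicate very close agreement with the manual annotation.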
Bongratz, Fabian; Rickmann, Anne-Marie; Pölsterl, Sebastian; Wachinger, Christian: Vox2Cortex: Fast Explicit Reconstruction of Cortical Surfaces from 3D MRI Scans with Geometric Deep Neural Networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
Abstract: The reconstruction of cortical surfaces from brain magnetic resonance imaging (MRI) scans is essential for quantitative analyses of cortical thickness and sulcal morphology. Although traditional and deep learning-based algorithmic pipelines exist for this purpose, they have two major drawbacks: lengthy runtimes of multiple hours (traditional) or intricate post-processing, such as mesh extraction and topology correction (deep learning-based). In this work, we address both of these issues and propose Vox2Cortex, a deep learning-based algorithm that directly yields topologically correct, three-dimensional meshes of the boundaries of the cortex. Vox2Cortex leverages convolutional and graph convolutional neural networks to deform an initial template to the densely folded geometry of the cortex represented by an input MRI scan. We show in extensive experiments on three brain MRI datasets that our meshes are as accurate as the ones reconstructed by state-of-the-art methods in the field, without the need for time- and resource-intensive post-processing. To accurately reconstruct the tightly folded cortex, we work with meshes containing about 168,000 vertices at test time, scaling deep explicit reconstruction methods to a new level.
Gröger, Fabian; Rickmann, Anne-Marie; Wachinger, Christian: STRUDEL: Self-Training with Uncertainty Dependent Label Refinement across Domains. In: Machine Learning in Medical Imaging (MLMI), 2021.
Abstract: We propose an unsupervised domain adaptation (UDA) approach for white matter hyperintensity (WMH) segmentation, which uses Self-TRaining with Uncertainty DEpendent Label refinement (STRUDEL). Self-training has recently been introduced as a highly effective method for UDA, which is based on self-generated pseudo labels. However, pseudo labels can be very noisy and therefore deteriorate model performance. We propose to predict the uncertainty of pseudo labels and integrate it in the training process with an uncertainty-guided loss function to highlight labels with high certainty. STRUDEL is further improved by incorporating into the pseudo label generation the segmentation output of an existing method that has shown high robustness for WMH segmentation. In our experiments, we evaluate STRUDEL with a standard U-Net and a modified network with a larger receptive field. Our results on WMH segmentation across datasets demonstrate the significant improvement of STRUDEL with respect to standard self-training.
Özgün, Sinan; Rickmann, Anne-Marie; Roy, Abhijit Guha; Wachinger, Christian: Importance Driven Continual Learning for Segmentation Across Domains. In: Liu, Mingxia; Yan, Pingkun; Lian, Chunfeng; Cao, Xiaohuan (Ed.): Machine Learning in Medical Imaging, pp. 423–433, Springer International Publishing, Cham, 2020, ISBN: 978-3-030-59861-7.
Abstract: The ability of neural networks to continuously learn and adapt to new tasks while retaining prior knowledge is crucial for many applications. However, current neural networks tend to forget previously learned tasks when trained on new ones, i.e., they suffer from Catastrophic Forgetting (CF). The objective of Continual Learning (CL) is to alleviate this problem, which is particularly relevant for medical applications, where it may not be feasible to store and access previously used sensitive patient data. In this work, we propose a Continual Learning approach for brain segmentation, where a single network is consecutively trained on samples from different domains. We build upon an importance-driven approach and adapt it for medical image segmentation. In particular, we introduce a learning rate regularization to prevent the loss of the network's knowledge. Our results demonstrate that directly restricting the adaptation of important network parameters clearly reduces Catastrophic Forgetting for segmentation across domains. Our code is publicly available at https://github.com/ai-med/MAS-LR.
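The learning rate regularization described above can be sketched in a few lines: each parameter's step size is damped by an importance score accumulated on earlier domains, so important parameters change less. The 1/(1 + ω) damping rule and the toy numbers below are illustrative assumptions, not the exact formulation from the paper (see the linked MAS-LR repository for the actual implementation):

```python
def mas_lr_step(params, grads, importance, base_lr=0.1):
    """One plain-SGD step in which each parameter's learning rate is
    damped by its accumulated importance score omega, so parameters
    that mattered for earlier domains adapt less. The 1/(1 + omega)
    damping is an illustrative choice for this sketch."""
    return [p - (base_lr / (1.0 + omega)) * g
            for p, g, omega in zip(params, grads, importance)]

# Two parameters with identical gradients: the "important" one moves less.
print(mas_lr_step([1.0, 1.0], [1.0, 1.0], importance=[0.0, 9.0]))
# -> [0.9, 0.99]
```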
Rickmann, Anne-Marie; Roy, Abhijit Guha; Sarasua, Ignacio; Wachinger, Christian: Recalibrating 3D ConvNets with Project & Excite. In: IEEE Transactions on Medical Imaging, 2020.
Rickmann, Anne-Marie; Roy, Abhijit Guha; Sarasua, Ignacio; Navab, Nassir; Wachinger, Christian: 'Project & Excite' Modules for Segmentation of Volumetric Medical Scans. In: Medical Image Computing and Computer Aided Intervention, Springer, 2019.
Abstract: Fully Convolutional Neural Networks (F-CNNs) achieve state-of-the-art performance for image segmentation in medical imaging. Recently, squeeze and excitation (SE) modules and variations thereof have been introduced to recalibrate feature maps channel- and spatial-wise, which can boost performance while only minimally increasing model complexity. So far, the development of SE has focused on 2D images. In this paper, we propose 'Project & Excite' (PE) modules that build upon the ideas of SE and extend them to operate on 3D volumetric images. 'Project & Excite' does not perform global average pooling, but squeezes feature maps along different slices of a tensor separately to retain more spatial information that is subsequently used in the excitation step. We demonstrate that PE modules can be easily integrated into 3D U-Net, boosting performance by 5% Dice points, while only increasing the model complexity by 2%. We evaluate the PE module on two challenging tasks: whole-brain segmentation of MRI scans and whole-body segmentation of CT scans.
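The projection idea above, squeezing a 3D feature map along each spatial axis instead of collapsing it to a single global average, can be sketched on a plain nested-list volume. This is only the projection step on invented toy data; the actual PE module additionally includes a learned excitation step and operates on multi-channel feature maps:

```python
def axis_projections(vol):
    """Mean-project a D x H x W volume (nested lists) along each of its
    spatial axes. Unlike global average pooling, which would return a
    single scalar, this keeps one averaged profile per axis."""
    D, H, W = len(vol), len(vol[0]), len(vol[0][0])
    proj_d = [sum(vol[d][h][w] for h in range(H) for w in range(W)) / (H * W)
              for d in range(D)]
    proj_h = [sum(vol[d][h][w] for d in range(D) for w in range(W)) / (D * W)
              for h in range(H)]
    proj_w = [sum(vol[d][h][w] for d in range(D) for h in range(H)) / (D * H)
              for w in range(W)]
    return proj_d, proj_h, proj_w

vol = [[[1, 2], [3, 4]],
       [[5, 6], [7, 8]]]   # 2 x 2 x 2 toy volume
print(axis_projections(vol))  # ([2.5, 6.5], [3.5, 5.5], [4.0, 5.0])
```

Global average pooling would reduce this volume to the single value 4.5; the three per-axis profiles retain where along each dimension the activation mass sits, which is the spatial information PE feeds into its excitation step.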