PhD Candidate
I joined AI-Med as a PhD student in June 2019. My research focuses on developing Machine Learning solutions for Medical Image Analysis, in particular segmentation.
Previously, I completed my master's degree in Biomedical Computing at TUM and worked on my master's thesis at AI-Med.
E-mail: annemarie.rickmann [at] gmail.com, Annemarie.Rickmann [at] med.uni-muenchen.de
Research Interests:
- Medical Image Segmentation
- 3D Convolutional Neural Networks
For students interested in writing a master's thesis on medical image segmentation, please see this open project.
Publications:
Rickmann, Anne-Marie; Xu, Murong; Wolf, Tom Nuno; Kovalenko, Oksana; Wachinger, Christian: HALOS: Hallucination-free Organ Segmentation after Organ Resection Surgery. In: IPMI: International Conference on Information Processing in Medical Imaging, 2023 (forthcoming).
Abstract: The wide range of research in deep learning-based medical image segmentation has pushed the boundaries in a multitude of applications. A clinically relevant problem that has received less attention is the handling of scans with irregular anatomy, e.g., after organ resection. State-of-the-art segmentation models often produce organ hallucinations, i.e., false-positive predictions of organs, which cannot be alleviated by oversampling or post-processing. Motivated by the increasing need for robust deep learning models, we propose HALOS for abdominal organ segmentation in MR images that handles cases after organ resection surgery. To this end, we combine missing-organ classification and multi-organ segmentation into a multi-task model, yielding a classification-assisted segmentation pipeline. The segmentation network learns to incorporate knowledge about organ existence via feature fusion modules. Extensive experiments on a small labeled test set and large-scale UK Biobank data demonstrate the effectiveness of our approach in terms of higher segmentation Dice scores and a near-zero false-positive prediction rate.
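As a rough illustration of the classification-assisted fusion idea described above, the sketch below (PyTorch) gates segmentation feature maps with a per-organ existence probability vector. The module name, gating design, and shapes are illustrative assumptions, not the actual HALOS architecture.

```python
# Illustrative sketch only: a hypothetical fusion block that modulates segmentation
# features with organ-existence probabilities; not the actual HALOS fusion module.
import torch
import torch.nn as nn

class ExistenceFusion(nn.Module):
    def __init__(self, num_organs: int, num_channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(num_organs, num_channels),
            nn.Sigmoid(),  # per-channel gate in [0, 1]
        )

    def forward(self, feats: torch.Tensor, organ_probs: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, D, H, W) segmentation features
        # organ_probs: (B, num_organs) predicted organ-existence probabilities
        g = self.gate(organ_probs)                             # (B, C)
        return feats * g.view(g.size(0), g.size(1), 1, 1, 1)   # broadcast over the volume

# toy usage
fusion = ExistenceFusion(num_organs=8, num_channels=32)
feats = torch.randn(2, 32, 16, 32, 32)
probs = torch.rand(2, 8)
print(fusion(feats, probs).shape)  # torch.Size([2, 32, 16, 32, 32])
```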
Rickmann, Anne-Marie; Bongratz, Fabian; Pölsterl, Sebastian; Sarasua, Ignacio; Wachinger, Christian: Joint Reconstruction and Parcellation of Cortical Surfaces. In: International Workshop on Machine Learning in Clinical Neuroimaging, pp. 3–12, Springer, 2022.
Rickmann, Anne-Marie; Senapati, Jyotirmay; Kovalenko, Oksana; Peters, Annette; Bamberg, Fabian; Wachinger, Christian: AbdomenNet: deep neural network for abdominal organ segmentation in epidemiologic imaging studies. In: BMC Medical Imaging, vol. 22, no. 168, 2022.
Abstract: Background: Whole-body imaging has recently been added to large-scale epidemiological studies, providing novel opportunities for investigating abdominal organs. However, the segmentation of these organs is required beforehand, which is time consuming, particularly on such a large scale. Methods: We introduce AbdomenNet, a deep neural network for the automated segmentation of abdominal organs on two-point Dixon MRI scans. A pre-processing pipeline enables processing of MRI scans from different imaging studies, namely the German National Cohort, UK Biobank, and Kohorte im Raum Augsburg. We chose a total of 61 MRI scans across the three studies for training an ensemble of segmentation networks, which segment eight abdominal organs. Our network presents a novel combination of octave convolutions and squeeze-and-excitation layers, as well as training with stochastic weight averaging. Results: Our experiments demonstrate that it is beneficial to combine data from different imaging studies to train deep neural networks, in contrast to training separate networks. Combining the water and opposed-phase contrasts of the Dixon sequence as input channels yields the highest segmentation accuracy, compared to single-contrast inputs. The mean Dice similarity coefficient is above 0.9 for the larger organs (liver, spleen, and kidneys), and 0.71 and 0.74 for the gallbladder and pancreas, respectively. Conclusions: Our fully automated pipeline provides high-quality segmentations of abdominal organs across population studies. In contrast, a network that is only trained on a single dataset does not generalize well to other datasets.
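Two implementation details mentioned in the abstract, the two-channel Dixon input and stochastic weight averaging, can be sketched with standard PyTorch utilities. The snippet below is a minimal, hedged illustration: the single convolution is a stand-in for the actual AbdomenNet architecture, and all hyperparameters are placeholders.

```python
# Minimal sketch: stack the Dixon water and opposed-phase volumes as two input
# channels and keep a stochastic-weight-averaging (SWA) copy of the weights.
# The Conv3d layer stands in for the real network; values are placeholders.
import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR

model = nn.Conv3d(in_channels=2, out_channels=9, kernel_size=3, padding=1)  # 8 organs + background
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
swa_model = AveragedModel(model)            # running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.005)

water = torch.randn(1, 1, 32, 64, 64)       # water-contrast volume
opposed = torch.randn(1, 1, 32, 64, 64)     # opposed-phase-contrast volume
x = torch.cat([water, opposed], dim=1)      # (B, 2, D, H, W) two-channel input

loss = model(x).mean()                      # placeholder loss
loss.backward()
optimizer.step()
swa_model.update_parameters(model)          # accumulate weights into the SWA average
swa_scheduler.step()
```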
Bongratz, Fabian; Rickmann, Anne-Marie; Pölsterl, Sebastian; Wachinger, Christian: Vox2Cortex: Fast Explicit Reconstruction of Cortical Surfaces from 3D MRI Scans with Geometric Deep Neural Networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
Abstract: The reconstruction of cortical surfaces from brain magnetic resonance imaging (MRI) scans is essential for quantitative analyses of cortical thickness and sulcal morphology. Although traditional and deep learning-based algorithmic pipelines exist for this purpose, they have two major drawbacks: lengthy runtimes of multiple hours (traditional) or intricate post-processing, such as mesh extraction and topology correction (deep learning-based). In this work, we address both of these issues and propose Vox2Cortex, a deep learning-based algorithm that directly yields topologically correct, three-dimensional meshes of the boundaries of the cortex. Vox2Cortex leverages convolutional and graph convolutional neural networks to deform an initial template to the densely folded geometry of the cortex represented by an input MRI scan. We show in extensive experiments on three brain MRI datasets that our meshes are as accurate as the ones reconstructed by state-of-the-art methods in the field, without the need for time- and resource-intensive post-processing. To accurately reconstruct the tightly folded cortex, we work with meshes containing about 168,000 vertices at test time, scaling deep explicit reconstruction methods to a new level.
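To make the template-deformation idea concrete, here is a toy sketch of a graph layer that predicts per-vertex displacements added to template coordinates. It is not the Vox2Cortex architecture; the layer, feature size, and adjacency handling are simplified assumptions.

```python
# Toy illustration of mesh deformation: a simple graph layer predicts per-vertex
# displacements for a template mesh. Not the actual Vox2Cortex model.
import torch
import torch.nn as nn

class VertexDeformer(nn.Module):
    def __init__(self, feat_dim: int = 16):
        super().__init__()
        self.lin_self = nn.Linear(3, feat_dim)    # transform a vertex's own coordinates
        self.lin_neigh = nn.Linear(3, feat_dim)   # transform aggregated neighbor coordinates
        self.out = nn.Linear(feat_dim, 3)         # per-vertex displacement

    def forward(self, verts: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # verts: (V, 3) template coordinates, adj: (V, V) row-normalized adjacency
        h = torch.relu(self.lin_self(verts) + self.lin_neigh(adj @ verts))
        return verts + self.out(h)                # deformed vertex positions

# tiny example: a tetrahedron template with a fully connected, row-normalized adjacency
verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
adj = (torch.ones(4, 4) - torch.eye(4)) / 3.0
print(VertexDeformer()(verts, adj).shape)  # torch.Size([4, 3])
```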
Gröger, Fabian; Rickmann, Anne-Marie; Wachinger, Christian: STRUDEL: Self-Training with Uncertainty Dependent Label Refinement across Domains. In: Machine Learning in Medical Imaging (MLMI), 2021.
Abstract: We propose an unsupervised domain adaptation (UDA) approach for white matter hyperintensity (WMH) segmentation, which uses Self-TRaining with Uncertainty DEpendent Label refinement (STRUDEL). Self-training has recently been introduced as a highly effective method for UDA, which is based on self-generated pseudo labels. However, pseudo labels can be very noisy and therefore deteriorate model performance. We propose to predict the uncertainty of pseudo labels and integrate it in the training process with an uncertainty-guided loss function to highlight labels with high certainty. STRUDEL is further improved by incorporating into the pseudo-label generation the segmentation output of an existing method that has shown high robustness for WMH segmentation. In our experiments, we evaluate STRUDEL with a standard U-Net and a modified network with a higher receptive field. Our results on WMH segmentation across datasets demonstrate the significant improvement of STRUDEL with respect to standard self-training.
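One way to read the uncertainty-guided loss is as a per-voxel weighting of the pseudo-label loss, where the weight shrinks as uncertainty grows. The sketch below assumes a per-voxel uncertainty map in [0, 1]; the exact weighting used in STRUDEL may differ.

```python
# Sketch of an uncertainty-weighted pseudo-label loss (assumed form, not the exact
# STRUDEL loss): voxels with high uncertainty contribute less to the objective.
import torch
import torch.nn.functional as F

def uncertainty_weighted_ce(logits, pseudo_labels, uncertainty):
    # logits: (B, C, D, H, W); pseudo_labels: (B, D, H, W) int64;
    # uncertainty: (B, D, H, W) in [0, 1], higher = less trustworthy pseudo label
    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")  # per-voxel loss
    weights = 1.0 - uncertainty                                    # emphasize certain voxels
    return (weights * ce).sum() / weights.sum().clamp_min(1e-8)

logits = torch.randn(1, 2, 8, 16, 16)
labels = torch.randint(0, 2, (1, 8, 16, 16))
unc = torch.rand(1, 8, 16, 16)
print(uncertainty_weighted_ce(logits, labels, unc))
```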
Özgün, Sinan; Rickmann, Anne-Marie; Roy, Abhijit Guha; Wachinger, Christian: Importance Driven Continual Learning for Segmentation Across Domains. In: Liu, Mingxia; Yan, Pingkun; Lian, Chunfeng; Cao, Xiaohuan (Ed.): Machine Learning in Medical Imaging, pp. 423–433, Springer International Publishing, Cham, 2020, ISBN: 978-3-030-59861-7.
Abstract: The ability of neural networks to continuously learn and adapt to new tasks while retaining prior knowledge is crucial for many applications. However, current neural networks tend to forget previously learned tasks when trained on new ones, i.e., they suffer from Catastrophic Forgetting (CF). The objective of Continual Learning (CL) is to alleviate this problem, which is particularly relevant for medical applications, where it may not be feasible to store and access previously used sensitive patient data. In this work, we propose a Continual Learning approach for brain segmentation, where a single network is consecutively trained on samples from different domains. We build upon an importance-driven approach and adapt it for medical image segmentation. In particular, we introduce a learning-rate regularization to prevent the loss of the network's knowledge. Our results demonstrate that directly restricting the adaptation of important network parameters clearly reduces Catastrophic Forgetting for segmentation across domains. Our code is publicly available at https://github.com/ai-med/MAS-LR.
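The core mechanism, restricting the adaptation of important parameters, can be sketched as a per-parameter scaling of gradients before the optimizer step. In the sketch below the importance scores are random placeholders standing in for MAS-style estimates; the exact MAS-LR formulation is in the paper and repository.

```python
# Conceptual sketch of importance-driven learning-rate regularization: scale down the
# gradients of parameters that are important for previous tasks. Importance scores
# here are random placeholders, not real MAS estimates.
import torch
import torch.nn as nn

model = nn.Linear(10, 5)                     # stand-in for a segmentation network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# hypothetical importance scores, one per parameter element (higher = more important)
importance = {n: torch.rand_like(p) for n, p in model.named_parameters()}

x, y = torch.randn(4, 10), torch.randn(4, 5)
loss = nn.functional.mse_loss(model(x), y)   # placeholder loss for the new domain
loss.backward()

with torch.no_grad():
    for name, param in model.named_parameters():
        if param.grad is not None:
            # shrink updates for important parameters to preserve old-task knowledge
            param.grad /= (1.0 + importance[name])
optimizer.step()
```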
Rickmann, Anne-Marie; Roy, Abhijit Guha; Sarasua, Ignacio; Wachinger, Christian: Recalibrating 3D ConvNets with Project & Excite. In: IEEE Transactions on Medical Imaging, 2020.
Rickmann, Anne-Marie; Roy, Abhijit Guha; Sarasua, Ignacio; Navab, Nassir; Wachinger, Christian: 'Project & Excite' Modules for Segmentation of Volumetric Medical Scans. In: Medical Image Computing and Computer Aided Intervention, Springer, 2019.
Abstract: Fully Convolutional Neural Networks (F-CNNs) achieve state-of-the-art performance for image segmentation in medical imaging. Recently, squeeze-and-excitation (SE) modules and variations thereof have been introduced to recalibrate feature maps channel- and spatial-wise, which can boost performance while only minimally increasing model complexity. So far, the development of SE has focused on 2D images. In this paper, we propose 'Project & Excite' (PE) modules that build upon the ideas of SE and extend them to operate on 3D volumetric images. 'Project & Excite' does not perform global average pooling but squeezes feature maps along different slices of a tensor separately to retain more spatial information, which is subsequently used in the excitation step. We demonstrate that PE modules can easily be integrated into 3D U-Net, boosting performance by 5% Dice points while increasing model complexity by only 2%. We evaluate the PE module on two challenging tasks: whole-brain segmentation of MRI scans and whole-body segmentation of CT scans.
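The projection-instead-of-global-pooling idea can be written down compactly: average the volume along each spatial axis separately, combine the resulting projections, and pass them through a small excitation bottleneck that rescales the input. The sketch below is a simplified reading of that description; details such as the reduction ratio are illustrative, and the authors' published code is the reference implementation.

```python
# Simplified Project & Excite sketch: project along each spatial axis (instead of
# global average pooling), then excite with a 1x1x1 bottleneck and recalibrate.
import torch
import torch.nn as nn

class ProjectExcite(nn.Module):
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.excite = nn.Sequential(
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W)
        z_d = x.mean(dim=(3, 4), keepdim=True)   # (B, C, D, 1, 1) projection along depth
        z_h = x.mean(dim=(2, 4), keepdim=True)   # (B, C, 1, H, 1) projection along height
        z_w = x.mean(dim=(2, 3), keepdim=True)   # (B, C, 1, 1, W) projection along width
        z = z_d + z_h + z_w                      # broadcasts back to (B, C, D, H, W)
        return x * self.excite(z)                # recalibrated feature map

pe = ProjectExcite(channels=16)
print(pe(torch.randn(1, 16, 8, 16, 16)).shape)  # torch.Size([1, 16, 8, 16, 16])
```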