About me:
I am a PhD student at AI-Med at Ludwig-Maximilians-Universität in Munich. My research focuses on developing Machine Learning solutions for Medical Imaging and Computer Vision. Previously, I completed my master's degree in Biomedical Computing at TU München.
E-mail: ignacio [at] ai-med.de
Profiles: LinkedIn
Research interests:
- Machine Learning for Medical Imaging
- Computer Vision applications
- Generative Adversarial Networks
- Neuroimaging and Deep Brain Stimulation techniques
Publications:
Rickmann, Anne-Marie; Bongratz, Fabian; Pölsterl, Sebastian; Sarasua, Ignacio; Wachinger, Christian Joint Reconstruction and Parcellation of Cortical Surfaces Proceedings Article In: International Workshop on Machine Learning in Clinical Neuroimaging, pp. 3–12, Springer, 2022.
Sarasua, Ignacio; Pölsterl, Sebastian; Wachinger, Christian Hippocampal Representations for Deep Learning on Alzheimer's Disease Journal Article In: Scientific Reports, vol. 12, no. 1, pp. 1-13, 2022, ISSN: 2045-2322. Deep learning offers a powerful approach for analyzing hippocampal changes in Alzheimer's disease (AD) without relying on handcrafted features. Nevertheless, an input format needs to be selected to pass the image information to the neural network, which has wide ramifications for the analysis, but has not been evaluated yet. We compare five hippocampal representations (and their respective tailored network architectures) that span from raw images to geometric representations like meshes and point clouds. We performed a thorough evaluation for the prediction of AD diagnosis and time-to-dementia prediction with experiments on an independent test dataset. In addition, we evaluated the ease of interpretability for each representation-network pair.
Sarasua, Ignacio; Pölsterl, Sebastian; Wachinger, Christian TransforMesh: A Transformer Network for Longitudinal Modeling of Anatomical Meshes Conference Machine Learning in Medical Imaging (MLMI), 2021. The longitudinal modeling of neuroanatomical changes related to Alzheimer's disease (AD) is crucial for studying the progression of the disease. To this end, we introduce TransforMesh, a spatio-temporal network based on transformers that models longitudinal shape changes on 3D anatomical meshes. While transformer and mesh networks have recently shown impressive performances in natural language processing and computer vision, their application to medical image analysis has been very limited. To the best of our knowledge, this is the first work that combines transformer and mesh networks. Our results show that TransforMesh can model shape trajectories better than other baseline architectures that do not capture temporal dependencies. Moreover, we also explore the capabilities of TransforMesh in detecting structural anomalies of the hippocampus in patients developing AD.
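The core mechanism that lets a transformer capture temporal dependencies across a shape trajectory is self-attention over the sequence of per-visit embeddings. The sketch below is a minimal, weight-free NumPy illustration of scaled dot-product self-attention, not the TransforMesh architecture itself; the variable names and the omission of learned projections are simplifications for illustration.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of embeddings.

    x: (T, E) array -- T time points, each an embedding of size E.
    Learned query/key/value projections are omitted; the inputs act as
    queries, keys, and values directly.
    """
    e = x.shape[1]
    scores = x @ x.T / np.sqrt(e)                 # (T, T) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over time steps
    return attn @ x                               # (T, E) temporally mixed
```

Each output embedding is a convex combination of all time points, which is how attention can relate an early scan to a later one regardless of their distance in the sequence.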
Sarasua, Ignacio; Lee, Jonwong; Wachinger, Christian Geometric Deep Learning on Anatomical Meshes for the Prediction of Alzheimer's Disease Conference ISBI: International Symposium on Biomedical Imaging 2021, 2021. Geometric deep learning can find representations that are optimal for a given task and therefore improve the performance over pre-defined representations. While current work has mainly focused on point representations, meshes also contain connectivity information and are therefore a more comprehensive characterization of the underlying anatomical surface. In this work, we evaluate four recent geometric deep learning approaches that operate on mesh representations. These approaches can be grouped into template-free and template-based approaches, where the template-based methods need a more elaborate pre-processing step with the definition of a common reference template and correspondences. We compare the different networks for the prediction of Alzheimer's disease based on the meshes of the hippocampus. Our results show advantages for template-based methods in terms of accuracy, number of learnable parameters, and training speed. While the template creation may be limiting for some applications, neuroimaging has a long history of building templates with automated tools readily available. Overall, working with meshes is more involved than working with simplistic point clouds, but they also offer new avenues for designing geometric deep learning architectures.
Rickmann, Anne-Marie; Roy, Abhijit Guha; Sarasua, Ignacio; Wachinger, Christian Recalibrating 3D ConvNets with Project & Excite Journal Article In: IEEE Transactions on Medical Imaging, 2020.
Gutierrez-Becker, Benjamin; Sarasua, Ignacio; Wachinger, Christian Discriminative and Generative Models for Anatomical Shape Analysis on Point Clouds with Deep Neural Networks Journal Article In: Medical Image Analysis, vol. 67, pp. 101852, 2020. We introduce deep neural networks for the analysis of anatomical shapes that learn a low-dimensional shape representation from the given task, instead of relying on hand-engineered representations. Our framework is modular and consists of several computing blocks that perform fundamental shape processing tasks. The networks operate on unordered point clouds and provide invariance to similarity transformations, avoiding the need to identify point correspondences between shapes. Based on the framework, we assemble a discriminative model for disease classification and age regression, as well as a generative model for the accurate reconstruction of shapes. In particular, we propose a conditional generative model, where the condition vector provides a mechanism to control the generative process. For instance, it enables assessing shape variations specific to a particular diagnosis, when passing it as side information. Next to working on single shapes, we introduce an extension for the joint analysis of multiple anatomical structures, where the simultaneous modeling of multiple structures can lead to a more compact encoding and a better understanding of disorders. We demonstrate the advantages of our framework in comprehensive experiments on real and synthetic data.
The key insights are that (i) learning a shape representation specific to the given task yields higher performance than alternative shape descriptors, (ii) multi-structure analysis is both more efficient and more accurate than single-structure analysis, and (iii) point clouds generated by our model capture morphological differences associated with Alzheimer's disease, to the point that they can be used to train a discriminative model for disease classification. Our framework naturally scales to the analysis of large datasets, giving it the potential to learn characteristic variations in large populations.
Sarasua, Ignacio; Pölsterl, Sebastian; Wachinger, Christian Recalibration of Neural Networks for Point Cloud Analysis Proceedings Article In: 3DV, 2020. Spatial and channel re-calibration have become powerful concepts in computer vision. Their ability to capture long-range dependencies is especially useful for those networks that extract local features, such as CNNs. While re-calibration has been widely studied for image analysis, it has not yet been used on shape representations. In this work, we introduce re-calibration modules on deep neural networks for 3D point clouds. We propose a set of re-calibration blocks that extend Squeeze and Excitation blocks and that can be added to any network for 3D point cloud analysis that builds a global descriptor by hierarchically combining features from multiple local neighborhoods. We run two sets of experiments to validate our approach. First, we demonstrate the benefit and versatility of our proposed modules by incorporating them into three state-of-the-art networks for 3D point cloud analysis: PointNet++, DGCNN, and RSCNN. We evaluate each network on two tasks: object classification on ModelNet40, and object part segmentation on ShapeNet. Our results show an improvement of up to 1% in accuracy for ModelNet40 compared to the baseline method. In the second set of experiments, we investigate the benefits of re-calibration blocks on Alzheimer's Disease (AD) diagnosis. Our results demonstrate that our proposed methods yield a 2% increase in accuracy for diagnosing AD and a 2.3% increase in concordance index for predicting AD onset with time-to-event analysis. Concluding, re-calibration improves the accuracy of point cloud architectures, while only minimally increasing the number of parameters.
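The Squeeze-and-Excitation idea that these blocks extend can be sketched in a few lines: aggregate each channel into a global descriptor, pass it through a small bottleneck, and gate the channels with a sigmoid. The NumPy sketch below is illustrative only; the function name, the use of per-point features, and the fixed bottleneck weights are assumptions, not the paper's implementation.

```python
import numpy as np

def squeeze_excite(features, w1, w2):
    """Channel recalibration in the spirit of Squeeze-and-Excitation.

    features: (N, C) array of per-point features (N points, C channels).
    w1: (C, C // r) reduction weights; w2: (C // r, C) expansion weights,
    where r is the bottleneck reduction ratio.
    """
    # Squeeze: aggregate each channel over all points into one descriptor.
    z = features.mean(axis=0)                      # (C,)
    # Excite: bottleneck MLP followed by a sigmoid gate.
    s = np.maximum(z @ w1, 0.0)                    # ReLU
    gate = 1.0 / (1.0 + np.exp(-(s @ w2)))         # (C,) values in (0, 1)
    # Recalibrate: scale every channel by its learned gate.
    return features * gate
```

Because the gate is computed from a global pooling of all points, the rescaling injects long-range context into features that were extracted from local neighborhoods, while adding only the small bottleneck's worth of parameters.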
Pölsterl, Sebastian; Gutiérrez-Becker, Benjamín; Sarasua, Ignacio; Roy, Abhijit Guha; Wachinger, Christian An AutoML Approach for the Prediction of Fluid Intelligence from MRI-Derived Features Proceedings Article In: Adolescent Brain Cognitive Development Neurocognitive Prediction Challenge (ABCD-NP-Challenge), 2019. We propose an AutoML approach for the prediction of fluid intelligence from T1-weighted magnetic resonance images. We extracted 122 features from MRI scans and employed Sequential Model-based Algorithm Configuration to search for the best prediction pipeline, including the best data pre-processing and regression model. In total, we evaluated over 2600 prediction pipelines. We studied our final model by employing results from game theory in the form of Shapley values. Results indicate that predicting fluid intelligence from volume measurements is a challenging task. We found that our final ensemble of 50 prediction pipelines associated larger parahippocampal gyrus volumes with lower fluid intelligence, and higher pons white matter volume with higher fluid intelligence.
Pölsterl, Sebastian; Gutiérrez-Becker, Benjamín; Sarasua, Ignacio; Roy, Abhijit Guha; Wachinger, Christian Prediction of Fluid Intelligence from T1-Weighted Magnetic Resonance Images Proceedings Article In: Adolescent Brain Cognitive Development Neurocognitive Prediction Challenge (ABCD-NP-Challenge), 2019. We study predicting fluid intelligence of 9–10 year old children from T1-weighted magnetic resonance images. We extract volume and thickness measurements from MRI scans using FreeSurfer and the SRI24 atlas. We empirically compare two predictive models: (i) an ensemble of gradient boosted trees and (ii) a linear ridge regression model. For both, we use a Bayesian black-box optimizer to find the most suitable prediction model. To systematically analyze the feature importance of our model, we employ results from game theory in the form of Shapley values. Our model with gradient boosting and FreeSurfer measures ranked third place among 24 submissions to the ABCD Neurocognitive Prediction Challenge. Our results on feature importance could be used to guide future research on the neurobiological mechanisms behind fluid intelligence in children.
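The Shapley value used for feature importance in the two ABCD papers averages a feature's marginal contribution over all coalitions of the remaining features. A minimal exact implementation (exponential in the number of features, so only practical for small feature sets; the function names and the set-valued `value` interface are illustrative, not the papers' tooling):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values for a set of features under a value function.

    value(subset) must return the model's score when only `subset`
    (a frozenset of feature names) is available.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            # All coalitions of size k that exclude f.
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi
```

For an additive value function, each feature's Shapley value recovers exactly its own contribution, which is the sanity check usually used for such implementations; practical tools approximate this sum by sampling coalitions.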
Pölsterl, Sebastian; Sarasua, Ignacio; Gutiérrez-Becker, Benjamín; Wachinger, Christian A Wide and Deep Neural Network for Survival Analysis from Anatomical Shape and Tabular Clinical Data Proceedings Article In: Data and Machine Learning Advances with Multiple Views Workshop, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2019. We introduce a wide and deep neural network for prediction of progression from patients with mild cognitive impairment to Alzheimer's disease. Information from anatomical shape and tabular clinical data (demographics, biomarkers) are fused in a single neural network. The network is invariant to shape transformations and avoids the need to identify point correspondences between shapes. To account for right censored time-to-event data, i.e., when it is only known that a patient did not develop Alzheimer's disease up to a particular time point, we employ a loss commonly used in survival analysis. Our network is trained end-to-end to combine information from a patient's hippocampus shape and clinical biomarkers. Our experiments on data from the Alzheimer's Disease Neuroimaging Initiative demonstrate that our proposed model is able to learn a shape descriptor that augments clinical biomarkers and outperforms a deep neural network on shape alone and a linear model on common clinical biomarkers.
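The standard survival-analysis loss for right-censored data of this kind is the negative Cox partial log-likelihood: censored patients contribute only through the risk sets of patients whose event was observed. A minimal NumPy sketch (the paper does not specify its exact loss implementation; the function name and the no-ties assumption are simplifications):

```python
import numpy as np

def cox_neg_log_likelihood(risk_scores, times, events):
    """Negative Cox partial log-likelihood for right-censored data.

    risk_scores: (n,) model outputs; times: (n,) observed times;
    events: (n,) 1.0 if the event (e.g. AD onset) was observed,
    0.0 if the patient was censored. Assumes no tied event times.
    """
    order = np.argsort(-times)          # sort subjects by descending time
    scores = risk_scores[order]
    ev = events[order]
    # Running log-sum-exp gives, for each subject, the log of the summed
    # hazards over everyone still at risk at that subject's time.
    log_risk = np.logaddexp.accumulate(scores)
    # Only subjects with an observed event contribute a term.
    ll = np.sum((scores - log_risk) * ev)
    return -ll
```

Because the loss only compares a subject's score against the scores of those still at risk, a patient who is censored at time t still informs the loss for every event that occurred before t.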
Rickmann, Anne-Marie; Roy, Abhijit Guha; Sarasua, Ignacio; Navab, Nassir; Wachinger, Christian 'Project & Excite' Modules for Segmentation of Volumetric Medical Scans Proceedings Article In: Medical Image Computing and Computer Aided Intervention, Springer, 2019. Fully Convolutional Neural Networks (F-CNNs) achieve state-of-the-art performance for image segmentation in medical imaging. Recently, squeeze and excitation (SE) modules and variations thereof have been introduced to recalibrate feature maps channel- and spatial-wise, which can boost performance while only minimally increasing model complexity. So far, the development of SE has focused on 2D images. In this paper, we propose 'Project & Excite' (PE) modules that build upon the ideas of SE and extend them to operate on 3D volumetric images. 'Project & Excite' does not perform global average pooling, but squeezes feature maps along different slices of a tensor separately to retain more spatial information that is subsequently used in the excitation step. We demonstrate that PE modules can be easily integrated in 3D U-Net, boosting performance by 5% Dice points, while only increasing the model complexity by 2%. We evaluate the PE module on two challenging tasks, whole-brain segmentation of MRI scans and whole-body segmentation of CT scans.
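The projection step that distinguishes PE from plain SE can be sketched as averaging over two spatial axes at a time, so each of the three projections keeps one axis of spatial information before the excitation. The NumPy sketch below is a simplified illustration: the excitation MLP is replaced by a plain sigmoid, and the function name is not from the paper.

```python
import numpy as np

def project_and_excite(volume):
    """Spatial projection step of a Project & Excite-style module.

    volume: (C, D, H, W) feature maps. Instead of one global average,
    average along two spatial axes at a time so that each projection
    retains one axis of spatial information.
    """
    z_d = volume.mean(axis=(2, 3))   # (C, D): averaged over H and W
    z_h = volume.mean(axis=(1, 3))   # (C, H): averaged over D and W
    z_w = volume.mean(axis=(1, 2))   # (C, W): averaged over D and H
    # Broadcast the three projections back to the full volume and combine.
    combined = (z_d[:, :, None, None]
                + z_h[:, None, :, None]
                + z_w[:, None, None, :])
    # Sigmoid gate (the learned excitation MLP is omitted in this sketch).
    gate = 1.0 / (1.0 + np.exp(-combined))
    return volume * gate
```

Compared with global average pooling, each voxel's gate now depends on its position along every axis, which is what lets the recalibration stay spatially selective in large 3D volumes.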
Narazani, Marla; Sarasua, Ignacio; Pölsterl, Sebastian; Lizarraga, Aldana; Yakushev, Igor; Wachinger, Christian Is a PET all you need? A multi-modal study for Alzheimer's disease using 3D CNNs Proceedings Article In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2022. Alzheimer's Disease (AD) is the most common form of dementia and is often difficult to diagnose due to its multifactorial etiology. Recent works on neuroimaging-based computer-aided diagnosis with deep neural networks (DNNs) showed that fusing structural magnetic resonance images (sMRI) and fluorodeoxyglucose positron emission tomography (FDG-PET) leads to improved accuracy in a study population of healthy controls and subjects with AD. However, this result conflicts with the established clinical knowledge that FDG-PET better captures AD-specific pathologies than sMRI. Therefore, we propose a framework for the systematic evaluation of multi-modal DNNs and critically re-evaluate single- and multi-modal DNNs based on FDG-PET and sMRI for binary healthy vs. AD, and three-way healthy/mild cognitive impairment/AD classification. Our experiments demonstrate that a single-modality network using FDG-PET performs better than MRI (accuracy 0.91 vs 0.87) and does not show improvement when combined. This conforms with the established clinical knowledge on AD biomarkers, but raises questions about the true benefit of multi-modal DNNs. We argue that future work on multi-modal fusion should systematically assess the contribution of individual modalities following our proposed evaluation framework. Finally, we encourage the community to go beyond healthy vs. AD classification and focus on differential diagnosis of dementia, where fusing multi-modal image information conforms with a clinical need.
Sarasua, Ignacio; Pölsterl, Sebastian; Wachinger, Christian CASHformer: Cognition Aware SHape Transformer for Longitudinal Analysis Proceedings Article In: Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2022.