
!! Due to the current state of emergency (coronavirus), all presentations are conducted exclusively virtually via the conferencing system until further notice. !!

The seminar takes place continuously (also during the semester break). To arrange a talk, please register through our Oberseminar registration form. This can only be done by the project supervisors.

Location: MI 03.13.010 (Seminar Room)

Mobile view: If you are having trouble seeing the schedule on your phone screen, please switch to the desktop version using the ≡ button on the top left corner.



             Presentation   Discussion
Kick-offs    10 min         20 min
Finals       20 min         10 min


Schedule:



Date    Time    Student    Title    Type    Supervisor

23 Oct (Fri) 10:00 Miruna Gafencu Cervical Spine Segmentation and 3D Printing for Improved Patient Treatment Planning BA Final Dr. Thomas Wendler
23 Oct (Fri) 10:30 Evghenii Beriozchin Extending the VesARlius medical education app for remote learning and wider device support IDP Final Daniel Roth
23 Oct (Fri) 11:00 Tobias Piltz Forearm Localization based on Deep Learning and Realsense Camera BA Final Dr. Mingchuan Zhou
23 Oct (Fri) 11:30 Sajal Randhar Reconstruction of Photo-acoustic images using backprojection and model based approaches MA Final Mingchuan Zhou
30 Oct (Fri) 10:30 Florian Albrecht AR Assistance for the Technician during Navigated and Robotic Surgery IDP Final Alexander Winkler
30 Oct (Fri) 11:00 Alexander Epple Photorealistic Rendering of Training Data for Object Detection and Pose Estimation with a Physics Engine MA Final Fabian Manhardt
30 Oct (Fri) 11:30 Philipp Nikutta Infectious Disease Modeling for COVID-19 with Increased Spatial Resolution - Investigating Opportunities for Machine Learning and Community Mobility Data BA Final Matthias Keicher
30 Oct (Fri) 12:00 Samin Hamidi Self-Supervised Prediction of the Optimal Number of Clusters in K-Means Clustering MA Final Azade Farshad
30 Oct (Fri) 12:30 Aadhithya Sanka Disentangled Representation Learning of Medical Brain Images Using Flow Based Models MA Final Matthias Keicher
06 Nov (Fri) 10:30 Francesca de Benetti Deep CNN for dosimetry distribution estimation from SPECT/CT and PET/CT in radioisotope therapy MA Final Dr. Thomas Wendler
06 Nov (Fri) 11:00 Raphael Ronge Automatic Feature Interaction Learning for Alzheimer's Disease Diagnosis using Factorization Models MA Kick-Off Sebastian Pölsterl
06 Nov (Fri) 11:30 Mariia Borysova Acceleration of Non-Negative Model Based Reconstruction for Photoacoustic Imaging BA Final Mingchuan Zhou
20 Nov (Fri) 10:30 Richard Gaus Associations of neuroanatomy with multiple neuropsychological disorders and test scores MA Final Sebastian Pölsterl
20 Nov (Fri) 11:00 Youssef Zidan Detection of Adversarial Examples with Feature Attribution Methods IDP Final Seong Tae Kim


Detailed information about the above presentations:



Date & Time

23 October 2020 -- 10:00

Title

Cervical Spine Segmentation and 3D Printing for Improved Patient Treatment Planning

Student

Miruna Gafencu

Type

BA Final

Supervisor

Dr. Thomas Wendler

Additional supervisors

Felix Achilles (Medability), Ahmad Ahmadi, Magda Paschali, Matthias Keicher

Director

Prof. Dr. Nassir Navab

Abstract

Automatic spine segmentation is a core task for assistive systems in spine surgery. Over the last years, several machine learning-based methods addressing this task have been published, yielding increased accuracy and reduced computational time. At the same time, clinical studies have shown the potential of 3D-printed anatomical models in medical education, diagnostics, and individual treatment planning. In this thesis, the application of existing state-of-the-art segmentation models is tested in clinical practice. Moreover, additional uses of the segmentation methods are evaluated, such as fracture detection and automatic generation of anatomical 3D models for rapid prototyping. A focused, conclusive set of experiments serves to analyze these use cases of automatic segmentation with the goal of delivering an easy-to-use tool for clinical practitioners.
The student is supported by experts in the fields of 3D printing, deep learning-based segmentation, and traumatology.

Date & Time

23 October 2020 -- 10:30

Title

Extending the VesARlius medical education app for remote learning and wider device support

Student

Evghenii Beriozchin

Type

IDP Final

Supervisor

Daniel Roth

Additional supervisors

Daniela Kugelmann (LMU)

Director

Nassir Navab

Abstract

VesARlius is a multi-user Augmented Reality application for anatomy learning. It provides a shared scene for multiple users, in which 3D models of body parts and the corresponding CT scans are presented. The aim of the app is to study these body parts collectively in a classroom.
VesARlius was developed in Unity. Before the start of the project, it supported the HoloLens and had been partially ported to the HoloLens 2. The networking layer was built on the HoloToolkit Sharing Service.
As a result of the project, all three problems that were put forward were solved. First, a new networking layer, Mirror Networking, was established, and all possible actions inside the scene were synchronized. Second, the supported platforms were extended to VR kits (e.g. HTC Vive Pro), HoloLens 2, and desktop, which required a migration from HoloToolkit to MRTKv2. Finally, an additional dataset and the corresponding UI, along with basic user avatars, were added.

Date & Time

23 October 2020 -- 11:00

Title

Forearm Localization based on Deep Learning and Realsense Camera

Student

Tobias Piltz

Type

BA Final

Supervisor

Dr. Mingchuan Zhou

Director

Prof. Nassir Navab

Abstract

Forearm localization is an interesting research direction for robotic diagnostic scanning. In this thesis, we created a dataset of labeled forearm pictures and used the deep learning framework Mask R-CNN to localize the forearm in RGB images. Afterwards, the depth image from a RealSense camera is used to register and segment the forearm in 3D space. Based on the segmentation result in combination with the depth image, a trajectory is calculated using a line-of-best-fit algorithm. For easy integration and usability, Docker in combination with ROS was chosen. The effectiveness and efficiency of the proposed implementation are verified in a real-world scenario. Finally, a Franka robot is used to follow this trajectory to perform the forearm scan.
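The line-of-best-fit step can be sketched in a few lines: given the segmented forearm as a 3D point cloud, the best-fit line is the first principal axis of the points. This is a minimal illustration under assumed inputs (an (N, 3) NumPy array), not the thesis code:

```python
import numpy as np

def fit_line_3d(points):
    """Fit a best-fit line to an (N, 3) point cloud via PCA.

    Returns the centroid and the unit direction vector of the line.
    """
    centroid = points.mean(axis=0)
    # The first right-singular vector of the centered cloud is the
    # direction of maximum variance, i.e. the best-fit line direction.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

# Synthetic example: noisy points scattered along the x-axis
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
pts = np.stack([t,
                0.01 * rng.standard_normal(100),
                0.01 * rng.standard_normal(100)], axis=1)
c, d = fit_line_3d(pts)
```

Sampling the fitted line as `c + s * d` then yields waypoints for the robot trajectory.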

Date & Time

23 October 2020 -- 11:30

Title

Reconstruction of Photo-acoustic images using backprojection and model based approaches

Student

Sajal Randhar

Type

MA Final

Supervisor

Mingchuan Zhou

Director

Prof. Nassir Navab

Abstract

Photoacoustic imaging is a biomedical imaging technique based on the photoacoustic effect. In photoacoustic imaging, a device sends laser pulses into biological tissue. The energy of the laser is absorbed (e.g. by tissue or blood) and converted into heat, which ultimately leads to ultrasonic emission. Different substances absorb different amounts of laser energy and therefore emit ultrasound of different magnitudes (i.e. the photoacoustic signal). The emitted ultrasonic waves are detected by ultrasonic transducers and then analyzed to produce images.

This thesis involves the reconstruction of 2D images from the signal data of the photoacoustic device (i.e. the ultrasonic transducers). I will be using two approaches, namely back projection and a model-based approach, to reconstruct the 2D images. Back projection is a common biomedical image reconstruction algorithm; it is simple and fast, distributing each measured signal sample among the pixels based on the projection angle and distance. The model-based approach, in contrast, uses an accurate forward model of the device to reconstruct the image.
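The back-projection idea above can be sketched as naive delay-and-sum: each pixel accumulates the signal sample whose time of flight matches its distance to the sensor. This is an illustrative sketch, not the thesis implementation; the sensor layout, speed of sound, and sampling rate are assumed placeholder values:

```python
import numpy as np

def backproject(signals, sensor_pos, grid, c=1500.0, fs=40e6):
    """Naive delay-and-sum back projection.

    signals:    (n_sensors, n_samples) recorded pressure traces
    sensor_pos: (n_sensors, 2) sensor coordinates in metres
    grid:       (n_pixels, 2) pixel coordinates in metres
    c:          assumed speed of sound (m/s); fs: sampling rate (Hz)
    """
    image = np.zeros(len(grid))
    for trace, pos in zip(signals, sensor_pos):
        # Time of flight from each pixel to this sensor selects the
        # sample that the pixel contributed to ...
        dist = np.linalg.norm(grid - pos, axis=1)
        idx = np.clip(np.rint(dist / c * fs).astype(int), 0, trace.size - 1)
        # ... which is then spread back onto the pixel.
        image += trace[idx]
    return image / len(signals)
```

Summing over many sensors reinforces true absorbers while smearing out inconsistent contributions; the model-based approach replaces this heuristic with an explicit inversion of the forward model.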

Date & Time

30 October 2020 -- 10:30

Title

AR Assistance for the Technician during Navigated and Robotic Surgery

Student

Florian Albrecht

Type

IDP Final

Supervisor

Alexander Winkler

Director

Nassir Navab

Abstract

Navigation in surgery was born from the desire to perform safer and less invasive procedures.
It mainly answers the questions "Where is my (anatomical) target?" and "Where am I (anatomically)?"

An even newer trend is robotic surgery, which for many applications also applies a preoperative scan of the patient like in navigation, but on top of it, the robot can guide the surgeon and avoid risk areas or even perform parts of the surgery autonomously.

In navigated surgery, but especially in robotic surgery, a technical assistant has to constantly monitor the system to ensure proper tracking of the patient's bones by the markers.

There is a body of work targeting the use of Virtual and Augmented Reality (VR/AR) for healthcare professionals. Most of it is aimed at the physicians, who perform arguably the most critical part of the procedure. The main surgeons, however, also depend strongly on the other staff in the OR, especially the scrub nurses and technical assistants who support the surgeon while also maintaining patient safety.
The goal of this clinical project is to create an HMD application assisting a technical assistant in navigated or robotic orthopedic surgery.
As a first step, this system should help the assistant maintain good visibility of the markers to the camera and aid in positioning the tracking system or robot in the OR.
Since surgical navigation systems, and especially robotic systems, are closed systems that would lose their certification if they interfaced with an outside device such as an AR HMD, it is very attractive for a medical device manufacturer if this assistance system works standalone, without interfacing or interfering with the medical device.

Date & Time

30 October 2020 -- 11:00

Title

Photorealistic Rendering of Training Data for Object Detection and Pose Estimation with a Physics Engine

Student

Alexander Epple

Type

MA Final

Supervisor

Fabian Manhardt

Director

Prof. Nassir Navab

Abstract

Deep learning, and specifically Convolutional Neural Networks (CNNs), can handle the complexity of object detection and pose estimation. Well-trained CNNs are capable of transferring their learned abilities into new scenarios; they can generalize to the real world. However, a new problem arises from the immense amount of training data needed. With real training data being difficult to both generate and annotate, this problem is even more pronounced.
To address this challenge, we use real photographs and room scans as the basis for our synthetic data set generator. The scans are used both for simulating the rigid-body physics of the virtual objects correctly and for improving rendering quality. The photographs and renders are blended, combining the realism of captured images with the infinite variety that simulated objects provide. To keep computational costs low, we render the objects without a background. The synthetic parts of the image appear realistic, as the scene is incorporated into the rendering. With the images appearing natural, the risk of overfitting is reduced as well.

Date & Time

30 October 2020 -- 11:30

Title

Infectious Disease Modeling for COVID-19 with Increased Spatial Resolution - Investigating Opportunities for Machine Learning and Community Mobility Data

Student

Philipp Nikutta

Type

BA Final

Supervisor

Matthias Keicher

Additional supervisors

Mathias Unberath

Director

Prof. Dr. Nassir Navab

Abstract

The ongoing pandemic of the novel coronavirus poses several modeling challenges. One of them is the unavailability of the true numbers of infections and deaths, on which traditional compartmental models base their future predictions. The true counts of the infected and deceased are unobservable because of testing and measuring biases, and are, in addition, subject to a delay in reporting. Many approaches introduce additional parameters that model the non-pharmaceutical interventions, like social distancing or lockdowns. Ultimately, this leads to an increasingly ill-conditioned inverse problem, as the complexity of the model increases while the unreliable, delayed data remains constant. To address this issue, one model proposes to share some epidemiological parameters across different regions [1]. This works well when the model operates on country-level data, which contains many observed infections and deaths. However, when such a model operates on more localized data, like states or counties, the lack of data and the low signal-to-noise ratio compromise the model's ability to produce reliable results. This work examines how the model is limited by its strong assumptions and prior distributions, and how to alleviate the factors that prevent the necessary localization. First, a spatially localized dataset is acquired. Second, the aforementioned model is evaluated on the collected dataset. Ultimately, this work investigates new means to overcome the difficulties that arise from the lack of reliable localized data for epidemiological models. In addition to traditional methods, machine learning techniques are evaluated for their suitability for this task.

Date & Time

30 October 2020 -- 12:00

Title

Self-Supervised Prediction of the Optimal Number of Clusters in K-Means Clustering

Student

Samin Hamidi

Type

MA Final

Supervisor

Azade Farshad

Director

Prof. Dr. Nassir Navab

Abstract

Clustering is a well-known unsupervised learning approach in which the data is partitioned into multiple clusters for the assignment of pseudo-labels. The number of clusters in unlabeled data is an unknown parameter that is usually set manually or calculated by a data scientist using classical approaches. With recent advances in machine learning, these approaches face limitations, so the need for automatic prediction of the number of clusters emerges in order to move towards better learning with less supervision. In this thesis, we focus on predicting the number of clusters for the k-means clustering method using meta-learning, or learning to learn.
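As a baseline for the classical approaches mentioned above, the number of clusters is often chosen by sweeping candidate values of k and maximizing the silhouette score. A compact NumPy-only sketch (the helper names and the deterministic farthest-point initialization are illustrative choices, not part of the thesis):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm with deterministic farthest-point init."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def silhouette(X, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point."""
    if len(np.unique(labels)) < 2:
        return -1.0
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    scores = []
    for i, l in enumerate(labels):
        same = labels == l
        a = D[i, same].sum() / max(same.sum() - 1, 1)   # intra-cluster
        b = min(D[i, labels == m].mean()                # nearest other cluster
                for m in np.unique(labels) if m != l)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def best_k(X, candidates=range(2, 7)):
    """Pick the k whose clustering maximizes the silhouette score."""
    return max(candidates, key=lambda k: silhouette(X, kmeans(X, k)))
```

The thesis replaces this per-dataset sweep with a learned predictor; the sweep is what such a predictor is meant to amortize.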

Date & Time

30 October 2020 -- 12:30

Title

Disentangled Representation Learning of Medical Brain Images Using Flow Based Models

Student

Aadhithya Sanka

Type

MA Final

Supervisor

Matthias Keicher

Additional supervisors

Dr. Seong Tae Kim

Director

Prof. Dr. Nassir Navab

Abstract

Disentangled representations allow for control over the generative factors of images, which can be used to generate highly controlled synthetic images for training other models that require large amounts of labelled or unlabelled data.
Recently, flow-based generative models have been proposed that generate realistic images by directly modeling the data distribution with invertible functions.
In this work, we propose a new flow-based generative model, named GLOWin, to improve the disentanglement of latent features in flow-based generative models.
The flow model is trained to generate brain images by maximizing the log-likelihood. Feature disentanglement is achieved by factorizing the latent space into components such that each component learns the representation of one generative factor. Supervision is applied to encourage pairs of images sharing the same generative factor to have similar representations in the corresponding component. The proposed method is evaluated by comparing its results on downstream tasks with those of other methods.

Date & Time

06 November 2020 -- 10:30

Title

Deep CNN for dosimetry distribution estimation from SPECT/CT and PET/CT in radioisotope therapy

Student

Francesca de Benetti

Type

MA Final

Supervisor

Dr. Thomas Wendler

Additional supervisors

Dr. Johannes Oberreuter

Director

Prof. Dr. Nassir Navab

Abstract

The evaluation of the absorbed dose is crucial in the planning of internal radiotherapy treatments. In the case of theranostics, this calculation can be done using pre-therapy imaging with an imaging isotope and PET/CT or SPECT/CT. A precise estimation of the dose distribution can then be obtained by running Monte Carlo simulations based on the PET/CT or SPECT/CT data, in which the physics of the radioactive material and the surrounding tissue is accurately considered. However, Monte Carlo simulations are difficult to set up and time-consuming, and therefore they are not commonly used in clinical practice. This thesis project aims to compute the absorbed dose using a 3D CNN, taking as input a CT scan and a PET or SPECT scan. The absorbed dose used as ground truth is the result of a Monte Carlo simulation run on the GATE platform, modelling aspects of the beta and gamma decay as well as the tissue properties. As a final step, the thesis considers the problem in the clinical setting of the treatment of prostate cancer. Here, the CNN is applied to the theranostics procedure involving a pre-therapy PSMA PET/CT (used as one of the inputs of the CNN) and the results are compared with the post-therapy Lu-177 PSMA SPECT.

Date & Time

06 November 2020 -- 11:00

Title

Automatic Feature Interaction Learning for Alzheimer's Disease Diagnosis using Factorization Models

Student

Raphael Ronge

Type

MA Kick-Off

Supervisor

Sebastian Pölsterl

Additional supervisors

Prof. Christian Wachinger

Director

Prof. Nassir Navab

Abstract

The slowly progressing neurodegenerative Alzheimer's disease can be divided into three groups: normal cognition (healthy patients), mild cognitive impairment (patients with first signs of cognitive malfunction), and Alzheimer's disease patients. Current state-of-the-art medical research uses different biomarker combinations to classify Alzheimer's, but does not pay attention to interactions between biomarkers. In my Master's thesis, I try to find and model these interactions explicitly, in order to classify patients into the three groups and gain a better understanding of biomarker interactions. I am going to leverage research that is already available in the field of advertisement prediction. To predict personalized advertisements, online platforms have to use user data that is mostly categorical (e.g. gender, location, last visited item on a website, ...) and highly sparse; they therefore rely on feature interactions, without which a prediction would be impossible. In my Master's thesis, I will try different factorization models to find the one that most reliably classifies patients from the Alzheimer's Disease Neuroimaging Initiative. In addition, I will model biomarker interactions and try to find the most important ones for classifying patients reliably.
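The core idea behind such factorization models can be sketched with the classic factorization machine: every feature gets a latent factor vector, and each pairwise interaction is scored by the inner product of the two factors, which stays tractable for sparse categorical data. A hypothetical illustration with random weights (not ADNI data or the thesis models):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Factorization machine: w0 + <w, x> + sum_{i<j} <V[i], V[j]> x_i x_j.

    x: (d,) feature vector; w0: global bias; w: (d,) linear weights;
    V: (d, k) latent factor matrix, one k-dim factor per feature.
    The pairwise term is computed in O(d*k) via the identity
    0.5 * sum_f [(sum_i V[i,f] x_i)^2 - sum_i (V[i,f] x_i)^2].
    """
    xv = x[:, None] * V                                       # (d, k)
    pairwise = 0.5 * np.sum(xv.sum(axis=0) ** 2 - (xv ** 2).sum(axis=0))
    return w0 + w @ x + pairwise
```

Because interactions are mediated by shared factors rather than one weight per pair, the model can score interactions between feature values that never co-occurred in training, which is exactly what sparse biomarker or advertising data requires.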

Date & Time

06 November 2020 -- 11:30

Title

Acceleration of Non-Negative Model Based Reconstruction for Photoacoustic Imaging

Student

Mariia Borysova

Type

BA Final

Supervisor

Mingchuan Zhou

Director

Prof. Nassir Navab

Abstract

Photoacoustic imaging is a relatively new imaging method that uses the photoacoustic effect to visualize the inner structures of the human body. The basis of the project is existing Matlab code for photoacoustic imaging that uses model-based reconstruction to recover the initial pressure inside the tissue. The code uses non-negative reconstruction with L2 identity and L2 Laplace regularization. The thesis compares the original Matlab code and the newly implemented C++ code with respect to both the accuracy of the reconstruction and the speed of execution.
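The non-negative reconstruction with L2 identity regularization amounts to minimizing ||Ax − b||² + λ||x||² subject to x ≥ 0, which reduces to an ordinary NNLS problem by stacking the regularizer under the system matrix. A small sketch, assuming SciPy is available (A, b, and λ here are placeholders, not the thesis's forward model):

```python
import numpy as np
from scipy.optimize import nnls

def nn_tikhonov(A, b, lam):
    """Non-negative L2(identity)-regularized least squares:
       minimize ||A x - b||^2 + lam * ||x||^2  subject to x >= 0.
    Solved by augmenting the system with sqrt(lam) * I and zeros.
    """
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, _ = nnls(A_aug, b_aug)
    return x

# With A = I and no regularization, the solution simply clips b at zero:
x = nn_tikhonov(np.eye(3), np.array([1.0, -1.0, 2.0]), 0.0)  # → [1. 0. 2.]
```

The L2 Laplace variant mentioned in the abstract swaps the identity block for a discrete Laplacian; the same stacking trick applies.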

Date & Time

20 November 2020 -- 10:30

Title

Associations of neuroanatomy with multiple neuropsychological disorders and test scores

Student

Richard Gaus

Type

MA Final

Supervisor

Sebastian Pölsterl

Additional supervisors

Prof. Christian Wachinger

Director

Prof. Nassir Navab

Abstract

The Adolescent Brain Cognitive Development (ABCD) study is a multisite longitudinal study focusing on brain, social, emotional, and cognitive development in children. The study recruited over 11,000 children aged 9-10 and acquired structural brain imaging and neuropsychological assessments for each of its participants. In this thesis, I performed an exploratory analysis to determine to what extent a child's neuroanatomy is predictive of a wide range of neuropsychological disorders and test scores. The neuroanatomy was captured by summary measures, such as volume and thickness, extracted from MRI scans. To avoid confounding factors inflating the predictive performance, confounder correction was applied prior to the analysis. I evaluated the predictive performance by cross-validation and compared it to a naive baseline that excludes neuroanatomical information.

Date & Time

20 November 2020 -- 11:00

Title

Detection of Adversarial Examples with Feature Attribution Methods

Student

Youssef Zidan

Type

IDP Final

Supervisor

Seong Tae Kim

Director

Prof. Dr. Nassir Navab

Abstract

It is a well-known drawback of deep neural networks that they are vulnerable to adversarial attacks: perturbations that are imperceptible to humans but cause large changes in the model's output. In this project, we introduce a new framework to detect adversarial attacks by exploring attribution maps. We first observe that the feature attribution map (perturbation mask) of an adversarial example differs from that of the original example. Based on this observation, we introduce a method to measure uncertainty based on the attribution map.





