
!! Due to the current state of emergency (coronavirus), until further notice, all presentations are conducted exclusively online via the conferencing system. !!

The seminar takes place continuously (also during the semester break). To arrange a talk, please register through our Oberseminar registration form. This can only be done by the project supervisors.

Location: MI 03.13.010 (Seminar Room)

Zoom Link: The link is shared with CAMP members via email roughly two days before each presentation. (To students: please ask your project supervisors for the Zoom link)

Mobile view: If you are having trouble seeing the schedule on your phone screen, please switch to the desktop version using the ≡ button on the top left corner.



| | Presentation | Discussion |
| --- | --- | --- |
| Thesis Kick-off | 20 min | 10 min |
| Thesis Final | 25 min | 5 min |
| IDP Kick-off | 10 min | 10 min |
| IDP Final | 15 min | 5 min |
| Guided Research Final | 15 min | 5 min |


Schedule:

| Date | Time | Presenter | Title | Type | Supervisor/Contact |
| --- | --- | --- | --- | --- | --- |
| 22.Okt (Fr) | 10:30 | Lemonia Konstantinidou | 3D Ultrasound Compounding for Volume Estimation in Thyroid Diagnostics | MA Final | Dr. Thomas Wendler |
| 22.Okt (Fr) | 11:00 | Andy Chen | Unsupervised Domain-Adaptation for Lane Detection from Uncalibrated Monocular Sequences to analyze Lane Variability | MA Kick-Off | Patrick Ruhkamp |
| 22.Okt (Fr) | 11:30 | Tobias Muschialik | Deep Point Cloud Segmentation for Room Disinfection | Guided Research | Thomas Wendler |
| 22.Okt (Fr) | 12:00 | Marcel Ganß | Deep learning-based contrast removal for abdominal organs in CTs | BA Kick-Off | Dr. Thomas Wendler |
| 29.Okt (Fr) | 10:30 | Nehil Danis | Markerless Motion Capture for Robotic 3D US | MA Final | Zhongliang Jiang |
| 29.Okt (Fr) | 11:00 | Daniel Scherzer | Deep-Learning-Based Multi-Structure Segmentation of 3D+t Echocardiography Data | MA Final | Beatrice Demiray |
| 29.Okt (Fr) | 11:30 | Jingsong Liu | SLAM based on the spotlight for retinal surgery | IDP Final | Dr. Mingchuan Zhou |
| 29.Okt (Fr) | 12:00 | Julia Otto | Autism Spectrum Disorder Classification based on Machine Learning and Nonverbal Behavior | BA Final | Daniel Roth |
| 05.Nov (Fr) | 10:30 | Mohammad Bagheri | Auditory Augmented Reality Design for Tool Alignment in Highly Sensitive Contexts | MA Kick-Off | Sasan Matinfar |
| 05.Nov (Fr) | 11:00 | Thomas Nibler | Comparison of RTSP Implementations for Data-Driven Applications | IDP Kick-Off | Kevin Yu |
| 05.Nov (Fr) | 11:30 | Saptwarshi Saha | Explainability methods for GNN based computer-aided diagnosis | MA Kick-Off | Gerome Vivar |
| 05.Nov (Fr) | 12:00 | John Ridley | Transcript Modelling for Weakly-Supervised Action Segmentation | MA Final | Huseyin Coskun |
| 12.Nov (Fr) | 10:30 | Chenguang Huang | Visual-LiDAR Instance-Level Mapping | MA Final | Shun-Cheng Wu |
| 12.Nov (Fr) | 11:00 | Yuezhi Cai | Weakly Supervised Few Shot Object Localization | MA Kick-Off | Ashkan Khakzar |
| 12.Nov (Fr) | 11:30 | Zihan Xu | Deep Learning-Based Whole-Heart Segmentation from Low-Contrast CT Scans | MA Kick-Off | Mai Bui |
| 12.Nov (Fr) | 12:00 | Emil Suleymanov | Assessment of networking systems for Unity in the context of medical applications | IDP Final | Daniel Roth |
| 19.Nov (Fr) | 10:30 | Vitor Sternlicht | Extending the range of Prosit to any modified peptide | MA Kick-Off | Dr. Thomas Wendler |
| 19.Nov (Fr) | 11:00 | Paula Castejon | Algorithm Upgrading for Digital Nasoalveolar Molding as part of Cleft Lip and Palate Treatment | MA Kick-Off | Ardit Ramadani |


Detailed information about the above presentations:



Date & Time

22.Oktober 2021 -- 10:30

Title

3D Ultrasound Compounding for Volume Estimation in Thyroid Diagnostics

Student

Lemonia Konstantinidou

Type

MA Final

Supervisor

Dr. Thomas Wendler

Additional supervisors

Matthias Keicher, Mohammad Farid Azampour, Christine Eilers

Director

Prof. Dr. Nassir Navab

Abstract

Image mosaicing, stitching, panorama generation, and compounding are some of the terms used for combining multiple images of the same scene into a larger image. Manual alignment of component images is subjective, so images cannot simply be concatenated. Over the years, research attention on mosaicing has increased due to growing applications in computer vision, virtual reality, robotics, medical imaging, and computer graphics. Clinical applications aim to analyze images and support diagnosis through super-resolution, correction of image artifacts and errors, combination of imaging techniques for guided surgery or minimally invasive procedures, volumetry through prior segmentation, etc. However, the task is not trivial for several reasons, such as low resolution, scanning errors, differences in brightness and, in the case of 3D compounding, small overlap areas with differing quality and number of slices between the separate volumes.

The aim of this project is to provide accurate full thyroid gland volumetry based on the 3D compounding of 2D ultrasound scans. The compounding of 3D US scans appears promising regarding intra- and inter-observer variability for thyroid lobe volumetry compared to regular 2D scans. However, the two sweeps for each lobe are frequently not merged correctly, which can lead to overlaps and inaccurate volumetry. Yet an accurate estimation is crucial for several clinical indications, such as radioiodine therapy for Graves' disease. The approach followed here is a multi-organ segmentation of the 3D compounded lobe scans using a 3D U-Net. Subsequently, an atlas-based automatic registration of the segmented structures of the two lobes is performed. The atlas is constructed with the ANTs framework, based on segmented MRI scans of the neck from the same volunteers used for training the segmentation network.

The obtained results and their comparative analyses illustrate the efficacy of the approach in obtaining visually consistent compounded images. Using only the thyroid labels for the registration, the absolute error was 0.95 for the test set and 1.39 for the training set, compared with 1.15 and 1.44 for the initial lobes, leading to an overall improved volume estimation.
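Volumetry through prior segmentation, as used above, ultimately reduces to counting labeled voxels and scaling by the physical voxel volume. A minimal sketch of this step, with illustrative function and parameter names (not taken from the thesis):

```python
import numpy as np

def segmentation_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask.

    mask:       boolean 3D array (True inside the organ).
    spacing_mm: voxel spacing (dz, dy, dx) in millimetres.
    Returns the volume in millilitres (1 ml = 1000 mm^3).
    """
    voxel_mm3 = float(np.prod(spacing_mm))  # physical volume of one voxel
    return int(mask.sum()) * voxel_mm3 / 1000.0
```

In the thesis setting, such a computation would be applied to the 3D U-Net output after compounding and registration of the two lobes.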

Date & Time

22.Oktober 2021 -- 11:00

Title

Unsupervised Domain-Adaptation for Lane Detection from Uncalibrated Monocular Sequences to analyze Lane Variability

Student

Andy Chen

Type

MA Kick-Off

Supervisor

Patrick Ruhkamp

Additional supervisors

Benjamin Busam

Director

Prof. Nassir Navab

Abstract

Robust and accurate lane detection is essential for advanced driving assistance systems and the analysis of driver behaviour. Current learning-based lane detection models require large annotated datasets and show an undesired drop in performance when tested on unseen data, due to insufficient generalisability. To improve their performance on unseen data and other domains, we propose combining annotated lane detection data with other large-scale unlabeled data from driving scenarios to learn domain-invariant feature embeddings through the joint self-supervised auxiliary task of regressing dense 3D scene structure. The complementary learning task is further leveraged to improve robustness by enforcing novel spatio-temporal consistency of lane predictions, utilising the temporal domain of the monocular input sequences across frames. The model is applied to a large-scale unlabeled real-world dataset to study lane variability in autonomous driving.

Date & Time

22.Oktober 2021 -- 11:30

Title

Deep Point Cloud Segmentation for Room Disinfection

Student

Tobias Muschialik

Type

Guided Research

Supervisor

Thomas Wendler

Additional supervisors

Francesca De Benetti

Director

Nassir Navab

Abstract

To be sent soon :)

Date & Time

22.Oktober 2021 -- 12:00

Title

Deep learning-based contrast removal for abdominal organs in CTs

Student

Marcel Ganß

Type

BA Kick-Off

Supervisor

Dr. Thomas Wendler

Additional supervisors

Francesca De Benetti, Dr. Johannes Oberreuter

Director

Prof. Dr. Nassir Navab

Abstract

to be submitted

Date & Time

29.Oktober 2021 -- 10:30

Title

Markerless Motion Capture for Robotic 3D US

Student

Nehil Danis

Type

MA Final

Supervisor

Zhongliang Jiang

Director

Prof. Dr. Nassir Navab

Abstract

Robotic three-dimensional (3D) ultrasound (US) imaging is seen as a promising way to overcome the limitations of traditional US examinations, i.e., high inter-operator variability and lack of repeatability. However, human sonographers react to patient movements by repositioning the probe or even restarting the acquisition. Furthermore, several adjustments of the scanned object are often necessary to clearly and completely image the anatomy of interest, e.g., repositioning limbs to acquire images of their entire artery tree. The fact that robotic US systems do not react to subject movements during the scan limits their widespread use due to inconsistent US compounding results.
Here, we propose a vision-based system to monitor the subject's movement and to automatically update the initial trajectory, thus seamlessly obtaining a complete 3D image of the target anatomy. The US scan trajectory is extracted from a general CT atlas in which the target object has been segmented. The motion monitoring system is based on real-time segmented object masks obtained from RGB images. Once the subject moves, the robotic US stops and automatically updates its trajectory by registering the surface point clouds extracted from a depth camera before and after the movement. To smoothly stitch the two partitioned sweeps, a two-step fine-tuning procedure is introduced using both robotic tracking information and B-mode images. In addition, to improve US imaging quality, we propose a confidence-based orientation correction, which closes the potential gap between the probe and the contact surface. Experiments on a human-like arm phantom with an uneven surface demonstrate that the system can automatically resume a sweep when the subject moves during scanning.
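The trajectory update above registers surface point clouds captured before and after the movement. The abstract does not state which registration algorithm is used; as a hedged illustration, the closed-form least-squares rigid alignment (Kabsch/Umeyama) that point-cloud registration steps commonly build on can be sketched as follows, assuming the two point sets are already in correspondence:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst_i ~= R @ src_i + t.

    src, dst: corresponding point sets of shape (N, 3).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In practice the correspondences are unknown, so a solver like this is typically wrapped in an ICP-style loop that alternates nearest-neighbor matching and re-alignment.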

Date & Time

29.Oktober 2021 -- 11:00

Title

Deep-Learning-Based Multi-Structure Segmentation of 3D+t Echocardiography Data

Student

Daniel Scherzer

Type

MA Final

Supervisor

Beatrice Demiray

Additional supervisors

Dr. Christoph Hennersperger; Rüdiger Göbl

Director

Prof. Dr. Nassir Navab

Abstract

tba

Date & Time

29.Oktober 2021 -- 11:30

Title

SLAM based on the spotlight for retinal surgery

Student

Jingsong Liu

Type

IDP Final

Supervisor

Dr. Mingchuan Zhou

Director

Prof. Dr. Nassir Navab

Abstract

Retinal surgery is a very complicated and challenging task even for an experienced eye surgeon. Robot-assisted image guidance is a novel and promising technology that may increase human capabilities during microsurgery. Based on our group's previous work, which analyzed the possibility of using a spotlight with a light beam to localize the instrument inside the eye, the aim of this work is to generate mapping points of the retina using the structured light beam and to complete their features (RGB values) using images captured by a global camera.
Each mapping point has two attributes: global coordinate values and RGB values. Two methods of calculating the coordinates are considered: one translates the mapping points' positions from the local beam coordinate frame to the global coordinate frame; the other obtains the positions directly from the pattern reconstruction of the spotlight's projection on the retina. Compared with the former, the second method improves the mapping accuracy from 0.070 mm to 0.020 mm.
To simulate a real retina as closely as possible, we introduced bumps to the eyeball in the experiment: 10 bumps with a radius of around 0.25 mm were placed across the area of interest on the retina. Because the eyeball is then no longer a sphere, the Chamfer distance is introduced as a new evaluation metric. The results show that our method is robust against the bumps.
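The Chamfer distance mentioned above compares two point sets without requiring point-to-point correspondence. A minimal sketch of the symmetric form, assuming small point sets (the brute-force pairwise distance matrix is for illustration only):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    mean nearest-neighbor distance from a to b, plus the same from b to a."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For large reconstructions a KD-tree nearest-neighbor query would replace the dense distance matrix.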

Date & Time

29.Oktober 2021 -- 12:00

Title

Autism Spectrum Disorder Classification based on Machine Learning and Nonverbal Behavior

Student

Julia Otto

Type

BA Final

Supervisor

Daniel Roth

Director

Nassir Navab

Abstract

Challenges in communication and nonverbal behavior are among the most prominent symptoms of Autism Spectrum Disorder, a group of neurodevelopmental disorders. Diagnosis is currently time-consuming, since behavioral tests over time are necessary. To support diagnosis, a master's student developed a virtual reality approach that collects patients' nonverbal behavior data in a game-like scenario. This project focuses on the data analysis. While recurrent neural networks (RNNs) are relatively new in the medical field, long short-term memory (LSTM) based classification has proven to achieve high accuracy in analyzing patient screening data. Due to these promising results, we applied an LSTM network to classify autism based on data collected in the aforementioned virtual reality setting, demonstrating its effectiveness in classifying autism.
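As background for the LSTM-based classification mentioned above, the recurrence of a single LSTM cell can be sketched in plain NumPy (the project itself presumably uses a deep-learning framework; the weight names and shapes here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, Wx, Wh, b):
    """One LSTM cell step given input x and previous state (h, c).

    Wx: (4H, D) input weights, Wh: (4H, H) recurrent weights, b: (4H,) bias,
    stacked for the four gates.
    """
    z = Wx @ x + Wh @ h + b               # stacked pre-activations
    i, f, g, o = np.split(z, 4)           # input, forget, candidate, output
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c_new = f * c + i * g                 # updated cell memory
    h_new = o * np.tanh(c_new)            # new hidden state
    return h_new, c_new
```

For classification, a behavior sequence is fed step by step and the final hidden state is passed to a small classifier head.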

Date & Time

05.November 2021 -- 10:30

Title

Auditory Augmented Reality Design for Tool Alignment in Highly Sensitive Contexts

Student

Mohammad Bagheri

Type

MA Kick-Off

Supervisor

Sasan Matinfar

Director

Prof. Nassir Navab

Abstract

There is an ever-increasing opportunity to explore auditory displays and sonification in medical interventions today. One of the most important motivations for conveying some information through the auditory channel could be the shift of focus from in situ to external displays, and the overload of visual stimuli in augmented reality applications for surgeons.

This project investigates various sonification methods that convey to users the proximity of navigation tools to targets. Our approach is based on interactive model-based sonification with different degrees of freedom. At the same time, we aim to make the sound as pleasant and informative as possible.

The final step will be to conduct a user study to evaluate the usability of sonification methods for new users.

Date & Time

05.November 2021 -- 11:00

Title

Comparison of RTSP Implementations for Data-Driven Applications

Student

Thomas Nibler

Type

IDP Kick-Off

Supervisor

Kevin Yu

Director

Ulrich Eck

Abstract

The Artekmed telepresence system heavily utilizes low-latency video streaming as part of its architecture with multiple RGB-D cameras.
We evaluate available RTSP libraries to identify the best option for the data-driven architecture of the existing Artekmed application regarding ease of integration, stability, and scalability.
We also investigate and, where necessary, adjust the libraries' compatibility with the particular constraints imposed by the Artekmed scenario, namely external deployment for medical emergencies.

Date & Time

05.November 2021 -- 11:30

Title

Explainability methods for GNN based computer-aided diagnosis

Student

Saptwarshi Saha

Type

MA Kick-Off

Supervisor

Gerome Vivar

Director

Prof. Navab

Abstract

Recent years have seen unprecedented growth in the collection of multimodal medical data such as MRI, fMRI, and PET scans, as well as non-imaging patient data such as clinical tests and demographics (age, gender, BMI, etc.), in clinical routine for the purpose of disease prediction. This rich plethora of data is leveraged by Computer-Aided Diagnosis (CAD) systems. Due to rapid advancements in Geometric Deep Learning, Graph Neural Networks (GNNs) are increasingly used for computer-aided disease classification. Such systems use patient meta-information to model patient inter-relations and learn an optimal mapping of multimodal features to disease classes. The adoption of these increasingly complicated black-box models may improve classification accuracy, but this comes at the expense of model interpretability. This is a problem, as increased classification accuracy by itself is not enough for the widespread adoption of such models: the reason why a model arrived at a particular prediction can be of equal importance to a physician. Although the field of Interpretable Machine Learning is still maturing, it has already reached a first state of readiness, as can be seen from the growing number of open-source software packages implementing popular interpretability methods. In this work we explore various interpretability methods to understand the effect of input feature dimensionality on meta-feature importance, and design experiments to investigate whether this knowledge can be leveraged to design better-performing models. The experiments were developed initially on an MNIST toy dataset and later adapted for the TADPOLE and PPMI datasets.

Date & Time

05.November 2021 -- 12:00

Title

Transcript Modelling for Weakly-Supervised Action Segmentation

Student

John Ridley

Type

MA Final

Supervisor

Huseyin Coskun

Additional supervisors

Prof. Nassir Navab

Director

Federico Tombari

Abstract

The task of video action segmentation is regularly explored in a weakly-supervised setting, as it can alleviate some of the challenges of obtaining frame-wise ground truth for untrimmed video sequences.
Existing methods predominantly focus on aligning a transcript (action instances with ordering) to a given video sequence. As such, the selection of the transcript during inference has a significant impact on frame-wise accuracy. However, existing approaches generally rely implicitly upon alignment modeling for transcript selection.
Due to the significant detrimental effect of poor transcript selection on performance, we explore dedicated transcript selection and inference techniques. In addition to utilizing common action segmentation paradigms, such as discriminative modeling, we also consider other sequence modeling approaches and visual-temporal features as a means to yield more accurate transcripts for use in alignment. Techniques to integrate such modeling into existing segmentation pipelines are also considered.
We demonstrate that the application of transcript selection approaches can improve segmentation performance and significantly reduce inference duration on state-of-the-art methods.
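To make the transcript-alignment setting concrete: given per-frame class scores and a fixed transcript, inference finds the best monotonic assignment of transcript actions to frames. A didactic Viterbi-style dynamic program for this step (a sketch of the general technique, not the thesis's actual implementation):

```python
import numpy as np

def align_transcript(frame_scores, transcript):
    """Best monotonic assignment of transcript actions to frames.

    frame_scores: (T, C) per-frame log-scores over C action classes.
    transcript:   ordered list of K action-class indices (K <= T).
    Returns a length-T array of frame labels following the transcript order.
    """
    T = frame_scores.shape[0]
    K = len(transcript)
    s = frame_scores[:, transcript]            # (T, K): frame t under k-th action
    dp = np.full((T, K), -np.inf)
    back = np.zeros((T, K), dtype=int)
    dp[0, 0] = s[0, 0]                         # must start at the first action
    for t in range(1, T):
        for k in range(K):
            stay = dp[t - 1, k]                # remain in the same action
            advance = dp[t - 1, k - 1] if k > 0 else -np.inf
            if advance > stay:
                dp[t, k] = advance + s[t, k]
                back[t, k] = k - 1
            else:
                dp[t, k] = stay + s[t, k]
                back[t, k] = k
    # Backtrack from the last action at the last frame.
    labels = np.empty(T, dtype=int)
    k = K - 1
    for t in range(T - 1, -1, -1):
        labels[t] = transcript[k]
        k = back[t, k]
    return labels
```

The quality of the resulting segmentation then hinges on which transcript is fed to the aligner, which is exactly the selection problem the thesis addresses.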

Date & Time

12.November 2021 -- 10:30

Title

Visual-LiDAR Instance-Level Mapping

Student

Chenguang Huang

Type

MA Final

Supervisor

Shun-Cheng Wu

Director

Federico Tombari

Abstract

Localization and mapping are two crucial tasks for robots to perceive and interact with the environment. Systems that perform these two tasks simultaneously are called SLAM (Simultaneous Localization And Mapping) systems. Traditional SLAM systems build geometric maps in which each map element only reflects the occupancy of objects but lacks the semantic or instance-level information that is useful for advanced robotic applications. Meanwhile, many existing SLAM systems use only a single sensor, which limits the robustness of odometry estimation. To address these problems, we propose a visual-LiDAR instance-level mapping system that robustly estimates the robot's trajectory and builds an instance-level map of the surroundings using camera and LiDAR data. To estimate the odometry, we propose a tightly coupled visual-LiDAR odometry system that solves a nonlinear optimization problem considering visual reprojection errors, visual point-to-point errors, and LiDAR feature distance errors. In the mapping part, we fuse the instance-level information from the camera with the point cloud from the LiDAR to generate point clouds with instance labels, and propose a loop closure method to ensure global mapping consistency. We test our odometry method on the KITTI dataset and show that it outperforms the baseline LiDAR odometry method and achieves results comparable to the state of the art. In addition, we test our mapping method and show that the system generates accurate and consistent instance-level maps.

Date & Time

12.November 2021 -- 11:00

Title

Weakly Supervised Few Shot Object Localization

Student

Yuezhi Cai

Type

MA Kick-Off

Supervisor

Ashkan Khakzar

Director

Nassir Navab

Abstract

The collection and labelling of large amounts of data are costly for real-life applications, especially in the industrial and medical domains. This thesis utilizes attribution methods to localize the object of interest with only image-level class labels as available supervision, a setting known as weakly supervised object localization. Since real-life applications also suffer from a lack of data in addition to a lack of annotations, a further step is to tackle the more challenging scenario of few-shot weakly supervised object localization.

Date & Time

12.November 2021 -- 11:30

Title

Deep Learning-Based Whole-Heart Segmentation from Low-Contrast CT Scans

Student

Zihan Xu

Type

MA Kick-Off

Supervisor

Mai Bui

Additional supervisors

Wen-Yang Chu, Diogo Ferreira de Almeida

Director

Prof. Nassir Navab

Abstract

Cardiovascular disease is one of the leading causes of death worldwide, yet heart surgery is difficult even for experienced surgeons. Physicians demand better, patient-specific surgery planning to deliver tailored approaches. Virtonomy.io focuses on providing individualized therapy by running data-driven clinical trials on virtual patients to minimize both the resources and the risks of animal or human trials. One major and crucial step is whole-heart segmentation, the prerequisite for further analyzing and processing data in 3D.

In this Master's thesis, we address three existing challenges in cardiac segmentation at Virtonomy.io by adapting state-of-the-art deep learning methods that are widely used in medical image processing. First, current methods only segment the left heart and use two different convolutional networks for images of different contrasts; the main goal of this thesis is to provide a unified whole-heart segmentation framework for both low- and high-contrast cardiac CT images. Another challenge concerns shape priors: statistical shape modeling methods have already been developed at the company, but how to integrate the modeling results into the segmentation framework is still undetermined. In the thesis we will try to utilize this prior information to increase segmentation performance. Last but not least, the whole-heart segmentation result for a patient is processed as a mesh during a virtual trial, which requires a smooth 3D mesh. We frequently face non-smooth segmentation results for specific cases, so finding a method for stably generating smooth results is also an important part of the work.

Date & Time

12.November 2021 -- 12:00

Title

Assessment of networking systems for Unity in the context of medical applications

Student

Emil Suleymanov

Type

IDP Final

Supervisor

Daniel Roth

Director

Nassir Navab

Abstract

The Multiplayer High-Level API (UNet HLAPI) was a widely used networking framework for Unity-based applications requiring multiplayer functionality. In 2019 it was discontinued, which calls for an alternative. With many alternatives on the market, an informed decision on which can be used as a replacement has to be made. In this report, networking frameworks suitable for medical applications are selected. This is done by defining requirements, surveying available frameworks, selecting the ones that fit the requirements, and benchmarking them.

Date & Time

19.November 2021 -- 10:30

Title

Extending the range of Prosit to any modified peptide

Student

Vitor Sternlicht

Type

MA Kick-Off

Supervisor

Dr. Thomas Wendler

Additional supervisors

Wassim Gabriel, Prof. Dr. Mathias Wilhelm

Director

Prof. Dr. Nassir Navab

Abstract

To be added soon

Date & Time

19.November 2021 -- 11:00

Title

Algorithm Upgrading for Digital Nasoalveolar Molding as part of Cleft Lip and Palate Treatment

Student

Paula Castejon

Type

MA Kick-Off

Supervisor

Ardit Ramadani

Additional supervisors

Dominik Gau

Director

Nassir Navab

Abstract

Cleft Lip and Palate (CLP) is a birth defect that occurs when a newborn's lip and mouth do not form properly during pregnancy. Consequently, there is a gap across the lip, alveolar ridge and palate, which hinders the baby from feeding and from developing a properly closed oral cavity.
The treatment of this defect proceeds in several steps, from birth until the teenage years. One method, called Nasoalveolar Molding (NAM), aims to minimize the gap before primary surgery during the first 12 to 16 weeks after birth. It consists of a plate placed into the newborn's oral cavity to properly drive the growth of the upper jaw and narrow the cleft. Due to the constant and fast growth of the mouths of these paediatric patients, the plates need to be replaced repeatedly. This process is cost- and time-intensive, as impressions of the patient's oral cavity need to be taken frequently to serve as the basis for manufacturing the new plates.
To overcome these drawbacks, DigiNAM intends to manufacture all the required plates from a single impression of the newborn's oral cavity, by means of a semi-automated algorithm that applies growth factors to the subsequent plates.
Thus, the purpose of this thesis is to upgrade DigiNAM's algorithm to the point where it is fully functional, meeting requirements such as robustness, speed and automation, and to make it user-friendly through the development of a desktop application.





