

The seminar takes place continuously (also during the semester break). To arrange a talk, please register through our Oberseminar registration form. This can only be done by the project supervisors.

Location: MI 03.13.010 (Seminar Room)

Mobile view: If you are having trouble seeing the schedule on your phone screen, please switch to the desktop version using the ≡ button on the top left corner.



            Presentation    Discussion
Kick-offs   10 min          5 min
Finals      20 min          10 min


Schedule:




Date | Time | Student | Title | Type | Supervisor
29 Nov (Fri) | 13:00 | Nazila Esmaeili | Classification of audio-based events during device tracking and - in a side project - of laryngeal lesions during Contact Endoscopy procedures | Invited Talk | Prof. Dr. Michael Friebe
13 Dec (Fri) | 10:30 | Claudio Benedetti | Deep Learning Based Medical Tool Detection for the Use on a Head-Mounted-Display | IDP Final | Alexander Winkler
13 Dec (Fri) | 11:00 | Muhammad Arsalan | Robust and Accurate Heart-Rate Sensing using mm-wave Radar | MA Final | Beatrice Demiray
13 Dec (Fri) | 11:30 | Hannes Hase | Robotic navigation with deep reinforcement learning for medical imaging | MA Kick-Off | Mohammad Farid Azampour
13 Dec (Fri) | 13:00 | Christina Aigner | Axiomatic Local Interpretability of Deep Neural Networks beyond Euclidean Data | MA Kick-Off | Sebastian Pölsterl
13 Dec (Fri) | 13:30 | Jongwon Lee | 3D Shape Analysis using Mesh Representations | MA Kick-Off | Ignacio Sarasúa
17 Jan (Fri) | 11:00 | Mohammad Bagheri & Negar Namdarian | Digital Therapy System for Individual Rehabilitation | IDP Kick-Off | Felix Bork
17 Jan (Fri) | 11:30 | Jyotirmay Senapati | Bayesian Deep Learning for Medical Image Segmentation | MA Final | Sebastian Pölsterl
17 Jan (Fri) | 12:00 | Navneet Madhu Kumar | Uncertainty-driven ultrasound pose estimation | MA Kick-Off | Mohammad Farid Azampour


Detailed information about the above presentations:



Date & Time: 29 November 2019, 13:00
Title: Classification of audio-based events during device tracking and - in a side project - of laryngeal lesions during Contact Endoscopy procedures
Student: Nazila Esmaeili
Type: Invited Talk
Supervisor: Prof. Dr. Michael Friebe
Director: Prof. Dr. Michael Friebe
Abstract: TBA

Date & Time: 13 December 2019, 10:30
Title: Deep Learning Based Medical Tool Detection for the Use on a Head-Mounted-Display
Student: Claudio Benedetti
Type: IDP Final
Supervisor: Alexander Winkler
Additional supervisors: Philipp Stefan, Hooman Esfandiari
Director: Nassir Navab
Abstract:

Traditionally, medical education and training followed the apprenticeship model "See One, Do One, Teach One", which guaranteed mastery in the healthcare profession through adequate exposure to a broad range of cases. Due to working-hour restrictions, training opportunities in surgery have been continuously decreasing while the complexity of interventions keeps increasing. This discrepancy has led to the development of simulated training environments, from cadavers and synthetic models to computer-based simulators. However, trainees progressing from computer-based simulation to cadaver training and ultimately patient treatment face a steep learning curve and need methodologies that facilitate this transition.

There is a body of work targeting the use of Virtual and Augmented Reality (VR/AR) for the training of healthcare professionals. Most of it, however, is aimed at the physicians, who perform arguably the most critical part of the procedure but also depend strongly on the other staff in the operating room: scrub nurses, also called perioperative nurses, who support the surgeon while also maintaining patient safety.

Scrub nurses play a major role in the success of a surgery, as they conduct a large number of tasks: ensuring the operating room is ready to be set up, preparing the instruments and equipment needed for the surgery, selecting and passing instruments to the surgeon, counting all instruments, sponges and other tools, and transporting the patient to the recovery area. Because scrub nurses are so vital to surgical procedures, they may work long hours. They must also have excellent communication skills, because one of their primary duties is working with the surgeon and assisting her with anything she needs during the operation.

A large part of the learning critical to becoming an experienced scrub nurse is knowledge of the workflow of specific procedures, the tools each procedure requires, and when which tool is needed.

AR Head-Mounted Displays (HMDs) are semitransparent display devices worn on the head in front of the user's eyes, superimposing virtual objects on the real world (optical see-through). HMDs for AR applications have been around since the 1960s, but their bulkiness and high price tag limited their use to laboratory settings for most of their existence. With the advent of the current generation of HMDs targeting the consumer and gaming market, prices have dropped considerably while usability and overall quality have improved.

The goal of this interdisciplinary project is to create an HMD application assisting a scrub nurse. As a first step, such a system could be used for training the nurse, but it could ultimately also be transferred into clinical use. Since the nurse's task is to hand specific tools to the surgeon at specific steps of the operating-room workflow, the main goal of this IDP is the development and integration of an object detection feature into an off-the-shelf HMD, so that the nurse can pass the correct tool to the surgeon. The particular technical challenge in this instance of the problem is a general challenge of AR: computational speed on a computationally limited platform. There is an abundance of object detection algorithms based on Deep Learning, but they usually rely on powerful compute hardware, which is expensive, large, and power-hungry, all of which prohibits use in an HMD. On the other hand, lightweight DL techniques exhibit detection rates too low for use in healthcare. In this project we propose a hybrid approach: use the HMD as an input and output device, but stream data in real time to a powerful workstation that performs the detection task.
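A length-prefixed framing scheme is one simple way to realize the streaming half of such a hybrid setup. The sketch below is a plain numpy round-trip, not the project's actual protocol (all names are hypothetical): the HMD would pack each camera frame with a fixed-size header so the workstation knows how to reconstruct the pixel buffer.

```python
import struct
import numpy as np

def pack_frame(frame: np.ndarray) -> bytes:
    """Serialize one camera frame as height, width, channels (uint32 each)
    followed by the raw uint8 pixel bytes."""
    h, w, c = frame.shape
    header = struct.pack("!III", h, w, c)   # network byte order
    return header + frame.astype(np.uint8).tobytes()

def unpack_frame(payload: bytes) -> np.ndarray:
    """Inverse of pack_frame: read the header, then reshape the pixel bytes."""
    h, w, c = struct.unpack("!III", payload[:12])
    pixels = np.frombuffer(payload[12:], dtype=np.uint8)
    return pixels.reshape(h, w, c)

# Round-trip a dummy 4x4 RGB frame, as the HMD would send it to the workstation.
frame = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
restored = unpack_frame(pack_frame(frame))
assert np.array_equal(frame, restored)
```

In a real deployment the payload would go over a socket and would likely be compressed; the framing idea stays the same.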

Several medical training, planning, guidance and assistance systems are currently being developed at Prof. Dr. Navab's NARVIS lab. As a proof of concept, one of them should be integrated into the HMD application to evaluate and demonstrate its benefits. The lab also provides the student with access to HMDs as well as a workstation for the machine learning tasks.

Date & Time: 13 December 2019, 11:00
Title: Robust and Accurate Heart-Rate Sensing using mm-wave Radar
Student: Muhammad Arsalan
Type: MA Final
Supervisor: Beatrice Demiray
Additional supervisors: Infineon Technologies
Director: Prof. Dr. Nassir Navab
Abstract:

In the signal processing of a Doppler radar vital-signs monitoring system, the received signal contains a frequency shift proportional to the target speed (Doppler effect). If the detected target is a human thorax, the Doppler echo signal contains respiration and heartbeat information, owing to the chest motion caused by respiration and heartbeat. However, the radar is quite sensitive to the environment and suffers from measurement inaccuracies due to random body movement and micro-motions, and is also corrupted by system inaccuracies such as intermodulation products of the respiration and heartbeat signals.

This project focuses on two main aspects of a heart-rate sensing solution based on spectral analysis. The first is to improve heart-rate accuracy, for instance using beamforming; the second is to overcome the inaccuracies due to random body movement, micro-motions, and intermodulation products using machine learning techniques, for instance Long Short-Term Memory (LSTM) networks.
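As a minimal illustration of the spectral-analysis part (all parameters invented for illustration, not Infineon's pipeline): a synthetic chest-displacement signal containing a strong respiration component and a much weaker heartbeat component, from which the heart rate is read off as the spectral peak inside a plausible band.

```python
import numpy as np

fs = 100.0                      # sampling rate in Hz (illustrative)
t = np.arange(0, 30, 1 / fs)    # 30 s observation window

# Synthetic chest displacement: respiration at 0.25 Hz (15 breaths/min)
# plus a much weaker heartbeat component at 1.2 Hz (72 beats/min).
signal = 0.5 * np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.sin(2 * np.pi * 1.2 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# Search for the spectral peak only inside a plausible heart-rate band
# (0.8-2.5 Hz), which suppresses the dominant respiration peak.
band = (freqs >= 0.8) & (freqs <= 2.5)
heart_rate_hz = freqs[band][np.argmax(spectrum[band])]
print(round(heart_rate_hz * 60))  # → 72 beats per minute
```

Random body movement would smear these clean peaks, which is exactly where the learned (e.g. LSTM-based) correction comes in.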

The expected project achievements will have application, for instance, in vital sensing in operating theatre, indoor smart rooms and vital sensing in-cabin automotive applications.

Date & Time: 13 December 2019, 11:30
Title: Robotic navigation with deep reinforcement learning for medical imaging
Student: Hannes Hase
Type: MA Kick-Off
Supervisor: Mohammad Farid Azampour
Director: Prof. Dr. Nassir Navab
Abstract:

The acquisition of clinically relevant ultrasound images is anything but trivial. An experienced eye and a trained hand are needed to find the correct angle and position for images that contain all the information required for subsequent interventions. Yet imaging experts are scarce, and with time being a limiting resource, inexperienced users need to step in and try their best.
In this work, we are developing a method based on deep reinforcement learning with which a robot learns the optimal policy for obtaining sound medical images. We do so by setting up a framework to experiment with different Deep Q-Network configurations using ultrasound frames as inputs.
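The Bellman target a Deep Q-Network trains toward can be illustrated with a tabular stand-in. The toy probe-positioning task below is entirely hypothetical (states, actions and rewards invented for illustration); a DQN replaces the Q-table with a network fed by ultrasound frames, but uses the same update target.

```python
import numpy as np

# Toy stand-in: states are 5 discrete probe positions on a line, actions move
# the probe left (0) or right (1), and position 4 is the view that yields the
# desired image.
n_states, n_actions = 5, 2
q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != 4:
        # Epsilon-greedy exploration.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(q[s]))
        s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == 4 else -0.1        # reward only at the goal view
        # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
        s = s_next

# The greedy policy should now move right from every non-goal state.
print([int(np.argmax(q[s])) for s in range(4)])  # → [1, 1, 1, 1]
```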

Date & Time: 13 December 2019, 13:00
Title: Axiomatic Local Interpretability of Deep Neural Networks beyond Euclidean Data
Student: Christina Aigner
Type: MA Kick-Off
Supervisor: Sebastian Pölsterl
Director: Prof. Dr. Christian Wachinger
Abstract:

Deep Neural Networks (DNNs) have a growing impact in biomedical applications due to their enormous potential for solving a variety of problems. On the other hand, the black-box nature of DNNs is still a barrier to their adoption for tasks where interpretability is a requirement. In this thesis, we focus on the local interpretability of a model: given the network's output for a given input sample, attribution methods assign each input feature a scalar relevance score that denotes its contribution to the network's prediction. Several works have proposed attribution methods for making DNNs interpretable, but many of them are specific to DNNs that take images as inputs. The aim of this thesis is to develop attribution methods that can explain the predictions of a DNN when its inputs are tabular clinical data, anatomical shapes, or graphs. We take an axiomatic approach relying on Shapley values from cooperative game theory and develop a fast and accurate approximation that can be applied to DNNs. In the experiments, we investigate the interpretability of various DNNs and applications: wide and deep PointNets for Alzheimer's disease diagnosis and prognosis, and graph neural networks for the prediction of carcinogenicity properties of chemical compounds.
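For intuition, the permutation-sampling view of Shapley values can be sketched as follows (a generic approximation, not the thesis's own method; model and numbers are invented): features are switched from a baseline to the actual input in random order, and each feature is credited with the change in model output it causes.

```python
import numpy as np

def shapley_sampling(f, x, baseline, n_perm=500, seed=0):
    """Approximate Shapley values by sampling feature orderings.

    For each sampled permutation, features are switched from `baseline`
    to `x` one at a time; a feature's marginal contribution is the change
    in f caused by switching it on. Averaging over permutations gives the
    Shapley value."""
    rng = np.random.default_rng(seed)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_perm):
        perm = rng.permutation(d)
        z = baseline.copy()
        prev = f(z)
        for i in perm:
            z[i] = x[i]
            cur = f(z)
            phi[i] += cur - prev
            prev = cur
    return phi / n_perm

# For a linear model f(z) = w @ z, Shapley values are exactly w_i * (x_i - b_i),
# which makes a handy sanity check.
w = np.array([1.0, -2.0, 0.5])
f = lambda z: float(w @ z)
x = np.array([1.0, 1.0, 2.0])
b = np.zeros(3)
phi = shapley_sampling(f, x, b)
print(np.round(phi, 3))  # matches w * (x - b) exactly for a linear model
```

The efficiency axiom (attributions sum to f(x) - f(baseline)) holds by construction, since each permutation's contributions telescope.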

Date & Time: 13 December 2019, 13:30
Title: 3D Shape Analysis using Mesh Representations
Student: Jongwon Lee
Type: MA Kick-Off
Supervisor: Ignacio Sarasúa
Additional supervisors: Sebastian Pölsterl
Director: Prof. Dr. Christian Wachinger
Abstract:

Polygonal meshes provide an efficient representation of 3D shapes. They explicitly capture both shape surface and topology, and leverage non-uniformity to represent large flat regions as well as sharp, intricate features. We explore the effectiveness of different state-of-the-art Deep Learning methods, proposed for computer vision applications, when used for the analysis of medical data. In particular, we focus mainly on the prediction of Alzheimer's disease and mild cognitive impairment using anatomical shapes of different brain structures (e.g. hippocampi, ventricles, ...).

Date & Time: 17 January 2020, 11:00
Title: Digital Therapy System for Individual Rehabilitation
Student: Mohammad Bagheri & Negar Namdarian
Type: IDP Kick-Off
Supervisor: Felix Bork
Director: Prof. Nassir Navab
Abstract:

The main aim of this project is to develop a new rehabilitation game for the Gamo Refit framework. We use dance, rhythmic movements and music as motivational elements in our game. The aim is to provide doctors and therapists with a standardized way of assessing balance and coordination parameters, while focusing on posture stability, cognition and coordination skills for patients. Our target patients are children from 6 to 13 years of age who suffer from disorders and conditions of the skeletal system. The game is developed using the Unity game engine and Kinect. Our team will cooperate closely with special therapists during the project development stages in order to present an ideal game experience for our patient target group.

Date & Time: 17 January 2020, 11:30
Title: Bayesian Deep Learning for Medical Image Segmentation
Student: Jyotirmay Senapati
Type: MA Final
Supervisor: Sebastian Pölsterl
Additional supervisors: Abhijit Guha Roy
Director: Prof. Dr. Christian Wachinger
Abstract:

Bayesian Deep Learning has gained a lot of popularity in the Deep Learning community due to its ability to generate well-calibrated outputs by associating a confidence score with each prediction. This has important implications for using deep learning in safety-critical applications such as medical diagnosis and autonomous driving. Recently, a lot of work has explored and proposed different strategies for variational inference in deep networks to estimate this confidence, yet it is very difficult to say which is better for which application. Towards this end, we explore these different strategies to identify their effectiveness for the application of segmentation quality control.
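One of the strategies commonly compared in this setting is Monte Carlo dropout: keeping dropout active at test time and reading the spread over repeated stochastic forward passes as the confidence score. A numpy sketch with a toy two-layer network (entirely illustrative, not the thesis's model):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))    # toy two-layer regressor standing in for a real network
W2 = rng.normal(size=(16, 1))

def forward(x, drop_p=0.5):
    """One stochastic forward pass with dropout kept *on* at test time."""
    h = np.maximum(0.0, x @ W1)                 # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_p        # fresh Bernoulli dropout mask per pass
    h = h * mask / (1.0 - drop_p)               # inverted-dropout scaling
    return float(h @ W2)

x = rng.normal(size=8)
samples = np.array([forward(x) for _ in range(100)])   # T = 100 Monte Carlo samples

prediction = samples.mean()      # MC-dropout estimate
confidence = samples.std()       # spread across passes, i.e. model uncertainty
```

A large standard deviation across passes flags inputs whose output should be treated with caution, which is the signal a quality-control step would act on.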

Date & Time: 17 January 2020, 12:00
Title: Uncertainty-driven ultrasound pose estimation
Student: Navneet Madhu Kumar
Type: MA Kick-Off
Supervisor: Mohammad Farid Azampour
Additional supervisors: Raphael Prevost
Director: Prof. Dr. Nassir Navab
Abstract:

Pose estimation from ultrasound is challenging due to poor image quality, symmetric ambiguity in anatomical structures, and large variations in pose. Anatomical structures seen from different viewpoints can appear similar due to shape symmetries, occlusion, and repetitive textures; thus, the pose estimate carries some degree of uncertainty.
In this work, we propose to exploit this uncertainty to robustly estimate the pose of an ultrasound sweep. For each ultrasound image, the network predicts multiple poses, and the uncertainty of each pose is measured using Monte Carlo dropout.
We aggregate the network's results for the unambiguous scans, weighted by their uncertainty estimates, in a voting framework to predict the pose. We show the benefits of our approach for initial pose prediction on a liver ultrasound sweep.
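The uncertainty-weighted voting step can be illustrated with inverse-variance weighting, here reduced to a single rotation angle (the 1-D setting and all numbers are invented for illustration; the actual work operates on full poses):

```python
import numpy as np

def vote_pose(poses, sigmas):
    """Fuse per-frame pose estimates by inverse-variance weighting:
    confident frames (small sigma) dominate, ambiguous ones barely count."""
    poses, sigmas = np.asarray(poses), np.asarray(sigmas)
    weights = 1.0 / sigmas**2
    return float(np.sum(weights * poses) / np.sum(weights))

# Three frames of a sweep voting on one rotation angle (degrees):
# two confident frames near 30 deg and one ambiguous frame far off.
angles = [29.0, 31.0, 80.0]
sigmas = [1.0, 1.0, 10.0]
print(round(vote_pose(angles, sigmas), 2))  # → 30.25
```

The outlier frame contributes a weight of 0.01 versus 1.0 for the confident frames, so the vote stays near 30 degrees despite the 80-degree estimate.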


