- Ingenieurpraxis | Engineering Practice
Information about writing an engineering practice can be found here.
Archive
- Audio Analysis, Transcription, Composition
- Automatic Music Score Generation
- Automated Function Documentation
- Blood cell classification using pre-trained models
- Book Drop System for Libraries
- Flow Cytometry Gating Web Interface
- Introduction to Applied Machine Learning with Python
- Kubernetes Cluster and Deployment Administration
Student: Felix Fehlauer
Abstract:
Email: f.fehlauer@tum.de
Status: STARTING
Supervisor: Röhrl / Lengl
- Laser Wolpertinger
- Meetify - Meeting Assistance Application
- Poppy Collision Detection
- Poppy Control
- Poppy Facial Expression
- Poppy Muscle Controlled Gripper
- Poppy Object Tracking
- Poppy Qt GUI
- Smart Tamper
- Turtlebot Tracking
- Comparison of Pre-Trained Neural Networks
- Bachelor's Thesis
Information about writing a bachelor's thesis can be found here.
Archive
- Local Public Transport Server
- Teleoperated control of a humanoid robot
- Poppy Kinect
- autoMAP
- Scene Detection in Skiing Videos
- 3D Print Augmented Reality
- Real-time LRV speed estimation from a single forward-facing camera
- Anti-Lock Braking Images
- AirHockey Unity Simulation
- Object Classification Autonomous Vehicle
- Adaptive Backlighting for Monitors/TVs
- Auralization of Motion
- Segmentation of Medical Images with Deep Learning
- GANs learning to draw oil paintings
- Optimization of Neural Network Architectures for Modular Problems
- Analysis Pipeline for Quantitative Mapping of Physical Parameters in Glioma Patients
- Pre-Sorting of Photos
- Measuring Accurate Timings in Outdoor Sports Using Photoelectric Sensors and Ultrasound
- Audio Analysis and Transcription
- Extreme Learning Machines with Structured Matrices
- Automation of a 3D Printing Lab
- Application for the Analysis of Portrait Composition
- Ridge Regression for Big Data Applications: Trade-Off between Memory Consumption and Learning Speed
- Forschungspraxis | Research Internship
Detailed information can be found here
Objective
The research internship is intended to prepare students for their later scientific activities in research and development by providing them with insight into current research topics during their master's studies. The research internship takes the form of project work: each student in the master's program works on an individual project assigned by a professor/supervisor from the respective chair (cf. Procedure). Upon successful completion of the internship, students will be able to draw up and plan a project with an engineering character (applying the concepts they have learned), to define milestones, and to document the progress and results of the project and present them to an audience.
- Montage Tracking
- Smart Storage
- Reality Gap
- Analysis of Ultrasonic Sensors Regarding Their Use in Safety Systems in Industrial Environments
- 3D Print Monitoring
- Sobolev Training with Higher-Order Derivatives
- IR-UWB-based localization for indoor applications: Principles and challenges
- The Role of Optimizers in Continuous Control Problems
- Deep Neural Networks for Video-level Emotion Analysis
- Human Activity Recognition Using A Smartphone’s Inertial Measurement Unit
- Predicting the Emotional Impact of Videos using Echo State Networks
- Audio Object Detection
- Segway Robot Loomo As A Tour Guide
- Ultra-Wide-Band in Traffic Applications
- MPEG Bitstream for Video Classification with Deep Learning
- Attention in Sequence-to-Vector Regression
- Feature Selection Methods for Predicting the Emotional Impact of Videos
- Conception and system design of a reporting platform for cyber attacks
- Mahalanobis distance-based time series classification for resource-constrained systems
- Linear and Logarithmic Quantization Approaches for Efficient Inference with Deep Neural Networks
Student: Constantin Berger
Abstract: Quantization enables efficient processing of deep neural networks. In this work, the methods of linear and logarithmic quantization are discussed. These methodologies are applied to a deep neural network for controlling an autonomous drone. The trade-off between the reduction of computational complexity and the loss of accuracy is the main subject of this investigation. Moreover, I propose an approach to overcome a limitation of logarithmic quantization, which requires specific handling of negative values: the sign of the non-quantized value is stored in the sign bit of the fixed-point representation after quantization. This allows logarithmic quantization to be applied to neural networks with both positive and negative weights. The results show that the given hardware does not allow for significant performance improvements.
Email: -
Status: FINISHED
Supervisor: Matthias Kissel
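The sign-bit workaround described in the abstract lends itself to a small illustration. Below is a minimal numpy sketch of linear versus logarithmic quantization of a weight tensor, with the sign kept separately from the quantized log-magnitude; the bit width, rounding mode, and clipping range are illustrative assumptions, not the thesis's actual fixed-point format.

```python
import numpy as np

def linear_quantize(w, n_bits=8):
    """Uniform (linear) quantization to a symmetric fixed-point grid."""
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale          # dequantized for inference

def log_quantize(w, n_bits=8):
    """Logarithmic quantization with the sign stored separately.

    One bit keeps the sign of the original value; the remaining bits
    hold a quantized power-of-two exponent, so negative weights stay
    representable even though log2 is defined only for magnitudes.
    """
    sign = np.sign(w)
    mag = np.maximum(np.abs(w), np.finfo(w.dtype).tiny)   # avoid log2(0)
    levels = 2 ** (n_bits - 1)                            # exponent range
    exp = np.clip(np.round(np.log2(mag)), -levels, levels - 1)
    return sign * 2.0 ** exp

w = np.random.randn(4, 4).astype(np.float32)
print("linear max error:", np.abs(w - linear_quantize(w)).max())
print("log    max error:", np.abs(w - log_quantize(w)).max())
```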
- Neural Network Online-Pruning: Accelerating Weighted Sum Calculation by Early Stopping
Student: Julian Lorenz
Abstract: In this paper I propose a new method to shorten the weighted sum computation at each neuron of a neural network without requiring retraining. The weighted sum is computed in order of decreasing weight magnitude. If the activation function shows converging behavior, the computation is stopped early once the partial sum has passed a predetermined stopping threshold. I show how to find the stopping thresholds by statistical analysis of the weighted sum computations in a network, and I provide an experimental analysis of how the online-pruning method performs in comparison to the normal feed-forward computation. Using my approach, the MAC operations in the tested network can be reduced by 14.1%. This results in a speed improvement of 5.1% while achieving an average R² score of 99.09%.
Email: -
Status: FINISHED
Supervisor: Matthias Kissel
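A minimal sketch of the early-stopping idea for a single neuron with a saturating activation: weights are visited in order of decreasing magnitude, and the sum stops once it passes a stopping threshold. The threshold here is an arbitrary constant; in the paper it is derived from statistical analysis of the weighted sums, and the sort order would be precomputed once per neuron rather than at every call.

```python
import numpy as np

def early_stop_neuron(x, w, threshold=4.0):
    """Weighted sum with early stopping for a saturating activation (tanh).

    Weights are processed largest-magnitude first; once the partial sum
    passes `threshold` (an illustrative constant standing in for the
    statistically derived stopping bound), tanh is effectively
    saturated and the remaining terms are skipped.
    """
    order = np.argsort(-np.abs(w))      # in practice: precomputed offline
    acc, used = 0.0, 0
    for i in order:
        acc += w[i] * x[i]
        used += 1
        if abs(acc) > threshold:        # activation already saturated
            break
    return np.tanh(acc), used

x = np.random.randn(256)
w = np.random.randn(256)
y, macs = early_stop_neuron(x, w)
print(f"output {y:.4f} using {macs}/256 MACs")
```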
- Approximation of Weight Matrices using Hierarchical Matrices
Student: Till Hülder
Abstract: Deep neural networks achieve high accuracy in many areas such as image and speech recognition, but they are also associated with high computational costs. In this paper, the approximation of weight matrices is investigated in terms of time and prediction accuracy. Hierarchical matrices (H-matrices) are used for the approximation. In order to find a suitable H-matrix approximation, submatrices which are approximately low-rank must be found in the original matrix. Several algorithmic variations, such as the low-rank approximation method and the rank, were investigated. The last layers of pre-trained PyTorch models such as ResNet, GoogLeNet and MobileNetV2 were extracted and approximated, with the ImageNet dataset used for testing. For all tested models it was shown that the time required for a matrix-vector operation is significantly smaller for an approximated matrix. Using GoogLeNet with an approximation rank of 30, 43.89% of the computing time was saved at a prediction accuracy loss of 5.69%.
Email: -
Status: FINISHED
Supervisor: Matthias Kissel
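H-matrices partition a matrix hierarchically, and their essential ingredient is storing admissible blocks in low-rank factored form. A sketch of that rank-k building block via truncated SVD, with the hierarchical partitioning itself omitted; the matrix size and rank below are arbitrary:

```python
import numpy as np

def low_rank_factors(M, rank):
    """Truncated-SVD rank-k approximation M ≈ U @ V.

    This is the building block of an H-matrix: admissible sub-blocks
    are kept in this factored form, so a matrix-vector product costs
    O(rank * (m + n)) instead of O(m * n) per block.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

M = np.random.randn(512, 512)
U, V = low_rank_factors(M, rank=30)
x = np.random.randn(512)
err = np.linalg.norm(M @ x - U @ (V @ x)) / np.linalg.norm(M @ x)
print(f"relative error with rank 30: {err:.3f}")
# dense product: 512*512 mults; factored: 2*30*512 — roughly 9x fewer
```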
- Skip-Thought Vector based Chatbots
- An experimental comparison between the performance of different web concurrency paradigms
Student: Eduardo Rodríguez Fernández
Abstract: Most popular modern web development frameworks, like Node.js and Go, handle the creation and management of a backend service in a mostly abstracted, high-level way that does not give the developer much freedom to modify the inherent system architecture of the server. Such an inflexible and abstracted, often plug-and-play, server implementation facilitates web development by concealing system-level design choices from the end user. Modern web frameworks mostly try to handle concurrent client connections in user space, under the premise that handling concurrency in kernel space is too costly. The problem with blindly relying on a web framework without understanding its internal architecture is that it might not be the most efficient choice for a web application that has to deal with many concurrent connections. This paper provides an experimental comparison of the CPU utilization efficiency of two completely different concurrency-handling paradigms: a multi-process implementation in C and a goroutine-based, non-preemptively scheduled web service in Go. The aim is to see whether there is a performance penalty for handling concurrency in web applications primarily in kernel space, rather than in user space, as most modern web frameworks tend to do.
Email: eduardo.rodriguez@tum.de
Status: DONE
Supervisor: Röhrl
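The paper's comparison is between C processes and Go goroutines; as a language-neutral sketch of the same kernel-space vs. user-space trade-off, the following Python snippet contrasts OS-scheduled worker processes with cooperatively scheduled asyncio coroutines on a simulated I/O-bound handler. The pool size, request count, and per-request sleep are arbitrary assumptions.

```python
import asyncio
import multiprocessing as mp
import time

def blocking_handler(_):
    time.sleep(0.01)            # simulate an I/O-bound request

async def coroutine_handler():
    await asyncio.sleep(0.01)   # same work, cooperatively scheduled

def run_processes(n):
    # kernel-scheduled workers, analogous to a multi-process C server
    with mp.Pool(processes=8) as pool:
        pool.map(blocking_handler, range(n))

async def run_coroutines(n):
    # user-space scheduling, analogous to goroutines
    await asyncio.gather(*(coroutine_handler() for _ in range(n)))

if __name__ == "__main__":
    n = 200
    t0 = time.perf_counter()
    run_processes(n)
    t1 = time.perf_counter()
    asyncio.run(run_coroutines(n))
    t2 = time.perf_counter()
    print(f"processes:  {t1 - t0:.2f}s")
    print(f"coroutines: {t2 - t1:.2f}s")
```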
- Human Action Recognition using Compressed Videos in H.264 Format
- Master's Thesis
Information about writing a master's thesis can be found here.
Archive
- Sequential Recommender Systems
- Modification and Testing of a SLAM Framework for Dynamic Environments
- Multi-Agent Deep Reinforcement Learning
- Correlation between physiological data and video features to better understand video-induced emotions
- Improvement of Deep-learned Object Detectors
- Self-Supervised Learning for Emotion Prediction Induced by Videos
- Machine Learning for Time Series Prediction in the Financial Industry
- On Generating Mathematical Formulae
- Detection of Teeth Grinding and Clenching using Surface Electromyography
- Learned Underwater Feature Matcher based on a Generated Artificial Image Base
- Latency Prediction for Wireless System Synchronisation
- Deep Learning for Financial Time Series Prediction
- On the Assessment of RL
- Federated Learning in Pedestrian Trajectory Prediction Tasks
- Distilling Neural Networks for Real-Time Drone Control
- Causal Regularization in Deep Learning Using the Average Causal Effect
Student: Kathrin Khadra
Abstract: Causal interpretability aims to make the decisions of algorithms interpretable by investigating what would have happened under different circumstances. These varying circumstances can be manipulations of the algorithm to assess its causality. In this thesis, I include the causal interpretability mechanism called the Average Causal Effect (ACE) in the training of a neural net. To assess the causality of the model, the ACE uses so-called interventions to manipulate the neural net. The goal is to determine more causal weights and biases during model training. Using this approach, I evaluate whether including a causal interpretability mechanism as a regularization increases the overall causality of the model. Moreover, I investigate whether this improvement in causality also impacts the generalization ability of the neural net. The developed approach is compared to a standard neural net as well as to L1- and L2-regularized neural nets. Furthermore, I conduct these experiments with well-balanced datasets, datasets with a prior probability shift, and datasets with a covariate shift. For all datasets, the results show that the presented causal regularization approach is able to improve the overall causality of the neural net. However, the distribution of the shifted training data strongly affects the generalization ability: with increasing variance of the distribution, the developed approach shows significantly lower test mean squared errors than for training data with less variance. This is because the interventions applied by the ACE depend on the variance of the training data distribution.
Email: -
Status: FINISHED
Supervisor: Matthias Kissel
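The thesis's exact regularizer is not reproduced here, but the interventional quantity it builds on is easy to sketch in one common formulation: the ACE of an input feature is estimated by fixing (do-intervening on) that feature across the data and comparing average model outputs. The toy model, feature index, and intervention values below are illustrative assumptions.

```python
import numpy as np

def average_causal_effect(model, X, feature, lo, hi):
    """Estimate the ACE of one input feature via interventions.

    Replace the feature with fixed values across the dataset and
    compare average outputs:
        ACE = E[y | do(x_f = hi)] - E[y | do(x_f = lo)].
    """
    X_hi, X_lo = X.copy(), X.copy()
    X_hi[:, feature] = hi
    X_lo[:, feature] = lo
    return model(X_hi).mean() - model(X_lo).mean()

# toy stand-in for the trained net: a fixed random two-layer model
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(5, 16)), rng.normal(size=(16, 1))
model = lambda X: np.tanh(X @ W1) @ W2

X = rng.normal(size=(1000, 5))
ace = average_causal_effect(model, X, feature=2, lo=-1.0, hi=1.0)
print(f"ACE of feature 2: {float(ace):.3f}")
# In the thesis, a penalty derived from such interventional effects is
# added to the training loss alongside the usual data term.
```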
- Approximative Sparse Factorization of Neural Network Weight Matrices
Student: Michael Brandner
Abstract: In modern image processing applications, Convolutional Neural Networks (CNNs) are indispensable. Especially in the domains of object classification and face recognition, CNNs achieve impressive results. However, the increasingly accurate predictions are accompanied by ever-larger networks and consequently more computations. The number of operations required to compute the matrix-vector product of a dense matrix M ∈ ℝ^(N×N) in a fully connected layer scales with O(N²), which is largely why networks like VGG19 require 19.6 billion floating-point operations (FLOPs) to evaluate a single image. This thesis investigates the factorization of fully connected layer weight matrices into a product of sparse matrices, which potentially reduces the number of operations needed to the subquadratic domain. Consequently, the number of operations required for inference, and thus resource consumption, is reduced. I examine three approximation algorithms, namely Butterfly factorization, sparse EigenGame, and the Flexible Approximate MUlti-layer Sparse Transform (FAµST). The approaches are compared regarding the sparseness of their approximation and the approximation error. Furthermore, weight matrices of pre-trained Convolutional Neural Networks are factorized and compared regarding their prediction accuracy after approximation.
The best performance in terms of approximation error and subsequent prediction accuracy was achieved by FAµST, which was able to make sufficiently accurate predictions with only 20% of the parameters, where "sufficiently accurate" means that the prediction accuracy drops by only 1%. For similar results, the other algorithms needed 3% (Butterfly) and 18% (sparse EigenGame) more computations than the original matrix-vector product.
The experiments show that Approximative Sparse Factorization (ASF) of weight matrices can significantly decrease resource consumption without deteriorating prediction accuracy too much. This can enable complex computer vision algorithms to be used on devices with low computational resources or in time-critical systems.
Email: -
Status: FINISHED
Supervisor: Matthias Kissel
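None of the three factorization algorithms is reproduced here; the sketch below only illustrates the accounting that motivates them: if a dense weight matrix factors (exactly or approximately) into a few sparse matrices, the matrix-vector cost drops from N² multiplications to the factors' total nonzero count. The sizes and densities are arbitrary, and the "weight matrix" is constructed from sparse factors so the factorization is exact by design.

```python
import numpy as np
from scipy import sparse

# A dense N x N matrix costs N^2 multiplications per matrix-vector
# product; a product of sparse factors costs only their total nnz.
N = 1024
rng = np.random.default_rng(0)
S1 = sparse.random(N, N, density=0.02, random_state=0, format="csr")
S2 = sparse.random(N, N, density=0.02, random_state=1, format="csr")
W = (S1 @ S2).toarray()          # dense view of the same operator

x = rng.normal(size=N)
y_dense = W @ x                  # N^2 = 1,048,576 multiplications
y_sparse = S1 @ (S2 @ x)         # nnz(S1) + nnz(S2) multiplications

print("max abs difference:", np.abs(y_dense - y_sparse).max())
print("dense mults:", N * N, "| sparse mults:", S1.nnz + S2.nnz)
```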
- Algorithms for Matrix Approximations with Time Varying Systems
Student: Stephan Nüßlein
Abstract: There are different approaches to approximating matrices by structured matrices in order to reduce the computational cost of matrix-vector multiplications. One possible structure is sequentially semiseparable matrices, which describe the input-output behavior of time-varying systems. If time-varying systems are used to approximate weight matrices from neural networks, structural parameters have to be determined. In this thesis, two algorithms to obtain these structural parameters are described. The first refines an initial segmentation by optimizing the input and output dimensions of the system. The second algorithm recursively splits the subsystems in a way that makes it possible to recover permuted sequentially semiseparable matrices. In experiments, the algorithms were able to recover the structure of simple test matrices. When used to approximate weight matrices from neural networks, the algorithms were able to reduce the computational cost of the matrix approximation compared to a naive approximation.
Email: -
Status: FINISHED
Supervisor: Matthias Kissel
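A sequentially semiseparable matrix never needs to be formed explicitly: multiplying it by a vector can be done through the underlying time-varying state-space recursion. A minimal sketch for the causal (block lower-triangular) part, with the structural parameters (state dimension, segmentation) fixed by hand here rather than determined by the thesis's algorithms:

```python
import numpy as np

def sss_matvec(A, B, C, D, u):
    """Multiply by a causal sequentially semiseparable matrix without
    forming it, via the time-varying state-space recursion
        x[k+1] = A[k] x[k] + B[k] u[k],   y[k] = C[k] x[k] + D[k] u[k].
    Cost is linear in the sequence length for a fixed state dimension.
    """
    x = np.zeros(A[0].shape[0])
    y = np.empty(len(u))
    for k in range(len(u)):
        y[k] = C[k] @ x + D[k] * u[k]
        x = A[k] @ x + B[k] * u[k]
    return y

rng = np.random.default_rng(0)
N, d = 200, 4                      # sequence length, state dimension
A = [rng.normal(scale=0.5, size=(d, d)) for _ in range(N)]
B = [rng.normal(size=d) for _ in range(N)]
C = [rng.normal(size=d) for _ in range(N)]
D = rng.normal(size=N)
u = rng.normal(size=N)
print(sss_matvec(A, B, C, D, u)[:5])
```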
- GAN-Based Differentially Private Publication of Vertically Partitioned Data
Student: Yufei Zhang