Short Description

Extreme Learning Machine (ELM) is a popular neural network concept that achieves good generalization while being comparably fast. It is essentially a single-hidden-layer feed-forward network that computes the output weight vector used for prediction purely analytically. The input weights and the biases of the hidden neurons' activation functions are assigned randomly and are not adjusted afterwards, as iterative methods would do. Yet, as intriguing as ELM is, it requires memory-intensive operations on large matrices. This becomes problematic, and ELM easily runs out of memory when the sample data grow too large. This thesis investigates versions of ELM that can be applied in a regularized context (namely ridge regression) and have the potential to need less memory than the original ELM algorithm. The focus lies on the trade-off between memory consumption and computation time. Three algorithms are presented and adapted to this context: online sequential regularized ELM (OS-RELM), parallel regularized ELM (PR-ELM), and ELM using randomized singular value decomposition (RSVD-ELM). It is reasoned how each of them can address the trade-off problem. Experiments then show empirically that all three algorithms can be faster than ELM while requiring less memory. PR-ELM yields the most significant performance boost, needing remarkably little memory while being very fast and accurate. Further results from the experiments are discussed, and additional insight into the three presented algorithms is given.
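For illustration only, the following is a minimal NumPy sketch of a ridge-regularized ELM as described above; it is not code from the thesis, and the hidden-layer size, regularization factor, and function names are placeholder assumptions. Random input weights and biases feed a sigmoid hidden layer, and the output weights are obtained analytically from the ridge-regression solution beta = (H^T H + lambda I)^{-1} H^T T.

    import numpy as np

    def elm_ridge_train(X, T, n_hidden=100, lam=1e-3, seed=0):
        """Train a single-hidden-layer ELM with ridge regularization (illustrative sketch).

        X: (n_samples, n_features) inputs, T: (n_samples, n_outputs) targets.
        Input weights and biases are random and stay fixed; only the output
        weights beta are computed, analytically, via the ridge solution.
        """
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
        b = rng.standard_normal(n_hidden)                  # random biases
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))             # sigmoid hidden-layer output
        # Ridge-regularized least squares: beta = (H^T H + lam * I)^{-1} H^T T
        beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta

The n_hidden x n_hidden Gram matrix H^T H in this sketch is exactly the kind of large intermediate that the memory-saving variants discussed in the thesis aim to avoid forming all at once.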

Documentation

-


Files

