The use of structured matrices in artificial neural networks has several advantages: they require significantly less memory, enable faster forward propagation, and reduce the computational complexity of matrix-vector and matrix-matrix multiplications. This reduced complexity in turn lowers energy consumption, which is especially important for mobile devices.
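As an illustration of the complexity reduction, consider one well-known class of structured matrices, the circulant matrices (used here purely as an example; the thesis itself may consider other classes). A circulant matrix is fully described by its first column, so it needs O(n) instead of O(n²) memory, and its matrix-vector product can be computed in O(n log n) via the FFT. A minimal sketch in NumPy:

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by the vector x.
    Uses the identity C @ x = ifft(fft(c) * fft(x)) (circular convolution),
    costing O(n log n) instead of O(n^2) for a dense product."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Check against an explicitly materialized dense circulant matrix.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)
C = np.array([np.roll(c, k) for k in range(n)]).T  # column k is roll(c, k)
dense = C @ x
fast = circulant_matvec(c, x)
```

Only the length-n vector `c` is ever stored, which is exactly the memory saving the structured-matrix approach exploits.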
In this thesis, the use of structured matrices as input weight matrices in Extreme Learning Machines (ELMs) was investigated. The classical ELM is a single-hidden-layer feed-forward neural network in which the input weight matrix is randomly generated and left untrained, while the output weight matrix is computed by linear regression.
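The two-step training procedure of the classical ELM can be sketched in a few lines of NumPy. This is a generic illustration, not the thesis's exact implementation; the tanh activation, the standard-normal initialization, and the toy regression task are assumptions made for the example:

```python
import numpy as np

def elm_fit(X, T, n_hidden, rng):
    """Classical ELM: the input weights W and biases b are drawn at random
    and never trained; only the output weights beta are fitted, by
    ordinary least squares on the hidden-layer outputs H."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_hidden))  # random input weight matrix (untrained)
    b = rng.standard_normal(n_hidden)       # random hidden biases (untrained)
    H = np.tanh(X @ W + b)                  # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # linear regression step
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit y = sin(x) on [0, 2*pi].
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 200)[:, None]
T = np.sin(X)
W, b, beta = elm_fit(X, T, n_hidden=50, rng=rng)
err = np.max(np.abs(elm_predict(X, W, b, beta) - T))
```

The random matrix `W` here is exactly the input weight matrix that the thesis proposes to replace with a structured matrix.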
An important finding of this thesis is that the input weight matrices of an ELM can be replaced by structured matrices without decreasing accuracy. However, several aspects must be considered for this to hold, such as the distribution of the free parameters, the choice of a suitable structured matrix, and the degree of dissimilarity among the hidden layer's outputs. A further result of this thesis is that the accuracy of ELMs with structured input weight matrices does not, in general, depend on the number of free parameters used to construct the structured matrices, but rather on how these parameters are used.