Sobolev Training trains a neural network to fit not only target output values but also target derivatives with respect to the inputs. This approach can improve generalization and reduce the number of training samples required. In most cases, however, only first-order derivatives are used. In this paper we investigate the network behavior when higher-order derivatives are included in the training as well. We present a training pipeline that enables Sobolev Training for regression problems where target derivatives are directly available. For a variety of function regression tasks, we show that including higher-order derivatives yields smaller test errors than training with only the first derivative. For our approach we also introduce an adaptive weighting factor for the derivative errors that is computed during training.
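To illustrate the idea, the following is a minimal sketch of a Sobolev loss with first- and second-order derivative terms, using a simple quadratic model whose derivatives are available in closed form. All names, the fixed weights `w1`/`w2`, and the toy model are illustrative assumptions, not the paper's actual pipeline (which adapts the weights during training):

```python
import numpy as np

# Illustrative toy model: y(x) = a*x^2 + b*x + c, fitted with a Sobolev
# loss that penalizes errors in values, first, and second derivatives.
x = np.linspace(-1.0, 1.0, 21)
a_true, b_true, c_true = 1.5, -2.0, 0.5
t0 = a_true * x**2 + b_true * x + c_true   # target values
t1 = 2.0 * a_true * x + b_true             # target first derivatives
t2 = np.full_like(x, 2.0 * a_true)         # target second derivatives

w1, w2 = 1.0, 1.0      # fixed derivative weights (assumed; the paper adapts these)
a, b, c = 0.0, 0.0, 0.0
lr = 0.05
for _ in range(3000):
    y0 = a * x**2 + b * x + c              # model values
    y1 = 2.0 * a * x + b                   # model first derivatives
    y2 = np.full_like(x, 2.0 * a)          # model second derivatives
    r0, r1, r2 = y0 - t0, y1 - t1, y2 - t2
    # gradients of the Sobolev loss
    #   L = mean(r0^2) + w1*mean(r1^2) + w2*mean(r2^2)
    # with respect to the parameters (a, b, c)
    ga = np.mean(2*r0*x**2) + w1*np.mean(2*r1*2*x) + w2*np.mean(2*r2*2)
    gb = np.mean(2*r0*x) + w1*np.mean(2*r1)
    gc = np.mean(2*r0)
    a, b, c = a - lr*ga, b - lr*gb, c - lr*gc

print(round(a, 3), round(b, 3), round(c, 3))  # converges toward (1.5, -2.0, 0.5)
```

In a real network the derivative terms `y1` and `y2` would come from automatic differentiation of the model with respect to its inputs rather than from closed-form expressions.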


