This paper presents an improved training algorithm for feed-forward neural networks that combines natural gradient descent with a modified back-propagation approach. The method improves convergence speed and generalization by reducing the effective parameter space and estimating the Fisher information matrix recursively. Experimental results demonstrate the efficacy of the adaptive natural gradient learning method compared with conventional techniques.
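The core idea can be illustrated with a minimal sketch of natural-gradient learning with a recursively updated Fisher estimate. The example below is a simplified stand-in, not the paper's algorithm: it uses a linear model with Gaussian noise (for which the Fisher information is proportional to E[x xᵀ], so a running average of input outer products serves as the recursive estimate), a fixed smoothing rate `eps`, and a fixed learning rate `lr`, all chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative; not the paper's experimental setup):
# y = X @ w_true + noise
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)        # parameters to learn
G = np.eye(d)          # recursive Fisher estimate, initialized to identity
eps, lr = 0.05, 0.1    # Fisher smoothing rate and learning rate (assumed values)

for t in range(300):
    i = rng.integers(n)
    err = X[i] @ w - y[i]
    g = err * X[i]                          # per-sample gradient of squared loss
    # Recursive Fisher update: running average of input outer products,
    # which for this Gaussian model is proportional to the true Fisher matrix.
    G = (1 - eps) * G + eps * np.outer(X[i], X[i])
    # Natural-gradient step: precondition the gradient by the inverse Fisher
    # (small damping term keeps the solve well-posed early in training).
    w -= lr * np.linalg.solve(G + 1e-6 * np.eye(d), g)
```

Preconditioning by the inverse Fisher matrix rescales the update to the local geometry of the model, which is the mechanism behind the faster convergence the paper reports; the recursive update avoids recomputing the full matrix at every step.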