Neural Networks as Universal Approximators: An Intuitive Explanation

Posted on May 31, 2016


I came across a wonderful explanation of why neural networks with a hidden layer are universal approximators. Although not very useful for practical purposes, it gives an intuitive feel for why neural networks produce reasonable results.

The basic idea is to analyze a sigmoid function as you change w and b, in particular the effect on \sigma(w \times x + b) as one varies them. An animation shows that a pair of sigmoids can be combined into a step-like function over some interval. Many such pairs of steps can then be used to approximate a 1D function, and an extension of this idea to 2D is also shown.

See details here:

Also see:

Posted in: Research Blog