Neural Networks as Universal Approximators: An Intuitive Explanation

I came across this wonderful explanation of why neural networks with a hidden layer are universal approximators. Although not very useful for practical purposes, it gives an intuitive feel for why neural networks produce reasonable results.

The basic idea is to analyze a sigmoid function as you change w and b, in particular the effect on σ(w·x + b) as one varies the two parameters. An animation shows that a pair of sigmoids can be combined into a step-like bump in some region of the input space. Multiple such pairs of steps can then be used to approximate a 1D function. An extension of this idea to 2D is also shown.
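The construction above can be sketched numerically. This is a minimal illustration, not the original author's code: with a large weight w, a single sigmoid σ(w·x + b) approximates a step at x = -b/w, and the difference of two such steps yields a bump of chosen position and height. Summing bumps gives a piecewise-constant approximation to a 1D target; the target sin(2πx), the weight w = 1000, and the bump count are arbitrary choices for the demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, s, w=1000.0):
    # Large w makes the sigmoid a near-step at x = s (since b = -w*s).
    return sigmoid(w * x - w * s)

def bump(x, s1, s2, h):
    # Difference of two steps: height h on [s1, s2), ~0 elsewhere.
    return h * (step(x, s1) - step(x, s2))

# Approximate f(x) = sin(2*pi*x) on [0, 1) with piecewise-constant bumps.
x = np.linspace(0, 1, 1000, endpoint=False)
f = np.sin(2 * np.pi * x)

n = 50  # number of bumps; more bumps -> finer approximation
edges = np.linspace(0, 1, n + 1)
approx = np.zeros_like(x)
for s1, s2 in zip(edges[:-1], edges[1:]):
    h = np.sin(2 * np.pi * (s1 + s2) / 2)  # sample target at interval midpoint
    approx += bump(x, s1, s2, h)

max_err = np.max(np.abs(approx - f))
print(f"max abs error with {n} bumps: {max_err:.3f}")
```

Each bump is exactly what a pair of hidden sigmoid units feeding a linear output can compute, so the network's width directly controls the resolution of the approximation.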

See details here:

Also see:
