Deep Learning Overview

View my Deep Learning Overview: [Google Slides]. Deep Learning Research Projects: [Google Slides]. Beware: these things get out of date very quickly; this presentation is from Oct 2016. The outline of the talk: Toy Neural Network, Loss Function, Stochastic Gradient Descent, Forward-pass (Neural Function Evaluation), Backward-pass (Gradient of the Neural Function w.r.t. the Parameters), Recent … Continue reading Deep Learning Overview
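As a rough companion to that outline, here is a minimal sketch (my own, not taken from the slides) of how the pieces fit together: a toy one-hidden-layer network, an MSE loss, a forward pass, a hand-written backward pass for the parameter gradients, and plain (S)GD updates. The XOR-style data, the 2-4-1 architecture, and the names used are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny toy dataset: XOR, a classic non-linearly-separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2-4-1 network (assumed sizes, for illustration only).
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: evaluate the network on the batch.
    h = sigmoid(X @ W1 + b1)          # hidden activations
    y_hat = sigmoid(h @ W2 + b2)      # predictions

    # Loss: mean squared error.
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: gradients of the loss w.r.t. each parameter.
    d_yhat = 2.0 * (y_hat - y) / y.size
    d_z2 = d_yhat * y_hat * (1 - y_hat)   # through the output sigmoid
    dW2 = h.T @ d_z2
    db2 = d_z2.sum(axis=0, keepdims=True)
    d_h = d_z2 @ W2.T
    d_z1 = d_h * h * (1 - h)              # through the hidden sigmoid
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0, keepdims=True)

    # Gradient-descent update; here the "stochastic" batch is the full toy set.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final loss: {loss:.4f}")
print("predictions:", y_hat.round(2).ravel())
```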

Neural Networks as Universal Approximators: An Intuitive Explanation

Came across this wonderful explanation of why neural networks with a hidden layer are universal approximators. Although not very helpful for practical purposes, it gives an intuitive feel for why neural networks produce reasonable results. The basic idea is to analyze a sigmoid function as you change w and b; in particular, the effect on $latex \sigma(w \times x … Continue reading Neural Networks as Universal Approximators: An Intuitive Explanation
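That intuition is easy to check numerically. The sketch below (my illustration, assuming NumPy, not code from the linked post) shows that as w grows, $latex \sigma(w \times x + b)$ approaches a step function whose jump sits at x = -b/w, and that the difference of two steep sigmoids forms a "bump"; weighted sums of such bumps are the usual hand-wavy route to approximating arbitrary continuous functions on an interval.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-1, 1, 5)

# As w grows, sigma(w*x + b) sharpens into a step at x = -b/w (here 0.5).
for w in (1, 10, 100):
    b = -0.5 * w
    print(f"w={w:4d}  sigma(w*x+b) =", np.round(sigmoid(w * x + b), 3))

# A "bump" between 0.2 and 0.6 built from two steep sigmoids.
w = 200
bump = sigmoid(w * (x - 0.2)) - sigmoid(w * (x - 0.6))
print("bump on [-1, 1]:", np.round(bump, 3))
```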