PhD Candidate, Hong Kong University of Science & Technology (HKUST)

Continuing further with Deep Learning, here I will briefly describe what I learned about convolutional networks (CNNs). If you understand the basics of a simple 2-layer fully connected network and can implement it yourself from scratch, you are all set to understand the mighty daddy (i.e., the CNN). Again, it is important to understand that CNN, […]
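
As a refresher, here is a minimal sketch (my own illustration, not code from the post) of the forward pass of such a 2-layer fully connected network, written in NumPy with made-up toy dimensions:

```python
# A minimal sketch (my illustration, not the post's code) of the 2-layer
# fully connected network the excerpt refers to: input -> hidden -> scores.
import numpy as np

def two_layer_forward(x, W1, b1, W2, b2):
    """Forward pass through a 2-layer fully connected network."""
    h = np.maximum(0.0, x @ W1 + b1)  # hidden layer with ReLU non-linearity
    return h @ W2 + b2                # output layer: raw class scores

# Toy, made-up dimensions: 4-d input, 8 hidden units, 3 classes.
rng = np.random.default_rng(0)
W1, b1 = 0.01 * rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = 0.01 * rng.normal(size=(8, 3)), np.zeros(3)
print(two_layer_forward(rng.normal(size=(1, 4)), W1, b1, W2, b2))
```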

*December 9, 2016*

GitHub Gist: https://gist.github.com/mpkuse/6f9dcd419effa707422eb2c5097f51b4

Deep Residual Nets (ResNets) from Microsoft Research have become one of the most popular deep learning network architectures, already with 800+ citations even though the paper only appeared in 2015. Recently, I ported all my code from Caffe to TensorFlow. While it is a lot easier to deal with Caffe, I must say, the control you […]
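
For context, the defining ingredient of a ResNet is the residual block, output = F(x) + x. Below is a minimal sketch of one such block in current TensorFlow/Keras; it only illustrates the idea and is not the code from the gist above (which targets the TensorFlow of that era):

```python
# A minimal sketch of a residual block, the defining ingredient of a
# ResNet: output = F(x) + x. Illustration only, not code from the gist.
import tensorflow as tf

def residual_block(x, filters):
    """Two 3x3 convolutions plus an identity shortcut connection."""
    y = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    y = tf.keras.layers.Add()([x, y])  # the shortcut: F(x) + x
    return tf.keras.layers.ReLU()(y)

inputs = tf.keras.Input(shape=(32, 32, 16))   # made-up feature-map shape
outputs = residual_block(inputs, filters=16)  # filters match input channels
model = tf.keras.Model(inputs, outputs)
```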

*June 11, 2016*

In my last post on neural networks [HERE], I talked about how one can think of a neural network as a universal approximator. In this post I try to help you understand a toy neural network implementation. In particular, one can get a clearer and more intuitive understanding of what a forward_pass is and what back_propagation means. Most […]
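
As a taste of what those two terms mean, here is a toy sketch of my own (not the post's implementation) of a forward pass and the matching back-propagation update for a single sigmoid neuron with squared-error loss:

```python
# A toy sketch (mine, not the post's implementation) of forward_pass and
# back_propagation for a single sigmoid neuron with squared-error loss.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, y_true = np.array([0.5, -1.2]), 1.0  # one made-up training example
w, b = np.zeros(2), 0.0                 # parameters to learn

for step in range(100):
    # forward_pass: input -> weighted sum -> sigmoid -> loss
    y = sigmoid(w @ x + b)
    loss = 0.5 * (y - y_true) ** 2
    # back_propagation: chain rule from the loss back to w and b
    dz = (y - y_true) * y * (1.0 - y)   # dL/dz through the sigmoid
    w -= 0.1 * dz * x                   # dL/dw = dL/dz * x
    b -= 0.1 * dz                       # dL/db = dL/dz

print(sigmoid(w @ x + b))  # prediction moves toward y_true = 1.0
```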

*May 31, 2016*

Came across this wonderful explanation of why neural networks with a hidden layer are universal approximators. Although not very helpful for practical purposes, it gives an intuitive feel for why neural networks give reasonable results. The basic idea is to analyze a sigmoid function σ(wx + b) as you change w and b, in particular the effect on its output as one varies w and […]
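
Here is a sketch of the standard construction (my own code, assuming the post follows the usual argument): for large w, σ(wx + b) is nearly a step function located at x = -b/w, and subtracting two such steps yields a localized "bump"; weighted sums of bumps can then approximate any reasonable one-dimensional function.

```python
# Sketch: with large w, sigmoid(w*x + b) approaches a step at x = -b/w,
# and the difference of two steps is a localized "bump". (My illustration,
# not code from the linked explanation.)
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(0.0, 1.0, 11)
w = 50.0  # large w -> sharp, step-like transition

def step_at(s):
    """Approximate step at x = s, i.e. sigmoid(w*x + b) with b = -w*s."""
    return sigmoid(w * (x - s))

bump = step_at(0.3) - step_at(0.6)  # ~1 on [0.3, 0.6], ~0 elsewhere
print(np.round(bump, 2))
```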

*December 30, 2015*