Recurrent Neural Net: Memo

RNNs (Recurrent Neural Networks) are used to model sequences. Unlike the usual feedforward networks, which are stateless with respect to their inputs, RNNs have memory: the input at each step is the output of the previous step together with the new observation at the current step. Basic RNNs are notoriously hard to train. LSTM (Long Short-Term Memory) networks … Continue reading Recurrent Neural Net: Memo
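The recurrence described above can be sketched in a few lines of numpy; the dimensions and weight names here are made up for illustration:

```python
import numpy as np

# Toy RNN cell. Dimensions are arbitrary: 3-dim inputs, 4-dim hidden state.
rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(4, 3))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(4, 4))  # hidden -> hidden (the recurrence)
b_h = np.zeros(4)

def rnn_step(h_prev, x_t):
    """One step: the new state mixes the previous state with the new observation."""
    return np.tanh(W_hh @ h_prev + W_xh @ x_t + b_h)

h = np.zeros(4)                          # initial (empty) memory
for x_t in rng.normal(size=(5, 3)):      # a sequence of 5 observations
    h = rnn_step(h, x_t)                 # state carries information forward
print(h.shape)  # (4,)
```

Training such a cell requires backpropagation through time, which is where the difficulties (vanishing/exploding gradients) that motivate LSTMs come from.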

Generative Networks : Memo

One of Ian Goodfellow's most popular works is the GAN (Generative Adversarial Network). These networks can generate images that look like real images. I wish to get into this a bit in the near future. Below could be a good starting point: a) Tutorial by Goodfellow at NIPS 2016: [Arxiv] [Slides]

Reinforcement Learning : Memo

I came across this tutorial series on Reinforcement Learning by Arthur Juliani: [WWW]. Fundamentals textbook: Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto, freely available online: https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html. Video tutorial by R. Sutton. OpenAI Gym: OpenAI is a research organization for RL. They have an environment called OpenAI Gym (Python), useful … Continue reading Reinforcement Learning : Memo
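The Gym interaction pattern is a simple reset/step loop. Below is a minimal sketch of that loop using a hypothetical toy environment (so it runs without installing the gym package); the `ToyEnv` class and its reward rule are invented for illustration, but `reset()`/`step()` mirror Gym's classic interface:

```python
import random

class ToyEnv:
    """Hypothetical stand-in for an OpenAI Gym environment."""
    def reset(self):
        self.t = 0
        return 0  # initial observation

    def step(self, action):
        self.t += 1
        obs = self.t
        reward = 1.0 if action == 1 else 0.0  # made-up reward rule
        done = self.t >= 10                   # episode ends after 10 steps
        return obs, reward, done, {}          # Gym-style (obs, reward, done, info)

env = ToyEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = random.choice([0, 1])            # a random policy
    obs, reward, done, info = env.step(action)
    total += reward
print("episode return:", total)
```

An RL algorithm would replace the random policy with one that improves from the `(obs, reward)` feedback; the loop itself stays the same.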

Robust Keypoint Point Matching

Came across this interesting paper, which does feature matching (SIFT-like features) between images under a probabilistic formulation. The method starts with all matches as inliers and progressively discards matches as iterations proceed. About 120 citations as of May 2017. Jiayi Ma, Ji Zhao, Jinwen Tian, Alan L. Yuille, and Zhuowen Tu. Robust Point Matching via … Continue reading Robust Keypoint Point Matching
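The general "start with everything as an inlier, then prune" idea can be sketched with a toy numpy example. This is not the paper's probabilistic formulation, just the iterative scheme in its simplest form: fit a least-squares linear map between matched points, then repeatedly drop the matches with the largest residuals. All data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
src = rng.uniform(size=(30, 2))                      # 30 keypoints in image 1
A_true = np.array([[1.1, 0.1], [-0.1, 0.9]])         # ground-truth linear map
dst = src @ A_true.T + 0.01 * rng.normal(size=src.shape)
dst[:5] = rng.uniform(size=(5, 2))                   # corrupt 5 matches (outliers)

idx = np.arange(len(src))                            # start: every match is an inlier
for _ in range(3):
    A, *_ = np.linalg.lstsq(src[idx], dst[idx], rcond=None)  # fit 2x2 map
    resid = np.linalg.norm(src[idx] @ A - dst[idx], axis=1)  # per-match error
    idx = idx[resid < 3 * np.median(resid)]          # prune the worst matches

print("kept", len(idx), "of", len(src), "matches")
```

The paper replaces this crude thresholding with a principled probabilistic model of inliers and outliers, but the overall shrink-the-inlier-set iteration is the same shape.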