Tag: neural network
Intel Neural Compute Stick 2
I recently got the Intel Neural Compute Stick 2 (NCS2). It can do neural network inference. The workflow is to start with a frozen TensorFlow graph (.pb), then convert it to the IR format (which the NCS2 can understand). You need the OpenVINO toolkit for this; there was an earlier API which is now defunct. Here is … Continue reading Intel Neural Compute Stick 2
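Since the excerpt only sketches the workflow, here is a minimal inference sketch, assuming the pre-2022 OpenVINO "Inference Engine" Python API and an IR already produced by the Model Optimizer; file names and shapes are illustrative, not from the post:

```python
# Minimal sketch: run an IR model on the NCS2 with the pre-2022
# OpenVINO Inference Engine Python API. Assumes the .pb was already
# converted beforehand by OpenVINO's Model Optimizer into
# model.xml + model.bin (illustrative file names).
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # MYRIAD = NCS2

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Random input matching the network's declared input shape.
shape = net.input_info[input_name].input_data.shape
x = np.random.rand(*shape).astype(np.float32)

result = exec_net.infer({input_name: x})
print(result[output_name].shape)
```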
Organizing my Neural Network Codes
Amazing progress has been made in deep learning. I have been using TensorFlow for a while now. I started out with tf0.6, then upgraded to tf0.12, then to tf1.0. The latest version is tf1.10, which is supposed to provide a stable API. I have a lot of code which has now become incompatible. The tf0.6 saver … Continue reading Organizing my Neural Network Codes
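For context on the saver incompatibility mentioned above, here is a minimal tf1.x save/restore sketch (variable names and paths are illustrative; this is not the post's code):

```python
import tensorflow as tf

# Build a trivial graph with one variable to checkpoint.
w = tf.get_variable("w", shape=[10, 10])
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    save_path = saver.save(sess, "./checkpoints/model.ckpt")
    saver.restore(sess, save_path)  # restores w from the checkpoint
```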
Convolutional Networks
Continuing further with deep learning, here I will briefly describe what I learned about convolutional networks (CNNs). If you understand the basics of a simple 2-layer fully connected network and can implement it yourself from scratch, you are all set to understand the mighty daddy (i.e. the CNN). Again, it is important to understand that CNN, … Continue reading Convolutional Networks
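To make the fully-connected-vs-convolutional contrast concrete, here is a single convolution + pooling stage in tf1.x (shapes and names are illustrative, not from the post):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 28, 28, 1])   # NHWC image batch
k = tf.get_variable("k", shape=[5, 5, 1, 32])       # 32 learned 5x5 filters
conv = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding="SAME")
relu = tf.nn.relu(conv)
pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1],
                      strides=[1, 2, 2, 1], padding="SAME")
# pool has shape [None, 14, 14, 32]: the same filters are shared across
# positions, unlike a fully connected layer with one weight per pixel.
```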
Deep Residual Nets with TensorFlow
Git Gist: https://gist.github.com/mpkuse/6f9dcd419effa707422eb2c5097f51b4 Deep Residual Nets (ResNets) from Microsoft Research have become one of the most popular deep learning network architectures. Already 800+ citations, given that the paper appeared in 2015. Recently, I ported all my code from Caffe to TensorFlow. While it is a lot easier to deal with Caffe, I must say the control you … Continue reading Deep Residual Nets with TensorFlow
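For the core idea (this is not the gist's code): a residual block computes y = F(x) + x. A minimal tf1.x sketch, assuming equal input and output channel counts:

```python
import tensorflow as tf

def residual_block(x, channels, scope):
    """Two 3x3 convolutions plus the identity shortcut: y = F(x) + x."""
    with tf.variable_scope(scope):
        k1 = tf.get_variable("k1", [3, 3, channels, channels])
        k2 = tf.get_variable("k2", [3, 3, channels, channels])
        h = tf.nn.relu(tf.nn.conv2d(x, k1, [1, 1, 1, 1], "SAME"))
        f = tf.nn.conv2d(h, k2, [1, 1, 1, 1], "SAME")
        return tf.nn.relu(f + x)  # shortcut lets gradients flow directly
```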
Toy Neural Network
In my last post on neural networks [HERE], I talked about how one can think of a neural network as a universal approximator. In this post I try to help you understand a toy neural network implementation. In particular, one can get a clearer and more intuitive understanding of what a forward_pass is and what back_propagation means. Most … Continue reading Toy Neural Network
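As a companion to the excerpt, here is a toy two-layer network in NumPy showing what forward_pass and back_propagation compute (a sketch, not the post's exact implementation):

```python
import numpy as np

def forward_pass(x, W1, W2):
    h = np.maximum(0.0, x @ W1)   # hidden layer with ReLU
    return h @ W2, h              # output scores and cached activations

def back_propagation(x, y, W1, W2):
    scores, h = forward_pass(x, W1, W2)
    d_scores = 2.0 * (scores - y)     # gradient of squared-error loss
    dW2 = h.T @ d_scores              # chain rule through the output layer
    d_h = d_scores @ W2.T
    d_h[h <= 0.0] = 0.0               # ReLU passes gradient only where h > 0
    dW1 = x.T @ d_h
    return dW1, dW2
```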
Neural Networks as Universal Approximators: Intuitive Explanation
Came across this wonderful explanation of why neural networks with a hidden layer are universal approximators. Although not very helpful for practical purposes, it gives an intuitive feel for why neural networks give reasonable results. The basic idea is to analyze a sigmoid function as you change w and b, in particular the effect on σ(w × x … Continue reading Neural Networks as Universal Approximators: Intuitive Explanation
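The intuition in one short sketch: as w grows, σ(w·x + b) sharpens into a step located at x = -b/w, and such steps can be combined into bumps that approximate arbitrary functions (illustrative code, not from the post):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-1.0, 1.0, 5)
print(sigmoid(5.0 * x))     # gentle S-curve
print(sigmoid(500.0 * x))   # nearly a hard step at x = 0 (here b = 0)
```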