Here are my docker images.
https://hub.docker.com/r/mpkuse/kusevisionkit/
In recent years there has been some buzz around Docker. Although some power users among computer vision/robotics researchers and developers do use docker for development, an overwhelming majority do not. Most people simply install the libraries on their Ubuntu machine and get to work. While this solution is simple, it often causes a huge mess when your code base grows or the libraries get upgraded, rendering your code useless. Having a working code base for my Ph.D. thesis, I am always nervous about upgrading libraries, especially CUDA, OpenCV, and ROS. My apprehension about upgrading is justified: I need that code to work at least until I complete my Ph.D. In the best case, my code should also work for others so that they can build more on top of it.
Docker provides a virtual environment. It is almost like a virtual machine, only lightweight. This lightweight virtual machine (i.e. Docker) gives you a fresh Ubuntu (or other base image). You can customize this environment with the libraries you need; you do not need to install them on your host machine, only in this virtual environment. Just as github hosts code, hub.docker hosts snapshots of these lightweight virtual machines; in docker lingo they are called images. These hosted images can be downloaded by any user, and upon loading an image they get your custom environment. If you ship your code with it, users can run your code with ease as well. This virtual environment does not change your host PC.
In this blog post, I am writing up my experience (as a vision/robotics researcher) using docker for my daily tasks. I must say there is a bit of a learning curve to getting familiar with docker. But I believe using it can solve the issue of incompatible libraries, among other problems. This post is also meant to be a cheat sheet of docker commands for daily use.
Installation
Install Docker-CE (Community Edition). Installation instructions here. If you want to use Nvidia CUDA and other GPU stuff (including Tensorflow etc.), you need the Nvidia drivers on your host PC; they come bundled with any version of the CUDA toolkit. I usually just install the CUDA toolkit on my host PC. The exact version of CUDA does not matter much (although a very old one can cause issues). Get it from here. Additionally, you will also need nvidia-docker to run cuda programs inside docker. Get it here.
In summary, install:
- Docker-CE
- Nvidia drivers (I install them via the CUDA toolkit)
- nvidia-docker
That's all you need to install on your host PC; Docker should now be set up there.
Before you proceed, it is a good idea to add yourself ($USER) to the docker group on your Linux system. If you don't, you will need to execute every docker command with sudo, which quickly becomes frustrating, and using sudo all the time is frowned upon. So add yourself to the docker group:
# Add user to docker group
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
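The new group membership only takes effect in a fresh login session. A quick way (assuming a standard Linux setup) to pick it up in the current shell and check that docker now works without sudo:

# Pick up the new group without logging out, then test
$ newgrp docker
$ docker run hello-world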
Basics
I highly recommend the official Docker getting-started tutorial. It is a must; you will get familiar with the lingo. Here I list the commonly used docker commands. Do not try these directly; first, read the getting-started guide.
# Execute a basic image from hub.docker
$ docker run hello-world

# List images available
$ docker image ls

# List containers that are running
$ docker container ls
$ docker container ls -a
$ docker container ls -aq

# Manage containers
$ docker container stop <container id>
$ docker container kill <container id>
$ docker container rm <container id>

# Info
$ docker info
$(host) docker inspect <container id>

# Ubuntu image
$(host) docker container run -it ubuntu bash
$(host) docker exec -it <container id> bash
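The -aq form above prints only container ids, which is handy for bulk cleanup. For example, to remove all stopped containers in one go:

# Remove all stopped containers (running ones are skipped with an error)
$ docker container rm $(docker container ls -aq)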
Tensorflow
There are several good images available on hub.docker that a computer vision/robotics researcher/developer can start from. The workflow is that you start with a basic image, customize it as per your needs, and finally ship your code with it.
Some popular choices are the official nvidia/cuda, tensorflow/tensorflow, and ros images. For example, to use the nvidia/cuda image:
# Will download and then run this image. This one is cuda10
# and cudnn7. There are several versions to choose from. Check
# under tags on hub.docker
$ docker run --runtime=nvidia nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04
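A quick sanity check that the GPU is actually visible inside such an image is to run nvidia-smi in it; the nvidia runtime mounts the driver utilities into the container, so this should print the same GPU table you see on the host:

$ docker run --runtime=nvidia --rm nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04 nvidia-smi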
If you are using tensorflow, there are official docker images. Some more details on this can be found here, although it is exactly like running any other image.
Simple Usage
# Drop into bash shell of the tensorflow image
$ docker run --runtime=nvidia -it \
      tensorflow/tensorflow:latest-gpu bash
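Once inside that shell you can quickly confirm that tensorflow sees the GPU (this uses the TF 1.x API that these images shipped with at the time):

# Prints True if a GPU device is usable from tensorflow
$(container) python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"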
Shared Folder
# Share a host folder with the docker container. This is the
# recommended way for developers. You edit your files on the
# host machine and execute on the docker. This will mount the
# host's $HOME/docker_ws on /app in the docker virtual machine.
$ docker run --runtime=nvidia -it -v $HOME/docker_ws:/app \
      tensorflow/tensorflow:latest-gpu bash
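You can pass -v multiple times, and append :ro to mount a folder read-only, which is useful for datasets you do not want the container to touch. A sketch (the dataset paths here are just placeholders):

# Code folder writable, dataset folder read-only inside the container
$ docker run --runtime=nvidia -it \
      -v $HOME/docker_ws:/app \
      -v $HOME/datasets:/datasets:ro \
      tensorflow/tensorflow:latest-gpu bash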
Extra Shell
# Open additional shell in existing container
$ docker container ls
CONTAINER ID   IMAGE                       COMMAND   CREATED       STATUS       PORTS                NAMES
e66e902667cc   mpkuse/kusevisionkit:v0.4   "bash"    2 hours ago   Up 2 hours   6006/tcp, 8888/tcp   tender_raman
$ docker exec -it e66e902 bash
Commit Container Image
# After you have a base image like above, go ahead and customize
# it with additional libraries that are needed to run your code etc.
# Commit and push to your personal hub.docker. You will need
# a hub.docker account. Go ahead and create it.
$(container) apt-get install python-dev
$(host) docker commit <container id> mpkuse/customimage:v0.1
$(host) docker login
$(host) docker push mpkuse/customimage:v0.1
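Once pushed, anyone (including future you on a fresh machine) can pull and run the customized image. A quick way to verify the commit worked before pushing:

# Confirm the committed image exists locally, then try it out
$(host) docker image ls mpkuse/customimage
$(host) docker run -it mpkuse/customimage:v0.1 bash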
Port Forwarding
# Container's port can be made available on the host machine.
# This is for enabling communication between applications running
# on different containers. Following will make port 8080 of the
# container available as port 8080 on the host machine.
$ docker run -it -p 8080:8080 mpkuse/hello:v0.1 bash
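For example, the tensorflow images expose 6006 (TensorBoard) and 8888 (Jupyter), as seen in the docker container ls output earlier. Mapping 6006 lets you open TensorBoard running inside the container from a browser on the host (the logdir below is just a placeholder):

$ docker run --runtime=nvidia -it -p 6006:6006 \
      tensorflow/tensorflow:latest-gpu bash
$(container) tensorboard --logdir /app/logs
# Now browse to http://localhost:6006 on the host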
GUI Apps
Drawing GUIs etc. is often needed by vision/robotics developers. This is not what most docker users intend to do. Here is how you can launch a simple GUI app from docker, assuming xclock is installed on your image. If it is not, just install it with apt-get install x11-apps.
$(host) xhost +local:root
$(host) docker run -ti -e DISPLAY=$DISPLAY \
            -v /tmp/.X11-unix:/tmp/.X11-unix \
            mpkuse/customimage:v0.1 bash
$(container) xclock
$(container) exit
$(host) xhost -local:root
ROS
I recommend you read my tutorial here:
https://github.com/mpkuse/docker_ros_test
One can either start from the ros images on hub.docker, or start from the tensorflow image and install ROS yourself; it is up to the user. Note that it is usually not possible to merge two docker images into one. Maybe things will change in the future, maybe not.
There is also a ros-tutorial on docker, but in my opinion it is too brief. You can install ros in your docker container and commit it (possibly also push it to hub.docker). After your container has ros installed, open a bash shell with docker exec to run roscore, and open more bash shells with exec (see example above) to run more rosnodes/roslaunch/rviz etc.
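Concretely, the multi-shell workflow looks something like this (the container id is whatever docker container ls reports):

$(host) docker exec -it <container id> bash    # shell 1
$(container) roscore
$(host) docker exec -it <container id> bash    # shell 2
$(container) rostopic list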
My image, which is GPU-ready with tensorflow 1.11 and ros-kinetic, can be found here:
https://hub.docker.com/r/mpkuse/kusevisionkit/
Here is how I start my ros-kinetic docker image. It already has opencv3.3, Eigen, and ceres-solver. Note it also has tensorflow1.11-gpu. If you wish to start it with a GPU environment, add the --runtime=nvidia flag in addition to the following. The -v flags set the mount points; adjust as needed.
# Start my ros docker image - with GUI
$(host) xhost +local:root
$(host) docker run -ti -e DISPLAY=$DISPLAY \
            -v /tmp/.X11-unix:/tmp/.X11-unix \
            -v /home/mpkuse/docker_ws_slam/:/app \
            -v /media/mpkuse/Bulk_Data/:/Bulk_Data \
            mpkuse/kusevisionkit:ros-kinetic-vins
# Start ros docker image - no GUI
$(host) docker run -ti \
            -v /home/mpkuse/docker_ws_slam/:/app \
            -v /media/mpkuse/Bulk_Data/:/Bulk_Data \
            mpkuse/kusevisionkit:ros-kinetic-vins
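A small convenience not shown above: giving the container a name with --name makes the inspect and exec steps later in the checklist easier (the name here is arbitrary):

$(host) docker run -ti --name vins_test \
            -v /home/mpkuse/docker_ws_slam/:/app \
            -v /media/mpkuse/Bulk_Data/:/Bulk_Data \
            mpkuse/kusevisionkit:ros-kinetic-vins
$(host) docker exec -it vins_test bash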
Here is a checklist to verify/run ros on docker:
- Run the container images as above
- Using docker inspect <container name>, get the IP address of the container. Assume the IP address of the container is 172.17.0.2 and the host's IP address is 172.17.0.1.
- Edit the /etc/hosts file on the host and also in the container so that the docker container and the host know each other by name. This is needed for roscore to work correctly. Assuming deephorse is the hostname of the host PC and 0ef6065d7b27 is the container id, it looks something like:
  172.17.0.2 0ef6065d7b27
  172.17.0.1 deephorse
- Make sure the container can ping the host by hostname and the host can ping the container by container id:
  $(host) ping 0ef6065d7b27
  $(container) ping deephorse
- $(container) roscore
- $(host) export ROS_MASTER_URI=http://0ef6065d7b27:11311/
- Run a dummy node and see if you can receive messages on the host PC. You may like to use my dummy node. https://github.com/mpkuse/docker_ros_test
- $(container) rosrun docker_ros_test pub_text.py
- $(host) rostopic list
  /chatter
  /rosout
  /rosout_agg
- $(host) rostopic echo /chatter
  data: hello world 1542596296.97
  ---
  data: hello world 1542596297.07
  ---
  ...
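As an alternative to editing /etc/hosts as in the checklist above, you can run the container with host networking; the container then shares the host's network stack (and name resolution), at the cost of isolation. A sketch:

# Container shares the host's network, so roscore on either side is reachable directly
$(host) docker run -ti --net=host \
            -v /home/mpkuse/docker_ws_slam/:/app \
            mpkuse/kusevisionkit:ros-kinetic-vins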
Comments

Hi Manohar, when I try running this command
"docker run -ti -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    mpkuse/customimage:v0.1 bash"
I got the following error:
“Unable to find image ‘mpkuse/customimage:v0.1’ locally
docker: Error response from daemon: pull access denied for mpkuse/customimage, repository does not exist or may require ‘docker login’.”
I suppose this is because I do not have the image you referred to in this line. But I'm having difficulty parsing what this command is trying to do: is it trying to docker run the image mpkuse/customimage:v0.1? What is the final "bash" doing?
As a side note, I've studied in Hong Kong as well; I got a B.Sc. and M.Phil. from HKU, so seeing your post reminded me of the good old days there.
Thanks,
Yuqiong
Hi Yuqiong, nice to know you studied in Hong Kong.
Anyways, back to your question: actually I deleted the v0.1 tag on my images as it was not descriptive. I suggest you head over to the hub.docker link (https://hub.docker.com/r/mpkuse/kusevisionkit/) to see all my tags.
And btw, customimage was just a dummy name; my docker repo name is kusevisionkit.
If you decide to use, for example, `tf-cpu-opencv-3.4`, you need to specify it as something like `mpkuse/kusevisionkit:tf-cpu-opencv-3.4`.
Hope this helps!
I see. You are starting bash from your customimage :-> Thanks for the reply!