I have seen a lot of instructions on building and setting up a machine learning box, but comparatively few on doing it with nvidia-docker.
Using nvidia-docker is a good thing. It greatly simplifies the complex part: installing a large pile of dependencies before you can start developing and running your deep learning algorithms.
With nvidia-docker you practically just type
nvidia-docker run ... and, as if by magic, it works just like you would expect from plain docker.
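As a sketch of what that looks like in practice (the image tag here is just an example from the time of writing; pick whichever CUDA tag matches your setup):

```shell
# Start an interactive shell in NVIDIA's CUDA base image.
# nvidia-docker takes care of exposing the host driver and GPU
# devices to the container for you.
nvidia-docker run --rm -it nvidia/cuda:8.0-cudnn5-devel bash
```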
One question arises, however: if I were to use nvidia-docker, what exactly would I have to install on my computer?
Looking back it seems obvious to me now, but it was not so obvious at the time I was figuring it out.
Obviously, the first thing you can't skip is
docker itself, simply because nvidia-docker is built on top of it.
So, if you don't have it on your machine already, here is a good place to start: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04
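Once you have followed a tutorial like the one above, a quick sanity check that docker is actually installed and the daemon is running (these are standard docker commands; nothing here is specific to nvidia-docker):

```shell
# Print the installed docker version; fails if docker is not on PATH.
docker --version

# Run the tiny hello-world image to confirm the daemon works end to end.
sudo docker run --rm hello-world
```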
Second, you need a working driver for your NVIDIA Pascal graphics card, which means a GTX 1060, 1070, 1080, or Titan X. You will need an NVIDIA graphics driver at version 367.44 or later (the first release of the baseline driver series to support Pascal cards), or, if you want to test more recent drivers, you can go to the 370 series, which is less stable.
# if you don't have 'add-apt-repository'
sudo apt-get install software-properties-common

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update

# list all your choices
sudo apt search nvidia

# install driver 367.44 (at the time of writing)
sudo apt install nvidia-367

# install driver 370.23 (at the time of writing)
sudo apt install nvidia-370
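After the driver package is installed (a reboot is usually needed so the new kernel module loads), you can verify it with nvidia-smi, the monitoring tool that ships with the driver:

```shell
# Prints the driver version and lists the detected GPUs;
# if this fails, the driver is not loaded correctly.
nvidia-smi
```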
It's important to mention that the CUDA driver and the CUDA toolkit are packed inside the
nvidia/cuda docker image, so we do not need to install them on the host.
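Putting it all together, a minimal smoke test (assuming docker, the host driver, and nvidia-docker are all installed as above) is to run nvidia-smi inside the nvidia/cuda image and confirm the container can see the GPU:

```shell
# Pull nvidia/cuda from Docker Hub on first use, then run nvidia-smi
# inside the container; the output should match what the host shows.
nvidia-docker run --rm nvidia/cuda nvidia-smi
```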