Deep Learning for Production: Deploying YOLO using Docker.

Abhishek Bose
8 min read · Dec 15, 2019


Building deep learning networks is one thing; deploying them to production is a completely different challenge. Developers can spend days scratching their heads over deploying a model and getting all the dependencies right.

We will go over a problem statement of building an object detector using the YOLO object detection framework and deploying it as a Docker container. The following paragraphs explain the issues developers generally face while building and deploying deep learning models for inference at a production level, how simple dependency and library mismatch issues can give a developer sleepless nights, and how all of these can be solved using Docker containers.

Problems of setting up a deep learning environment:

Frustrated Developer on Tatvic.com

A developer would generally build a deep learning model on their personal computer, in a carefully set-up development environment, before trying to deploy the same code on a server.

Several problems might arise during deployment: the OS on the server might be different (a different Linux distro), the NVIDIA graphics card and driver might be different, and the CUDA and cuDNN libraries might not be the same as on the development machine.

These inconsistencies can hamper the smooth deployment of a deep learning based model on a production machine.

Docker to the Rescue:

Docker is a tool developed by genius minds which enables us to package our application and deploy it using containers.

Container technology is becoming extremely popular these days. A container packages an entire application along with all its dependencies, and containers can be easily deployed on different computing environments with different underlying hardware configurations.

Fig 1: Containerized Applications. Underlying operating system is the host’s. Containerized applications from docker.com
Fig 2: Virtual Machines using Hypervisors. VMs from docker.com

Unlike virtual machines, which require a hypervisor for setting up an environment with shared resources, containers use the host's OS to set up the environment, with all dependencies specified by the developer in an image file. Hence containers are also extremely lightweight compared to virtual machines. These images need to be built, and they become containers at run-time. Fig 1 and Fig 2 above show how containers and virtual machines differ from each other.

Docker logo from docker.com

Docker has set the industry standard for containerizing applications. It has been built in such a way that developing and maintaining applications becomes extremely easy for both deep learning developers and DevOps engineers.

Components required for our object detector:

For our object detector we would be using the YOLOv3 network. It is one of the state-of-the-art networks which performs the task of detecting objects extremely well. We will be using the weights file from the official website, which has been pre-trained on the MS-COCO dataset. The object detector would be hosted on a Docker container and would take an image as an input.

Installing Docker and NVIDIA Docker v2:

NVIDIA Docker is basically a wrapper around the original Docker that allows all Docker containers running on a machine to use the host machine's device drivers. This removes the hardware portability and compatibility issues Docker containers would otherwise face, since Docker itself is system agnostic.

Let’s go ahead and install the latest versions of Docker and nvidia-docker v2.

Step 1. Docker Installation:

Note: We are performing this experiment on an Ubuntu-based host machine.

Update all packages on the host using:

$ sudo apt-get update
$ sudo apt-get upgrade

In order to install the latest version of Docker, run the following commands. (Docker installation guidance can also be found on the official Docker website.)

$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce

Verify the docker installation by checking the version. Use the following command:

$ docker -v
>> Docker version 19.03.5, build 633a0ea838

Let’s test out a simple Hello World docker container. Run the following command

$ docker run hello-world

The output is shown below in Fig 3.

Fig 3: Docker Hello World output

You can also check the hello-world image by running docker images. The docker ps -a command will list all containers that have been created.

Step 2. Getting the NVIDIA Docker v2:

Let’s make sure the NVIDIA device drivers are properly installed. To check if NVIDIA drivers are installed, run the following command:

$ nvidia-smi

And the output expected is…

Fig 4: Output of nvidia-smi command

If the output on your screen does not match the one shown above, do the following to download and install the latest NVIDIA driver.

$ ubuntu-drivers devices

This command lists all supported NVIDIA drivers, including the latest and recommended ones. Run the following command to install the recommended driver:

$ sudo ubuntu-drivers autoinstall

Restart your machine to complete installation. Now run nvidia-smi and check if the output matches Fig 4.

Now let’s move on to the commands required for installing nvidia-docker v2:

$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
$ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
$ sudo pkill -SIGHUP dockerd
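Under the hood, installing nvidia-docker2 registers an nvidia runtime with the Docker daemon (the pkill -SIGHUP dockerd above makes the daemon reload its config). The entry it adds to /etc/docker/daemon.json looks roughly like this:

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

This is what makes the --runtime=nvidia flag used below resolve to the NVIDIA container runtime.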

To test the successful installation of nvidia-docker v2 type the following command and hit enter

$ docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi

This command will download a CUDA 9.0 base Docker image from Docker Hub and use the host's NVIDIA device drivers. The output should be the same as shown in Fig 4.

Let’s build our object detector:

As mentioned earlier, we will be using YOLO v3, pre-trained on MS-COCO, as our object detector. The official darknet website (https://pjreddie.com/darknet/yolo/) explains beautifully how to download and compile darknet YOLO for our machine. The links to the weights and cfg files for YOLO v3 are also on the same website.

Once YOLO has been compiled and tested successfully, we will go ahead and write our Python script for detecting objects in images.

Let's get all the required headers in place, as shown in Fig 5 below.

Fig 5: Important headers for our object detector
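As a rough sketch (the exact imports are in the repo), the headers boil down to the following:

```python
# Sketch of the headers the detector script needs; the exact list is in the repo.
import sys                  # read the image path from the command line
import json                 # parse the config file pointing at the .cfg/.weights/.data files
from ctypes import CDLL, c_char_p, c_float  # darknet is a C library loaded via ctypes

# OpenCV (cv2) is also imported in the real script to load the input image.
```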

I have uploaded the full object detector code to the Git repo (link given below). It also contains some accompanying functions that support using darknet from Python, which I will skip here.

Let’s move to the main function directly

Fig 6: Importing our necessary meta and weight files

As shown above in Fig 6, we have defined our config file in line 2. This config file (Fig 7) lists the weights file, the .cfg file, and the .data file required for running YOLO.
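The config file itself is nothing fancy. A minimal sketch (the file names here are assumptions; the real config lives in the repo) could be a small JSON file parsed with the standard json module:

```python
import json

# Hypothetical contents of the config file; the real paths live in the repo.
CONFIG_TEXT = """
{
    "cfg_file": "darknet/cfg/yolov3.cfg",
    "weights_file": "darknet/yolov3.weights",
    "data_file": "darknet/cfg/coco.data"
}
"""

def load_config(text):
    """Parse the config and return the three paths darknet needs."""
    cfg = json.loads(text)
    return cfg["cfg_file"], cfg["weights_file"], cfg["data_file"]

cfg_path, weights_path, data_path = load_config(CONFIG_TEXT)
```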

Fig 7: Config file
Fig 8: Detecting objects from weight files loaded from config file

In Fig 8, the image is taken as an argument from the command line. The image is loaded using OpenCV and then passed as a parameter to the netdetect function.
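Putting Figs 6–8 together, the main flow of the script looks roughly like the sketch below. The darknet binding calls here (load_net, load_meta, detect) follow the Python wrapper shipped with darknet, the file paths are placeholders, and the helper that reshapes the raw detections is my own illustration; the exact code is in the repo.

```python
import sys

def format_detections(raw):
    """Turn darknet's (label, confidence, (x, y, w, h)) tuples into dicts."""
    return [
        {"label": label, "confidence": conf, "box": {"x": x, "y": y, "w": w, "h": h}}
        for label, conf, (x, y, w, h) in raw
    ]

def main(image_path):
    # These imports live here so the module can load without darknet/OpenCV present.
    import cv2
    import darknet  # the Python wrapper compiled alongside darknet

    net = darknet.load_net(b"darknet/cfg/yolov3.cfg", b"darknet/yolov3.weights", 0)
    meta = darknet.load_meta(b"darknet/cfg/coco.data")
    image = cv2.imread(image_path)          # load the image with OpenCV
    raw = darknet.detect(net, meta, image)  # run YOLO v3 on the image
    print(format_detections(raw))

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])
```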

The script is named python_docker.py. In order to test the code, run the following command:

$ python3 python_docker.py /path/to/image

The output is shown below in Fig 9.

Fig 9: The output is printed in the terminal as shown

Let’s go ahead and build the Dockerfile for our experiment. The Dockerfile is a kind of configuration file which lists all the dependencies and steps required to build our image and run it as a container.

Fig 10: DockerFile part 1

In Fig 10 shown above, we start from an image with NVIDIA CUDA 10 and cuDNN 7 on an Ubuntu 18.04 base. This gives us a base container with all NVIDIA dependencies properly installed. Next, on line 3, we install python3 and the other required libraries. The RUN command executes its argument in the shell, as is.

Fig 11: Dockerfile part 2

In Fig 11, we list down all the libraries required for running our python script successfully.

Next, as shown in Fig 12, we change the current directory to the home directory using the WORKDIR command. After that, we execute the command to clone the darknet library from Git (this is the same step shown on the official darknet website for downloading darknet in general).

We use the Linux sed command to edit the Makefile so that darknet is built with GPU and cuDNN support.

Following that, we run the make command to compile darknet with the newly edited Makefile.

The RUN command is again used to download the yolov3 weights file before changing the working directory to home.

Fig 12: Dockerfile part 3
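Putting the three parts together, the Dockerfile looks roughly like the sketch below. The exact package lists, library versions, and image tag are assumptions on my part; the real Dockerfile is in the repo.

```dockerfile
# Part 1 (Fig 10): CUDA 10 + cuDNN 7 base image on Ubuntu 18.04
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04

# Install python3 and basic build tools
RUN apt-get update && apt-get install -y python3 python3-pip git wget

# Part 2 (Fig 11): Python libraries needed by the detector script
RUN pip3 install numpy opencv-python

# Part 3 (Fig 12): clone darknet, enable GPU/cuDNN in the Makefile, compile, fetch weights
WORKDIR /home
RUN git clone https://github.com/pjreddie/darknet.git
WORKDIR /home/darknet
RUN sed -i 's/GPU=0/GPU=1/' Makefile && \
    sed -i 's/CUDNN=0/CUDNN=1/' Makefile && \
    make
RUN wget https://pjreddie.com/media/files/yolov3.weights
WORKDIR /home
```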

Now run the following command from the directory containing the Dockerfile to build the image (note the trailing dot, which sets the build context):

$ docker build -t image_name .

The output of the build command will look something like what is shown in Fig 13 below.

Fig 13: Output of docker build command

Now run the docker container using the following command:

$ docker run -it --runtime=nvidia --shm-size=1g -d image_name

The command given above will start a Docker container in the background (-d is the detached flag). To test whether the container is running properly, run docker ps. Fig 14 shows our Docker container running properly.

Fig 14: Docker container named “yolo_python_docker” running successfully

Now let us copy our object detector script to the home directory of our container. Run the following command to do that:

$ docker cp python_docker.py container_id:/home/

Now the home folder of our container looks as shown below in Fig 15.

Fig 15: home directory inside container

The best part has arrived. In order to run the python code from inside the container just do the following:

$ docker exec -ti container_id python3 python_docker.py path_to_image

The -ti flag enables interactive mode on the container. The exec command allows us to execute the Python script without opening a shell inside the container. The above command will produce the same output as shown in Fig 9.

Another way of running the Python code is by adding the CMD command to the Dockerfile itself, as shown below in Fig 16. This will execute the Python code as soon as the initial docker run command is executed.

Fig 16: Executing command from inside DockerFile
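With the CMD route, the last line of the Dockerfile would look something like this (the image path here is a placeholder for a test image baked into the image):

```dockerfile
# Run the detector automatically when the container starts
CMD ["python3", "/home/python_docker.py", "/home/test_image.jpg"]
```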

Congratulations on containerizing your first deep learning based object detector. This was a fairly simple implementation, but with slight tweaks it can serve much more complex settings.

Docker is an extremely powerful tool which can be used to simplify many of the dependency problems faced every day when deploying deep learning models.

I hope more and more developers start using docker in their regular development sprints.

The script and the Dockerfile accompanying it, can be found here →https://github.com/AbhishekBose/yolo_docker.git


Abhishek Bose

Machine Learning Engineer III at Swiggy. On a quest for technology.