Running Deadline in Containers - Part 1

Version: Deadline 8.0

Overview

In recent years, container technology has exploded in popularity. At the time of this article, Thinkbox Software does not offer official Deadline container images on DockerHub, but it's fairly easy to build and run your own Deadline containers. In Part 1 of this two-part article, I'll briefly discuss what containers are, and then I'll work through an example of how to get Deadline running in a container.

Wait, What Exactly are Containers?

Hint: If you're already familiar with containers and just want to see how to run Deadline in containers, save yourself some time and jump ahead to the "Containerizing Deadline" section below.

Container technology is a feature of modern operating system kernels that isolates how processes gain access to system resources such as processors, network interfaces, and storage. Container technology, in various forms, has been around in the Linux world for roughly a decade but has recently been made much easier to use by projects like Docker, rkt, and LXD. Microsoft, while late to the party, has been working with Docker to implement container features in the Windows kernel via Windows Server 2016. Mac OS X, like Windows versions other than Server 2016, requires a virtual machine (VM) running in the background, typically using Oracle's VirtualBox, to host Docker containers. In general, containers based on a sufficiently different kernel from that of the host machine must be run in a VM on the host machine.

The Benefits

While there are many use cases for container technology, the one that I think has really sparked widespread interest is the ability to gather not only the application itself, but also all of its dependencies, into a tidy package. The benefits of this are manifold. First, it eliminates the "It Works on My Machine" problem, where an app runs on one machine but not another because of subtle library or configuration differences between the two machines. Such differences can be very difficult to diagnose and resolve. Having the dependencies packaged with the application also eliminates complicated installation and uninstallation procedures. A big problem with traditional applications is all the cruft they leave behind when they are uninstalled, which can seriously complicate subsequent re-installations of the app or even completely unrelated applications. And the container paradigm has led to a new deployment approach where container images are stored in a private or public "registry" so that an application can be pulled down to a machine and run with a single command.

This all results in a more rapid, predictable, deployable, and disposable approach to application management, which benefits all stages of an application's lifecycle from development and testing on through distribution/deployment and eventually removal/archival. It's no wonder container technology is taking the world by storm. The graph below from DataDog should give you a sense of the rapid adoption. (Hint: For more interesting insights on Docker adoption, click the graph to go to the original article.)

One point worth remembering is that containers are not virtual machines. Virtual machines attempt to emulate an entire machine in software, though quite often with a form of pass-through for some operations to increase performance. And, because a VM is emulating a machine, it requires a full operating system of its own. As a result VMs are rather bulky, and it's not uncommon for production VMs to take more than a minute to boot up. In contrast, containers make direct calls to the host machine's resources, but these calls are guarded by the OS kernel to provide isolation. So containers have far less overhead than VMs and often start in a few seconds or less.

The Drawbacks

As fantastic as containers are, they do have some drawbacks. For one, because containers access resources directly on the host, they are inherently less secure than VMs, and in an extreme case they could potentially crash the host. The emulation layer of VMs provides a stronger barrier, both in the degree of protection between the host and the VM and in the degree of isolation between separate VMs. As a result, VMs are still preferred for cases where security is a critical concern. Another issue is that container technology is still in a state of rapid advancement. This can cause headaches for those who administer container deployments in production. For example, there have been cases where Docker has released updates that fix critical bugs, but which also introduce breaking changes to its interfaces. In order to get the benefit of the critical fixes, an administrator may need to substantially retool, and therefore re-validate, a portion of the production pipeline to accommodate the interface changes.

Dig a Little Deeper

If you would like to dig a little deeper and find out more about what containers are and how they work, here are a couple of links to short articles that you may find useful:

Do the Tutorial

If you work in a technical role, having a basic knowledge of containers is very much worthwhile. Since Docker is currently the most ubiquitous container technology, if you are new to containers I recommend that you work through a Docker tutorial by going to this page on docker.com, and choosing the "Getting Started Tutorial" for your platform.

That said, don't rule out the other container technologies out there, as they each have their own interesting strengths that may benefit you.

Containerizing Deadline

For the rest of the article, I'll assume the reader has at least the level of familiarity with Docker that is provided by the Docker Getting Started tutorial mentioned above.

Containerizing an application basically involves two steps: The first is to build an image for the application using the "docker build" command, and the second is to instantiate a running container from the image using the "docker run" command. Let's work through a basic example of running Deadline in a container. We'll base the image on CentOS 7 and Deadline 8.

We'll need a Deadline Repository for any containers we create to connect to. And since this is meant for learning and testing, it's probably best not to connect to a production Repository. To share the Repository folder with containers, we will be using Docker's volume feature. On Windows, a little extra setup is required. (I'll be using my Windows 7 laptop in the examples.)

On Windows, the Docker host that runs containers will be a VirtualBox VM. By default the Docker VM, which is created automatically the first time the Docker Quickstart Terminal is started, will synchronize the VM's "/c/Users/" folder to the "C:\Users\" folder on the Windows host. What that means is the easiest way to get a folder on the laptop shared through to the Docker VM, and subsequently through to a container, is to use a folder located somewhere under "C:\Users\".
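If you'd like to confirm that this synchronization is in place, you can open a shell on the Docker VM and check that the folder is visible. This is just a quick sanity check, and it assumes the VM uses the default Docker Toolbox machine name of "default":

docker-machine ssh default "ls /c/Users/James/"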

So I'm going to use the Deadline Repository installer to install a new database and Repository on my laptop. I'll place the Repository into "C:\Users\James\DeadlineRepository8\", but the database can simply go to the default location and use the default port.

We'll also need the Deadline Client software installed on the Windows host so that we can use Deadline Monitor, so install that too if it's not already installed. (If it is already installed, it's just a matter of pointing Monitor to the new, locally-installed Repository at your equivalent of "C:\Users\James\DeadlineRepository8\".)

Creating a Deadline Client Image

We start by creating a Dockerfile that describes the container image to be built. For this example we'll base the image on CentOS 7. Below are the contents of the Dockerfile, which I have placed into "C:\Users\James\ClientTest\".

FROM centos:7

# 1. Use your own e-mail for the maintainer.
MAINTAINER yourname@yourcompany.com

# Perform a general update of the OS.
RUN yum -y upgrade

# Add requirements for Deadline 8 headless Slave.
RUN yum -y install redhat-lsb \
 && yum -y install libX11 \
 && yum -y install libXext \
 && yum -y install mesa-libGL

# Copy over the installer.
# 2. Be sure the installer .run file has been placed in the same folder as the Dockerfile.
RUN mkdir /tmp/thinkboxsetup/
COPY DeadlineClient-8.*-linux-x64-installer.run /tmp/thinkboxsetup/

# Run the installer.
# 3. Replace the name of the license server after --licenseserver below with that of your actual license server.
RUN /tmp/thinkboxsetup/DeadlineClient-8.*-linux-x64-installer.run \
    --mode unattended \
    --unattendedmodeui minimal \
    --repositorydir /mnt/DeadlineRepository8 \
    --licenseserver @lic-thinkbox \
    --noguimode true \
    --restartstalled true

WORKDIR /opt/Thinkbox/Deadline8/bin/

Note that a single Linux installer ".run" file (e.g. "DeadlineClient-8.0.11.2-linux-x64-installer.run") should be placed in the same folder as the Dockerfile before issuing the "docker build" command. It may be necessary to edit the Dockerfile to reflect the name of the installer file for future versions of Deadline where the major version number has changed. In a production scenario, it is good practice to specify full version numbers to avoid any potential confusion.
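One way to pin the version, sketched below, is to use a Docker build argument. This is just an illustration, and the "DEADLINE_VERSION" argument name is my own choosing; the COPY and RUN instructions in the Dockerfile would then reference the same variable instead of the wildcard:

ARG DEADLINE_VERSION=8.0.11.2
COPY DeadlineClient-${DEADLINE_VERSION}-linux-x64-installer.run /tmp/thinkboxsetup/

The version can then be overridden at build time with "docker build --build-arg DEADLINE_VERSION=<version> ..." without editing the Dockerfile.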

Within the Dockerfile, the "RUN" command that calls the installer uses command-line installation options passed to the installer executable to accomplish the installation. This example uses only a small subset of the possible options that are available. For a broader introduction to command-line installation of Deadline, check out the Feature Blog article on Installing Deadline from the Command Line.
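If you'd like to see the full list of options supported by the particular installer build you have, the installer can print them itself. For example, from a Linux machine (or an interactive container), something along these lines should work:

./DeadlineClient-8.0.11.2-linux-x64-installer.run --help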

With the Dockerfile ready to roll, we need to issue the "docker build" command to build the image. On Windows I recommend using the Docker Quickstart Terminal to accomplish this, although with some minimal configuration PowerShell and various bash-style terminals can also be used.

cd ClientTest
docker build -t centos7/deadline_client:8.0 .

The "-t centos7/deadline_client:8.0" option causes the resulting image to be tagged (meaning named). Choose a tag that suits your environment and naming conventions. Be aware that the tag used here is also used in the examples below, so if you change the tag, you will need to also change it in the examples below.

Testing the Image with an Interactive Container

Because the client installer includes all of Deadline's client programs, the image we just created can be used to launch containers for any client program that can be run without a GUI. This includes Balancer, License Forwarder, Pulse, Slave, Web Server, and of course deadlinecommand. Some of these may need additional configuration added to the image, or added to the container at runtime. For example, the License Forwarder requires a valid certificate for each third-party application that will be using third-party compute (render) time.

I like to test new images by first running an interactive container. We start by opening the Docker Quickstart Terminal and issuing the following command to instantiate a container from the image we built:

docker run -ti --rm --name DeadlineTestContainer \
-h DeadlineTestContainer \
-v /c/Users/James/DeadlineRepository8:/mnt/DeadlineRepository8 \
--add-host lic-thinkbox:192.168.2.14 \
--add-host Agent005:192.168.1.96 \
--entrypoint /bin/bash \
centos7/deadline_client:8.0

The first thing to keep in mind is that the "docker run" command is executed from the perspective of the Docker host, which on Windows is a VM running a lightweight Linux distribution. So all options passed to the "docker run" command, such as paths or IP addresses, should be specified from the Docker host VM's perspective.

Let's look at various options passed to the "docker run" command in detail:

  • -ti: This is shorthand for "-t -i", which allocates a pseudo-terminal and keeps STDIN open, so the container can be used interactively.
  • --rm: This causes the container to be removed when it exits. If this option is not present, then the exited container's filesystem will remain after exit, which is useful for troubleshooting. (Hint: One can list all containers, whether running or exited, with: "docker ps -a").
  • --name DeadlineTestContainer: This names the running container "DeadlineTestContainer" for easy identification.
  • -h DeadlineTestContainer: This causes the hostname of the running container to be "DeadlineTestContainer". It's usually a good idea for the container name and the hostname to match, or at least to be named similarly enough to be clearly associated.
  • -v /c/Users/James/DeadlineRepository8:/mnt/DeadlineRepository8: This mounts the folder "/c/Users/James/DeadlineRepository8" on the Docker host at "/mnt/DeadlineRepository8" inside the container (this is the blue connector in the diagram below). As mentioned above, on Windows the Docker host is a VirtualBox VM which is auto-configured to synchronize its "/c/Users/" folder to "C:\Users\" on the physical machine (this is the green connector in the diagram below).
  • --add-host <hostname>:<IP>: This option tells the container a given hostname's IP address, which the container will use instead of consulting any DNS provided via the Docker host. In my case, I'm connecting to a license server over a VPN, so the first option tells the container where to find "lic-thinkbox". Connecting to the VPN also causes my laptop to have multiple IPs (one on the local network, and one on the VPN). This can create problems when the container tries to connect to the Deadline database instance running on my laptop, so I explicitly tell the container which IP to use for my laptop with hostname Agent005. If I had been using a license server on the local network (and on the same network segment as my laptop) and not using a VPN, then these two "--add-host" options would not have been necessary.
  • --entrypoint /bin/bash: This causes bash to be run as the container's entry process, giving us an interactive shell when the container starts.
  • centos7/deadline_client:8.0: Finally, this is the name of the image from which the container will be instantiated. This is just the tag (or name) we specified when we built the image. Note that the running container gets its own writable filesystem layer which holds only its changes relative to the image. This avoids the overhead of making a full copy of the image when starting the container and is partly responsible for why Docker containers start so quickly.

To validate the image for this example, I simply ran a few commands with deadlinecommand. For a production-quality image, much more testing would be required, of course. deadlinecommand is very handy for working with Deadline from a terminal and also for automation purposes. If you would like to learn more about deadlinecommand, check out the Feature Blog article on Deadline's Secret Weapon.
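As an example of the kind of quick check I mean, here are a couple of deadlinecommand calls that can be run from the container's working directory. Treat these as a sketch; the exact subcommand names can vary between Deadline versions, so consult the Deadline documentation for the definitive list:

./deadlinecommand GetRepositoryRoot
./deadlinecommand Pools

The first reports the Repository path the client is configured to use, and the second lists the pools defined in the Repository, which also exercises the connection to the database.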

Running a Deadline Slave Container

Now that we know the image works as intended, at least within the scope of our limited testing, we can run a Slave container by setting the entrypoint to deadlinelauncher:

docker run -d --name dockerslave01 \
-h dockerslave01 \
-v /c/Users/James/DeadlineRepository8:/mnt/DeadlineRepository8 \
--add-host lic-thinkbox:192.168.2.14 \
--add-host Agent005:192.168.1.96 \
--entrypoint /opt/Thinkbox/Deadline8/bin/deadlinelauncher \
centos7/deadline_client:8.0

The "-d" option causes the container to be run detached, so the prompt will return immediately but the container will continue to run. (Hint: You can omit the "-d" option and watch the container's output in the terminal.)

Since we specified deadlinelauncher as the entrypoint, and since we configured Deadline to automatically launch Slave when Launcher starts, this will result in Slave automatically being started and registering itself.
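Because the container was started detached, an easy way to keep an eye on Launcher and Slave as they start up is to follow the container's log output:

docker logs -f dockerslave01

(Press Ctrl+C to stop following the log; the container itself keeps running.)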

We can wait for the Slave to show up in Monitor to confirm that it's working. We can then stop the container with:

docker stop dockerslave01

In this case, the container's filesystem is still present after it exits. We can list all containers, whether running or exited, with:

docker ps -a

We can then remove the exited container with:

docker rm dockerslave01

And finally we can remove the "dockerslave01" entry from Monitor by right-clicking on it and choosing "Mark Slave Offline", and then right-clicking on it again and choosing "Delete Slave". And that completes the example.

Closing Remarks for Part 1

So far we've discussed what containers are, and we've explored how to build a basic Deadline Client image that can be used to instantiate containers that run headless Deadline Client programs such as Slave. In Part 2, coming next week, we'll continue on and explore how we can add worker programs into the mix by extending the Deadline Client image.

In the meantime, I urge you to continue experimenting with Docker and containers. And also check out the DevOps folder in the Thinkbox GitHub repository for Deadline. In preparation for this blog article, I have extended the examples and documentation related to using Deadline with Docker. Feel free to contribute to the documentation and examples with a pull request!

And finally, if you have questions about this blog article or about Deadline in general, feel free to post your questions to our Support Forum.