Version: Deadline 8.0
In Part 1 we discussed what containers are, and we worked through a basic example of building a Deadline Client container image from which we could launch headless Deadline Client programs like Slave. In Part 2 we dig a little deeper into the motivations for using containers, and then we extend the Deadline Client image built in Part 1 to include a worker program.
One might ask, "Why bother running Deadline in a container?"
To answer that, let's think about the software side of a production pipeline. It is composed of several connected systems, where the term system might refer to an operating system, a program, a script, or even a configuration, together with the data that flows through it. These lower-level systems also get wired together by clever people to form higher-level systems, which might be combined again at even higher levels.
Regardless of the level at which a system exists, its advancement typically follows a change-test-release cycle, where problems found in either the test stage or the release stage feed back into the change stage to start a new cycle. The more rapidly this cycle can be executed, the faster and more fluidly the system will mature.
One of the biggest barriers to the cycle speed is inconsistencies in the components on which the system rests. For example, a developer's (or TD's, or administrator's) computer has a different history than that of a given test computer or that of a given production computer. Suppose a problem is encountered with the system when it is run on a test computer or on a production computer. Is this problem the result of a flaw in the system, or is it due to an issue with the computer perhaps because of old software or leftover configuration data? A lot of time can be consumed just figuring out where the problem lies.
Virtual machines (VMs) can help by providing a clean reference environment for testing and, in many cases, for production use. But VMs can be slow to work with in the testing stage. Each test may require a fresh VM state, which might mean restarting the VM, or it might involve a more complicated and time-consuming refresh process, such as a lengthy download or a set of complex configuration steps. These conditions can lead to very long cycle times.
Containers have some unique properties that can help address these issues. Using Docker as an example, Docker's build process is highly efficient in that it builds container images in layers. Each command (ADD, COPY, RUN) in a Dockerfile generates a new layer in the resulting image. The benefit is that when the Dockerfile is modified, only the layers from the point of modification onward are rebuilt. This makes iteration very fast, since modifications to a Dockerfile are usually made near the end of the file. Furthermore, an image can reference another image (via the FROM command), allowing it to pick up where the prior image left off to further extend or specialize it.
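As an illustrative sketch of layer caching (a hypothetical Dockerfile, not one of the images built in this article), each instruction below produces its own cached layer. Editing only the last line causes Docker to rebuild only that layer on the next build, reusing the cached layers above it:

```dockerfile
# Hypothetical example to illustrate layer caching; not part of this article's images.
FROM centos:7

# Layer 1: the installed package is cached and reused across rebuilds.
RUN yum install -y epel-release

# Layer 2: cached independently of the layer above.
COPY myscript.sh /usr/local/bin/

# Layer 3: changing only this line leaves the earlier cached layers untouched.
RUN chmod +x /usr/local/bin/myscript.sh
```

This caching behavior is why iterating at the end of a Dockerfile feels nearly instantaneous compared to rebuilding from scratch.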
And containers keep things fresh by design. While an existing container can be restarted with the "docker start" command, in a testing scenario it's more likely that a fresh container would be started for each test attempt with the "docker run" command. Once a person gets the hang of using containers, it's not terribly difficult to run multiple linked containers to emulate a more complex production environment.
So, for the testing stage alone, containers give us:
- An accelerated way to build up the test context via layered images.
- The ability to boot a fresh test environment, often in less than a couple of seconds.
- The ability to launch groups of containers locally to simulate a production environment.
Even if the final system is not intended to be run in containers in production, containers can be used to substantially accelerate the cycle time in the early stages. And if containers happen to be suitable for use in production, then these same benefits are equally applicable there. If the system under development involves Deadline, then the benefits to the testing stage alone provide ample motivation to get Deadline running in containers.
Extending the Deadline Client Image with Worker Software
So now that we have some motivation for running Deadline in containers, let's put the claimed benefits to the test with a concrete example. Suppose we have been tasked with figuring out how to set up Blender, an open source 3D content creation application, for network rendering on CentOS 7 render nodes. We know from a brief look at its documentation that Blender can be run headlessly from the command line, so that means we can work out the requirements and test it locally using containers.
Building the Worker Image
The first thing we need is a folder to hold our working files, which will simply consist of the Blender installer and a Dockerfile that describes the image. For this we can simply create a "BlenderImage" folder under our home folder, such that the full path to the folder would be (substituting your username for mine, of course):

C:\Users\James\BlenderImage
In practice, I recommend organizing Dockerfiles into a source control repository, such as git or Subversion. But for the sake of clarity, we'll just use the "BlenderImage" folder under our home folder.
Next, we'll need to download the Blender installer that is compatible with the OS of the image we are about to create. We will be extending the "centos7/deadline_client:8.0" image we built in Part 1 of this article, so I downloaded the "blender-2.78a-linux-glibc211-x86_64.tar.bz2" file and placed it in the "BlenderImage" folder I just created.
We need a Dockerfile to describe the new image we are about to create. Using an editor of choice, we create a Dockerfile with the following contents and save it into the "BlenderImage" folder:
# 1. Choose a starting image with Deadline Client installed and tested.
FROM centos7/deadline_client:8.0

# 2. Use your e-mail address as the maintainer.
MAINTAINER email@example.com

# 3. Modify lines below to refer to actual Blender archive filename.
ADD blender-2.78a-linux-glibc211-x86_64.tar.bz2 /usr/local
RUN mv /usr/local/blender-2.78a-linux-glibc211-x86_64 /usr/local/Blender
Let's work through this Dockerfile to see what's happening.
One of the powerful features of Docker is its use of the Union File System. Among other things, this allows new container images to reference existing container images, without duplication of storage. In this case, the "FROM" command is referencing the "centos7/deadline_client:8.0" image so that we don't need to duplicate all the steps of that image. That work is done and tested, so that image provides a good foundation to build upon.
The "MAINTAINER" command just identifies the maintainer of the image, of course.
The "ADD" command adds files into an image. There is an alternative command called "COPY", which also adds files to an image. But before I get into why I chose the "ADD" command over the "COPY" command in this case, let's talk about installing Blender. A quick review of the Blender installation guide reveals that the installer for Linux is just a compressed archive file, and that the installation process mainly consists of extracting the archive into a suitable destination folder.
The "ADD" command has a special feature in that it can recognize a compressed archive file from its content (rather than by the file's name or extension), and it will automatically expand the contents of the archive into the destination folder. Since we have an archive file in this case, using "ADD" is much more efficient than first using "COPY" to get the archive file into the image and then using something like "RUN tar ..." to de-archive it. It also means that the resulting image doesn't have the added bulk of the compressed archive file.
Finally, the "RUN" command runs "mv" to rename the destination folder. We opt, here, to rename the folder to be where Deadline's Blender Job plugin expects to find it by default. (In Monitor, go to Tools, then Configure Plugins..., click Blender from the list on the left, and notice the paths in the Blender Executables group.) Alternatively, we could have omitted the "RUN mv ..." step in the Dockerfile and instead added "/usr/local/blender-2.78a-linux-glibc211-x86_64" to the list of locations to search for Blender in the Blender Executable list.
At this point there should be two files in the "BlenderImage" folder: One is the Blender installation archive file, and the other our Dockerfile. We can now open the Docker Quickstart Terminal, switch to the "BlenderImage" folder, and then build the image:
cd BlenderImage
docker build -t deadline8/blender:2.78 .
Add Test Folders and a Test File
With the build stage completed, let's move on to the test stage. Let's create a folder to hold the files for our forthcoming test. The plan for the test will be to mount this test folder into the container with Docker's "volume" feature. For the same reasons as outlined in Part 1, it's easiest to create this test folder somewhere under "C:\Users\". Let's create a new folder, "BlenderTest", under our home folder, substituting your username for mine:
Within that folder, let's also create two sub-folders: "Scene" to hold the test Blender scene file and "Output" to hold the test Job's rendered output:

C:\Users\James\BlenderTest\Scene
C:\Users\James\BlenderTest\Output
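If you prefer the command line, the folders can all be created in one step from the Docker Quickstart Terminal (which is a bash shell). This sketch assumes your home folder corresponds to "C:\Users\<username>":

```shell
# Create the test folder and both sub-folders in one step.
# In the Docker Quickstart Terminal, $HOME maps to C:\Users\<username>.
mkdir -p "$HOME/BlenderTest/Scene" "$HOME/BlenderTest/Output"
```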
We're going to need a simple Blender scene file to use in the test. In the context of this hypothetical scenario, I'm going to assume that we reached out to production for a simple Blender test scene file and that someone provided us with a simple file for testing (the default Blender scene file with a box, a light, and a camera). If you already have Blender installed somewhere accessible, you can make your own test file. Or you can click the button below and download the Blender test scene that I used. We'll name the test scene "BlenderTestScene.blend" and place it in the "BlenderTest\Scene\" folder.
Add a Path Mapping to Deadline
We will ultimately be submitting our Blender test Job via Deadline Monitor running on Windows. So the path to the Blender file, and the output path, will be recorded in the Job as Windows-style paths. However, the Job itself will be running in a Linux-based container (CentOS 7). So we'll use Deadline's Path Mapping feature to handle the path conversions. Let's set that up now.
Using Deadline Monitor (and making sure it's pointed to our locally-installed test Repository), navigate to Tools, then to Configure Repository Options..., choose Mapped Paths, and add the following mapping (again, substituting your own username):
Replace Path: C:\Users\James\BlenderTest
Linux Path: /mnt/BlenderTest
Testing the Image with an Interactive Container
As I mentioned in Part 1, I like to run an interactive test on new images before doing a full-on test in detached mode. For the sake of brevity, we'll just do some very minimal interactive testing to confirm a couple things:
- We'll call the blender executable with the --version option just to confirm that blender will run.
- We'll do a couple directory listings to confirm that the "-v" options passed to "docker run" allow the container to see into the "BlenderTest" folder as expected.
Here is the "docker run" command we will use:
docker run -it --rm --name blenderinteractive -h blenderinteractive \
    -v /c/Users/James/DeadlineRepository8:/mnt/DeadlineRepository8 \
    -v /c/Users/James/BlenderTest:/mnt/BlenderTest \
    --add-host lic-thinkbox:192.168.2.14 \
    --add-host Agent005:192.168.1.96 \
    --entrypoint /bin/bash \
    deadline8/blender:2.78
All of the options we are passing to the "docker run" command here should be familiar from Part 1. Note, however, the inclusion of a second volume (-v) option to mount our test folder into the container. And again, the two "--add-host" options are specific to my setup, where I connect to the license server via a VPN, and may not be needed in your case.
Once booted, we can run the checks in the bulleted list above.
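At the container's bash prompt, those checks might look like the following (the Blender path comes from the "RUN mv" step in our Dockerfile, and the mount points come from the "-v" options above):

```shell
# Confirm Blender runs headlessly and reports its version.
/usr/local/Blender/blender --version

# Confirm the volume mounts are visible from inside the container.
ls /mnt/DeadlineRepository8
ls /mnt/BlenderTest
ls /mnt/BlenderTest/Scene

# Exit the shell; the container is removed automatically because of --rm.
exit
```

If the directory listings show the Repository files and the test scene, the volume plumbing is working as expected.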
Run a Worker Test Container
Ok, we're now ready to fire up a Blender Slave container based on our new "deadline8/blender:2.78" image. Here's the "docker run" command:
docker run -d --name blenderslave01 -h blenderslave01 \
    -v /c/Users/James/DeadlineRepository8:/mnt/DeadlineRepository8 \
    -v /c/Users/James/BlenderTest:/mnt/BlenderTest \
    --add-host lic-thinkbox:192.168.2.14 \
    --add-host Agent005:192.168.1.96 \
    --entrypoint /opt/Thinkbox/Deadline8/bin/deadlinelauncher \
    deadline8/blender:2.78
This command has almost identical parameters to those used in our interactive test, except that we've exchanged "-it --rm" for "-d" so that the container runs in detached mode, and we've set the entrypoint to deadlinelauncher instead of bash.
Since our new "deadline8/blender:2.78" image is built upon our "centos7/deadline_client:8.0" image from Part 1, we should expect Slave to start when Launcher starts, just as it did in Part 1. This is due to the setting "LaunchSlaveAtStartup=true" in "/var/lib/Thinkbox/Deadline8/deadline.ini" in the image. However, because we specified "blenderslave01" as the hostname in the "docker run" command, we should see the Slave appear as "blenderslave01" in Monitor.
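Before switching over to Monitor, we can sanity-check the detached container from the Docker Quickstart Terminal using standard Docker commands (the exact log output will vary with your setup):

```shell
# Confirm the container is up and running.
docker ps --filter name=blenderslave01

# Review recent output from the container's entrypoint (the Launcher).
docker logs --tail 20 blenderslave01
```

If the container exited immediately, "docker logs blenderslave01" is usually the quickest way to see why.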
Submit the Test Job
We can now submit a test Job and see if the Blender Slave container successfully processes Tasks from the Job. From Deadline Monitor, we can choose Submit, then 3D, then Blender Submission. We can
- select our test Blender scene located in the "C:\Users\James\BlenderTest\Scene\" folder,
- set the output to go to "C:\Users\James\BlenderTest\Output\TestOut.png" (note the included filename),
- and set the Frame List to 1-24, since this is merely a proof-of-concept test.
Press Submit and then watch the fireworks! On my laptop the Blender Slave starts picking up Tasks right away and quickly chews through the test Job. And indeed the rendered frames arrive in the "C:\Users\James\BlenderTest\Output\" folder as expected.
Tear Down the Test Container
Now that we know the image we built works as expected, we can tear down the test container and delete the associated Slave entry in Deadline Monitor. The process is the same as in Part 1:
docker stop blenderslave01
docker rm blenderslave01
Then in Monitor we can mark "blenderslave01" as offline and then delete it, and we can also delete the test Job. If we want to really eradicate this test we can also delete the image we created. As long as we're keeping the Dockerfile around under source control, we can always rebuild the image again later.
docker rmi deadline8/blender:2.78
In our hypothetical scenario that motivated this example, the objective was to figure out network rendering for Blender. We managed to sort out the vast majority of the details locally, and we did so in a clean test environment to minimize possible sources of confusion. And, assuming we were supplied with a Blender scene file for testing, we didn't necessarily have to install Blender on our workstation/laptop or even inside a VM. It was all done with containers, which were easily removed without leaving any clutter behind.
Closing Remarks for Part 2
The default Docker host VM that is created by the Docker Quickstart Terminal is fairly lightweight, since it is mostly meant for learning and testing. With only one processor and one gigabyte of RAM, it may not be suitable for heavier applications. For more demanding scenarios, a new docker host VM can be created manually, or we can SSH into a Docker host that has been set up on a machine with more headroom.
If containerized applications are going to be used in production, there are other considerations such as managing the ports used by the applications, configuring network shares on the Docker host machines, health monitoring, and whether or not an init system is needed within running containers. We have added some discussion of advanced topics to the Docker section of the Thinkbox GitHub repository for Deadline, and we encourage contributions via pull requests.
And finally, if you have questions about this blog article or about Deadline in general, feel free to post your questions to our Support Forum.