
Docker Performance Improvement: Tips and Tricks

By: Lou  |  April 4, 2019

Docker is now everywhere. Over the past few years, a lot of modern software has moved to being packaged in Docker containers, and with good reason. One of the most-touted benefits of Docker containers is their speed. But you don't get lightning-fast performance out of the box; that takes Docker performance tuning.

We’re going to discuss some of the tips and tricks to ensure you are utilizing the real speed of containers. We’ll break down the following into two parts.

Part 1: Optimizing the speed of containers before we ship (build-time configuration):

  • Keeping your Docker images lightweight
  • Improving network latencies

Part 2: Optimizing your containers in production:

  • Host/container relationship
  • Container performance data
  • Leveraging APMs for easier performance data

Docker recap

It’s important to understand the nuances of how Docker works so we can ensure we’re leveraging its powerful features.

Simply put, Docker containers are a way of packaging and distributing software with simple instructions to run. Containers will always run predictably—no matter where you choose to execute them—as isolated and protected processes.

Some key points to remember about containers:

Containers (nearly always) have hosts. Containers need machines to run on, but don’t expect them to run optimally out of the box. You need to think about what resources the host has, and how it’s sharing these with the containers.

Containers are built from layers. Docker images use a union file system: each instruction in your Dockerfile produces a layer, and Docker caches these layers for performance. Importantly though, cached layers are additive, which means you can only add to them (more on this later).
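
To get the most from this layer cache, order your Dockerfile so the instructions that change least often come first. Here's a minimal sketch, assuming a Node.js app (the base image and file names are illustrative): copying the dependency manifests and installing dependencies before copying the rest of the source means a code change no longer invalidates the cached dependency layer.

FROM node:18-slim
WORKDIR /app
# Copy only the dependency manifests first, so this layer stays cached
# until package.json or package-lock.json actually change
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
# Source code changes only invalidate the layers from here down
COPY . .
CMD ["node", "server.js"]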

The art of performance debugging

When it comes to performance improvements, stick to the following guidelines:

Optimize the bottleneck. Take into account where your bottleneck is and optimize only at that point. Optimizing upstream or downstream of the bottleneck won't have an effect on the end user, or the consumer, of the system.

Be data driven. Gathering hard evidence (numbers) about system behavior before and after you run any performance analysis is essential.

With the introduction done, we can now discuss some fun stuff: making containers super fast.

Part 1: Docker build-time performance

When we work with containers, we’re typically packaging the software we’re working on into a container build. As developers, we run these build steps quite often. Every time we change the software, we’ll want to check that our new artifact is working as expected.

Then, when we're satisfied that our software works, we push our code through a deployment pipeline. With each step of the pipeline, we'll be building, pushing, and pulling images. Because we're repeating these build, push, and pull steps so many times, we should pay special attention to our container build steps.
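
One practical way to keep these repeated builds fast, sketched here with a hypothetical registry and image name, is to pull the last published image in your pipeline and let Docker reuse its layers as a cache source:

# Pull the most recent image so its layers are available locally
# (|| true keeps the very first build from failing when no image exists yet)
docker pull registry.example.com/myapp:latest || true

# Reuse the pulled image's layers as build cache, then publish the result
docker build --cache-from registry.example.com/myapp:latest -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest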

In the next few sections, we'll discuss how to speed up the journey from working software on your local machine to a packaged, distributed, and easily runnable container.

Keeping your Docker images lightweight

When dealing with Docker images, you start with a Dockerfile. A Dockerfile is a set of instructions describing how to build an image, and it specifies a set of details (sketched in the example after this list):

  • Files to include
  • Necessary environment variables
  • Command(s) to use
  • Installation steps
  • Networking details (such as exposed ports)
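
For reference, here's a minimal sketch of a Dockerfile touching each of those details (the application, package, and port are illustrative):

FROM python:3.11-slim
# Files to include
COPY app.py /app/app.py
# Necessary environment variables
ENV APP_ENV=production
# Installation steps
RUN pip install --no-cache-dir flask
# Networking details (exposed port)
EXPOSE 8080
# Command to use
WORKDIR /app
CMD ["python", "app.py"]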

One part of the build process with a big implication for build-time performance is the build context.

Why is the build context so relevant? The answer is that every container build requires one.

The context comprises the files Docker needs to build your container. For instance, when you run a docker build command, you might have seen the following output:

Sending build context to Docker daemon  2.048kB

Importantly, the output above shows us the size of our specified Docker context. The larger this context, the slower our Docker build is going to be.

Okay, so what if you have a particularly large build context for your container? Start by adding unneeded files to a .dockerignore file, which excludes them from your build (there's a sketch of one after the output below). The usual suspects for a slow build are large asset files, or additional library files that aren't required for your build. Once your image is built, you can easily check its size by running the following:

docker images

This will return output similar to the following:

REPOSITORY         TAG      IMAGE ID   CREATED         VIRTUAL SIZE
hubuser/largeapp   latest   450e3123   5 minutes ago   662 MB
debian             jessie   9a61b6b1   4 days ago      125.2 MB
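
As promised, here's a minimal sketch of a .dockerignore (the entries are illustrative; list whatever your build genuinely doesn't need):

.git
node_modules
*.log
docs/
large-assets/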

It's important to mention that, because Docker uses base images and a union file system, if you do need to install packages into or otherwise modify a base image, you can push the modified image to a container registry and use it as your new base image.

Improving network latencies

There are aspects of the Docker build process that involve the network, often the public internet. If we have large images, our performance issues are magnified by the fact that we're often pushing/pulling those images across the internet.

When you build a Docker image on your machine, Docker checks for the base image you've specified. If that base image isn't found locally, Docker will, by default, try to fetch it from Docker Hub, which has a latency cost.

It's not just performance that's affected by relying too heavily on a service like Docker Hub. We also need to consider its availability and the risks of building a strong dependency on an external service. If Docker Hub were compromised or went down, our images and software would be inaccessible.

To remedy this, Docker allows you to run your own registry, which you can locate within your organization and on your own infrastructure. Doing so will increase the speed of pushing/pulling images, with the added bonus of extra redundancy in the case of an outage on Docker Hub.
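
Getting a private registry running is straightforward using Docker's open-source registry image. A minimal sketch (the image name myapp is illustrative):

# Run a registry on the local host, listening on port 5000
docker run -d -p 5000:5000 --name registry registry:2

# Tag an existing image for the private registry, then push it there
docker tag myapp:latest localhost:5000/myapp:latest
docker push localhost:5000/myapp:latest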

Bonus Tip: Azure users should consider creating an Azure Container Registry to store images for all types of container deployments, including Docker.

Part 2: Docker runtime performance

By now, you should have a super lightweight image, and your own registry for fast download and upload of your built images.

We’re now at the point where we’ll want to consider how we’re getting our images running fast in production. We’ll want to ask ourselves the following questions:

  • What host do we want to run our container on?
  • How many containers do we want to run per host?
  • Do we want to use a container orchestration tool (e.g., Kubernetes)?
  • What configuration do we want to set our containers with?
  • Can we leverage cloud tooling like AWS Fargate to do our heavy lifting?

Is it your app, infrastructure, or Docker?

Throughout this article, we'll be discussing strictly how to optimize Docker performance. I should mention now that, often, it isn't Docker that needs optimizing, but rather the infrastructure it's running on, or the application running inside the container. You can't fix a poorly designed application just by adding Docker to the equation.

The following are good ways of assessing application performance:

  • Using visualization tools. These tools show you how your software is currently executing.
  • Logging. Application logs are metadata emitted by a running application that indicate your application’s performance. Careful instrumentation with application logs alongside good tooling for viewing and visualizing gives great insight into application performance.

Configuring before you run

As we said at the start, Docker isn’t necessarily super performant out of the box. Like most software, it comes configured with a set of defaults you can override, usually at the point when you execute a docker run.

Our Docker configuration is important; in the case of memory, if your host machine runs low, it can start killing processes to recover memory. When running in production, you'll want to ensure you have enough system resources, such as memory, to handle your desired workloads. Most cloud providers let you set triggers (often called scaling rules) that launch or resize machines under certain conditions (such as low memory).
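
At the container level, you can cap resources directly when you start the container. A minimal sketch (the image name and limit values are illustrative; size them for your workload):

# Cap the container at 512 MB of memory and one CPU,
# so a misbehaving process can't starve the host
docker run -d --memory=512m --cpus=1.0 --name myapp myapp:latest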

Gathering container metrics

When it comes to measuring container performance, we’ll need metrics to help us understand our current performance. Luckily for us, Docker gives us tools to extract data on our running containers for the purposes of performance debugging.

Note that the following methods are CLI-based. Using CLI methods would require you to SSH (or similar) into your machine to run them. After going through the data sources themselves, we’ll talk about how you can access these metrics in a more scalable manner and make better use of your data.

The docker stats command (Part 1)

First up, the docker stats command.

Docker provides us with a simple command called docker stats to get metrics about our currently running containers. Because docker stats is so easy to use, let’s go through what the docker stats command gives us, and then we can see what we’re able to understand through the data. I’ve broken down the output of the command into two parts so it’s easier to digest. Let’s begin with part one:

CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %
c796e97878c2   yourcontainer1   1.17%   35.71MiB / 1.952GiB   1.79%
2c02afe562b8   yourcontainer2   0.04%   9.344MiB / 1.952GiB   0.47%
0133d95251a1   yourcontainer3   0.00%   6.363MiB / 1.952GiB   0.32%

Take a good look at this output and guess what each column is measuring (and why). Now that you're more familiar with the data that docker stats exposes, let's go through these metrics one by one:

  • Container ID. Container IDs are useful to pass as a reference to other Docker commands for understanding specific information related to that container. We’ll also need to know our container ID if we want to exec into our container to take a look around.
  • CPU. This is the percentage of the host CPU that is being utilized. Note that because this is the host CPU, the more containers you run on your host, the lower this figure can be. Containers on the same host often compete for system resources, depending on how you configure your container.
  • Memory usage/limit. The usage figure is the absolute amount of memory the container is currently consuming. The limit is the total memory available to the container, and the accompanying MEM % column shows usage as a percentage of that limit.

The docker stats command (Part 2)

The docker stats command also produces the following additional outputs:

NET I/O          BLOCK I/O       PIDS
1.76MB / 602MB   532kB / 0B      23
308MB / 3.08GB   147kB / 118MB   9
97.6kB / 596kB   28.9MB / 0B     19

  • Net I/O. This is the data being sent and received over the network (network traffic).
  • Block I/O. This is the amount of data being read from and written to block devices on the host. If we're reading/writing lots of data, we might want to consider leveraging other cloud solutions, such as an in-memory cache or an object storage service such as S3.
  • PIDs. This is the number of threads created by the running container. Depending on the type of work that we’re doing, we might want to consider offloading this processing work into other containers as part of a microservice architecture.
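
If you only want a point-in-time snapshot, or just a subset of these columns, docker stats accepts a --format template. A minimal sketch:

# One-shot snapshot (no live refresh) of just the columns we care about
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.PIDs}}"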

The Docker REST API

When you've exhausted what you can get from the docker stats command, a good way to get additional information is through the Docker REST API. The Docker daemon that orchestrates your running containers exposes an API that produces information similar to, but much more detailed than, the output of docker stats.

To get started with the REST API, you can call GET /containers/(id)/stats. Due to the large volume of data you'll get from this endpoint, you'll want to pipe it into a visualization or aggregation tool (more on this in the next section).
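
On a host where the Docker daemon listens on its default Unix socket, you can sample the endpoint with curl. A minimal sketch (replace <container-id> with a real ID from docker stats):

# One-shot stats sample from the Docker Engine API via the local socket
curl --unix-socket /var/run/docker.sock "http://localhost/containers/<container-id>/stats?stream=false"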

What we must ask ourselves about our containers

By using the above information, we start to get a high-level view of which types of behavior our container might be exhibiting. We can then use that data to tweak our application.

When viewing this type of data, ask yourself which resource is under pressure: is the container CPU-bound or memory-bound, and is network or disk I/O the limiting factor?

Getting advanced with our data

Using CLI tools will only get you so far; SSH'ing into your Docker machine to inspect the running processes isn't sustainable. Instead, we can use automation and visualization to improve the process. An APM can help us here: by installing tooling on our machines, we can pipe data to a single location for viewing and visualization, and tools like Stackify's Retrace allow us to do that.

Getting the most out of your containers

In part one, we went through how to keep Docker build times low and how to create your own registry. In part two, we discussed leveraging running-container data for performance assessments.

Now you have the tools you need to start digging into understanding your containers’ performance in both build and runtime. You should now be able to start realizing the true power and performance of Docker containers!
