Well, there is no “better” because these aren’t equivalent things. Docker is like an airplane and Kubernetes is like an airport. You wouldn’t ask “Which should I use to travel—airport versus airplane?” So it goes with Docker and Kubernetes. You need both.
In this post, we’ll run through a deployment scenario, how containers and orchestrators can help, and how a developer would use them on a daily basis. You’ll walk away from this post with an understanding of how all the pieces of the puzzle fit together.
So let me start with a typical day in the life of someone who struggles through every deployment. Then I’ll explain how these two technologies can help. For practical purposes, we’ll talk about the fictional developer John Smith. John’s a developer working for a startup, and he’s responsible for deploying his code to a live environment.
John has two apps: one in .NET Core and another in Node.js. He struggles every time a new version of the language, framework, or library comes out and he has to run an upgrade. The problem is when things aren't compatible with what he's installed. When something's not working, he just installs, uninstalls, updates, or removes until finally things get back up and running. The struggle becomes even bigger when, after doing all of that, he has to push a new change to another environment. It's hard to remember all the steps when he's in a rush.
One solution could be for him to work with virtual machines (VMs). That way, he can isolate all dependencies and avoid affecting any existing apps and their dependencies.
While that could work, it doesn’t scale. Why? Because every time something changes, he has to take a new snapshot. And then he has to somehow organize all the different versions of those VM snapshots. He’ll still need to deploy changes in code and any dependencies to other environments. Now, he can screw things up in other environments too and then fix it, and that’s okay. But when we’re talking about production, things get risky. He has to work with production-like environments to ease deployments and reduce risk. That’s hard to do.
Even with automation in place, deployments might be too complex or painful. Maybe John even has to spend a whole weekend doing deployments and fixing all sorts of broken things.
We all wish deployments could be as boring as pushing a button. The good news is that that’s where Docker and Kubernetes come into play.
So, what is Docker anyway?
Docker is a company that provides a container platform. Containers are a way to pack and isolate a piece of software with everything that it needs to run. I mean "isolate" in the sense that containers get resources assigned to them that are separate from the host where they're running. You might be thinking this sounds pretty similar to VMs, but the difference is that containers are more lightweight: they don't need a full guest OS to run software. Containers let you be more agile and build secure and portable apps, which lets you save some costs in infrastructure when done well.
I know that sounds like a textbook definition, so let’s see how this is beneficial by following the day in the life of John.
Let's say John decides to start his containers journey. He learns that Docker containers work with base images as their foundation to run an app. A base image and all its dependencies are described in a file called "Dockerfile." A Dockerfile is where you define, like a recipe, the setup steps that you'd otherwise keep in docs (or in your head) for anyone who wants to run your app. He starts with the .NET Core app, and the Dockerfile looks like this:
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "hello.dll"]
As you can see, it’s as if you were programming. The only difference is that you’re just defining all dependencies and declaring how to build and run the app.
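John's Node.js app can follow the same pattern. A Dockerfile for it might look something like the following sketch — the base image version and the `server.js` entry point are assumptions, not details from John's actual project:

```dockerfile
# Assumed Node.js base image; use the version your app actually targets
FROM node:8

WORKDIR /app

# Copy the dependency manifest first so installs are cached between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the source and declare how to start the app
COPY . .
EXPOSE 80
CMD ["node", "server.js"]
```

The idea is the same as with the .NET Core app: declare the dependencies and the steps to build and run, and the image works anywhere Docker is installed.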
John needs to put that file in the root of the source code and run the following command:
docker build -t dotnetapp .
This command will create an image with the compiled code and all of its dependencies to run. He'll only do the "build" once because the idea is to make the app portable so it can run anywhere. So when he wants to run the app, only Docker needs to be installed. He just needs to run the following command:
docker run -d -p 80:80 dotnetapp
This command will start running the app on port 80 of the host. It doesn’t matter where he runs this command. As long as port 80 isn’t in use, the app will work.
John is now ready to ship the app anywhere because he’s packed it in a Docker container.
So why is this better? Well, John doesn't have to worry about forgetting what he installed on his local computer or on any other server. When the team grows, a new developer will be able to start coding quickly. When John's company hires an operations person, the new hire will know exactly what's included in the container. And if they want to do an upgrade of the framework or some dependency, they'll do it without worrying about affecting what's currently working.
Use Docker to pack and ship your app without worrying too much about whether the app will work somewhere else after you’ve tested it locally. If it works on your machine, it will work on others’ machines.
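How does the image actually get to those other machines? Usually through a registry. Here's a hedged sketch of that workflow — the `johndoe` account name and the `1.0` tag are placeholders, and pushing assumes you've already run `docker login`:

```shell
# Tag the local image with a registry-qualified name and version (placeholders)
docker tag dotnetapp johndoe/dotnetapp:1.0

# Push it to the registry so other machines can pull it
docker push johndoe/dotnetapp:1.0

# On any other machine with Docker installed, run the exact same image
docker run -d -p 80:80 johndoe/dotnetapp:1.0
```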
So, John now just needs to go to each of the servers where he wants to ship the app and start a container. Let’s say that, in production, he has ten servers to support the traffic load. He has to run the previous command on all the servers. And if for some reason the container dies, he has to go to that server and run the command to start it again.
Wait. This doesn’t sound like an improvement, right? It’s not much different than spinning up VMs. When something goes down, he’ll still need to manually go and start containers again. He could automate that task too, but he’ll need to take into consideration things like health checks and available resources. So here’s where Kubernetes comes into play.
Kubernetes, as its site says, "is an open-source system for automating deployment, scaling, and management of containerized applications." There are other container orchestrators, but Kubernetes is the most popular one right now. Kubernetes does the container orchestration so you don't have to script those tasks. It's the next step after containerizing your application, and it's how you'll run your containers at scale in production.
Kubernetes will help you to deploy the same way everywhere. Why? Because you just need to say, in a declarative language, how you'd like to run your containers. You'll have a load balancer, a minimum number of containers running, and the ability to scale up or down only when it's needed—things that you'd otherwise need to create and configure separately. You'll have everything you need to run at scale, and you'll have it all in the same place. But it's not just that. You can also run your own Kubernetes cluster locally, thanks to Minikube. Or you can use Docker, because Docker now officially supports Kubernetes.
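To try this locally with Minikube, the workflow might look like the following sketch (it assumes Minikube and kubectl are already installed):

```shell
# Start a single-node Kubernetes cluster on the local machine
minikube start

# kubectl is now pointed at the local cluster; confirm the node is ready
kubectl get nodes
```

From here, the same kubectl commands that work against the local cluster will work against a production cluster.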
So, coming back to John. He can define how he wants to deploy an app called “dotnetapp” at scale.
Take a look at the "dotnetapp-deployment.yaml" file, where John defines how to do deployments in a Kubernetes cluster, including the app's dependencies at a container level. In this case, besides defining how to launch the dotnetapp, it also points the app at its database through an environment variable. Here's how the file looks:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: dotnetapp
spec:
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: dotnetapp
    spec:
      containers:
      - name: dotnetapp
        image: johndoe/dotnetapp:1.0
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: DB_ENDPOINT
          value: "dotnetappdb"
---
apiVersion: v1
kind: Service
metadata:
  name: dotnetapp
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: dotnetapp
John now just needs to run this command to deploy the app in any Kubernetes cluster, locally or in another cluster:
kubectl apply -f .\dotnetapp-deployment.yaml
This command will create everything that’s needed, or it will just apply an update, if there is one.
He can run the exact same command on his computer or in any other environment, including production, and it will work the same way everywhere. But it's not just that. Kubernetes constantly checks the state of your deployment against the yaml definition you applied. So if a Docker container goes down, Kubernetes will spin up a new one automatically. John no longer has to go to each server where a container failed to start it up again; the orchestrator will take care of that for him. And there will be something monitoring the state to make sure it's compliant—meaning it's running as expected—all the time.
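After applying the file, John can watch Kubernetes converge on the desired state. A few commands he might use (a sketch; the names match the yaml definition above):

```shell
# Watch the rollout until all three replicas are available
kubectl rollout status deployment/dotnetapp

# List the pods backing the deployment; delete one and Kubernetes replaces it
kubectl get pods -l app=dotnetapp

# Find the external endpoint exposed by the LoadBalancer service
kubectl get service dotnetapp
```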
That's how you could easily get to doing several deployments a day, each taking around five minutes.
Now you know what Docker and Kubernetes are—and not just in concept. You also have a practical perspective. Both technologies use a declarative language to define how they will run and orchestrate an app.
You’ll be able to deliver faster, but more importantly, you’ll deliver in a consistent and predictable manner. Docker containers will help you to isolate and pack your software with all its dependencies. And Kubernetes will help you to deploy and orchestrate your containers. This lets you focus on developing new features and fixing bugs more rapidly. Then you’ll notice, at some point, your deployments stop being a big ceremony.
So, the main thing to remember is this: when you combine Docker and Kubernetes, confidence and productivity increase for everyone.