Docker and DevOps – can containers really replace virtual machines?

Docker for DevOps is like a VM on steroids. Actually, it's different, and generally it's better.

But can containers actually replace virtual machines (VMs)? What is Docker? Read on.


While VMs and cloud servers solved some problems, they created others: renting VMs comes with clunky scaling, portability issues and vendor lock-in.

Vendor lock-in means being tied to a technology, solution or service from a single supplier that is expensive or inconvenient to replace: its use is restricted (as in, you have to pay for it) or proprietary (as in, you're going to pay for it, and probably pay someone to do the work for you too). With that definition in mind, it's easy to see why a 2016 survey conducted for Logicworks by Wakefield Research found that 78% of IT decision makers believe concerns about vendor lock-in prevent their organisation from maximising the benefits of cloud resources. In other words, the possibility of getting locked into an inflexible vendor frightens them enough that they stay on the more expensive, "conventional" path of owning their own resources. The big negative is that this path is not only more expensive; it's also slow.

With virtual machines you're charged for the resources whether you're fully utilizing them or not. Put another way: if you rent a 10 GB, 4-core VM, you'll be billed for all 10 GB and all 4 cores, even if your software only uses half or less of what you signed up for.

As Radek Ostrowski, a freelance software engineer at Toptal, puts it: "virtual machines and Docker containers may seem alike. However, their main differences will become apparent when you take a look at the following diagram."

[Diagram: Docker containers vs. virtual machines]

Some spin VM cloud services as a smart financial decision. "If you think about the traditional billing model of a cloud server with Amazon, with Google Compute Engine, with Elastichosts, whoever it might be with, it's on-demand and it's scalable. The sense in which it's on-demand and scalable is that you can start a server of any size, and you pay for the size you started. You can say you want an 8 GB instance and you start an 8 GB instance. Every hour you pay for an 8 GB instance. When you turn it off, you stop paying," says Richard Davies, CEO of ElasticHosts, a cloud provider that launched in 2008.

That would be great, except that when you think about how computing actually works, you might only use 2 GB of memory and 1 core most of the time. In fact, if you ever use all the available resources, it's likely to be for seconds, not minutes. Because you hit peak demand for only a few seconds per hour, you're forced to rent the larger package even though you rarely use even half of it.
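
Docker flips that model: you can cap resources per container, so several small workloads share one machine instead of each renting an oversized VM. A minimal sketch (the images and resource limits here are illustrative):

    # Run a web server capped at 2 GB of RAM and one CPU:
    docker run -d --name web --memory=2g --cpus=1 nginx

    # A second service shares the same host and kernel,
    # with its own independent limits:
    docker run -d --name cache --memory=1g --cpus=0.5 redis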

This is where Docker containers can really shine. Containers bring the microservices approach to DevOps: small, isolated, disposable units instead of one big rented machine.

Docker, an open-source project originally authored by Solomon Hykes, was designed to automate the deployment of applications inside software containers. Containers run on both Linux and Microsoft Windows, and can sit on top of virtually any infrastructure.

First, let's quickly look at what that means. Docker is containerization technology, which SearchITOperations defines as an operating-system-level (OS-level) virtualization method for deploying and running distributed applications without launching an entire virtual machine (VM) for each app. Instead, multiple isolated systems run on a single control host and access a single kernel. The application containers hold the components, such as files, environment variables and libraries, necessary to run the desired software. Because resources are shared in this way, application containers place less strain on the overall resources available.

In a nutshell, containers make more efficient use of a computing environment possible. That's what Docker does. Only one operating system (OS) is needed to run multiple containers, whereas doing the same thing with VMs in the cloud would require multiple VMs, each with its own independent OS.
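
You can see the shared kernel for yourself. A quick check, assuming Docker is installed on a Linux host:

    # Kernel version on the host:
    uname -r

    # Kernel version inside a minimal Alpine container -- it reports
    # the same version, because containers share the host's kernel:
    docker run --rm alpine uname -r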

Docker containers are therefore a much simpler solution for DevOps than VMs. Let's take a closer look.

First off, a huge plus in my thinking is the fact that Docker containers will run on pretty much anything. You can run them on physical computers, bare-metal servers, OpenStack cloud clusters and more. Whatever your situation, Docker containers will probably work. But are they efficient?
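
A trivial illustration of that ubiquity: the same command behaves identically wherever the Docker engine is installed, whether that's a laptop, a bare-metal server or a cloud instance.

    # Pulls a tiny test image and runs it -- the same on any host:
    docker run --rm hello-world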

On their own, containers can be a challenge to manage, especially when it comes to the obvious security question raised by sharing a single kernel rather than each workload having its own kernel, as with VMs. Docker is designed to take some of that guesswork out of the management process.

Think of Docker and containers in the same phrase, as in Docker containers: Docker is a management system for containers that takes the guesswork and potential risks out of container implementation and management.

The advantages of containers make Docker a serious consideration for DevOps. First of all, only one robust physical machine is required to run multiple containers, because the machine is split into multiple sub-servers, sandboxed from each other. Instead of simulating a set of hardware environments and running an entire OS on each of them, a single Linux kernel running on the hardware is split into multiple isolated containers, and the sub-servers run individually inside those containers.
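
A small sketch of what that looks like in practice (container names and ports are illustrative):

    # Two sandboxed "sub-servers" on one machine, each with its own
    # filesystem and process space, both sharing the host's kernel:
    docker run -d --name app1 -p 8081:80 nginx
    docker run -d --name app2 -p 8082:80 nginx

    # Both isolated environments run side by side on the same host:
    docker ps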

Each sub-server, while using the same kernel as the others, has its own software, file system, users and so on; it just doesn't have its own kernel. To users it looks a lot like a VM, with an IP address, root access, a login, an SSH server, a database server, a web server and more.
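
You can experience that VM-like feel directly. Continuing with the illustrative app1 container started in the sketch above:

    # Open a shell inside the running container. It has its own
    # filesystem, users and processes -- everything but its own kernel:
    docker exec -it app1 /bin/sh

    # Inside, `ps aux` shows only this container's processes, and
    # /etc/os-release reflects the image's distribution, not the host's.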

Scalability improves dramatically with Docker containers. In an environment where far more containers can be run than VMs, it's easy to see how more applications can run, and more can be done, on the same hardware. That is a huge saving in financial outlay, and from an IT perspective, imagine how much the workload is reduced!
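
As a hedged example using Docker Compose (the service name web is assumed to be defined in a docker-compose.yml file), scaling out becomes a one-line operation:

    # Start five identical instances of the "web" service:
    docker compose up -d --scale web=5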

Portability also improves dramatically. Because Docker containers are lightweight, container images can easily be moved between virtual environments or even between service providers.
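
For instance, an image can be exported to a plain tar archive, copied to any other machine or provider, and run unchanged (the myapp image name is hypothetical):

    # Package the image as a portable archive:
    docker save -o myapp.tar myapp:1.0

    # On the destination host, load and run it:
    docker load -i myapp.tar
    docker run -d myapp:1.0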

Vendor lock-in isn't an issue with Docker containers either, because they run in virtually any environment on virtually any machine, as well as on cloud platforms, which is a real benefit in itself.

Efficiency and flexibility are major benefits that Docker containers bring to virtually any organization. Manually maintaining a consistent environment across multiple servers is a thing of the past with Docker. Updating existing applications and environments is much simpler and safer, because all that is required is to stop the old container and start the updated one. Setup always begins with a clean environment, eliminating many inherent risks.
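
As a sketch of that upgrade flow (names and tags are illustrative):

    # Stop and remove the old container...
    docker stop myapp && docker rm myapp

    # ...and start a fresh one from the updated image:
    docker run -d --name myapp myapp:2.0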

Continuous delivery is something greatly needed in today's DevOps, and Docker delivers there, too. While it is highly impractical to use physical machines for this purpose, Docker manages it beautifully: because we always begin with a clean environment, the old one is simply removed completely and the new one set up in its place.
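
A typical delivery step in a CI pipeline might look like the following sketch (the registry address and build tag are hypothetical):

    # Build the image, tag it with the build number, push it to a registry:
    docker build -t registry.example.com/myapp:42 .
    docker push registry.example.com/myapp:42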

Lightweight virtualization means much faster startup than a VM, because there is no guest operating system to boot, which again reduces expenses. As far as the host OS is concerned, a starting Docker container is just another process, which increases startup speed while still keeping the containers isolated.
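
You can measure this yourself with a rough, illustrative test:

    # A container starts in about a second or less -- no guest OS boots;
    # it's just another process on the host:
    time docker run --rm alpine echo "started"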

Security is built right in, because Docker containers isolate applications from each other and from the infrastructure they sit on.
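
Docker also exposes options to tighten that isolation further; for example (a sketch, not a complete hardening guide):

    # Drop all Linux capabilities and make the filesystem read-only:
    docker run --rm --read-only --cap-drop ALL alpine echo "locked down"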

More will eventually be written about the advantages of Docker for DevOps. Considering that its development focused heavily on being user-friendly, it has made a lot of friends, and it has inevitably become a friend to DevOps. It may well be that this story is only beginning.

Please share your experiences in the comments!


