Are containers all the rage around your company’s tech department? Here’s an explanation of the hype for the rest of us.
So, you work at a software company, and your tech department keeps talking about or is busy implementing something called containers. They say containers revolutionize the way developers deploy their products to customers. Awesome! But what the heck are they?
I admit, I work in my company’s tech department and I wasn’t even sure what they were. But then again, I’m an analyst, not an engineer or developer, so my tech knowledge is more high-level and conceptual. Nevertheless, my boss told me I should learn about containers so I could better understand our development infrastructure. But where do you even begin?
Servers, Virtualization, Scalability, and Containers, Oh My!
To even begin to understand containers, we need to understand servers, virtualization and scalability.
Let’s start with servers
Servers are basically large, powerful computers with a lot of storage space, memory, and processing power. They’re (hopefully) located in a secure data center: one owned by our company (on-premises), one owned by someone else while we supply the hardware (co-location), or one owned by someone who also owns the hardware (cloud).
If we’re developing and publishing a web application, we need to host it on a server. Sounds easy enough. Code it, test it, upload it!
If only it were that easy.
First of all, servers are expensive. How do we determine how much memory and processing power we need to run our application? Do we buy the best, just in case? Or do we roll the dice on something standard and resolve to upgrade when we outgrow it? If we want redundancy, or have multiple applications, do we buy one server for each instance we want to run?
This is where virtualization comes in
Virtualization allows multiple operating system images to run on a single server. Using a specialized operating system called a hypervisor, a server engineer can set up multiple instances of operating systems on the server, all sharing the server’s hardware resources. Each instance is logically its own server, and appears on the network as a unique server, even though it runs on a single host machine.
In the above diagram, we have a single server with three guest operating systems, each one operating as a separate, logical server. Each application instance is therefore independent, and requires its own configuration (binaries, libraries, etc.) to be set up separately on its instance.
Setting up a virtual server allows us to use only as much of the host server’s resources as we need for its purpose, rather than underutilizing a whole physical server. The benefits are obvious: less physical hardware and lower overhead costs. In addition, we can configure each virtual server with whatever its application needs to run: configurations, software libraries, and so on. This reduces the likelihood of compatibility problems, as we’ve tailored the virtual server to the unique needs of the software it hosts.
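To make this concrete, here is a minimal sketch of how an engineer might poke at virtual machines on a Linux host running the KVM hypervisor. The guest name is hypothetical.

```bash
# virsh is the standard command-line tool for managing KVM/libvirt guests.
# "web-vm" is a hypothetical guest name.
virsh list --all      # show every virtual machine defined on this host
virsh start web-vm    # boot one guest; it appears on the network as its own server
virsh dominfo web-vm  # show the CPUs and memory carved out of the host for this guest
```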
Sounds good so far. But what if our virtual server is at capacity? Do we need to monitor and reactively expand it every time there is a sudden burst of activity?
Virtualization also allows engineers to set up scalability. If a sudden surge of users hits the system, we can program the virtual server to add capacity as needed: storage space, processing power, memory, or all of the above. We can run this elastic capacity on demand, or orchestrate it to kick in automatically based on certain metrics.
Ok, then! Let’s set up a virtual server for our web application!
But wait! What if we have multiple applications? What if our web application has multiple components? What if the components are dependent on different configurations and libraries? What if we want to set up our application so that if one server or component has an issue, it won’t bring down the entire system? Do we need a new virtual server for every application or component?
Now we’re ready for containers!
Containers extend the virtualization idea and make it more efficient. Instead of running multiple guest operating systems on a host machine, containers share the server’s hardware and its host operating system, while keeping the hosted applications compartmentalized. Each container packages the libraries and configuration files its application needs into a container image.
We configure and manage containers with a container platform, with Docker as the leading product in this arena.
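As a minimal sketch, here is what building and running a container with Docker might look like for a hypothetical Python web app (the app name, files, and port are all placeholders):

```bash
# Write a minimal Dockerfile. It describes the image, starting from a base
# image that already bundles the OS libraries and language runtime.
# Assumes the project folder contains app.py and requirements.txt.
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
EOF

# Build an image from the Dockerfile, then run it as a container.
docker build -t my-web-app:1.0 .
docker run -d -p 8080:8080 my-web-app:1.0   # publish the app on host port 8080
```

The resulting image carries everything the app needs to run, independent of whatever else is installed on the host.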
There are a few clear advantages of containers when compared to virtualization
One is size and portability. An engineer can build and configure a container on their local machine, develop the application in that standardized environment, and then push the entire container to the server.
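That push typically goes through a container registry. Sketching the flow with the hypothetical image from above (the registry address is a placeholder):

```bash
# On the engineer's laptop: tag the image for a registry and push it.
docker tag my-web-app:1.0 registry.example.com/my-web-app:1.0
docker push registry.example.com/my-web-app:1.0

# On the server: pull the exact same image and run it. The environment inside
# the container is identical to the one the engineer developed in.
docker pull registry.example.com/my-web-app:1.0
docker run -d -p 80:8080 registry.example.com/my-web-app:1.0
```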
In addition, we can keep components isolated from each other. If one container has an issue with its configuration, it won’t bring down the entire system — only the container.
By the same token, if an application or component in one container relies on the same libraries or configuration as another container, we can share those resources across containers rather than duplicating them.
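Docker implements this sharing through image layers: layers common to several images are stored once on the host. You can peek at them for the hypothetical image built earlier:

```bash
# Each row is a layer of the image. Layers inherited from a shared base image
# are stored once on the host and reused by every image built on top of them.
docker history my-web-app:1.0
```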
In a multi-component application, we can set up each component in its own container. This is known as a microservices setup. Each component communicates with the others through APIs.
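As a small sketch of that communication, two containers on a shared Docker network can reach each other by name. Here the stock nginx image stands in for a hypothetical “api” component:

```bash
# Containers on the same Docker network can reach each other by container name.
docker network create shop-net
docker run -d --network shop-net --name api nginx
docker run --rm --network shop-net curlimages/curl http://api/   # one component calling another over HTTP
```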
Finally, containers are easier to automate and scale than virtual machines, using a technique known as orchestration.
Quick Introduction to Orchestration
Container orchestration tools manage and automate the configuration of containers, and they become more useful as our container setup grows larger and more complex. An orchestrator is a powerful tool that can handle automatic scaling, self-repair of common application issues, and automated container deployments.
For example, we can program our containers to take on more resources, or duplicate themselves based on real-time metrics, such as an increase in application usage. This is especially useful in a pay-as-you-go cloud infrastructure, where we pay for the resources we use. With a scaling agreement with the cloud provider, we can orchestrate our infrastructure to scale as needed, based on demand.
This is where a product like Kubernetes comes in. Think of Docker as the container platform, and Kubernetes as the container orchestrator.
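As a minimal sketch, assuming a Kubernetes cluster is already available and the hypothetical image from earlier has been pushed to a registry, deploying and scaling a containerized app might look like this:

```bash
# Ask Kubernetes to run and supervise three identical copies of the container.
kubectl create deployment web --image=registry.example.com/my-web-app:1.0 --replicas=3
kubectl get pods   # if one copy crashes, the orchestrator replaces it automatically

# Scale by hand to ten copies...
kubectl scale deployment/web --replicas=10

# ...or let Kubernetes add and remove copies automatically, keeping between
# 3 and 10 running and targeting 75% average CPU usage (assumes the cluster
# collects CPU metrics, e.g. via metrics-server).
kubectl autoscale deployment web --min=3 --max=10 --cpu-percent=75
```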
Now you hopefully have a better idea of the basics of a container infrastructure, as well as why containers exist and what problems they solve.
The next step? Dive into learning more about the products themselves, such as Docker or Kubernetes!
RELATED: check out my guide to introducing yourself to Linux and DevOps technologies