An introduction to Kubernetes.
Kubernetes is an open-source container orchestration tool, originally developed at Google to help run its web services at scale. For an application packaged with Docker or another container format, it automates the deployment, scaling, and management of those containers across the host machines.
With Kubernetes, you specify what you want your containers to do, and it manages the underlying VMs, as well as deployments, upgrades, and configuration changes.
Right out of the box, Kubernetes gives you the core pieces of a full-scale deployment pipeline — no, rather: scheduling, rolling upgrades, and self-healing are built in, so you need less separate tooling to run applications in production.
Kubernetes is designed around a master server (the control plane, which can itself be replicated for high availability) and a set of worker nodes that host your application. Each node hosts one or more of the containers that make up your application.
When you provision a master server, you get an API (and, optionally, a web dashboard) for managing your application. The master takes care of scheduling, hosts the cluster's configuration keystore (etcd), and runs the controller processes that manage the rest of the cluster.
When you use the command line or the API to submit configuration changes, the master records the new desired state, and the cluster's controllers work to bring the running state in line with it.
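As a concrete illustration of that declarative model, here is a minimal Deployment manifest (the names and image are illustrative, not from any particular application). Submitting it tells the master "keep three replicas of this container running," and the cluster does the rest:

```yaml
# Minimal Deployment manifest — names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # desired state: three copies of the pod
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

If a node dies and takes a replica with it, the controllers notice the gap between desired and actual state and schedule a replacement elsewhere.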
After the master is up and running, it needs some nodes on which to place containers. Nodes are simply the machines (physical or virtual) that make up the cluster; each runs the kubelet agent, which allows it to host containers.
In Kubernetes, the basic unit of scheduling is not the individual Docker container but the pod: a group of one or more containers that are deployed together on the same node and share networking and storage.
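A sketch of a two-container pod makes the idea concrete (names and images are illustrative): a web server and a log-shipping sidecar are scheduled together and share a volume, so the sidecar can read the files the server writes.

```yaml
# A pod grouping two containers with a shared purpose — illustrative names.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
  labels:
    app: web
spec:
  volumes:
  - name: logs
    emptyDir: {}         # scratch volume shared by both containers
  containers:
  - name: web
    image: nginx:1.25
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
```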
The final piece of the puzzle is services. A service is a logical component of your application with an attached policy that defines how it is exposed, inside the cluster or to the outside world. While pods are meant to be ephemeral, a service is a stable abstraction: a lasting contract for the functionality its pods provide.
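A minimal Service manifest shows the contract in practice (again with illustrative names): whatever pods currently carry the `app: web` label receive traffic at one stable address, even as individual pods come and go.

```yaml
# A Service exposing pods labeled app: web — illustrative names.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer     # policy for exposure to the outside world
  selector:
    app: web             # traffic goes to whichever pods match this label
  ports:
  - port: 80
    targetPort: 80
```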
Making high-availability applications scale is a difficult problem to say the least. Kubernetes is one of the most well-supported attempts to tackle that problem, which confers a lot of advantages. The documentation is excellent, and there’s a large developer community to help you out when things get hairy.
Kubernetes emphasizes ease of use: the much-celebrated kubectl command-line tool and a management REST API mean it interfaces with almost any modern tooling.
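A few representative kubectl commands give the flavor of the workflow (the resource names here are illustrative):

```
# Declaratively apply a manifest, then inspect and scale the result.
kubectl apply -f deployment.yaml
kubectl get pods
kubectl scale deployment web --replicas=5
kubectl logs <pod-name>
```

Everything kubectl does goes through the same REST API, so scripts, CI systems, and dashboards can drive the cluster the same way.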
Kubernetes is also designed with high scalability in mind. It supports clusters of thousands of nodes running tens of thousands of pods, and those limits grow with nearly every release. This makes sense given that the project is backed by Google, a company operating on a scale beyond most enterprise clients. The project is also an integral part of Google's cloud strategy, meaning you can expect it to get lots of attention and improvements as time goes on.
While Kubernetes has a lot to recommend it, several competing tools offer similar functionality. Docker Swarm is another cluster management tool with a lot of overlapping uses. It covers much of the same ground as Kubernetes, with a full suite of command-line tools for managing your cluster on top of your host machines. Combined with the other two Docker tools, Machine and Compose, Swarm is part of a suite that lets you get off the ground and build full-scale applications quickly. Other similar technologies include Apache Mesos and Hadoop YARN.