Wednesday, May 22, 2019

Presented at Docker Cambridge. This presentation looks at options for container orchestration, including Kubernetes, Docker Swarm, and Amazon ECS (among others), outlining their pros and cons and explaining some of the terms to understand and the tools to use.

Slides

Link to Slides

Workshop

These slides pair with the workshop code at https://github.com/willhallonline/orchestration-workshop

Content

Container Orchestration With Docker Swarm and Kubernetes

Container Orchestration

Orchestration exists to do two main things:

  1. Resource Utilization
  2. Scaling/healing

Orchestration Systems

There are a number of ways to do orchestration:

  1. Kubernetes
  2. Docker Swarm
  3. Mesos
  4. Nomad
  5. Amazon ECS

Managing your platform as a cluster

You have a number of machines that can, in theory, act as one large machine, and you add or remove nodes as the workload demands.

Kubernetes

Pros:

  • Large, healthy ecosystem
  • PaaS provided by GCP, Amazon, Azure, DigitalOcean, OVH
  • Future of software delivery??

Cons:

  • Complex management
  • New tooling

Docker Swarm

Pros:

  • Simple setup
  • Simpler management (than Kubernetes)

Cons:

  • No PaaS (Docker Enterprise?)
  • Scale issues
  • Vendor support

Amazon ECS

Pros:

  • PaaS
  • Integration with Amazon toolsets

Cons:

  • Vendor lock-in
  • Limited community support

Docker Swarm Terms:

Swarm

Like a cluster in Kubernetes, a swarm is a set of nodes with at least one manager node and a number of worker nodes, which can be virtual or physical machines.

Service

A service is the set of tasks that manager or worker nodes must perform on the swarm, as defined by a swarm administrator. A service defines which container images the swarm should use and which commands the swarm will run in each container. A service in this context is analogous to a microservice; for example, it’s where you’d define configuration parameters for an nginx web server running in your swarm. You also define parameters for replicas in the service definition.
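As a sketch, the nginx example above could be defined from the command line (the service name "web" and the replica count are illustrative):

```shell
# Create a service named "web" running the official nginx image,
# with three replicas and port 80 published on every swarm node.
docker service create \
  --name web \
  --replicas 3 \
  --publish published=80,target=80 \
  nginx:latest

# List services and their current/desired replica counts.
docker service ls
```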

Manager node

When you deploy an application into a swarm, the manager node provides several functions: it delivers work (in the form of tasks) to worker nodes, and it manages the state of the swarm to which it belongs. Manager nodes can run the same services worker nodes do, but you can also configure them to run only manager-related services.
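A minimal sketch of creating the first manager and restricting it to manager duties (the advertise address and node ID are placeholders):

```shell
# Initialise a new swarm; the current machine becomes the first manager node.
# --advertise-addr is the address other nodes use to reach this manager.
docker swarm init --advertise-addr 192.168.1.10

# Optionally stop this manager running regular service tasks,
# so it only performs manager duties.
docker node update --availability drain <manager-node-id>
```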

Worker nodes

These nodes run tasks distributed by the manager node in the swarm. Each worker node runs an agent that reports back to the manager node about the state of the tasks assigned to it, so the manager node can keep track of services and tasks running in the swarm.
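Joining a worker might look like the following sketch (the token and address are placeholders printed by the manager):

```shell
# On the manager: print the command a worker needs to join the swarm.
docker swarm join-token worker

# On the worker machine: run the printed command, e.g.:
docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on the manager: confirm the node has joined and is ready.
docker node ls
```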

Task

Tasks are Docker containers that execute the commands you defined in the service. Manager nodes assign tasks to worker nodes, and after this assignment, the task cannot be moved to another worker. If the task fails in a replica set, the manager will assign a new version of that task to another available node in the swarm.
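You can watch this rescheduling behaviour with the commands below (assuming a replicated service named "web" exists; the container ID is a placeholder):

```shell
# Show the tasks (containers) that make up the "web" service,
# including which node each task runs on and its current state.
docker service ps web

# Kill a container to simulate failure; the manager schedules
# a replacement task to restore the desired replica count.
docker rm -f <container-id>
docker service ps web   # the failed task is replaced on an available node
```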

Docker Swarm Tools:

Docker and Docker Compose… If you can use Docker and Docker Compose, you can deploy and manage things inside Docker Swarm. It is the same toolset!
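As a sketch of that same toolset in action, a Compose file can be deployed to a swarm as a "stack" (this assumes a docker-compose.yml in version 3 format exists in the current directory; the stack name is illustrative):

```shell
# Deploy the Compose file to the swarm as a stack named "mystack".
docker stack deploy --compose-file docker-compose.yml mystack

# Inspect the stack's services and their tasks.
docker stack services mystack
docker stack ps mystack
```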

Let’s play

You can come back and look at these some other time: https://play-with-docker.com

Kubernetes Terms:

Master

The master manages the scheduling and deployment of application instances across nodes, and the full set of services the master node runs is known as the control plane. The master communicates with nodes through the Kubernetes API server. The scheduler assigns pods (one or more containers) to nodes depending on the resource and policy constraints you’ve defined.

Kubelet

Each Kubernetes node runs an agent process called a kubelet that’s responsible for managing the state of the node: starting, stopping, and maintaining application containers based on instructions from the control plane. A kubelet receives all of its information from the Kubernetes API server.

Pods

The basic scheduling unit, which consists of one or more containers guaranteed to be co-located on the host machine and able to share resources. Each pod is assigned a unique IP address within the cluster, allowing the application to use ports without conflict. You describe the desired state of the containers in a pod through a YAML or JSON object called a PodSpec. These objects are passed to the kubelet through the API server.
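A minimal PodSpec, applied through the API server with kubectl, might look like this sketch (the pod name and image are illustrative):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
EOF

# Each pod gets its own cluster-internal IP address.
kubectl get pod web -o wide
```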

Deployments, replicas, and ReplicaSets

A deployment is a YAML object that defines the pods and the number of container instances, called replicas, for each pod. You define the number of replicas you want to have running in the cluster via a ReplicaSet, which is part of the deployment object.
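A sketch of such a deployment, whose embedded ReplicaSet keeps three replicas running (names and labels are illustrative):

```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:latest
EOF

kubectl get replicaset        # the ReplicaSet created by the Deployment
kubectl get pods -l app=web   # the three replica pods
```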

Kubernetes Tools

kubeadm and kubectl. You use kubeadm to bootstrap and administer your Kubernetes cluster. You use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
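A rough sketch of the split between the two tools (this assumes a pod named "web" exists for the kubectl examples):

```shell
# Bootstrap a control-plane node with kubeadm (run as root on the node).
kubeadm init

# Day-to-day work is then done with kubectl:
kubectl get nodes          # list cluster nodes
kubectl describe pod web   # inspect a resource in detail
kubectl logs web           # view a container's logs
```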

Kubernetes Networking

Networking is a central part of Kubernetes, but it can be challenging to understand exactly how it is expected to work. There are 4 distinct networking problems to address:

  1. Highly-coupled container-to-container communications
  2. Pod-to-Pod communications
  3. Pod-to-Service communications
  4. External-to-Service communications
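Problems 3 and 4 above are typically solved with Kubernetes Services; as a sketch, assuming a Deployment named "web" already exists:

```shell
# Pod-to-Service: expose the "web" deployment behind a stable
# cluster-internal virtual IP.
kubectl expose deployment web --port=80 --type=ClusterIP

# External-to-Service: a NodePort (or LoadBalancer) Service makes
# the same pods reachable from outside the cluster.
kubectl expose deployment web --name=web-external --port=80 --type=NodePort
```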

Kubernetes Tools: Helm

Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Helm Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.
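A sketch of the Helm workflow (the bitnami repository and nginx chart are illustrative examples, and "my-web" is a release name chosen here):

```shell
# Add a chart repository and refresh its index.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

helm install my-web bitnami/nginx   # install a chart as a named release
helm upgrade my-web bitnami/nginx   # upgrade the release to a newer chart
helm rollback my-web 1              # roll back to revision 1
```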

Takeaways

  • Orchestration is both efficient and cost-saving
  • It can deliver significant scaling and healing potential
  • Choose the right solution for your problem