A Case Study with Kubernetes

What is Kubernetes?
Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers. Kubernetes has rapidly matured and been rolled out in large production deployments by organizations such as Bloomberg, Uber, eBay, and also here at DigitalOcean. In March 2018, Kubernetes became the first project to graduate from the Cloud Native Computing Foundation. It is:
- Portable: public, private, hybrid, multi-cloud
- Extensible: modular, pluggable, hookable, composable
- Self-healing: auto-placement, auto-restart, auto-replication, auto-scaling
It is currently in production or preview on Amazon, Google, and Azure.
Who should use Kubernetes?
Kubernetes is an orchestration tool for containerized applications. Starting with a collection of Docker containers, Kubernetes can control resource allocation and traffic management for cloud applications and microservices. As such, it simplifies many aspects of running a service-oriented application infrastructure.
Kubernetes Architecture
- Master: the API server, scheduler, controller manager, and etcd (a highly available key-value store used for configuration and service discovery)
- Node: a container runtime such as Docker (or an alternative like rkt) to run containers, kube-proxy (network access to applications), and the kubelet (which receives and carries out Kubernetes commands)

Does Kubernetes use Docker?
As a container orchestrator, Kubernetes needs a container runtime to orchestrate. Kubernetes is most commonly used with Docker, but it can also be used with other container runtimes. runc, CRI-O, and containerd are other container runtimes that you can deploy with Kubernetes.
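As a rough sketch of how a non-default runtime can be selected (the handler name "gvisor" is purely illustrative and must match the nodes' CRI configuration; older clusters expose this API as node.k8s.io/v1beta1), a RuntimeClass names a runtime handler and a Pod opts into it:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: gvisor                    # hypothetical handler configured on the nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: sandboxed      # run this Pod under the named runtime
  containers:
  - name: app
    image: example/app:1.0         # placeholder image
```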
Kubernetes Concepts

- Pods: a group of one or more containers, their storage, and config/run options; each Pod gets its own IP address
- Labels: key/value pairs that Kubernetes attaches to any object (e.g., a Pod)
- Annotations: key/value pairs for arbitrary non-queryable metadata
- Services: an abstraction defining a logical set of Pods and a network access policy
- Replication Controller: manages the number of Pod replicas running
- Secrets: sensitive information (passwords, certificates, OAuth tokens, etc.)
- ConfigMap: a mechanism used to inject configuration into containers while keeping the containers agnostic of Kubernetes (a combined example of these objects follows this list)
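To make these objects concrete, here is a minimal sketch in which all names, images, and values are hypothetical placeholders: a Pod carrying labels and annotations, configuration injected from a ConfigMap and a Secret, and a Service that selects the Pod by label.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: photo-config
data:
  LOG_LEVEL: "info"              # plain, non-sensitive configuration
---
apiVersion: v1
kind: Secret
metadata:
  name: photo-db-credentials
type: Opaque
stringData:
  DB_PASSWORD: "change-me"       # example value only
---
apiVersion: v1
kind: Pod
metadata:
  name: photo-service
  labels:
    app: photo-service           # queryable label used by the Service selector
  annotations:
    snappy.example.com/owner: "photos-team"   # arbitrary, non-queryable metadata
spec:
  containers:
  - name: photo-service
    image: example/photo-service:1.0          # placeholder image
    ports:
    - containerPort: 8080
    envFrom:
    - configMapRef:
        name: photo-config       # injects LOG_LEVEL as an environment variable
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: photo-db-credentials
          key: DB_PASSWORD
---
apiVersion: v1
kind: Service
metadata:
  name: photo-service
spec:
  selector:
    app: photo-service           # routes traffic to Pods carrying this label
  ports:
  - port: 80
    targetPort: 8080
```

Saving this as a single file and running kubectl apply -f on it creates all four objects; the Pod gets its own IP, and the Service puts a stable virtual IP in front of it.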
Putting it Together
- Pods: a group of containers, basically an application
- Pods run on Nodes
- The (pluggable) Scheduler picks the Nodes based on the Pods' needs
- The Replication Controller makes sure that enough Pods are running, if they're replicated (self-healing); see the sketch after this list
- A virtual IP per Service, via a proxy on the Node (avoids port collisions)
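A minimal sketch of a ReplicationController, with placeholder names and image, that keeps three replicas of the photo-service Pod running; in current clusters the same role is usually filled by a Deployment (which manages ReplicaSets), but the self-healing idea is the same.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: photo-service-rc
spec:
  replicas: 3                     # desired number of Pods; failed ones are replaced
  selector:
    app: photo-service
  template:                       # Pod template used to create replacements
    metadata:
      labels:
        app: photo-service
    spec:
      containers:
      - name: photo-service
        image: example/photo-service:1.0   # placeholder image
        ports:
        - containerPort: 8080
```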
Storage
- Container: ephemeral, tied to the lifecycle of a container
- Volumes: less ephemeral, tied to the lifecycle of a Pod
- PersistentVolumes and PersistentVolumeClaims: not ephemeral; cluster operators define PersistentVolume objects, and application developers define PersistentVolumeClaim objects (see the sketch after this list)
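A minimal sketch, assuming a cluster with dynamic provisioning via a default StorageClass (the claim name, size, and mount path are placeholders): the application developer requests storage with a PersistentVolumeClaim and mounts the bound volume in a Pod.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: photo-uploads
spec:
  accessModes:
  - ReadWriteOnce                 # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: photo-service-with-storage
spec:
  containers:
  - name: photo-service
    image: example/photo-service:1.0       # placeholder image
    volumeMounts:
    - name: uploads
      mountPath: /var/lib/snappy/uploads   # hypothetical path
  volumes:
  - name: uploads
    persistentVolumeClaim:
      claimName: photo-uploads    # binds the Pod to the claim above
```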
Business Case For Using Kubernetes
Kubernetes is one of the hot technologies in cloud. But as many have learned, chasing after hot technologies does not automatically yield more cost-effective infrastructure, happier developers, or a better overall cost structure. I'm aware of numerous cases where making a change ended up costing more without gains elsewhere, leading to a worse total cost of ownership (TCO). This is exactly the kind of situation business decision makers want to avoid. Just because a technology is hot doesn't mean it's useful.
The Benefits
There are benefits to using Kubernetes that can shift the TCO in a business's favor. These are guidelines, and anyone considering the switch should look at how their workloads and use cases map to them. Let's take a look at a few of these benefits.
1. Higher Infrastructure Utilization
The way Kubernetes schedules containers can yield higher infrastructure utilization than the typical model of packaging workloads into virtual machines running under VMware or in a public cloud.
This is not to say that public clouds or VMware are bad. It has to do with the model: Kubernetes treats a cluster of servers as a single computer, with Kubernetes as the operating system, and packs containers onto nodes based on the resources they request. Google, with Borg (the precursor to Kubernetes), runs warehouses as computers.
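As a rough sketch of what drives this (the image and numbers are placeholders), every container can declare resource requests, which the scheduler uses to bin-pack Pods onto nodes, and limits, which cap what the container may consume at runtime:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: photo-worker
spec:
  containers:
  - name: worker
    image: example/photo-worker:1.0   # placeholder image
    resources:
      requests:                       # used for scheduling decisions
        cpu: "250m"                   # a quarter of a CPU core
        memory: "256Mi"
      limits:                         # enforced upper bound at runtime
        cpu: "500m"
        memory: "512Mi"
```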
2. Workload Portability
A workload in Kubernetes can run in clusters from different providers. It's not uncommon for me to run a workload in Kubernetes on Google's cloud and then run the exact same workload in Kubernetes on Azure. I do this today.
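As a sketch of that portability (the Deployment, image, and kubectl context names are hypothetical), a provider-agnostic manifest can be applied unchanged to clusters in different clouds simply by switching contexts:

```yaml
# deployment.yaml: no provider-specific fields, so it applies to any cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: photo-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: photo-service
  template:
    metadata:
      labels:
        app: photo-service
    spec:
      containers:
      - name: photo-service
        image: example/photo-service:1.0   # placeholder image

# Applying it to two clusters (context names are hypothetical):
#   kubectl --context gke-cluster apply -f deployment.yaml
#   kubectl --context aks-cluster apply -f deployment.yaml
```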
3. Fault Tolerant By Default
In traditional setups, applications run on their own set of hardware or virtual machines. If something happens to that hardware, the workloads running on it have issues, because application workloads are pinned to specific infrastructure. Kubernetes, by contrast, restarts unhealthy containers and reschedules Pods from a failed node onto healthy ones.
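A rough sketch of what "fault tolerant by default" looks like in practice (image, probe path, and numbers are placeholders): a replica count gives the controller something to reschedule when a node fails, and a liveness probe lets the kubelet restart an unhealthy container.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3                      # spread across nodes; rescheduled on failure
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
      - name: user-service
        image: example/user-service:1.0    # placeholder image
        livenessProbe:             # restart the container if this check fails
          httpGet:
            path: /healthz         # hypothetical health endpoint
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 15
```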
The Risks
As with any system, there are risks everyone should be aware of. These reflect the state of Kubernetes as of the writing of this post, and they are being actively worked on.
1. Software Services
When we run applications, we often use Software as a Service for things we need but that are not our core competency. A common example is a database such as MySQL. Why operate it yourself when you can get it via an API request and someone else makes sure it's running?
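One hedged sketch of how such an externally managed service can still be consumed cleanly from inside the cluster (the hostname is a placeholder): an ExternalName Service gives workloads a stable in-cluster DNS name for the managed database, so application code does not need to know where it actually runs.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql                      # apps connect to "mysql" inside the cluster
spec:
  type: ExternalName
  externalName: db.example-managed-mysql.com   # hypothetical managed endpoint
```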
2. Developer Tools and Documentation
Kubernetes is hard. If you're used to Cloud Foundry, where a few lines of YAML can describe an application, the hundreds of lines you'll need in Kubernetes can seem hard to grasp.
The Kubernetes project is not trying to directly address these. Instead, this is a space for the ecosystem of projects around Kubernetes. Developers have their own styles. That’s why we have Ansible, Chef, and Puppet.
Scaling with Kubernetes Case Study: The Snappy Monolith
To demonstrate the value of implementing Cloud Native best practices including containerization along with a microservices architecture, we’ll use a running example throughout this paper: a photo sharing app called Snappy that provides basic photo upload and sharing functionality between users through a web interface.
Throughout this paper, we'll modernize Snappy by:
• Decomposing the app's business functions into microservices.
• Containerizing the various components into portable and discretely deployable pieces.
• Using a DigitalOcean Kubernetes cluster to scale and continuously deploy these stateless microservices.

With our photo-sharing monolith app Snappy, we observe that at first the web UI, photo management, and user management business functions are combined in a single codebase where these separate components invoke each other via function calls. This codebase is then built, tested, and deployed as a single unit, which can be scaled either horizontally or vertically. As Snappy acquires more users — and subsequently scales and iterates on the product — the codebase gradually becomes extremely large and complex.
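As a sketch of where that decomposition leads (service names, images, and replica counts are hypothetical), each business function becomes its own independently built, deployed, and scaled unit, for example one Deployment per service:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-ui
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-ui
  template:
    metadata:
      labels:
        app: web-ui
    spec:
      containers:
      - name: web-ui
        image: example/snappy-web-ui:1.0             # placeholder image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: photo-management
spec:
  replicas: 4                      # photo handling scales independently of the UI
  selector:
    matchLabels:
      app: photo-management
  template:
    metadata:
      labels:
        app: photo-management
    spec:
      containers:
      - name: photo-management
        image: example/snappy-photo-management:1.0   # placeholder image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-management
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-management
  template:
    metadata:
      labels:
        app: user-management
    spec:
      containers:
      - name: user-management
        image: example/snappy-user-management:1.0    # placeholder image
```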
Web Sites, Tutorials, Docs
• Kubernetes: https://kubernetes.io/
• Kubernetes Basics: https://kubernetes.io/docs/tutorials/kubernetes-basics/
• Kubernetes the Hard Way: https://github.com/kelseyhightower/kubernetes-the-hard-way
• The Children's Illustrated Guide to Kubernetes: https://deis.com/blog/2016/kubernetes-illustrated-guide/
• Anything by Kelsey Hightower: presentations, blogs, tutorials, etc.
Cloud Providers
In Kubernetes, there is a concept of a cloud provider: a module that provides an interface for managing load balancers, nodes (i.e., hosts), and network routes.
As of Rancher v1.6.11+, Rancher supports three cloud providers when configuring Kubernetes. All major cloud providers offer managed Kubernetes services; let's have a look at Amazon Web Services, Microsoft Azure, Google Cloud Platform, and IBM Cloud.
Kubernetes Versions
Since we're considering managed Kubernetes services, we are limited to the versions supported by each cloud provider. The availability is shown in the table below.

Perhaps surprisingly, IBM has the widest availability of newer Kubernetes versions! Amazon, Azure, and Google have very similar availability (as of 10 March 2020), with Azure and Google allowing previews (presumably not for production use!) of newer versions.
Cost & SLA
Azure and IBM offer free control planes. For Amazon, you have to pay, but they recently cut the cost in half to $0.10 an hour. It's a 24x7 cost while the cluster is running, though (you can scale your VMs to zero overnight). Google will introduce a charge of $0.10 an hour for the control plane starting 6 June 2020.
The cost estimate below includes load balancers, storage and compute.
- AWS: t3.large, 2 vCPU, 8 GiB
- Azure: Standard_B2ms, 2 vCPU, 8 GiB
- Google: n1-standard-2, 2 vCPU, 7.5 GiB
- IBM Cloud: u2c.2x4, 2 vCPU, 4 GiB (u3c looks like the more modern variant now, as of 11 March 2020)

Perhaps unexpectedly, IBM Cloud is the cheapest. Of course, the cost depends on compute, storage, and data transfer, and this is only a simple estimate. There is probably a 20% difference in price, and what we really need to do is run a realistic workload on each to validate that we've picked comparable instance types.
Since AWS and Google are the only products with paid-for control planes, they are the only services to offer an SLA on the control plane. If that’s something you need, then AWS is the one for you.
• Google Cloud Platform Kubernetes Engine: https://cloud.google.com/kubernetes-engine/
• Azure Container Service (AKS): https://azure.microsoft.com/en-us/services/container-service/
• In Preview: Amazon Elastic Container Service for Kubernetes (Amazon EKS)
Conclusion
All four cloud providers allowed provisioning of a Kubernetes cluster within twenty minutes, and the costs were very similar. If you need an SLA on the control plane, then AWS (or Google) are the obvious choices.
If you're already heavily invested in any of these cloud providers, then you'll be just fine; there are no obvious reasons to jump ship. But if your company is multi-cloud, then maybe you can pick the one that suits your needs best?
IBM Cloud is definitely the underdog here, and probably unfairly. Maybe it's worth having a look at it?
At the start of 2015, using containers in production was seen as either an experiment or a bold choice for an enterprise. Now, only twelve months later, containers in production are being deployed not just for pilot or specific projects but are being woven into the architectural fabric of the enterprise.