Kubernetes Q&A - 2
1) What are the key Kubernetes use cases and advantages?
- Automating DevOps CI/CD pipelines
- Modernizing legacy applications by moving them to a container-based infrastructure
- Automating application operations
- Optimizing hardware usage to make the most of available resources
- Automating and controlling application deployments and updates
- Scaling containerized applications, as well as their resources, on the fly
- Managing services declaratively to ensure deployed applications run as intended
- Performing health checks and enabling application self-healing
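Several of these advantages (declarative management, self-healing, health checks, on-the-fly scaling) come together in a single Deployment manifest. A minimal, hypothetical sketch; the app name and image are illustrative:

```yaml
# Hypothetical Deployment: declaratively requests 3 replicas of a web app.
# Kubernetes restarts failed containers (self-healing), uses the liveness
# probe as a health check, and `kubectl scale` can resize it on the fly.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25          # example image
        ports:
        - containerPort: 80
        livenessProbe:             # health check: container is restarted if it fails
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
```

Applying this with `kubectl apply -f deployment.yaml` declares the desired state; Kubernetes continuously works to keep the running state matching it.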
2) How many Kubernetes clusters should you use to run a set of applications?
Following are some of the options:
- One large shared cluster
- Many small single-use clusters
- Cluster per application
- Cluster per environment
The answer depends on your use case: you have to trade off the pros and cons of the different approaches to find the solution that works best for you.
3) How many nodes can a Kubernetes cluster have?
A cluster is a set of nodes (physical or virtual machines) running Kubernetes agents, managed by the control plane. Kubernetes v1.25 supports clusters of up to 5,000 nodes. In practice, though, challenges can appear at much smaller sizes, such as 500 nodes, because larger clusters put a higher strain on the Kubernetes control plane.
4) What’s the difference between Docker & Kubernetes?
Docker is a suite of software development tools for creating, sharing, and running individual containers.
Kubernetes is a system for operating containerized applications at scale.
Docker Swarm is an orchestrator, like Kubernetes, for operating containerized applications at scale. Mirantis has acquired the Docker Enterprise business; it has large enterprise customers using Swarm in production at scale and continues to invest in it. Swarm also supports Windows containers.
5) Why is Kubernetes deprecating Docker as a container runtime and what are the alternatives? [A Cloud Guru, 2/21]
Kubernetes does not actually handle the process of running containers on a machine. Instead, it relies on another piece of software called a container runtime. The container runtime runs containers on a host, and Kubernetes tells the container runtime on each host what to do. You can actually choose from a variety of options when it comes to what software you want to use as your container runtime when running Kubernetes. Up to now, a fairly popular option was to use Docker as the container runtime.
Kubernetes is deprecating support for Docker as a container runtime starting with Kubernetes version 1.20.
Docker is not actually a container runtime! It’s actually a collection of tools that sits on top of a container runtime called containerd.
Docker as an underlying runtime is being deprecated in favor of runtimes that use the Container Runtime Interface (CRI) created for Kubernetes. Docker-produced images will continue to work in your cluster with all runtimes, as they always have.
If you are currently using Docker as a container runtime in your Kubernetes environment, replace it with another container runtime, such as containerd or CRI-O. (Docker itself contributed containerd to the open-source community.)
Kubernetes works with all container runtimes that implement a standard known as the Container Runtime Interface (CRI).
Docker does not implement the Container Runtime Interface (CRI). In its early days, Docker was a monolithic application responsible for creating and running containers, pulling images from registries, managing data, and so on. Pretty much everything that Docker does today was part of that monolith.
Since Docker version 1.11.0, the monolith has been decoupled into a set of independent components that follow well-defined standards.
The Docker Engine is already built on top of containerd, so using Docker in Kubernetes meant running the dockershim Container Runtime Interface implementation (because Docker has no way to interact with the CRI directly), plus Docker itself, plus containerd inside Docker.
In the past, there weren’t as many good options for container runtimes, and Kubernetes implemented the Docker shim, an additional layer to serve as an interface between Kubernetes and Docker. Now, however, there are plenty of runtimes available that implement the CRI, and it no longer makes sense for Kubernetes to maintain special support for Docker.
Kubernetes can still run containers built using Docker’s Open Container Initiative (OCI) image format, meaning you can still use Dockerfiles and build your container images using Docker.
Kubernetes will also continue to be able to pull from Docker registries (such as Docker Hub). This means that Docker will remain a powerful contender when it comes to managing images once they are built.
Here is what the deprecation of Docker in Kubernetes means for you, depending on your use case:
Kubernetes end-users do not need to change their environment, and can continue using Docker in their development processes. However, developers should realize that the images they create will run within Kubernetes using other container runtimes, not Docker.
Users of managed Kubernetes services like Google Kubernetes Engine (GKE) or Amazon Elastic Kubernetes Service (EKS) need to ensure their worker nodes are running a supported container runtime (i.e., not Docker). Customized nodes may need to be updated.
Administrators managing clusters on their own infrastructure must install a different container runtime on their nodes (if they are currently running Docker) to prevent their clusters from breaking when Docker support is removed in the future. Kubernetes nodes should run another, CRI-based container runtime, such as containerd or CRI-O.
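To find out which runtime your nodes currently use, a quick check (assuming kubectl is configured against your cluster; the version numbers shown are illustrative):

```shell
# The CONTAINER-RUNTIME column reveals the runtime on each node.
kubectl get nodes -o wide
# A node still on Docker reports something like  docker://20.10.x;
# after migration it reports e.g.  containerd://1.6.x  or  cri-o://1.24.x
```

Nodes reporting a `docker://` runtime are the ones that need migrating before Docker support is removed.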
6) What is the difference between K8s and K3s?
K8s is a general-purpose container orchestrator, while K3s is a lightweight Kubernetes distribution purpose-built for resource-constrained environments such as edge devices, IoT, and bare-metal servers. K3s was created by Rancher Labs and is fully certified by the Cloud Native Computing Foundation (CNCF). It includes, and defaults to, containerd, an industry-standard container runtime. As for the name: Kubernetes is a 10-letter word stylized as K8s, so something half as big would be a 5-letter word stylized as K3s.
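K3s's lightweight nature shows in its installation: it ships as a single binary, and the install command from the official K3s documentation boils down to one line (run as root on the target machine; requires network access):

```shell
# Installs K3s as a systemd service and starts a single-node cluster.
curl -sfL https://get.k3s.io | sh -
# K3s bundles kubectl, so the new node can be queried immediately:
k3s kubectl get nodes
```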
7) Which companies are using Kubernetes?
Over 55% of Fortune 500 companies have adopted Kubernetes into their IT solutions; it is worth reviewing how these companies are implementing it. With Kubernetes, commits reach production 86% faster, resources are provisioned 450 times faster, and code is deployed five times more frequently. [Kube Academy]
Some well known companies using Kubernetes are:
- Airbnb
- Booking.com
- Forbes
- ING
- JD.com, China's largest retailer
- Northwestern Mutual
- Pearson
- Pokemon Go
- Spotify
- The New York Times
- Tinder
See the list of case studies on the official Kubernetes site.
8) Which cloud provider’s managed Kubernetes service is best?
Which cloud provider’s managed Kubernetes service you use really depends on the situation.
- Amazon Elastic Kubernetes Service (EKS) is the most widely used managed Kubernetes service, according to a 2020 survey by CNCF.
- GKE has the most features and automated capabilities.
- Azure Kubernetes Service (AKS) may be the most cost-effective option and integrates well with all things Microsoft.
9) Why is Kubernetes so complicated?
All common facilities needed by any application - like error handling, scalability and redundancy - are now located inside the Kubernetes ecosystem.
Capabilities that were once part of the application code are now external, so the application code can be much smaller and simpler than before.
Thanks to containerization, the application can concentrate on processing payload data and doesn't have to concern itself with ancillary matters like scaling and redundancy.
The pursuit of a simplified application environment has led to an explosion of configurable items in the Kubernetes ecosystem. The Kubernetes environment has become almost infinitely configurable. While this flexibility is praiseworthy, the confusing permutations and combinations have become unwieldy.
10) What is an Alpine image?
Alpine Linux is a small, lightweight Linux distribution that is very popular with Docker users because it is compatible with many apps while still keeping containers small. By using a smaller base image such as Alpine, you can significantly cut down the size of your container: in a typical base image, up to 80% of the packages and libraries are never needed, and Alpine images are about 10 times smaller than other base images. It is a Kubernetes best practice to use small containers, as they offer performance and security advantages.
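The size difference is easy to see in a Dockerfile. A hypothetical sketch for a small Python tool (the app file is illustrative; exact image sizes vary by version):

```dockerfile
# Debian-based base image: roughly 1 GB, including compilers and docs
# the app will never use.
# FROM python:3.12

# Alpine-based variant: typically an order of magnitude smaller (~50 MB).
FROM python:3.12-alpine
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

Switching only the `FROM` line shrinks the image dramatically, though Alpine uses musl libc instead of glibc, so packages with native extensions may need extra build steps.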
Also see: Kubernetes Q&A - 1