Kubernetes

Kubernetes was developed for the delivery of scalable, always-on, reliable web services in Google’s cloud. In the Kubernetes world, the focus is on scalable, long-running application services such as web stores, databases, API services, and mobile application back-ends. Kubernetes applications are assumed to be containerized and to follow a cloud-native design approach. Applications are composed of Pods, essentially groups of one or more Docker or OCI-compliant containers that can be deployed on a cluster to provide specific functionality for an application. Internet-scale applications frequently need to be continuously available, so Kubernetes provides features that support continuous integration / continuous delivery (CI/CD) pipelines and modern DevOps techniques. Developers can build and roll out new functionality and automatically roll back to a previous deployment version in case of failure. Health checks based on readiness and liveness probes help ensure continuous service availability. Kubernetes is more than just a resource manager; it is a complete management and runtime environment that includes services applications rely on, such as DNS management, ingress controllers, virtual networking, persistent volumes, and secret management. Applications built for Kubernetes will only run in a Kubernetes environment.
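
To make the Pod and health-check concepts concrete, here is a minimal, hedged sketch of a Pod manifest with readiness and liveness probes. The names, image, port, and probe paths are illustrative placeholders, not values taken from any specific application.

```yaml
# Minimal Pod sketch with readiness and liveness probes.
# Name, image, port, and probe paths are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-store                  # hypothetical name
  labels:
    app: web-store
spec:
  containers:
    - name: web
      image: registry.example.com/web-store:1.0   # placeholder image
      ports:
        - containerPort: 8080
      readinessProbe:              # traffic is sent only after this succeeds
        httpGet:
          path: /healthz/ready
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:               # the container is restarted if this keeps failing
        httpGet:
          path: /healthz/live
          port: 8080
        initialDelaySeconds: 15
        periodSeconds: 20
```

In practice the rollback behaviour mentioned above is handled at the Deployment level rather than on individual Pods, for example with `kubectl rollout undo deployment/<name>` after a failed rollout.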

Kubernetes distributions

  • Minikube An official repackaging of Kubernetes that provides a local instance of Kubernetes small enough to install on a developer’s notebook. Many developers use Minikube as a personal development cluster or as a Docker Desktop replacement. The minimum requirements are 2 GB of free memory, 2 CPUs, 20 GB of storage, and a container or virtual machine (VM) manager such as Docker, Hyper-V, or Parallels. Note that, at the time of writing, there was no Apple Silicon (M1) build for Mac users, only x86-64.
  • k3s A Cloud Native Computing Foundation project billed as “lightweight Kubernetes,” k3s is best suited to running Kubernetes in resource-constrained environments. Even a Raspberry Pi will work as a k3s device, since k3s comes in ARM64 and ARMv7 builds. Note that it does not run on Microsoft Windows or macOS, only on a modern Linux distribution such as Red Hat Enterprise Linux or Raspberry Pi OS. k3s requires no more than 512 MB to 1 GB of RAM, 1 CPU, and at least 4 GB of disk space for its cluster database. By default k3s uses SQLite for its internal database, although you can swap that for etcd, the conventional Kubernetes default, or for MySQL or Postgres (see the configuration sketch after this list). k3s is best used for edge computing, embedded scenarios, and tinkering.
  • k0s From Mirantis, k0s is also distributed as a single binary for convenient deployment. Its resource demands are minimal (1 CPU and 1 GB of RAM for a single node), and it can run as a single node, a cluster, an air-gapped configuration, or inside Docker. If you want to get started quickly, you can grab the k0s binary and set it up as a service, or you can use a dedicated installation tool, k0sctl, to set up or upgrade multiple nodes in a cluster. Use cases for k0s include personal development and initial deployments to be expanded later.
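
As noted in the k3s entry above, the embedded SQLite database can be swapped for an external datastore. The hedged sketch below shows what a k3s server configuration file (conventionally /etc/rancher/k3s/config.yaml, whose keys mirror the k3s CLI flags) might look like with an external MySQL datastore; the token, connection string, and label are placeholders.

```yaml
# /etc/rancher/k3s/config.yaml -- sketch of a k3s server configuration.
# Keys mirror k3s CLI flags; all values below are illustrative placeholders.
write-kubeconfig-mode: "0644"
token: "replace-with-a-shared-cluster-token"
# Swap the embedded SQLite database for an external MySQL instance:
datastore-endpoint: "mysql://k3s:password@tcp(db.example.internal:3306)/k3s"
node-label:
  - "environment=edge"
```

With such a file in place, starting the k3s server (for example via the standard installation script) should pick the settings up automatically, though the exact flag names are worth verifying against the k3s documentation for the release in use.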

Kubernetes local installation

Minikube

Microservices demo

microservices-demo
Sample cloud-first application with 10 microservices showcasing Kubernetes, Istio, and gRPC.

example-voting-app
Example distributed app composed of multiple containers for Docker Compose, Swarm, and Kubernetes.

Google Kubernetes

CKAD-exercises
A set of exercises to prepare for the Certified Kubernetes Application Developer exam offered by the Cloud Native Computing Foundation.

Kubernetes monitoring

Kubernetes in the real world

Kubernetes video tutorials


Deployment vs StatefulSet

Namespaces

Kubernetes controllers

AWS EKS

Resource management in Kubernetes

  • Collecting metrics with built-in Kubernetes monitoring tools
  • Resource Management for Pods and Containers When you specify a Pod, you can optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM); there are others. When you specify the resource request for containers in a Pod, the kube-scheduler uses this information to decide which node to place the Pod on. When you specify a resource limit for a container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. The kubelet also reserves at least the request amount of that system resource specifically for that container to use. A minimal example of these fields follows this list.
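
The hedged snippet below sketches how requests and limits are expressed in a container spec; the Pod name, image, and the specific quantities are illustrative rather than recommendations.

```yaml
# Container-level resource requests and limits (illustrative values only).
apiVersion: v1
kind: Pod
metadata:
  name: api-service                # hypothetical name
spec:
  containers:
    - name: api
      image: registry.example.com/api-service:1.2   # placeholder image
      resources:
        requests:
          cpu: "250m"              # used by the kube-scheduler for node placement
          memory: "256Mi"          # reserved for the container by the kubelet
        limits:
          cpu: "500m"              # CPU usage is throttled at this ceiling
          memory: "512Mi"          # exceeding this can get the container OOM-killed
```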

An important note on the registry

  • registry.k8s.io: faster, cheaper and Generally Available (GA) – Nov. 28, 2022. Starting with Kubernetes 1.25, our container image registry has changed from k8s.gcr.io to registry.k8s.io. This new registry spreads the load across multiple Cloud Providers & Regions, functioning as a sort of content delivery network (CDN) for Kubernetes container images. This change reduces the project’s reliance on a single entity and provides a faster download experience for a large number of users.
  • k8s.gcr.io is hosted on a custom Google Container Registry (GCR) domain that was set up solely for the Kubernetes project. This has worked well since the inception of the project, and we thank Google for providing these resources, but today there are other cloud providers and vendors that would like to host images to provide a better experience for the people on their platforms. In addition to Google’s renewed commitment to donate $3 million to support the project’s infrastructure, Amazon announced a matching donation during their KubeCon NA 2022 keynote in Detroit. This will provide a better experience for users (closer servers = faster downloads) and will reduce the egress bandwidth and costs from GCR at the same time. registry.k8s.io will spread the load between Google and Amazon, with other providers to follow in the future.
    • Container images for Kubernetes releases from 1.25 onward are no longer published to k8s.gcr.io, only to registry.k8s.io, so image references in manifests should be updated accordingly (see the sketch below).
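
As a small, hedged illustration of what that change means for manifests, an image reference that used to point at the old registry now looks like this (the component and tag are examples only):

```yaml
# Fragment of a Pod or Deployment container spec; tag is an example only.
containers:
  - name: pause-example            # hypothetical name
    # Old location, which no longer receives images for releases >= 1.25:
    # image: k8s.gcr.io/pause:3.9
    image: registry.k8s.io/pause:3.9
```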

Kubernetes networking

Kubernetes-related projects

  • KubeVirt.io KubeVirt allows your virtual machine workloads to run as pods inside a Kubernetes cluster, so you can manage them with Kubernetes without having to convert them to containers. KubeVirt provides a unified development platform where developers can build, modify, and deploy applications residing in both Application Containers and Virtual Machines in a common, shared environment (see the manifest sketch after this list).
  • Enabling New Features with Kubernetes for NFV
  • Kubeflow The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Our goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures. Anywhere you are running Kubernetes, you should be able to run Kubeflow.
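
For the KubeVirt entry above, the hedged sketch below shows roughly what a minimal VirtualMachine manifest looks like; the VM name is invented, the container-disk image is the commonly used CirrOS demo image, and the exact fields should be checked against the KubeVirt documentation for the version in use.

```yaml
# Minimal KubeVirt VirtualMachine sketch (verify against the KubeVirt docs
# for your version; the disk image is a public demo image, not from this text).
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm                     # hypothetical name
spec:
  running: true                    # start the VM as soon as it is created
  template:
    metadata:
      labels:
        kubevirt.io/vm: testvm
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
```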

Helm

Application development with Kubernetes is a complex activity: for each application you need to install, manage, and update hundreds of configuration items. Helm simplifies this process by automating cluster configuration tasks. This package manager for Kubernetes acts as a shareable, repeatable system that uses individual YAML manifest files to define and deploy applications. Helm can be thought of as a templating tool that keeps containers consistent and establishes how an application’s specific requirements are met. The same configuration framework can be applied to multiple instances through value substitutions, driven by the priorities of each specific configuration. Helm is an open source project that has reached Graduated maturity status within the Cloud Native Computing Foundation (CNCF).
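
To make the value-substitution idea concrete, here is a hedged sketch of a chart template fragment together with a default and a per-environment values file; the chart layout, keys, and image are invented for illustration.

```yaml
# templates/deployment.yaml (fragment): placeholders are filled in from values
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
---
# values.yaml: defaults shared by every environment (illustrative)
replicaCount: 1
image:
  repository: registry.example.com/web-store   # placeholder repository
  tag: "1.0"
---
# values-prod.yaml: per-environment overrides, applied for example with
#   helm install web ./chart -f values-prod.yaml
replicaCount: 3
image:
  tag: "1.0.4"
```

The same chart is then rendered consistently for each environment, with only the overridden values changing between releases.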

Kubernetes and Infrastructure as Code

Kubernetes and HPC

Cloud Native applications and Kubernetes