Kubernetes
Kubernetes was developed for the delivery of scalable, always-on, reliable web services in Google’s cloud. In the Kubernetes world, the focus is on scalable, long-running application services such as web stores, databases, API services, and mobile application back-ends. Kubernetes applications are assumed to be containerized and to adhere to a cloud-native design approach. Applications are composed of Pods – essentially groups of one or more Docker or OCI-compliant containers that can be deployed on a cluster to provide specific functionality for an application. Internet-scale applications frequently need to be continuously available, so Kubernetes provides features supporting continuous integration / continuous delivery (CI/CD) pipelines and modern DevOps techniques. Developers can build and roll out new functionality and automatically roll back to previous deployment versions in case of a failure. Health checks provide mechanisms to send readiness and liveness probes to ensure continuous service availability. Kubernetes is more than just a resource manager – it’s a complete management and runtime environment. Kubernetes includes services that applications rely on: DNS management, ingress controllers, virtual networking, persistent volumes, secret management, and more. Applications built for Kubernetes will only run in a Kubernetes environment.
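To make the probe mechanism concrete, here is a minimal Pod manifest with readiness and liveness probes. This is only an illustrative sketch: the Pod name, image, and HTTP paths are placeholders, not taken from any specific application.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-demo                # placeholder name
spec:
  containers:
    - name: web
      image: nginx:1.25         # any HTTP-serving image would do
      ports:
        - containerPort: 80
      readinessProbe:           # traffic is sent to the Pod only while this probe succeeds
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      livenessProbe:            # the kubelet restarts the container if this probe keeps failing
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 15
        periodSeconds: 20
```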
- Orchestration of microservices and multi-container applications for high scalability and availability
- Blog posts on Rancher web-site
- The Business Case for Container Adoption
- What’s a Kubernetes Cluster?
- How Kubernetes works
- Which Managed Kubernetes Is Right for Me?
- Getting started (again) with Kubernetes on Oracle Cloud
- 3 tiny Kubernetes distributions for compact container management
- How to Deploy a Multi-Tier Web Application with Kubernetes – IMPORTANT
- Rootless containers on kubernetes — Part 2 – K8Spin – Medium
- Introduction to YAML: Creating a Kubernetes deployment
- Kubernetes Crash Course for Absolute Beginners [NEW] – YouTube
- When to Use Docker Compose vs. Kubernetes
Kubernetes distributions
- Minikube – An official repackaging of Kubernetes that provides a local instance small enough to install on a developer’s notebook. Many developers use Minikube as a personal development cluster or a Docker Desktop replacement. The minimum requirements are 2GB of free memory, 2 CPUs, 20GB of storage, and a container or virtual machine (VM) manager such as Docker, Hyper-V, or Parallels. Note that for Mac users there is as yet no M1 build, only x86-64.
- k3s – A Cloud Native Computing Foundation project billed as “lightweight Kubernetes,” best suited to running Kubernetes in resource-constrained environments. Even a Raspberry Pi will work as a k3s device, as k3s comes in ARM64 and ARMv7 builds. Note that it does not run on Microsoft Windows or macOS, only on modern Linux such as Red Hat Enterprise Linux or Raspberry Pi OS. k3s requires no more than 512 MB to 1 GB of RAM, 1 CPU, and at least 4 GB of disk space for its cluster database. By default k3s uses SQLite for its internal database, although you can swap that for etcd, the conventional Kubernetes default, or for MySQL or Postgres (see the config sketch after this list). k3s is best used for edge computing, embedded scenarios, and tinkering.
- k0s – From Mirantis, k0s also comes distributed as a single binary for convenient deployment. Its resource demands are minimal – 1 CPU and 1GB of RAM for a single node – and it can run as a single node, a cluster, an air-gapped configuration, or inside Docker. If you want to get started quickly, you can grab the k0s binary and set it up as a service, or you can use a dedicated installation tool, k0sctl, to set up or upgrade multiple nodes in a cluster. Use cases for k0s include personal development and initial deployments to be expanded later.
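As a follow-up to the k3s datastore note above, k3s can read its server flags from a configuration file instead of the command line. The sketch below assumes the documented --datastore-endpoint flag and a reachable MySQL instance; the connection string and token are placeholders.

```yaml
# /etc/rancher/k3s/config.yaml – flags read by `k3s server` at startup
# Swaps the default embedded SQLite datastore for an external MySQL database.
datastore-endpoint: "mysql://k3s:changeme@tcp(10.0.0.10:3306)/k3s"   # placeholder credentials and host
token: "shared-cluster-secret"   # placeholder token shared by additional server nodes
```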
Kubernetes local installation
- Local Kubernetes for Linux — MiniKube vs MicroK8s – 2020
- Comparing Local Kubernetes Development Solutions – June 2022
- WSL+Docker: Kubernetes on the Windows Desktop – May 2020
- Building a Kubernetes cluster with WSL2 on Windows
- Run Kubernetes on Windows 10 using WSL 2 – Docker Desktop for Windows integrates with WSL and can create a Kubernetes cluster using Docker container nodes.
- Setting up Minikube and Accessing Minikube Dashboard Remotely
Minikube
Microservices demo
microservices-demo
Sample cloud-first application with 10 microservices showcasing Kubernetes, Istio, and gRPC.
example-voting-app
Example distributed app composed of multiple containers for Docker, Compose, Swarm, and Kubernetes.
Google Kubernetes
CKAD-exercises
A set of exercises to prepare for the Certified Kubernetes Application Developer (CKAD) exam by the Cloud Native Computing Foundation.
Kubernetes monitoring
- Monitoring a Kubernetes Cluster using Prometheus and Grafana
- Simplified Kubernetes Monitoring with Minikube, Helm, Prometheus, and Grafana
- How to Setup Prometheus Monitoring On Kubernetes Cluster
- Kubernetes Monitoring: Effective Cluster Tracking with Prometheus
- HOWTO install Prometheus & Grafana on a single host with microk8s
Kubernetes in the real world
- Lesson learned while scaling Kubernetes cluster to 1000 pods in AWS EKS
- Comparing the Top Eight Managed Kubernetes Providers
Kubernetes video tutorials
Deployment vs StatefulSet
- Kubernetes Deployment vs. StatefulSets
- Kubernetes | Deployments V/s StatefulSets
- Stateless vs. Stateful Kubernetes
- K8s: Deployments vs StatefulSets vs DaemonSets
- Kubernetes StatefulSet – Examples & Best Practices
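The articles above cover the trade-offs in detail; as a quick reference, a StatefulSet differs from a Deployment mainly in its stable pod identities (db-0, db-1, ...), the headless Service it must name via serviceName, and per-replica storage through volumeClaimTemplates. The following is a hedged sketch, with names, image, and sizes chosen only for illustration.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                          # pods get stable, ordered names: db-0, db-1, db-2
spec:
  serviceName: db-headless          # a headless Service with this name must exist separately
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:15        # illustrative image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:             # one PersistentVolumeClaim per replica, unlike a Deployment
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```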
Namespaces
- kubectl get all resources in namespace – an article introducing the basic concepts of Kubernetes namespaces
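To complement the article above, a minimal sketch of a Namespace and a Pod placed inside it (names are illustrative); resources created without an explicit namespace end up in default.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a              # illustrative namespace name
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: team-a         # places the Pod in the namespace defined above
spec:
  containers:
    - name: demo
      image: nginx:1.25     # illustrative image
```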
Kubernetes controllers
AWS EKS
- Lesson learned while scaling Kubernetes cluster to 1000 pods in AWS EKS
- Integrate AWS IAM with Kubernetes RBAC in an Amazon EKS cluster
Resource management in Kubernetes
- Collecting metrics with built-in Kubernetes monitoring tools
- Resource Management for Pods and Containers – When you specify a Pod, you can optionally specify how much of each resource a container needs. The most common resources to specify are CPU and memory (RAM); there are others. When you specify the resource request for containers in a Pod, the kube-scheduler uses this information to decide which node to place the Pod on. When you specify a resource limit for a container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. The kubelet also reserves at least the request amount of that system resource specifically for that container to use.
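A minimal sketch of the requests/limits mechanism described above; the image and the values are illustrative, not recommendations.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: app
      image: nginx:1.25       # illustrative image
      resources:
        requests:             # used by the kube-scheduler to pick a node with enough capacity
          cpu: "250m"         # a quarter of a CPU core
          memory: "128Mi"
        limits:               # enforced by the kubelet at runtime
          cpu: "500m"
          memory: "256Mi"
```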
Important note on the registry
- registry.k8s.io: faster, cheaper and Generally Available (GA) – Nov. 28, 2022 – Starting with Kubernetes 1.25, the project’s container image registry has changed from k8s.gcr.io to registry.k8s.io. This new registry spreads the load across multiple cloud providers and regions, functioning as a sort of content delivery network (CDN) for Kubernetes container images. This change reduces the project’s reliance on a single entity and provides a faster download experience for a large number of users (an illustrative image reference follows this list).
- k8s.gcr.io is hosted on a custom Google Container Registry (GCR) domain that was set up solely for the Kubernetes project. This has worked well since the inception of the project, and we thank Google for providing these resources, but today there are other cloud providers and vendors that would like to host images to provide a better experience for the people on their platforms. In addition to Google’s renewed commitment to donate $3 million to support the project’s infrastructure, Amazon announced a matching donation during their KubeCon NA 2022 keynote in Detroit. This will provide a better experience for users (closer servers = faster downloads) and will reduce the egress bandwidth and costs from GCR at the same time. registry.k8s.io will spread the load between Google and Amazon, with other providers to follow in the future.
- Container images for Kubernetes releases from 1.25 onward are no longer published to k8s.gcr.io, only to registry.k8s.io.
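In practice the change only affects the registry host in image references. The snippet below uses the project’s pause image as an example; the tag is illustrative.

```yaml
# Old reference (no longer receives images for Kubernetes 1.25 and later):
#   image: k8s.gcr.io/pause:3.9
apiVersion: v1
kind: Pod
metadata:
  name: registry-demo
spec:
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9   # current registry host; tag is illustrative
```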
Kubernetes networking
- A Guide to the Kubernetes Networking Model
- Kubernetes 101, part VIII, networking fundamentals
- How Kubernetes Networking Works – Under the Hood
- Tracing the path of network traffic in Kubernetes
- Understanding kubernetes networking: pods
- Comparing Kubernetes Container Network Interface (CNI) providers – CNI is a network framework that allows dynamic configuration of networking resources through a set of specifications and libraries written in Go.
- Kubernetes and CNI: technical analysis and comparison of the most widely used plugins
- Kubernetes Journey — Up and running out of the cloud — flannel
- Kubernetes: Flannel networking
- How to Enable Multi-Interface Pods in a Kubernetes Environment
- CNI – the Container Network Interface on GitHub
- Multus CNI on GitHub – Multus CNI is a container network interface (CNI) plugin for Kubernetes that enables attaching multiple network interfaces to pods.
- Linen CNI plugin on GitHub – Linen provides a convenient way to set up networking between pods across nodes. To support multi-host overlay networking and large-scale isolation, VXLAN tunnel end points (VTEPs) are used instead of GRE. Linen creates an OVS bridge and adds it as a port to the Linux bridge.
- How to Use MetalLB in BGP Mode – MetalLB is an open source project that provides the ability to create LoadBalancer-type Kubernetes Services on top of a bare-metal OpenShift/Kubernetes cluster, giving the same user experience one would get from a public cloud provider such as AWS, GCP, or Azure (see the sketch after this list).
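A hedged sketch of MetalLB’s BGP mode using its custom resources; the pool addresses, ASNs, and peer address are placeholders, and the exact API versions should be checked against the MetalLB release in use.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: bgp-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.0.2.0/24                 # addresses MetalLB may assign to LoadBalancer Services (placeholder)
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: tor-router
  namespace: metallb-system
spec:
  myASN: 64500                     # ASN MetalLB speaks as (placeholder)
  peerASN: 64501                   # ASN of the upstream router (placeholder)
  peerAddress: 192.0.2.1           # upstream router address (placeholder)
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: bgp-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - bgp-pool                     # advertise the pool above to the BGP peer
```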
Kubernetes-related projects
- KubeVirt.io – KubeVirt allows your virtual machine workloads to run as pods inside a Kubernetes cluster, so you can manage them with Kubernetes without having to convert them to containers. KubeVirt provides a unified development platform where developers can build, modify, and deploy applications residing in both application containers and virtual machines in a common, shared environment (a minimal VirtualMachine sketch follows this list).
- Bringing Your VMs to Kubernetes With KubeVirt – This tutorial guides you through the installation of KubeVirt with Minikube. Minikube allows you to run Kubernetes on your local computer.
- Enabling New Features with Kubernetes for NFV
- Kubeflow – The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable. Its goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures. Anywhere you are running Kubernetes, you should be able to run Kubeflow.
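For a taste of the KubeVirt API mentioned above, here is a minimal VirtualMachine manifest along the lines of the upstream demo examples; the name, memory size, and container disk image are illustrative.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: false                   # create the object now, start the VM later (e.g. with virtctl)
  template:
    metadata:
      labels:
        kubevirt.io/vm: demo-vm
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 128Mi          # illustrative sizing
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo   # demo image published by the KubeVirt project
```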
Helm
Application development on Kubernetes is a complex activity: for each application you have to install, manage, and update hundreds of configuration items. Helm simplifies this process by automating cluster configuration tasks. This package manager for Kubernetes acts as a shareable, repeatable system that uses individual YAML manifest files to define and deploy applications. Helm can be thought of as a templating tool that keeps containers consistent and establishes how an application’s specific requirements are met. The same configuration framework can be applied to multiple instances by substituting values, according to the priorities of each specific configuration. Helm is an open source project that has reached the Graduated maturity level within the Cloud Native Computing Foundation (CNCF).
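A minimal illustration of the templating idea described above: a chart template that references values, plus a default values file. Field names and values are illustrative; installing the same chart twice with different values files yields two independent releases.

```yaml
# templates/deployment.yaml – {{ ... }} placeholders are resolved by `helm install` / `helm upgrade`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
---
# values.yaml – defaults, overridable per instance with -f or --set
replicaCount: 2
image:
  repository: nginx
  tag: "1.25"
```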
- Helm – The package manager for Kubernetes
- What is Helm?
Kubernetes and Infrastructure as Code
Kubernetes and HPC
Cloud Native applications and Kubernetes
- Cloud-Native Network Function and Kubernetes Part 1 — Introduction
- How CI/CD pipeline works with Kubernetes?