Introduction to AKS
Creating a Kubernetes cluster and deploying your applications into AKS is a good milestone, but it is not enough for a production-ready cluster. To make Kubernetes ready for production, we need to make the right choices for the networking plugin, TLS certificates, and pod communication. Is the API server protected using a private endpoint? Is egress traffic filtered using a firewall? There are many more questions like these. In this series of articles, we will dive deeper into Azure Kubernetes Service (AKS) architecture and the choices we need to make for a production-ready cluster.
In this article, we will start with AKS Architecture.
An AKS cluster is the same as a Kubernetes cluster, but hosted and configured as a managed service within Azure.
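As a starting point, a basic AKS cluster can be created with the Azure CLI. This is a minimal sketch; the resource group name, cluster name, and region below are placeholders:

```shell
# Create a resource group to hold the cluster (names and region are placeholders)
az group create --name myResourceGroup --location eastus

# Create a basic AKS cluster with two worker nodes.
# Azure provisions and manages the control plane for us.
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --generate-ssh-keys
```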
Control Plane
The control plane contains four components: the API server, the scheduler, the cloud controller manager, and the controller manager.
- API Server: — The entry point, or front-end service, for managing the Kubernetes cluster. The API server can be exposed on a public or a private endpoint, or via VNET integration. Its endpoint is used by the worker nodes as well as by cluster administrators and tooling.
- Cloud Controller Manager: — Connects to the Azure API and is used to provision Azure resources, because along with the cluster we will also create other resources, such as load balancers.
- Controller Manager: — Runs the core control loops that watch the cluster state through the API server and reconcile the actual state with the desired state (for example, the Deployment and ReplicaSet controllers).
- Scheduler: — Watches for newly created pods that have no node assigned and selects a suitable node for them to run on, based on resource requests and constraints.
- etcd: — Along with these components, we have a fifth component, the etcd database, which stores the cluster state. In AKS, etcd does not run inside the control plane virtual machines; the etcd database and its replicas are managed by Azure outside the control plane.
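Even though the control plane is managed, we can still inspect its public-facing endpoint. A small sketch, assuming the placeholder resource group and cluster names used earlier:

```shell
# Show the API server endpoint (FQDN) of a managed cluster
az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --query fqdn --output tsv

# Once credentials are merged, kubectl talks to that same endpoint
kubectl cluster-info
```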
Worker Node
Worker nodes are virtual machines in Azure; in Kubernetes terms we call them nodes, grouped into node pools. We might have one or more node pools within the cluster. There are two types of node pool: the system node pool and the user node pool.
- System Node Pool: — The system node pool is dedicated to the Kubernetes platform pods, i.e. the pods running in the kube-system namespace, such as CoreDNS and kube-proxy.
Why do we need a separate system node pool?
a. It is better to dedicate a node pool to these pods because they are critical to the platform.
b. It allows the system node pool to have a different VM SKU and OS from the user node pools.
- User Node Pool: — The user node pool is used to host our own applications.
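The two pool types above map directly to the `--mode` flag when adding node pools. A sketch, with placeholder resource group, cluster, and pool names:

```shell
# Add a user node pool for application workloads (names are placeholders)
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool1 \
  --mode User \
  --node-count 3

# Add a dedicated system node pool for the platform pods
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name syspool1 \
  --mode System \
  --node-count 2
```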
Inside each node, we will have containerd (the container runtime that runs the pods), the kubelet (an agent that runs as a process on the node and communicates with the API server to manage pods), and kube-proxy (which maintains network rules on each node so that traffic to Kubernetes Services is routed to the right pods).
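These node components can be observed with kubectl once we are connected to the cluster; for example, the container runtime shows up in the wide node listing:

```shell
# List nodes with extra columns; the CONTAINER-RUNTIME column
# shows containerd on AKS nodes
kubectl get nodes -o wide

# Inspect a single node in detail, including its kubelet version
# (<node-name> is a placeholder for one of the names listed above)
kubectl describe node <node-name>
```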
How do we deploy applications into the Kubernetes cluster?
Cluster admins or DevOps pipelines reach the API server endpoint from the command line via kubectl. On top of kubectl access, we can secure the cluster using Azure Active Directory authentication.
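A typical connection flow looks like the sketch below, using the placeholder names from earlier; the Azure AD sign-in step assumes AAD integration was enabled on the cluster:

```shell
# Merge the cluster credentials into the local kubeconfig
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# kubectl now talks to the API server endpoint; with Azure AD
# integration enabled, this triggers an Azure AD sign-in flow
kubectl get pods --all-namespaces
```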
The communication between the worker nodes and the control plane goes through a public IP in a public cluster (the traffic traverses the internet), or via a private endpoint in a private cluster.
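A private cluster is a creation-time choice; a minimal sketch with placeholder names:

```shell
# Create a private AKS cluster: the API server is exposed only on a
# private endpoint inside the cluster's virtual network
az aks create \
  --resource-group myResourceGroup \
  --name myPrivateAKS \
  --enable-private-cluster \
  --generate-ssh-keys
```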
How do we expose applications to end users?
Kubernetes exposes applications to end users using a Load Balancer service, an Ingress Controller, or the Application Gateway Ingress Controller (AGIC), through a public IP or a domain name.
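The simplest of these options, a LoadBalancer service, can be sketched as follows; the deployment name and image are placeholders:

```shell
# Deploy a sample application (name and image are placeholders)
kubectl create deployment hello-web --image=nginx

# Expose it through an Azure Load Balancer with a public IP
kubectl expose deployment hello-web --type=LoadBalancer --port=80

# The EXTERNAL-IP column shows the public IP once Azure provisions it
kubectl get service hello-web
```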