kube-apiserver
The kube-apiserver is the front end of the Kubernetes control plane: it exposes the Kubernetes API that users and all other components talk to. It is designed to scale horizontally, meaning you can deploy and run multiple instances of kube-apiserver and balance traffic between them, ensuring high availability and reliability of the Kubernetes API.
Manual Installation
You can download the kube-apiserver binary from the kube-apiserver releases and run it manually on your control-plane node. However, this method is not recommended for production environments as it requires manual configuration and management of the kube-apiserver process.
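For reference, a manual setup might look something like the sketch below. The version, platform, and flag values are assumptions, and a real deployment needs many more flags (certificates, authorization mode, and so on):

```shell
# Download a kube-apiserver binary (version and architecture are assumptions)
curl -LO https://dl.k8s.io/v1.30.0/bin/linux/amd64/kube-apiserver
chmod +x kube-apiserver

# Minimal, insecure sketch: point the API server at a local etcd
./kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.96.0.0/12
```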
kubeadm
If you’re using kubeadm to set up your Kubernetes cluster, kube-apiserver will be automatically deployed as a static pod on the control-plane node. You can use the kubectl get pods -n kube-system command to find the kube-apiserver pod. Because it runs as a static pod, it is managed directly by the kubelet and will automatically restart if it crashes.

Guide to check kube-apiserver status
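On a kubeadm cluster you can verify this yourself; the manifest path below is the kubeadm default:

```shell
# Find the kube-apiserver static pod
kubectl get pods -n kube-system

# Static pod manifests live on the control-plane node itself;
# the kubelet watches this directory and restarts the pod if it dies
cat /etc/kubernetes/manifests/kube-apiserver.yaml
```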
Process flow of getting data from the cluster
You can interact with kube-apiserver by calling the Kubernetes API directly as well.
A kubectl request from the user first goes to the kube-apiserver, which authenticates and validates the request, reads the requested data from etcd, and returns the response to the user.
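As a sketch, the same data kubectl shows can be fetched straight from the API; kubectl get --raw reuses your kubeconfig credentials, and kubectl proxy opens an authenticated local endpoint:

```shell
# Ask the kube-apiserver for the pods in the default namespace
kubectl get --raw /api/v1/namespaces/default/pods

# Or open an authenticated local proxy and use plain curl
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/pods
```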
Process flow of creating a new pod
All these steps are very similar when a change happens in the cluster. The kube-apiserver will always be the central point of communication between all the components in the cluster.
Create Pod via API
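A minimal sketch of creating a pod through the API directly (the pod name and image are examples): the kube-apiserver validates the request and persists the object in etcd, the kube-scheduler assigns it a node, and the kubelet on that node creates the containers.

```shell
# Open an authenticated local proxy to the kube-apiserver
kubectl proxy --port=8001 &

# POST a pod manifest to the API
curl -X POST http://localhost:8001/api/v1/namespaces/default/pods \
  -H "Content-Type: application/json" \
  -d '{
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "nginx"},
        "spec": {"containers": [{"name": "nginx", "image": "nginx"}]}
      }'
```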
etcd
You can refer to the etcd documentation for more details about etcd, including its architecture, features, and how to use it effectively in a Kubernetes environment.
kube-apiserver interacts with etcd to read and write data about the cluster’s state, making it an essential part of the Kubernetes architecture.
All the information you see when you run the kubectl get command comes from the etcd server. Remember, all changes made to the cluster, such as adding nodes or deploying pods, are recorded in the etcd server.
Manual Installation
You can refer to the etcd releases to download the etcd binary and follow the etcd installation instructions to set up an etcd server on your control-plane node. This method is suitable for learning and testing, but it is not recommended for production environments due to the complexity of managing and maintaining an etcd cluster manually.

There is one important configuration option to note when setting up etcd manually: --advertise-client-urls 'http://{IPADDRESS}:2379'. This option specifies the address that etcd advertises to clients; the default client port for etcd is 2379. When configuring the kube-apiserver, you need to ensure that it is set to connect to this URL, as the kube-apiserver will use it to communicate with the etcd server.
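A minimal single-node sketch of starting etcd with this option; the IP address here is an assumption:

```shell
# Listen on all interfaces, advertise the node's address to clients
./etcd \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://192.168.1.10:2379
```

The kube-apiserver would then be started with --etcd-servers=http://192.168.1.10:2379 pointing at that same URL.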
kubeadm
If you’re using kubeadm to set up your Kubernetes cluster, the etcd server will be automatically deployed as a static pod on the control-plane node. You can use the kubectl get pods -n kube-system command to find the etcd pod. Because etcd runs as a static pod, it is managed by the kubelet and will automatically restart if it crashes.

Guide to check etcd status

You can also use the etcdctl command-line tool to interact with the etcd server directly. For example, you can run the following command to get all keys stored by Kubernetes in etcd. You will notice that the root directory is the registry, and below it are the various Kubernetes objects such as nodes, pods, and deployments, since etcd stores data in a hierarchical directory structure.

Get all keys from etcd
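On a kubeadm cluster the command looks roughly like this; the certificate paths are the kubeadm defaults:

```shell
# List every key Kubernetes stores in etcd (note the /registry root)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get / --prefix --keys-only
```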
kube-controller-manager
Ideally, each controller should run in its own process, but to reduce complexity, they’re all compiled into a single binary and run in a single process. This design choice simplifies the deployment and management of the controllers while still allowing for scalability and reliability.
- Node Controller - It is responsible for monitoring the nodes and taking action when a node goes down or becomes unresponsive.
- Replication Controller - It is responsible for ensuring that the desired number of pod replicas are running at any given time.
- Endpoints Controller - It is responsible for managing the endpoints that are used to connect services to pods.
- Service Account & Token Controllers - They are responsible for managing service accounts and tokens that are used for authentication and authorization in the cluster.
Manual Installation
You can download the kube-controller-manager binary from the kube-controller-manager releases and run it manually on your control-plane node. However, this method is not recommended for production environments as it requires manual configuration and management of the kube-controller-manager process.
kubeadm
If you’re using kubeadm to set up your Kubernetes cluster, the kube-controller-manager will be automatically deployed as a static pod on the control-plane node. You can use the kubectl get pods -n kube-system command to find the kube-controller-manager pod. Because it runs as a static pod, it is managed by the kubelet and will automatically restart if it crashes.

Guide to check kube-controller-manager status
Node Controller
The node controller is responsible for monitoring the nodes in the cluster and taking action when a node goes down or becomes unresponsive. It does this by watching for events related to the nodes and updating the status of the nodes in the etcd cluster.
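You can see the node status the node controller maintains with kubectl; the node name below is an example:

```shell
# Overall node health (STATUS column: Ready / NotReady)
kubectl get nodes

# The detailed conditions the node controller keeps up to date
kubectl describe node controlplane | grep -A 6 "Conditions:"
```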
Replication Controller
The replication controller is responsible for ensuring that the desired number of pod replicas are running at any given time. It does this by watching for events related to the pods and updating the status of the pods in the etcd cluster. If the replication controller detects that a pod is not running or has been deleted, it will create a new pod to replace it, ensuring that the desired number of replicas is maintained.
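In modern clusters this behaviour is handled by the ReplicaSet controller (Deployments create ReplicaSets), but the effect is the same and easy to observe; the names and image below are examples:

```shell
# Run three replicas
kubectl create deployment web --image=nginx --replicas=3
kubectl get pods -l app=web

# Delete one pod; the controller notices and creates a replacement
kubectl delete pod <one-of-the-web-pods>
kubectl get pods -l app=web   # back to 3 replicas
```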
kube-scheduler
The kube-scheduler decides which node each new pod should run on. Remember, it only makes the placement decision; the kubelet is the one that actually creates the pod on the chosen node. When choosing a node, the scheduler takes several factors into account:
- Resource Requirements - CPU and memory requirements of the pod
- Constraints - Hardware, software, and policy constraints
- Affinity and Anti-affinity - Rules about which pods should be co-located or separated based on node or other pod labels
- Data Locality - Scheduling pods close to the data they need to access
- Inter-workload Interference - Avoiding scheduling pods that may interfere with each other on the same node
- Deadlines - Scheduling pods based on their deadlines and priorities
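The resource-requirements input, for example, is just the requests section of a pod spec; the pod name and values here are examples:

```shell
# A pod with explicit CPU/memory requests that the scheduler will consider
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"
        memory: "128Mi"
EOF
```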
Manual Installation
You can download the kube-scheduler binary from the kube-scheduler releases and run it manually on your control-plane node. However, this method is not recommended for production environments as it requires manual configuration and management of the kube-scheduler process.
kubeadm
If you’re using kubeadm to set up your Kubernetes cluster, the kube-scheduler will be automatically deployed as a static pod on the control-plane node. You can use the kubectl get pods -n kube-system command to find the kube-scheduler pod. Because it runs as a static pod, it is managed by the kubelet and will automatically restart if it crashes.

Guide to check kube-scheduler status
Nodes Ranking
To determine which node is the best fit for a pod, the kube-scheduler ranks the candidate nodes. Here is an example: suppose we have one pod with a CPU requirement of 10, and three nodes with 4, 15, and 25 free CPUs. The kube-scheduler goes through two phases to identify the best node and schedule the pod on it.
- (Filter nodes) The kube-scheduler filters out the nodes that do not fit the requirements. In this case, node 1 is filtered out because it only has 4 CPUs.
- (Rank nodes) Using a priority function, the kube-scheduler scores each remaining node by calculating how much free CPU would be left after the pod is placed. The pod is placed on the node with the highest score.
  - Score on node 2 = 15 - 10 = 5
  - Score on node 3 = 25 - 10 = 15 (Win)
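The two phases can be sketched as a toy script; the pod request and the node capacities are assumptions chosen for illustration:

```shell
pod_cpu=10                                     # CPU requested by the pod
declare -A capacity=([node1]=4 [node2]=15 [node3]=25)

best=""
best_score=-1
for node in node1 node2 node3; do
  free=$(( capacity[$node] - pod_cpu ))        # CPU left after placement
  if (( free < 0 )); then continue; fi         # filter phase: node cannot fit the pod
  if (( free > best_score )); then             # rank phase: most free CPU wins
    best_score=$free
    best=$node
  fi
done

echo "schedule on $best (score $best_score)"
```

Node 1 is filtered out, node 2 scores 5, and node 3 wins with a score of 15. The real scheduler combines many scoring plugins, not just free CPU.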
