https://youtu.be/oqWgMc9yYcc
Resources
- Please refer to Chapter 1 of the training course
Copy of #100DaysOfKubernetes
Main Node
Where does the orchestration come in?
- Managed by several operators and controllers (we will look at operators and controllers later on). Operators make use of custom resources to manage an application and its components.
- "Each controller interrogates the kube-apiserver for a particular object state, modifying the object until the declared state matches the current state." In short, controllers are used to ensure a process is happening in the desired way.
- "The ReplicaSet is a controller which deploys and restarts containers, Docker by default, until the requested number of containers is running." In short, its purpose is to ensure a specific number of nodes are running
There are several other API objects which can be used to deploy pods. A DaemonSet will ensure that a single pod is deployed on every node. These are often used for logging and metrics pods. A StatefulSet can be used to deploy pods in a particular order, such that following pods are only deployed if previous pods report a ready status.
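The declared-vs-current-state idea can also be observed from the client side. The sketch below is a minimal example, assuming the official "kubernetes" Python client, a working kubeconfig, and the "default" namespace; it compares each ReplicaSet's requested replica count with the number of ready pods it currently reports:

```python
# Sketch: compare declared vs. current state for ReplicaSets
# (assumes the official "kubernetes" Python client and a working ~/.kube/config).
from kubernetes import client, config

config.load_kube_config()   # use the local kubeconfig credentials
apps = client.AppsV1Api()

# The "default" namespace is just an assumption for the example.
for rs in apps.list_namespaced_replica_set(namespace="default").items:
    desired = rs.spec.replicas or 0
    ready = rs.status.ready_replicas or 0
    state = "in sync" if desired == ready else "reconciling"
    print(f"{rs.metadata.name}: desired={desired} ready={ready} ({state})")
```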
API objects can be used to know:
- What containerized applications are running (and on which nodes)
- The resources available to those applications
- The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance
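As a rough illustration of the first two points, the following sketch (again assuming the "kubernetes" Python client and a working kubeconfig) asks the kube-apiserver which pods are running, on which nodes, with what resource requests and restart policy:

```python
# Sketch: ask the kube-apiserver which pods run where, what resources they
# request, and how they restart (assumes the "kubernetes" Python client).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    requests = [c.resources.requests for c in pod.spec.containers]
    print(f"{pod.metadata.namespace}/{pod.metadata.name} "
          f"node={pod.spec.node_name} phase={pod.status.phase} "
          f"restartPolicy={pod.spec.restart_policy} requests={requests}")
```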
kube-apiserver
- Provides the front-end to the cluster's shared state through which all components interact
- The kube-apiserver is central to the operation of the Kubernetes cluster
- Handles internal and external traffic
- The only agent that connects to the etcd database
- Acts as the master process for the entire cluster
- Provides the outward-facing view of the cluster's state
- Each API call goes through three steps: authentication, authorization, and several admission controllers
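Because every component (and kubectl) interacts with the cluster through the kube-apiserver's REST API, even a plain HTTPS request works. The sketch below uses Python's requests library; the server address, token path, and CA bundle path are placeholder assumptions, not real values:

```python
# Sketch: a raw REST call to the kube-apiserver (every request passes through
# authentication, authorization, and admission before it is served).
# The server address, token path, and CA bundle below are placeholder assumptions.
import requests

APISERVER = "https://192.168.49.2:6443"        # placeholder control-plane endpoint
TOKEN = open("/path/to/token").read().strip()  # placeholder ServiceAccount token
CA_CERT = "/path/to/ca.crt"                    # placeholder cluster CA bundle

resp = requests.get(
    f"{APISERVER}/api/v1/namespaces",          # a core API group resource
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=CA_CERT,
)
resp.raise_for_status()
for ns in resp.json()["items"]:
    print(ns["metadata"]["name"])
```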
kube-scheduler
- The scheduler sees a request to run a container and schedules the container on the best-suited node
- When a new pod has to be deployed, the kube-scheduler uses an algorithm to determine which node the pod should be deployed to
- If the pod cannot be deployed, the kube-scheduler tries again based on the resources available across nodes
- A user can also determine which node the pod should be deployed to; this can be done through a custom scheduler (see the sketch after this list)
- Nodes that meet scheduling requirements are called feasible nodes.
- You can find more details about the scheduler on GitHub.
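The sketch below shows two ways of influencing placement yourself instead of leaving it entirely to the default kube-scheduler. It assumes the "kubernetes" Python client; the pod name, image, scheduler name, and node name are made-up examples:

```python
# Sketch: influencing pod placement yourself (assumes the "kubernetes" Python
# client; the scheduler name and node name are made-up examples).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="manually-placed"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="web", image="nginx:1.25")],
        # Option 1: hand the pod to a custom scheduler (hypothetical name).
        scheduler_name="my-custom-scheduler",
        # Option 2 (instead): pin the pod to a node and bypass scheduling
        # entirely, e.g. node_name="worker-1".
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)
```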
etcd Database
- The state of the cluster, networking, and other persistent information is kept in an etcd database
- etcd is a consistent and highly-available key value store used as Kubernetes' backing store for all cluster data
- Note that entries in the database are not modified in place; new values are appended at the end. Once an entry is deleted, it is marked for removal by a later compaction process. etcd works with curl and other HTTP libraries and provides reliable watch queries.
- Requests to update the database are all sent through the kube-apiserver. Each request carries a version number, which allows etcd to distinguish between requests. If two requests are sent simultaneously, the second is flagged as invalid with a 409 error, and only the update instructed by the first request is applied (a client-side sketch of this follows the list).
- Note that it has to be specifically configured
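The versioned-write behaviour shows up on the client as a 409 Conflict when an object changed between read and write. The sketch below assumes the "kubernetes" Python client; the "demo-settings" ConfigMap is a made-up example:

```python
# Sketch: versioned writes seen from a client. If another writer updated the
# object first, its resourceVersion no longer matches and the apiserver answers
# 409 Conflict (the ConfigMap name is a made-up example).
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

cm = v1.read_namespaced_config_map(name="demo-settings", namespace="default")
cm.data = cm.data or {}
cm.data["log-level"] = "debug"

try:
    # replace() sends the object back with the resourceVersion it was read at.
    v1.replace_namespaced_config_map(name="demo-settings",
                                     namespace="default", body=cm)
except ApiException as e:
    if e.status == 409:
        print("Conflict: the object changed since we read it; re-read and retry")
    else:
        raise
```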
Other Agents
- The kube-controller-manager is a core control loop daemon which interacts with the kube-apiserver to determine the state of the cluster. If the state does not match the desired state, the manager contacts the necessary controller to bring it back in line. It is also responsible for interacting with third-party cluster management and reporting tools. (A reduced sketch of this control-loop pattern follows this list.)
- The cluster has several controllers in use, such as endpoints, namespace, and replication. The full list has expanded as Kubernetes has matured. Remaining in beta as of v1.16, the cloud-controller-manager interacts with agents outside of the cloud. It handles tasks once handled by the kube-controller-manager, which allows faster changes without altering the core Kubernetes control process. Each kubelet must be started with the --cloud-provider=external setting passed to the binary.
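The following is a very reduced sketch of the control-loop pattern, assuming the "kubernetes" Python client and the "default" namespace: it watches pod events through the kube-apiserver but only prints them instead of reconciling any real state:

```python
# Sketch: the control-loop pattern used by controllers, reduced to a watch
# that reacts to pod events via the kube-apiserver (it only prints events
# instead of reconciling anything).
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default",
                      timeout_seconds=60):
    pod = event["object"]
    # A real controller would compare the observed state against the desired
    # state here and issue API calls until the two match.
    print(f'{event["type"]}: {pod.metadata.name} phase={pod.status.phase}')
```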
There are several add-ons which have become essential to a typical
production cluster, such as DNS services. Others are third-party
solutions where Kubernetes has not yet developed a local component, such
as cluster-level logging and resource monitoring.