Day 30 : Kubernetes Architecture

What is Kubernetes? Why do we call it k8s?

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation (CNCF).

Kubernetes is often referred to as "k8s" for short. The name "Kubernetes" comes from the Greek word for "helmsman" or "pilot," and the "8" in "k8s" refers to the eight letters between the "K" and the "s" in the name. The name was chosen because it represents the idea of a platform that helps guide and manage the deployment of applications, much like a helmsman guides a ship.

The "k8s" abbreviation was first used by the Kubernetes community as a way to shorten the name of the project and make it easier to reference in conversation and in code. It's now widely used by developers, engineers, and other professionals who work with Kubernetes on a regular basis.

What are the benefits of using k8s?

There are many benefits to using Kubernetes (k8s) for container orchestration. Some of the main benefits include:

  1. Scalability: Kubernetes allows you to easily scale your applications up or down as needed, without requiring manual intervention. This means you can quickly respond to changes in demand and only run the number of instances you need, which can save resources and reduce costs.

  2. High availability: Kubernetes can automatically detect and recover from node failures, which means your applications will continue to run even if one or more of your servers goes down. This provides a high level of availability and reliability for your applications.

  3. Flexibility: Kubernetes supports any container runtime that implements the Container Runtime Interface (CRI), such as containerd and CRI-O. This means you can use the container runtime that best fits your needs, and you're not locked into a specific technology.

  4. Automation: Kubernetes provides a rich set of APIs and tools for automating the deployment, scaling, and management of applications. This means you can automate many of the repetitive tasks involved in managing your applications, which can save time and reduce the risk of human error.

  5. Portability: Kubernetes is a cloud-agnostic platform, which means you can run your applications on any infrastructure that supports Kubernetes, whether that's on-premises, in the cloud, or across multiple clouds. This provides a high level of portability and flexibility for your applications.

  6. Security: Kubernetes provides a number of security features, including network policies, secret management, and role-based access control (RBAC). These features help ensure that your applications are secure and that access to sensitive data is restricted to authorized users.

  7. Community support: Kubernetes is an open-source project with a large and active community of developers, users, and contributors. This means there are many resources available for learning and troubleshooting, as well as a wealth of community-created tools and plugins that can help you customize and extend Kubernetes to meet your specific needs.
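The scalability and automation benefits above are usually expressed declaratively rather than by hand. As a minimal sketch (assuming a Deployment named `web` already exists in the cluster), a HorizontalPodAutoscaler can keep it between 2 and 10 replicas based on observed CPU usage:

```yaml
# Hypothetical example: autoscale a Deployment named "web"
# between 2 and 10 replicas, targeting 50% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```

Once applied, Kubernetes adjusts the replica count continuously with no manual intervention, which is exactly the scalability and automation story described above.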

Explain the architecture of Kubernetes.

Kubernetes is a complex system, but at a high level, its architecture can be broken down into several main components.

  1. Nodes: Nodes are the machines (virtual or physical) that run your application containers. Each node runs a subset of your application's pods, and the Kubernetes scheduler distributes pods across the nodes to optimize resource utilization.

  2. Pods: A pod is the basic execution unit in Kubernetes. It's a logical host for one or more containers: containers in the same pod are always scheduled together, share a network namespace, and can share storage volumes. Each pod represents a single instance of an application.

  3. ReplicaSets: A ReplicaSet ensures that a specified number of replicas (i.e., copies) of a pod are running at any given time. If a pod fails or is terminated, the ReplicaSet will create a new replica to replace it. This ensures that your application is always available, even if one or more of your nodes fail.

  4. Deployments: A Deployment is a way to manage the rollout of new versions of an application. It allows you to specify the desired state of your application (e.g., the number of replicas, the container images to use, etc.), and Kubernetes will automatically update the application to match that state, typically through a rolling update. This makes it easy to roll out new features or bug fixes without downtime.

  5. Services: A Service is a logical abstraction over a set of pods that provides a network identity and load balancing. It allows you to access your application through a stable IP address and DNS name, even as the underlying pods change. Services can also be used to expose your application to the outside world, such as through a load balancer or ingress controller.

  6. Persistent Volumes (PVs) and Persistent Volume Claims (PVCs): PVs are storage resources that are provisioned in the cluster, while PVCs are requests for storage that Kubernetes binds to a matching PV. This allows you to store data outside of containers, which is useful for data that needs to persist across container restarts or node failures.

  7. ConfigMaps and Secrets: ConfigMaps store configuration data as key-value pairs, while Secrets store sensitive data such as passwords or API keys. Both can be used to decouple configuration and secrets from the application code, making it easier to manage and rotate them.

  8. Namespaces: Namespaces provide a way to partition resources and isolate workloads in a cluster. This allows you to run multiple applications in the same cluster without conflicts, or to create separate environments for development, staging, and production.

  9. Clusters: A Kubernetes cluster is a set of nodes that run Kubernetes and are managed by a central Kubernetes control plane. The control plane consists of components such as the API server, controller manager, and scheduler, which work together to manage the state of the cluster.
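Most of the objects above are written as YAML manifests. As a minimal sketch (the names and image here are hypothetical), the following combines a Deployment of three nginx replicas with a Service that load-balances across them:

```yaml
# Hypothetical Deployment: keep three replicas of an nginx pod running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
# Service: a stable virtual IP and DNS name that load-balances
# across whichever pods currently match the app=web selector.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

Applying this (e.g., with `kubectl apply -f web.yaml`) asks the control plane to create a ReplicaSet behind the scenes and keep three pods running; the Service keeps resolving to healthy pods even as individual pods are replaced.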

What is a Control Plane?

The control plane is a critical component of a Kubernetes cluster. It's the part of the cluster that manages the state of the cluster and makes decisions about how to deploy and run applications.

The control plane consists of several components: the API server, etcd, the controller manager, and the scheduler. The API server is the central management interface for the cluster; every component and user interacts with the cluster through it. etcd is the consistent key-value store that holds the cluster's state. The controller manager runs the control loops (such as the ReplicaSet and node controllers) that continuously reconcile the cluster's actual state toward the desired state. The scheduler decides which node should run each newly created pod, based on resource requirements and scheduling constraints.

The control plane is responsible for a number of critical functions, including:

  • Managing the state of the cluster: The control plane keeps track of the state of the cluster, including which nodes are running, which pods are deployed, and which services are available.

  • Deploying and scaling applications: The control plane is responsible for deploying and scaling applications, based on the desired state specified in configuration files or through the Kubernetes API.

  • Ensuring high availability: The control plane ensures that the cluster is highly available, by automatically detecting and recovering from node failures, and by ensuring that there are always enough replicas of each application running.

  • Providing a management interface: The control plane provides a management interface for the cluster, through which administrators can monitor the state of the cluster, deploy new applications, and troubleshoot issues.

What is the difference between kubectl and the kubelet?

Kubectl and the kubelet are two core components for working with Kubernetes. They serve different purposes and have different responsibilities within a Kubernetes cluster.

Kubectl is the command-line tool for interacting with Kubernetes resources. It provides a unified way to manage and access your cluster, including running commands, creating and managing resources, and accessing cluster information. Kubectl is the primary way to interact with your Kubernetes cluster.

On the other hand, the kubelet is an agent that runs on each node in your cluster. It watches the API server for pods assigned to its node and ensures that the containers described in those pod specs are running and healthy, reporting node and pod status back to the control plane.

Because the kubelet talks to the container runtime on its node (through the Container Runtime Interface), it provides the layer of abstraction between the runtime environment and the Kubernetes API.

In other words, kubectl is the command-line interface for interacting with the cluster through the API server, while the kubelet is the node-level agent that actually runs and monitors your containers.

Explain the role of the API server.

The API server is the front-end of the Kubernetes control plane and is the central hub for all communication between users, components, and external systems. It exposes a RESTful API that allows users to create, read, update, and delete (CRUD) Kubernetes objects. The API server also validates and authenticates requests, and performs admission control checks to ensure that requests are authorized and do not violate cluster policies.

The API server performs the following key roles:

  • Provides a unified interface for managing Kubernetes clusters. The API server is the single point of entry for all requests to manage a Kubernetes cluster. This allows users to interact with the cluster using a consistent set of commands and tools, regardless of the underlying implementation.

  • Validates and authenticates requests. The API server ensures that requests are well-formed and that they are authorized to be performed. This helps to protect the cluster from unauthorized access and malicious activity.

  • Performs admission control checks. The API server can be configured to perform admission control checks, which are used to validate requests before they are applied to the cluster. This can be used to enforce cluster policies, such as resource quotas or security constraints.

  • Stores the desired state of the cluster. The API server stores the desired state of the cluster, which is the set of Kubernetes objects that the cluster should maintain. This information is used by the other components of the Kubernetes control plane to ensure that the cluster is in the desired state.

  • Provides a watch mechanism for monitoring changes to the cluster. The API server provides a watch mechanism that allows users to be notified of changes to the cluster state. This can be used to implement event-driven workflows.
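The authorization checks described above are commonly configured through RBAC objects, which the API server evaluates on every request. As a minimal sketch (the namespace and user name here are hypothetical), the following grants read-only access to pods:

```yaml
# Hypothetical Role: read-only access to pods in the "dev" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]          # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Bind the Role to a (hypothetical) user named "jane".
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With this in place, a request from "jane" to delete a pod in "dev" would fail the API server's authorization check, while `get`, `list`, and `watch` requests would be allowed.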

Overall, Kubernetes provides a powerful and flexible platform for managing containerized applications, and its architecture is designed to deliver scalability, reliability, and automation for modern software development and deployment practices.


Thank you for following along until here. See you in the next one.