
GCP Compute Services

  • Writer: Anand Nerurkar
  • Jun 30, 2022
  • 5 min read

GKE

Google Kubernetes Engine (GKE) provides a managed environment for deploying, managing, and scaling your containerized applications using Google infrastructure. The GKE environment consists of multiple machines (specifically, Compute Engine instances) grouped together to form a cluster.



Cluster orchestration with GKE

GKE clusters are powered by the Kubernetes open source cluster management system. Kubernetes provides the mechanisms through which you interact with your cluster. You use Kubernetes commands and resources to deploy and manage your applications, perform administration tasks, set policies, and monitor the health of your deployed workloads.

Kubernetes provides the following benefits:

· automatic management

· monitoring

· liveness probes for application containers

· automatic scaling

· rolling updates, and more.

When you run applications on a GKE cluster, you also benefit from the advanced cluster management features that Google Cloud provides.
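For example, the rolling updates and automatic scaling listed above can be driven with standard kubectl commands. A minimal sketch, assuming a Deployment named hello-app already exists in the cluster (the Deployment name and image tag are illustrative):

# Roll out a new image version; Kubernetes replaces Pods gradually (rolling update)
kubectl set image deployment/hello-app hello-app=us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0

# Watch the rollout progress, and roll back if something goes wrong
kubectl rollout status deployment/hello-app
kubectl rollout undo deployment/hello-app

# Autoscale the Deployment between 2 and 10 replicas based on CPU usage
kubectl autoscale deployment hello-app --min=2 --max=10 --cpu-percent=70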

GKE workloads

GKE works with containerized applications. These are applications packaged into platform independent, isolated user-space instances, for example by using Docker. In GKE and Kubernetes, these containers, whether for applications or batch jobs, are collectively called workloads. Before you deploy a workload on a GKE cluster, you must first package the workload into a container.
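As a rough sketch of that packaging step, a Docker image can be built and tested locally before it is pushed to a registry (the image name, tag, and port are illustrative and assume a Dockerfile in the current directory):

# Build the application into a container image
docker build -t hello-app:v1 .

# Run it locally to verify it works before pushing it to a registry
docker run --rm -p 8080:8080 hello-app:v1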


GKE supports the use of container images that are built with Docker, for example as part of a build and deploy pipeline. In GKE version 1.24 and later, however, Docker cannot manage the lifecycle of containers running on GKE nodes; the nodes use the containerd runtime instead.


Google Cloud provides continuous integration and continuous delivery tools to help you build and serve application containers. You can use Cloud Build to build container images (such as Docker) from a variety of source code repositories, and Artifact Registry or Container Registry to store and serve your container images.
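A minimal sketch of that pipeline with the gcloud CLI, assuming an Artifact Registry repository is used (the repository name, region, and image tag below are illustrative):

# One-time setup: create a Docker-format repository in Artifact Registry
gcloud artifacts repositories create my-repo --repository-format=docker --location=us-central1

# Build the image with Cloud Build and push it to Artifact Registry in one step
gcloud builds submit --tag us-central1-docker.pkg.dev/retail-project-1405/my-repo/hello-app:v1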

Modes of operation

GKE clusters have two modes of operation to choose from (example creation commands follow this list):

· Autopilot: Manages the entire cluster and node infrastructure for you. Autopilot provides a hands-off Kubernetes experience so that you can focus on your workloads and only pay for the resources required to run your applications. Autopilot clusters are pre-configured with an optimized cluster configuration that is ready for production workloads.

· Standard: Provides you with node configuration flexibility and full control over managing your clusters and node infrastructure. For clusters created using the Standard mode, you determine the configurations needed for your production workloads, and you pay for the nodes that you use.
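A minimal sketch of creating a cluster in each of these modes with the gcloud CLI (the cluster names, zone, and region are illustrative):

# Autopilot: GKE manages the nodes and infrastructure for you
gcloud container clusters create-auto autopilot-demo --region us-central1

# Standard: you choose and manage the node configuration yourself
gcloud container clusters create standard-demo --zone us-central1-c --num-nodes 3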



A cluster is the foundation of Google Kubernetes Engine (GKE): the Kubernetes objects that represent your containerized applications all run on top of a cluster.

In GKE, a cluster consists of at least one control plane and multiple worker machines called nodes. These control plane and node machines run the Kubernetes cluster orchestration system.

The following diagram provides an overview of the architecture for a zonal cluster in GKE:



[Diagram: architecture of a zonal GKE cluster]


Control plane

The control plane runs the control plane processes, including the Kubernetes API server, scheduler, and core resource controllers. The lifecycle of the control plane is managed by GKE when you create or delete a cluster. This includes upgrades to the Kubernetes version running on the control plane, which GKE performs automatically, or manually at your request if you prefer to upgrade earlier than the automatic schedule.

Control plane and the Kubernetes API

The control plane is the unified endpoint for your cluster. You interact with the cluster through Kubernetes API calls, and the control plane runs the Kubernetes API Server process to handle those requests. You can make Kubernetes API calls directly via HTTP/gRPC, or indirectly, by running commands from the Kubernetes command-line client (kubectl) or by interacting with the UI in the Google Cloud console.
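For example, the same query can be made indirectly through kubectl or directly as a raw Kubernetes API call; a small sketch (the kube-system namespace is just an example):

# Indirectly, through the kubectl command-line client
kubectl get pods --namespace kube-system

# Directly, as a raw API request (kubectl handles authentication to the control plane)
kubectl get --raw /api/v1/namespaces/kube-system/pods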

The API server process is the hub for all communication for the cluster. All internal cluster processes (such as the cluster nodes, system components, and application controllers) act as clients of the API server; the API server is the single "source of truth" for the entire cluster.

Control plane and node interaction

The control plane decides what runs on all of the cluster's nodes. The control plane schedules workloads, like containerized applications, and manages the workloads' lifecycle, scaling, and upgrades. The control plane also manages network and storage resources for those workloads.

The control plane and nodes communicate using Kubernetes APIs.

Control plane interactions with Artifact Registry and Container Registry

When you create or update a cluster, container images for the Kubernetes software running on the control plane (and nodes) are pulled from the pkg.dev Artifact Registry or the gcr.io Container Registry. An outage affecting these registries might cause the following types of failures:

  • Creating new clusters fails during the outage.

  • Upgrading clusters fails during the outage.

  • Disruptions to workloads might occur even without user intervention, depending on the specific nature and duration of the outage.

In the event of a regional outage of the pkg.dev Artifact Registry or the gcr.io Container Registry, Google might redirect requests to a zone or region not affected by the outage.

To check the current status of Google Cloud services, go to the Google Cloud status dashboard.

Nodes

A cluster typically has one or more nodes, which are the worker machines that run your containerized applications and other workloads. The individual machines are Compute Engine VM instances that GKE creates on your behalf when you create a cluster.

Each node is managed from the control plane, which receives updates on each node's self-reported status. You can exercise some manual control over node lifecycle, or you can have GKE perform automatic repairs and automatic upgrades on your cluster's nodes.
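Auto-repair and auto-upgrade are configured per node pool; a minimal sketch with the gcloud CLI (the node pool, cluster, and zone names are illustrative):

# Enable automatic repair and automatic upgrades on an existing node pool
gcloud container node-pools update default-pool --cluster gke-demo --zone us-central1-c --enable-autorepair --enable-autoupgrade

# Manual control example: cordon a node so no new Pods are scheduled onto it
kubectl cordon NODE_NAME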

A node runs the services necessary to support the containers that make up your cluster's workloads. These include the runtime and the Kubernetes node agent (kubelet), which communicates with the control plane and is responsible for starting and running containers scheduled on the node.

In GKE, there are also a number of special containers that run as per-node agents to provide functionality such as log collection and intra-cluster network connectivity.

Node machine type

Each node is of a standard Compute Engine machine type. The default type is e2-medium. You can select a different machine type when you create a cluster.
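For example, a different machine type can be chosen at cluster creation time, or a separate node pool with its own machine type can be added later; a sketch with illustrative names:

# Create a cluster whose default node pool uses a larger machine type
gcloud container clusters create gke-demo --zone us-central1-c --machine-type e2-standard-4

# Or add a node pool with a different machine type to an existing cluster
gcloud container node-pools create high-mem-pool --cluster gke-demo --zone us-central1-c --machine-type e2-highmem-8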

Node OS images

Each node runs a specialized OS image for running your containers. You can specify which OS image your clusters and node pools use.
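The OS image is selected with the --image-type flag when creating a cluster or node pool; a minimal sketch (COS_CONTAINERD is the Container-Optimized OS image with containerd; the node pool name is illustrative):

# Create a node pool whose nodes run Container-Optimized OS with containerd
gcloud container node-pools create cos-pool --cluster gke-demo --zone us-central1-c --image-type COS_CONTAINERD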

Minimum CPU platform

When you create a cluster or node pool, you can specify a baseline minimum CPU platform for its nodes. Choosing a specific CPU platform can be advantageous for advanced or compute-intensive workloads. For more information, refer to Minimum CPU Platform.
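A minimal sketch of setting a baseline CPU platform on a node pool (the platform name is an example; the platforms actually available depend on the zone):

# Nodes in this pool are placed on Intel Skylake or newer CPU platforms
gcloud container node-pools create compute-pool --cluster gke-demo --zone us-central1-c --min-cpu-platform "Intel Skylake"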

Node allocatable resources

Some of a node's resources are required to run the GKE and Kubernetes node components necessary to make that node function as part of your cluster. For this reason, you might notice a disparity between your node's total resources (as specified in the machine type documentation) and the node's allocatable resources in GKE.
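This difference is visible on the node object itself; for example:

# Compare the Capacity and Allocatable sections in the node description
kubectl describe node NODE_NAME

# Or query just those two fields
kubectl get node NODE_NAME -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'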




Command to connect to the GKE cluster from Cloud Shell


This command is with respect to my GCP console, for the project retail-project-1405 and an already created GKE cluster named gke-demo.


gcloud container clusters get-credentials gke-demo --zone us-central1-c --project retail-project-1405


Once connected to the GKE cluster, we can issue kubectl commands.


To view the nodes running in the cluster:

kubectl get nodes


To view the pods running in the cluster:

kubectl get pods


To view the deployments running in the cluster:

kubectl get deployments


To view the services exposed for deployments in the cluster:

kubectl get service
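To have something to see with the commands above, a sample workload can be deployed and exposed; a rough sketch using Google's public hello-app sample image (the deployment and service names are illustrative):

# Create a Deployment from a public sample image
kubectl create deployment hello-app --image=us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0

# Expose it through a LoadBalancer Service (port 80 forwards to the container's port 8080)
kubectl expose deployment hello-app --type=LoadBalancer --port=80 --target-port=8080

# Scale it out to 3 replicas
kubectl scale deployment hello-app --replicas=3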



















 
 
 
