
Kubernetes Cheatsheet 📝

Nicolai
August 18th, 2021 · 3 min read

When I started this blog, I decided to set the focus on quality, not quantity. Although that aim has not changed since then, I chose to publish posts at shorter intervals that are not as time-consuming to write.

PS: I am still working on detailed (longer) blog posts, so be excited for updates! 😉

Introduction

I’ve been working with Kubernetes for quite some time and would like to use this post as a place to store commands and tips that I use quite often. As I am still learning and broadening my knowledge about Kubernetes, I plan to update the content on a regular basis.

Further, I plan to incorporate tips and tricks about Helm as well.

Access to multiple Clusters

In most cases, there is a need to work with different clusters, e.g. because it is considered best practice to separate and isolate environments (development, staging, production). When dealing with multiple customers, the number of clusters can grow quickly, too. Having access to multiple Kubernetes clusters is therefore a common use case.

When working with Microsoft Azure and using AKS, there’s an easy way to configure access to a cluster. First, make sure that the Azure CLI is installed and then proceed with:

# You will be redirected to the Azure Portal to login
az login
# Set the subscription in which the AKS cluster is located
az account set --subscription <Name-of-Subscription>
# Get the cluster credentials and merge them into the local kubeconfig
az aks get-credentials -n <Name-of-AKS> -g <Name-of-Resource-Group>
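
Once the credentials are merged, a quick sanity check with the standard kubectl config commands shows which contexts are now available and which one is active:

# List all contexts known to the local kubeconfig
kubectl config get-contexts
# Show the context that is currently active
kubectl config current-context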

Switching between multiple clusters can be done by using the subsequent command:

kubectl config use-context <Name-of-Cluster>

However, I prefer to use kubectx instead, because it’s more intuitive and I don’t have to remember cluster names. 🥳
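
For reference, a minimal kubectx session could look like this (assuming kubectx is installed; the cluster name is a placeholder):

# List all available contexts
kubectx
# Switch to a cluster by name
kubectx <Name-of-Cluster>
# Jump back to the previous context
kubectx -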

ReplicaSets, ReplicaSets, …

A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

In most cases, you would rather go for Deployments to manage your Kubernetes workloads. But: for each change that is made to a Deployment, Kubernetes will create and gradually scale up a new ReplicaSet, while scaling down the old one and keeping it around as history, up to a default limit of 10 revisions.

Even though they don’t consume compute resources, the accumulated history still clutters Kubernetes’ etcd store. To reduce the number of ReplicaSets to retain, you could either set the optional field .spec.revisionHistoryLimit (a patch example follows at the end of this section) or use the subsequent command to delete the historical and obsolete ones across all namespaces.

kubectl get replicaset -A | awk '(NR>1) {if ($3 + $4 + $5 == 0) print "kubectl -n " $1 " delete replicaset " $2}'

The command iterates over the list of ReplicaSets in all namespaces and evaluates whether the desired, current and ready counts are all 0. If that’s the case, a kubectl command is printed for each obsolete ReplicaSet that can be used to delete it. Let’s check how the command works:

As a first step, kubectl is used to obtain and print a list like the following:

NAMESPACE    NAME                            DESIRED   CURRENT   READY   AGE
monitoring   management-grafana-5799dbc495   0         0         0       1d5h
monitoring   management-grafana-6878946bd6   0         0         0       17h
monitoring   management-grafana-7cbdbb6b94   1         1         1       12h

You may notice that the output consists of whitespace-separated columns and is therefore suitable to be piped to awk for further processing. As the first line is a header, awk is configured to skip it by using (NR>1). Next, the command evaluates whether columns 3, 4 and 5 are equal to 0, which translates to the columns desired, current and ready. Remember that awk numbers fields starting at $1, which is why $3, $4 and $5 are used.

Finally, a kubectl delete command is constructed from the namespace (column 1, $1) and the name (column 2, $2) of the ReplicaSet.
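
The printed commands can be reviewed first and then executed, e.g. by appending | sh to the pipeline. If you prefer to cap the history up front instead of cleaning it up afterwards, .spec.revisionHistoryLimit can also be set on an existing Deployment via a patch. The following is just a sketch; the limit of 3 is an arbitrary example value:

kubectl -n <namespace> patch deployment <deployment> -p '{"spec": {"revisionHistoryLimit": 3}}'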

Copy Resources across Namespaces

A Secret, for example, can be copied from one namespace to another by first exporting it to YAML and then applying the output to the destination namespace. By piping the export directly into a second kubectl command, the Secret is copied in place without the need to temporarily store a file on disk.

kubectl -n <src-namespace> get secret <secret> --export -o yaml | kubectl apply -n <dest-namespace> -f -
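
Note that the --export flag was deprecated and removed in newer kubectl versions (1.18 and later). In that case, a similar result can be achieved by stripping the namespace-specific metadata before applying. The following is only a sketch and assumes mikefarah's yq (v4) is installed:

kubectl -n <src-namespace> get secret <secret> -o yaml \
  | yq eval 'del(.metadata.namespace) | del(.metadata.resourceVersion) | del(.metadata.uid) | del(.metadata.creationTimestamp)' - \
  | kubectl apply -n <dest-namespace> -f -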

The above command is also applicable to other Kubernetes resources, e.g. ConfigMaps, by slightly adapting the resource type in the statement. Please keep in mind that changes to a Secret in one namespace are not automatically replicated to the other namespaces, so think of a process to keep the resources consistent and synchronized.

Dealing with Finalizers

Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked for deletion. The Kubernetes resource remains in a terminating state while the actions defined by the finalizers are taken. After these actions are complete, the controller removes the relevant finalizers from the target object. When the metadata.finalizers field is empty, Kubernetes considers the deletion complete.

However, sometimes resources get stuck during deletion, for example because the actions defined by the finalizers never complete. A workaround is to manually set the metadata.finalizers field to an empty array so that Kubernetes continues the deletion process. Use the subsequent command to do so:

kubectl -n <namespace> patch <resource-type> <resource-name> -p '{"metadata": {"finalizers": []}}' --type merge

The resource-type refers to built-in Kubernetes resources like Deployment, ConfigMap or Pod, but also includes custom resources (CRDs).
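
Before clearing the field, it can be helpful to inspect which finalizers are actually set on the stuck resource. A small sketch using jsonpath:

# Print the finalizers that are currently set on the resource
kubectl -n <namespace> get <resource-type> <resource-name> -o jsonpath='{.metadata.finalizers}'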

Connect to Pod (/bin/sh)

For debugging purposes, it’s quite useful to be able to dig around inside a running container. You may want to check the current processes or whether a ConfigMap is mounted at a given path. Depending on the container image, you can go for either /bin/bash or /bin/sh, with the latter being available more often in my experience so far.

kubectl -n <namespace> exec -it <pod> -- /bin/sh
kubectl -n <namespace> exec -it <pod> -c <container> -- /bin/sh

If the Pod consists of multiple containers, e.g. when a sidecar is used to collect logs or provide metrics, go for the second command to also specify the container that you want to connect to.
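
If you are not sure which containers a Pod consists of, the container names can be listed first (a small sketch using jsonpath):

# List the names of all containers in the Pod
kubectl -n <namespace> get pod <pod> -o jsonpath='{.spec.containers[*].name}'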

Decode a Secret

In Kubernetes, Secrets are used to store confidential information like passwords or private SSH keys. As the data is only Base64-encoded and not encrypted, it isn’t complex to print the actual value to the console. Knowing the Secret’s key, it’s as simple as:

kubectl -n <namespace> get secret <secret> -o jsonpath="{.data.password}" | base64 --decode
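
If the key is unknown, the available keys can be listed first without exposing their values, for example with describe:

# Show the keys and data sizes stored in the Secret, but not the values
kubectl -n <namespace> describe secret <secret>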

You may want to adapt the jsonpath to match your Secret. ▪
