## Prepare Your Environment

### Deploy Your Own Cluster

I highly encourage you to get your own personal environment to practice for the exam.

### Set Up the Master Node

The three main things I recommend you do are:

Define aliases (this is already set up in the exam):

```
alias k="kubectl"
```

Define variables:

```
export do="--dry-run=client -o yaml"
export now="--force --grace-period 0"
```

Configure vim. Open `~/.vimrc` and include:

```
set tabstop=2
set expandtab
set shiftwidth=2
```

## Topics

### Contexts

Get used to moving between different contexts. Although these commands will be given by the CKA questions, it is important you know:

How to set up a user:

```
k config set-credentials <name> \
  --client-certificate=<path to .crt> \
  --client-key=<path to .key>
```

How to set a new context:

```
k config set-context <context name> \
  --cluster=<cluster name> \
  --user=<name>
```

How to change between contexts:

```
k config use-context <context name>
```

Useful documentation:

- https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
- https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#config

### JSON Paths

Most probably, you will need to dig through objects and filter out some data, so you should be familiar with JSONPath. This does not mean you need to memorize everything. My advice is to keep in mind the basic structure of the objects (e.g., whether a field is a key-value or a key-array), and use the following parameter:

```
-o jsonpath="<filter>"
```

For example, `k get pods -o jsonpath="{.items[*].metadata.name}"` prints the names of all Pods in the current namespace.

Useful documentation:

- https://kubernetes.io/docs/reference/kubectl/jsonpath/

### Important Configuration Files

Know where the configuration files are stored:

- kubelet: `/var/lib/kubelet/`
- kubeadm: `/etc/kubernetes/admin.conf`
- CNI: `/etc/cni/net.d/`
- static pods: `/etc/kubernetes/manifests/`

### Declarative Syntax

Be comfortable creating and editing YAML files. Through practice, learn the structure of the key resources:

- Pod
- Service
- PersistentVolume
- PersistentVolumeClaim
- Secret
- ConfigMap

Most of the time, we will be using the dry run option to create our template. This is why we created the `do` environment variable at the beginning.

```
k -n <namespace> run <pod name> --image <image name> $do > pod-example.yaml
k -n <namespace> create deployment <dply name> --image <image name> --replicas <num> $do > dply.yaml
```

The following structure defines the skeleton of any K8s object:

```
apiVersion:
kind:
metadata:
spec:
```

### Monitoring

How to see resource usage for nodes and pods, as well as for individual containers:

```
k top nodes
k top pods --containers=true
```

### RBAC (Role Based Access Control)

Be comfortable creating Role, RoleBinding, ClusterRole, ClusterRoleBinding, and ServiceAccount objects. For these tasks, I would always try to use imperative commands. It is way easier to create these objects imperatively, and it saves us time. Remember that Roles and RoleBindings are namespaced API resources.

The most common steps will be:

1. Create the User or ServiceAccount.
2. Create the Role or ClusterRole object.
3. Create the RoleBinding or ClusterRoleBinding.
4. Test the changes with the `auth can-i` command:

```
k auth can-i <verb> <resource> --as=<user/sa> --namespace=<namespace>
```
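As a sketch of those four steps using imperative commands (the namespace `dev`, the ServiceAccount `ci-bot`, and the permissions are hypothetical examples, not fixed exam values):

```
# 1. Create the ServiceAccount
k -n dev create serviceaccount ci-bot

# 2. Create a Role that can read Pods
k -n dev create role pod-reader --verb=get,list,watch --resource=pods

# 3. Bind the Role to the ServiceAccount
k -n dev create rolebinding ci-bot-pod-reader \
  --role=pod-reader \
  --serviceaccount=dev:ci-bot

# 4. Verify the ServiceAccount can (and cannot) do what we expect
k auth can-i list pods --as=system:serviceaccount:dev:ci-bot -n dev   # yes
k auth can-i delete pods --as=system:serviceaccount:dev:ci-bot -n dev # no
```

Note the `system:serviceaccount:<namespace>:<name>` form that `--as` expects when impersonating a ServiceAccount rather than a user.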
### DaemonSet

Just a quick tip here. We can create a DaemonSet object using a Deployment YAML file as a template. We just need to change the `kind` entry, and remove the `replicas`, `strategy`, and `status` entries.

```
k -n <ns> create deployment <dply name> --image <img name> $do > daemonset.yaml
```

### Container Runtime

Get familiar with `crictl` to start, stop, inspect, and delete containers.

Useful documentation:

- https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/

### Schedulers

kube-scheduler is a static Pod defined in the manifests we can find under `/etc/kubernetes/manifests` by default. We can stop this pod by simply moving the manifest file from this directory to another one.

The way to schedule a Pod on a specific node is by using the `nodeName` directive inside the `spec` part.

Keep in mind that `nodeSelector`, as well as affinity and anti-affinity directives, are actually used by the scheduler to decide on which node (if there is one) our Pod will be scheduled.

### Scheduling Pods

We can give directives to the scheduler to choose an appropriate node by using:

`nodeSelector`, which uses node labels:

`.spec.nodeSelector` -> situation in a YAML file:

```
spec:
  nodeSelector:
    <label>: <value>
```

Affinity and anti-affinity constraints, which are more complex than `nodeSelector`:

`.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[]`
`.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[]`

```
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key:
            operator:
            values:
            -
```

```
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight:
        preference:
          matchExpressions:
          - key:
            operator:
            values:
            -
```

Topology spread constraints can be used to control how Pods are spread across your cluster.

Taints and tolerations are also a topic which is very easy to manage and important when we talk about scheduling:

```
k taint nodes <node name> <key>=<value>:<effect>
```

In order to remove a taint, we just need to use the same command as above with a minor inclusion, a trailing `-`:

```
k taint nodes <node name> <key>=<value>:<effect>-
```

We will include a toleration in a Pod spec by using the following directives:

`.spec.tolerations[]` -> situation in a YAML file:

```
spec:
  tolerations:
  - key:
    operator:
    value:
    effect:
```

Useful documentation:

- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
- https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/
- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

### Upgrades

Suppose a node has never been initialized. An upgrade with kubeadm will fail because there is nothing to upgrade. We just need to focus on the kubectl and kubelet upgrades. After that, it is just a matter of creating a new token to join the node to the cluster with kubeadm.

The steps to upgrade are (see the sketch after this list):

1. Check what versions are being used.
2. Check what the target version is.
3. Check the plan.
4. Upgrade kubeadm.
5. Upgrade kubelet.

The process is different depending on whether a node is active or not. For an active node, we need to drain the node first, update kubelet, and then uncordon the node to make it available again.

Useful documentation:

- https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
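Here is a rough sketch of that sequence on a Debian-based control plane node. The version `1.29.1` is just a placeholder, and the `apt-get` pinning syntax is an assumption about your package setup; check `kubeadm upgrade plan` for the real target version:

```
# 1-3. Check current versions, the node state, and the upgrade plan
kubeadm version
k get nodes
kubeadm upgrade plan

# 4. Upgrade kubeadm, then apply the upgrade (placeholder version)
apt-get update && apt-get install -y kubeadm='1.29.1-*'
kubeadm upgrade apply v1.29.1

# 5. For an active node: drain, upgrade kubelet/kubectl, restart, uncordon
k drain <node name> --ignore-daemonsets
apt-get install -y kubelet='1.29.1-*' kubectl='1.29.1-*'
systemctl daemon-reload && systemctl restart kubelet
k uncordon <node name>
```

On worker nodes the middle step is `kubeadm upgrade node` instead of `kubeadm upgrade apply`.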
### Static Pods

Static Pods are managed directly by the kubelet on a specific node rather than by the API server (the kubelet creates a read-only mirror Pod so they are visible through the API, but they cannot be controlled through it). If you want to create a static Pod, you need to place the YAML manifest for that Pod inside the default manifest path. This path is configured in the kubelet configuration file (`/var/lib/kubelet/config.yaml`).

After you have placed your YAML file there, just restart the kubelet service:

```
systemctl restart kubelet
```

### etcd Backups

It is very common that you will need to back up and restore etcd. You can do it via etcdctl. The following is an example of doing so through the etcd pod running in the cluster:

```
k -n kube-system exec <pod-name> -- /bin/sh -c \
  "ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/lib/etcd/snapshot.db"
```
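Since the restore side comes up just as often, here is a sketch of restoring that snapshot with etcdctl, run directly on the control plane host. The restored data directory `/var/lib/etcd-restored` is an arbitrary example path, and newer etcd releases expose the same operation through `etcdutl`:

```
# Restore the snapshot into a fresh data directory (do not reuse the live one)
ETCDCTL_API=3 etcdctl snapshot restore /var/lib/etcd/snapshot.db \
  --data-dir /var/lib/etcd-restored

# Then point etcd at the restored data: in /etc/kubernetes/manifests/etcd.yaml,
# change the hostPath volume for the data dir from /var/lib/etcd to
# /var/lib/etcd-restored. The kubelet will recreate the etcd static Pod.
```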