Monday, January 10, 2022
Kubernetes - Static pods
But there is another type of pod that is managed directly by the kubelet rather than by the api-server. This means the pod is created and managed by the kubelet itself; the control plane is not involved in the lifecycle of a static pod. In addition, the kubelet creates a mirror pod on the kubernetes api-server for each static pod so that static pods are visible when we list pods. Static pods are usually used by software that bootstraps kubernetes itself. For example, kubeadm uses static pods to bring up control plane components like the api-server and controller-manager. Since static pods are managed by the kubelet itself, we create them directly on the node where that kubelet runs.
The kubelet can watch a directory on the host file system (configured using the --pod-manifest-path argument to the kubelet) or periodically sync pod manifests from a web URL (configured using the --manifest-url argument). When kubeadm brings up the kubernetes control plane, it generates pod manifests for the api-server and controller-manager in a directory that the kubelet is monitoring, and the kubelet then starts these control plane components as static pods. In this article, we will see how we can create a static pod.
Create a simple pod definition file as below,
[root@ec2-18-217-122-218 kubelet]# cat /root/staticPod/static-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
  labels:
    role: app
spec:
  containers:
  - name: app
    image: busybox:1.28
    ports:
    - name: app
      containerPort: 443
      protocol: TCP
Once the pod definition file is available, we need to identify the directory that the kubelet watches for static pod manifests. To find it, check the kubelet configuration:
[root@ec2-18-217-122-218 kubelet]# cd /var/lib/kubelet/
[root@ec2-18-217-122-218 kubelet]# cat config.yaml | grep static
staticPodPath: /etc/kubernetes/manifests
We can see that the location from which the kubelet reads static pod manifests is /etc/kubernetes/manifests.
Copy your static pod file to this location and restart the kubelet as below.
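For example, the manifest we created earlier can be copied into place like this (source path as used above):
[root@ec2-18-217-122-218 kubelet]# cp /root/staticPod/static-pod.yml /etc/kubernetes/manifests/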
[root@ec2-18-217-122-218 manifests]# systemctl restart kubelet
Now after a few seconds, if we do a kubectl get pods from the master node, we can see the mirror pod details as below,
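A minimal sketch of what the listing looks like; the output here is illustrative, and <node-name> is a placeholder for the hostname of the node running the kubelet, which kubernetes appends to the mirror pod name:
[root@ec2-18-217-122-218 manifests]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
static-pod-<node-name>   1/1     Running   0          20s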
We can see that the static pod's mirror pod is running.
If we delete the pod from the master node, it gets deleted, but the kubelet starts it again, as below:
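For example (again, <node-name> is a placeholder and the output is illustrative), deleting the mirror pod only causes the kubelet to recreate it:
[root@ec2-18-217-122-218 manifests]# kubectl delete pod static-pod-<node-name>
pod "static-pod-<node-name>" deleted
[root@ec2-18-217-122-218 manifests]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
static-pod-<node-name>   1/1     Running   0          5s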
The kubelet periodically scans the configured directory for static pod manifests, and if there are any changes, it applies them accordingly.
Identify a static pod
Since static pods are not controlled by the api-server, the Controlled By field in the pod description shows a different value. Checking the owner reference of a static pod using the kubectl describe command indicates that such a pod is not controlled by a ReplicaSet but by the node, for example Node/controlplane.
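A quick way to check this (pod and node names are placeholders):
[root@ec2-18-217-122-218 manifests]# kubectl describe pod static-pod-<node-name> | grep "Controlled By"
Controlled By:  Node/<node-name>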
Static pods were originally used for running pods across all, or a subset of, chosen nodes. This was useful for system components such as log forwarders like fluentd and networking components like kube-proxy. Because of the limitations of static pods, such as no health checks etc., kubernetes came up with DaemonSets.
Hope this helps in understanding Static Pods
Sunday, January 9, 2022
Kubernetes - Labels, Selector and matchLabels
Labels are a mechanism that we use to organize objects in kubernetes. A k8s object can be anything from containers and pods to services and deployments. Labels are key-value pairs that we can attach to a resource for identification. Labels carry identifying information, and kubernetes uses them to query objects.
A label can be applied to a pod as shown below,
[jagadishmanchala@Jagadish-theOne:k8s] cat simple-label.yml
apiVersion: v1
kind: Pod
metadata:
  name: testing-service
  labels:
    env: dev
spec:
  containers:
  - name: test-ser
    image: docker.io/jagadesh1982/testing-service
    ports:
    - containerPort: 9876
With the above config we are creating a pod with the label “env=dev”.
To see the labels attached to a pod, to assign a label while the pod is running, and to view the labels again, we can use the commands shown below.
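A minimal sketch of these three steps, assuming the pod created above is named testing-service; the tier=backend label is only an illustration:
[jagadishmanchala@Jagadish-theOne:k8s] kubectl get pod testing-service --show-labels
[jagadishmanchala@Jagadish-theOne:k8s] kubectl label pod testing-service tier=backend
[jagadishmanchala@Jagadish-theOne:k8s] kubectl get pod testing-service --show-labels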
Consider the below deployment,
[jagadishmanchala@Jagadish-theOne:k8s] cat simple-label.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-service-v1
  labels:
    app: testing-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: testing-service
      version: "1.0"
  template:
    metadata:
      labels:
        app: testing-service
        version: "1.0"
    spec:
      containers:
      - name: testing-service
        image: docker.io/jagadesh1982/testing-service
        ports:
        - name: http
          containerPort: 9876
        env:
        - name: VERSION
          value: "1.0"
In the above config, we can see that labels are defined in multiple places, as shown below:
************
metadata:
  name: testing-service-v1
  labels:
    app: testing-service
    tier: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: testing-service
  template:
    metadata:
      labels:
        app: testing-service
        tier: backend
************
In the above config, we can see metadata.labels, selector.matchLabels and template.metadata.labels. What are these, and what do they do exactly? That is what we will see in this article.
The first metadata block describes the deployment itself: we are assigning the labels “app: testing-service” and “tier: backend” to the deployment. This lets us, for example, delete the deployment using
“kubectl delete deploy -l app=testing-service,tier=backend”. This label identifies the kind that we are creating, which is a deployment here.
The selector is used for a whole different purpose. Once we create a deployment, replica set or replication controller, pods also get created along with it, and we need to tell the deployment how to find those pods. For instance, if in the future we want to scale from 1 to 3 pods, the deployment first needs to find what pods are currently running, so it needs a way to identify the specific pods created as part of that deployment. The labels defined in the selector field help the deployment identify those pods. With the above configuration it will find all pods that carry the label “app: testing-service” and manage them. We call this the selector label.
To put it plainly: to find out what pods are running, or to manage a group of pods, we first need to find them, and the labels defined in the selector field are what let us find the pods we want to manage.
But in order to find or manage pods with a specific selector label, the pods must actually carry those labels. Only after the labels are assigned can the selector find them. The labels that are assigned to the pods are defined in template.metadata.labels.
So:
.metadata.labels labels the kind that we are creating; in the above case, the kind is a deployment. If we want to find the deployment, we can use the labels defined in metadata.labels.
.spec.selector tells the deployment, replica set or replication controller how to find the pods to manage, scale or delete.
.spec.template.metadata.labels are the labels the pods are created with, so that the deployment, replica set or replication controller can find and manage them.
In the above case, the labels defined in .spec.selector.matchLabels and .spec.template.metadata.labels must match; otherwise the api-server rejects the deployment.
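For example, the selector labels from the deployment above can be used directly with kubectl to find the pods it manages:
[jagadishmanchala@Jagadish-theOne:k8s] kubectl get pods -l app=testing-service,version=1.0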
Hope this helps in understanding the labels, selectors and podTemplate labels.
Kubernetes - Ephemeral Container : Copy Of Pod
Run the command to create an application container as below,
[root@ec2-3-138-100-101 ~]# kubectl run myapp --image=busybox --restart=Never -- sleep 1d
pod/myapp created
Once the pod is created, create the debug pod and attach it to the myapp pod above. In this case we run the following command to create a copy of myapp named myapp-debug that adds a new Ubuntu container for debugging:
[root@ec2-3-138-100-101 ~]# kubectl debug myapp -it --image=ubuntu --share-processes --copy-to=myapp-debug
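Once the debug copy is up, we can verify both pods and, when done, remove the copy; myapp-debug is the name chosen above:
[root@ec2-3-138-100-101 ~]# kubectl get pod myapp myapp-debug
[root@ec2-3-138-100-101 ~]# kubectl delete pod myapp-debug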
Kubernetes - Ephemeral containers
Things go well with these packaged containers until issues come up. These can be application issues, and troubleshooting them can be hard because the container has no troubleshooting tools and no package manager to install them. The only way is to rebuild the image with troubleshooting tools and re-run the application to troubleshoot the issue.
Another option provided by Docker is to attach a container to the existing application container on the same network and use the tools available in it. For instance, we can attach a container that has troubleshooting tools to an application container in the same network namespace and use those tools to troubleshoot. Ephemeral containers are based on the same concept.
We create a container image with all the troubleshooting tools, and when debugging is needed we deploy this ephemeral container into a running pod and troubleshoot from there. Ephemeral containers are an alpha feature in Kubernetes 1.22, so the official recommendation is not to use them in production environments.
In this article, we will see how to use ephemeral containers for debugging a running container in a pod.
A simple pod with an ubuntu container looks as below,
[root@ec2-3-138-100-101 ~]# cat ephemeral-example.yml
apiVersion: v1
kind: Pod
metadata:
  name: single-pod
  labels:
    env: dev
spec:
  containers:
  - name: testing-service
    image: ubuntu
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
Run “kubectl create -f ephemeral-example.yml” to create the pod. Now suppose we want to test internet connectivity from the container in this pod, as below.
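For instance, trying ping from the application container typically fails because the base ubuntu image does not ship with it; the target host here is only an example:
[root@ec2-3-138-100-101 ~]# kubectl exec -it single-pod -c testing-service -- ping -c 2 google.com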
We can see that the ping command is not available to troubleshoot our network issues. Now let's create an ephemeral container and attach it to the running pod single-pod as below,
[root@ec2-3-138-100-101 ~]# kubectl debug -it single-pod --image=alpine:latest --target=testing-service
In the above command, I am running an ephemeral container with the image alpine:latest and attaching it to the pod single-pod, targeting the container testing-service running inside it. Once the ephemeral container is added alongside the running testing-service container, we can use the ping tool to perform our troubleshooting as below,
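Inside the debug session the alpine/busybox ping is available, so a quick connectivity check (target host again just an example) looks like:
/ # ping -c 2 google.com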
Now if we come out of the command prompt by pressing Ctrl+P, Ctrl+Q and describe the pod, we can see new entries under Ephemeral Containers as below,
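A quick way to see that section (output trimmed to the relevant part):
[root@ec2-3-138-100-101 ~]# kubectl describe pod single-pod | grep -A 5 "Ephemeral Containers"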
This is how ephemeral containers work. We can create a container image that contains all the troubleshooting tools and use it whenever we need to troubleshoot application containers, as above.
Hope this helps you in understanding Ephemeral Containers.
Kubernetes - Restart Pods
There are many times when a developer needs to restart an application. When the application is deployed in a container/pod implementation like kubernetes, the restart works in a different way.
There are multiple ways to restart an application pod.
1. Use the deployment rollout restart command as below
[jagadishmanchala@Jagadish-theOne:k8s] kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
test-service-deploy 2/2 2 2 23m
Restart the deployment using below command,
[jagadishmanchala@Jagadish-theOne:k8s] kubectl rollout restart deployment test-service-deploy
deployment.apps/test-service-deploy restarted
Now we can see that the deployment is restarted
[jagadishmanchala@Jagadish-theOne:k8s] kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
test-service-deploy 2/2 2 2 1m
2. Modify the Env Variables of a deployment as below,
[jagadishmanchala@Jagadish-theOne:k8s] kubectl set env deployment test-service-deploy DEPLOY_DATE="$(date)"
deployment.apps/test-service-deploy env updated
kubectl set env records a change to the environment variables; setting the variable DEPLOY_DATE to the current timestamp changes the pod template of the deployment, which causes the pods to restart.
3. Scale Down or Up for the deployment
[jagadishmanchala@Jagadish-theOne:k8s] kubectl scale deployment test-service-deploy --replicas=0
deployment.apps/test-service-deploy scaled
[jagadishmanchala@Jagadish-theOne:k8s] kubectl scale deployment test-service-deploy --replicas=1
deployment.apps/test-service-deploy scaled
Note: we can't restart a bare pod on its own; only pods created as part of a deployment, replica set or replication controller can be restarted in these ways.
Kubernetes - Mounting a Host Path On To Pod
We already know that kubernetes provides us with multiple ways of saving data on disk. We have seen volume types using emptyDir and memory. Another, and the most used, volume type is hostPath. In this article we will see how we can mount a path on the host's disk into the pod and save data from the pod to the host machine.
The host path details are defined under volumes and attached to the container using volumeMounts. Check the below example,
[jagadishmanchala@Jagadish-theOne:k8s] cat hostpath-volume.yml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: my-app
    image: nginx
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: my-volume
      mountPath: /app
  volumes:
  - name: my-volume
    hostPath:
      path: /Volumes/Working/k8s/test
In the above code, the directory /Volumes/Working/k8s/test on the host machine is mounted into the container at /app using a hostPath volume. The name of the volume is my-volume.
Create the pod and check that it is running as below,
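Assuming the manifest file name used above, the pod can be created with:
[jagadishmanchala@Jagadish-theOne:k8s] kubectl create -f hostpath-volume.yml
pod/myapp created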
[jagadishmanchala@Jagadish-theOne:k8s] kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp 1/1 Running 0 30s
Log in to the pod and create a few files for testing as below,
[jagadishmanchala@Jagadish-theOne:k8s] kubectl exec myapp -c my-app -it -- bash
root@myapp:/# df -hT
Filesystem Type Size Used Avail Use% Mounted on
overlay overlay 59G 4.5G 51G 8% /
tmpfs tmpfs 64M 0 64M 0% /dev
tmpfs tmpfs 987M 0 987M 0% /sys/fs/cgroup
grpcfuse fuse.grpcfuse 47G 33G 14G 71% /app
/dev/vda1 ext4 59G 4.5G 51G 8% /etc/hosts
shm tmpfs 64M 0 64M 0% /dev/shm
tmpfs tmpfs 987M 12K 987M 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs tmpfs 987M 0 987M 0% /sys/firmware
root@myapp:/# cd /app/
root@myapp:/app# touch hai hello
root@myapp:/app# exit
exit
Now check on the host path to confirm as below,
[jagadishmanchala@Jagadish-theOne:k8s] ll test/
total 0
-rw-r--r-- 1 jagadishmanchala staff 0 Dec 28 12:30 hai
-rw-r--r-- 1 jagadishmanchala staff 0 Dec 28 12:30 hello
Hope this helps in understanding the hostpath Volume.