In this next part of the Kubernetes release management series, we will see how a rolling update works. A rolling deployment updates pods incrementally: a second set of pods running the new version is created while the first set is terminated, so old pods go down one by one as new pods come up one by one. If we trigger a deployment while an existing rollout is still in progress, the Deployment stops the running rollout and overrides it with the new release, so be cautious; a simple guard against this is shown after the pros and cons below.
Pros:
- The new version is released gradually across the worker nodes.
Cons:
- Rollout/rollback can take more time.
- Traffic to the pods needs to be handled carefully, since both versions serve requests during the rollout.
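Because a new deployment overrides an in-progress rollout, a simple guard is to wait for the current rollout to finish before applying the next version. A minimal sketch using this post's testing-service Deployment (kubectl rollout status blocks until the rollout completes or fails):

# Wait for any in-progress rollout to finish, then apply the new version
kubectl rollout status deploy testing-service
kubectl apply -f rolling-update-deployment-v2.yml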
Let's see an example of how a rolling deployment works, using the same example service as before:
[root@manja17-I13330 kubenetes-config]# cat rolling-update-deployment-v1.yml
apiVersion: v1
kind: Service
metadata:
  name: testing-service
  labels:
    app: testing-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: testing-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-service
  labels:
    app: testing-service
spec:
  replicas: 10
  selector:
    matchLabels:
      app: testing-service
  template:
    metadata:
      labels:
        app: testing-service
        version: "1.0"
    spec:
      containers:
      - name: test-service
        image: docker.io/jagadesh1982/testing-service
        ports:
        - name: http
          containerPort: 9876
        env:
        - name: VERSION
          value: "1.0"
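Note that this first manifest does not define a strategy block. In that case the Deployment falls back to the default rolling update behaviour, which is equivalent to the following sketch (25% is the documented default for both fields):

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%
    maxUnavailable: 25%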
Let's create the deployment using,
[root@manja17-I13330 kubenetes-config]# kubectl create -f rolling-update-deployment-v1.yml
service "testing-service" created
deployment.apps "testing-service" created
Once the manifest is applied, the Service and the Deployment with its 10 pods are created. We can use curl to access the application:
[root@manja17-I13330 kubenetes-config]# curl 10.100.196.68/info
{"host": "10.100.196.68", "version": "1.0", "from": "10.32.0.1"}
Now let's deploy the second version of the application using a rolling update. The manifest file for the second deployment looks like this:
[root@manja17-I13330 kubenetes-config]# cat rolling-update-deployment-v2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing-service
  labels:
    app: testing-service
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: testing-service
  template:
    metadata:
      labels:
        app: testing-service
        version: "2.0"
    spec:
      containers:
      - name: test-service
        image: docker.io/jagadesh1982/testing-service
        ports:
        - name: http
          containerPort: 9876
        env:
        - name: VERSION
          value: "2.0"
The important element in the above configuration file is the strategy block:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0
We have defined RollingUpdate as the deployment strategy and tuned it with the maxSurge and maxUnavailable parameters:
maxSurge – the maximum number of pods that can be scheduled above the desired number of pods during the update.
maxUnavailable – the maximum number of pods that can be unavailable during the update.
Both parameters can be set to either a percentage (e.g., 10%) or an absolute value (e.g., 2). The default value for both is 25%.
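With our settings (replicas: 10, maxSurge: 1, maxUnavailable: 0), the arithmetic works out like this: the Deployment may run at most 10 + 1 = 11 pods and must keep at least 10 - 0 = 10 pods available, so Kubernetes creates one new pod, waits for it to become ready, terminates one old pod, and repeats this ten times. When percentages are used, maxSurge is rounded up and maxUnavailable is rounded down; with 10 replicas, the 25% defaults would allow 3 surge pods and 2 unavailable pods.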
[root@manja17-I13330 kubenetes-config]# kubectl apply -f rolling-update-deployment-v2.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps "testing-service" configured
Also watch the pods using "watch kubectl get po". We will see the first set of pods being terminated one by one while the second version of the pods comes up and starts running.
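Under the hood, the rolling update works by scaling two ReplicaSets in opposite directions, and that progress can be watched directly. A sketch (output omitted):

# The old ReplicaSet scales down as the new one scales up
kubectl get rs -l app=testing-service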
Let's try to access the service while the rollout is in progress:
[root@manja17-I13330 kubenetes-config]# while sleep 0.1; do curl "10.100.196.68/info"; done
{"host":
"10.100.196.68", "version": "2.0",
"from": "10.32.0.1"}{"host":
"10.100.196.68", "version": "2.0",
"from": "10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "2.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "2.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "2.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "2.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "1.0", "from":
"10.32.0.1"}{"host": "10.100.196.68",
"version": "2.0", "from": "10.32.0.1"}
In the above snippet, a while loop repeatedly hits the service. Some requests return version 1.0 and some return version 2.0, because both versions receive traffic until the rollout completes; a plain rolling update gives us no fine-grained control over which requests reach which version.
Roll back a Deployment - If we find issues and want to roll back to an older version, we can do something like this:
[root@manja17-I13330 kubenetes-config]# kubectl rollout undo deploy testing-service
[root@manja17-I13330 kubenetes-config]# curl 10.100.196.68/info
{"host": "10.100.196.68", "version": "1.0", "from": "10.32.0.1"}
Pause a Deployment - You can also pause the rollout if you want to run the new version for only a subset of users:
[root@manja17-I13330 kubenetes-config]# kubectl rollout pause deploy testing-service
Happy with the deployment - Once we are happy with the newer version, we can roll out the rest using,
[root@manja17-I13330 kubenetes-config]# kubectl rollout resume deploy testing-service
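While a Deployment is paused, changes to it are recorded but not rolled out until it is resumed, which makes it possible to batch several updates into a single rollout. A minimal sketch (the 2.0 image tag is assumed for illustration, not taken from the example above):

kubectl rollout pause deploy testing-service
# recorded while paused; the :2.0 tag is an assumed example
kubectl set image deploy testing-service test-service=docker.io/jagadesh1982/testing-service:2.0
kubectl rollout resume deploy testing-service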
This is how the rolling update works. More to come, happy learning :-)