You can also access this page inside the Remote Desktop by using the icons on the desktop:
- Score
- Questions and Answers
- Preview Questions and Answers
- Exam Tips
The DevOps team would like to get the list of all Namespaces in the cluster.
Get the list and save it to /opt/course/1/namespaces.
k get ns > /opt/course/1/namespaces

The content should then look like:

# /opt/course/1/namespaces
NAME            STATUS   AGE
default         Active   150m
earth           Active   76m
jupiter         Active   76m
kube-public     Active   150m
kube-system     Active   150m
mars            Active   76m
mercury         Active   76m
moon            Active   76m
neptune         Active   76m
pluto           Active   76m
saturn          Active   76m
shell-intern    Active   76m
sun             Active   76m
venus           Active   76m
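If only the Namespace names were needed, a variant like the following could be used as well (just a sketch, not required by the task; note the output format differs from the plain table above):

k get ns -o name > /opt/course/1/namespaces   # would write namespace/default, namespace/earth, ...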
Create a single Pod of image httpd:2.4.41-alpine in Namespace default.
The Pod should be named pod1 and the container should be named pod1-container.
Your manager would like to run a command manually on occasion to output the status of that exact Pod. Please write a command that does this into /opt/course/2/pod1-status-command.sh. The command should use kubectl.
k run -h # help
k run pod1 --image=httpd:2.4.41-alpine --dry-run=client -oyaml > 2.yaml
vim 2.yaml

Change the container name in 2.yaml to pod1-container:

# 2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container          # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Then run:
➜ k create -f 2.yaml
pod/pod1 created

➜ k get pod
NAME   READY   STATUS              RESTARTS   AGE
pod1   0/1     ContainerCreating   0          6s

➜ k get pod
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          30s

Next create the requested command:

vim /opt/course/2/pod1-status-command.sh

The content of the command file could look like:

# /opt/course/2/pod1-status-command.sh
kubectl -n default describe pod pod1 | grep -i status:
Another solution would be using jsonpath:
# /opt/course/2/pod1-status-command.sh
kubectl -n default get pod pod1 -o jsonpath="{.status.phase}"
To test the command:
➜ sh /opt/course/2/pod1-status-command.sh
Running
Team Neptune needs a Job template located at /opt/course/3/job.yaml. This Job should run image busybox:1.31.0 and execute sleep 2 && echo done. It should be in namespace neptune, run a total of 3 times and should execute 2 runs in parallel.
Start the Job and check its history. Each pod created by the Job should have the label id: awesome-job. The job should be named neb-new-job and the container neb-new-job-container.
k -n neptune create job -h
k -n neptune create job neb-new-job --image=busybox:1.31.0 --dry-run=client -oyaml > /opt/course/3/job.yaml -- sh -c "sleep 2 && echo done"
vim /opt/course/3/job.yaml

Make the required changes in the yaml:

# /opt/course/3/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: neb-new-job
  namespace: neptune              # add
spec:
  completions: 3                  # add
  parallelism: 2                  # add
  template:
    metadata:
      creationTimestamp: null
      labels:                     # add
        id: awesome-job           # add
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 2 && echo done
        image: busybox:1.31.0
        name: neb-new-job-container   # update
        resources: {}
      restartPolicy: Never
status: {}

Then to create it:

k -f /opt/course/3/job.yaml create # namespace already set in yaml

Check Job and Pods, you should see at most two running in parallel but three in total:
➜ k -n neptune get pod,job | grep neb-new-job
pod/neb-new-job-jhq2g   0/1   ContainerCreating   0   4s
pod/neb-new-job-vf6ts   0/1   ContainerCreating   0   4s
job.batch/neb-new-job   0/3   4s                  5s

➜ k -n neptune get pod,job | grep neb-new-job
pod/neb-new-job-gm8sz   0/1   ContainerCreating   0   0s
pod/neb-new-job-jhq2g   0/1   Completed           0   10s
pod/neb-new-job-vf6ts   1/1   Running             0   10s
job.batch/neb-new-job   1/3   10s                 11s

➜ k -n neptune get pod,job | grep neb-new-job
pod/neb-new-job-gm8sz   0/1   ContainerCreating   0   5s
pod/neb-new-job-jhq2g   0/1   Completed           0   15s
pod/neb-new-job-vf6ts   0/1   Completed           0   15s
job.batch/neb-new-job   2/3   15s                 16s

➜ k -n neptune get pod,job | grep neb-new-job
pod/neb-new-job-gm8sz   0/1   Completed           0   12s
pod/neb-new-job-jhq2g   0/1   Completed           0   22s
pod/neb-new-job-vf6ts   0/1   Completed           0   22s
job.batch/neb-new-job   3/3   21s                 23s

Check history:

➜ k -n neptune describe job neb-new-job
...
Events:
  Type    Reason            Age    From            Message
  ----    ------            ----   ----            -------
  Normal  SuccessfulCreate  2m52s  job-controller  Created pod: neb-new-job-jhq2g
  Normal  SuccessfulCreate  2m52s  job-controller  Created pod: neb-new-job-vf6ts
  Normal  SuccessfulCreate  2m42s  job-controller  Created pod: neb-new-job-gm8sz

From the Age column we can see that two Pods ran in parallel and the third one after them, just as the task required.
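The task also required each Pod to carry the label id: awesome-job. A quick way to confirm this (a small verification sketch, not part of the original solution) is to list Pods by that label:

k -n neptune get pod -l id=awesome-job
# should list the three neb-new-job-* Pods created by the Job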
Team Mercury asked you to perform some operations using Helm, all in Namespace mercury:
Delete release internal-issue-report-apiv1
Upgrade release internal-issue-report-apiv2 to any newer version of chart bitnami/nginx available
Install a new release internal-issue-report-apache of chart bitnami/apache. The Deployment should have two replicas, set these via Helm-values during install
There seems to be a broken release, stuck in pending-install state. Find it and delete it
Helm Chart: Kubernetes YAML template-files combined into a single package, Values allow customisation
Helm Release: Installed instance of a Chart
Helm Values: Allow customisation of the YAML template-files in a Chart when creating a Release
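As a rough orientation sketch of how these concepts map to the commands used below (generic placeholders in angle brackets):

helm repo update                      # refresh Chart information from the configured repositories
helm search repo <keyword>            # find Charts in added repositories
helm install <release> <repo/chart>   # create a Release from a Chart
helm upgrade <release> <repo/chart>   # upgrade a Release to a newer Chart version
helm ls -a                            # list Releases, including those in pending states
helm uninstall <release>              # delete a Release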
1.
First we should delete the required release:
➜ helm -n mercury ls
NAME                          NAMESPACE   STATUS     CHART         APP VERSION
internal-issue-report-apiv1   mercury     deployed   nginx-9.5.0   1.21.1
internal-issue-report-apiv2   mercury     deployed   nginx-9.5.0   1.21.1
internal-issue-report-app     mercury     deployed   nginx-9.5.0   1.21.1

➜ helm -n mercury uninstall internal-issue-report-apiv1
release "internal-issue-report-apiv1" uninstalled

➜ helm -n mercury ls
NAME                          NAMESPACE   STATUS     CHART         APP VERSION
internal-issue-report-apiv2   mercury     deployed   nginx-9.5.0   1.21.1
internal-issue-report-app     mercury     deployed   nginx-9.5.0   1.21.1
2.
Next we need to upgrade a release, for this we could first list the charts of the repo:
xxxxxxxxxx➜ helm repo listNAME URL bitnami https://charts.bitnami.com/bitnami
➜ helm repo updateHang tight while we grab the latest from your chart repositories......Successfully got an update from the "bitnami" chart repositoryUpdate Complete. ⎈Happy Helming!⎈
➜ helm search repo nginxNAME CHART VERSION APP VERSION DESCRIPTION bitnami/nginx 9.5.2 1.21.1 Chart for the nginx server ...Here we see that a newer chart version 9.5.2 is available. But the task only requires us to upgrade to any newer chart version available, so we can simply run:
xxxxxxxxxx➜ helm -n mercury upgrade internal-issue-report-apiv2 bitnami/nginxRelease "internal-issue-report-apiv2" has been upgraded. Happy Helming!NAME: internal-issue-report-apiv2LAST DEPLOYED: Tue Aug 31 17:40:42 2021NAMESPACE: mercurySTATUS: deployedREVISION: 2TEST SUITE: None...
➜ helm -n mercury lsNAME NAMESPACE STATUS CHART APP VERSIONinternal-issue-report-apiv2 mercury deployed nginx-9.5.2 1.21.1 internal-issue-report-app mercury deployed nginx-9.5.0 1.21.1 Looking good!
INFO: Also check out helm rollback for undoing a helm rollout/upgrade.
3.
Now we're asked to install a new release with a customised values setting. For this we first list all possible value settings of the chart:

helm show values bitnami/apache # will show a long list of all possible value-settings

helm show values bitnami/apache | yq e # parse yaml and show with colors

It's a huge list, but if we search in it we should find the setting replicaCount: 1 at top level. This means we can run:

➜ helm -n mercury install internal-issue-report-apache bitnami/apache --set replicaCount=2
NAME: internal-issue-report-apache
LAST DEPLOYED: Tue Aug 31 17:57:23 2021
NAMESPACE: mercury
STATUS: deployed
REVISION: 1
TEST SUITE: None
...

If we also needed to set a value on a deeper level, for example image.debug, we could run:

helm -n mercury install internal-issue-report-apache bitnami/apache \
  --set replicaCount=2 \
  --set image.debug=true

Install done, let's verify what we did:

➜ helm -n mercury ls
NAME                           NAMESPACE   STATUS     CHART          APP VERSION
internal-issue-report-apache   mercury     deployed   apache-8.6.3   2.4.48
...

➜ k -n mercury get deploy internal-issue-report-apache
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
internal-issue-report-apache   2/2     2            2           96s

We see a healthy deployment with two replicas!
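Instead of --set, values could also be passed via a file using helm's -f/--values flag; a minimal sketch (the filename is just an example):

# my-values.yaml (example file)
# replicaCount: 2

helm -n mercury install internal-issue-report-apache bitnami/apache -f my-values.yaml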
4.
By default releases in pending-install state aren't listed, but we can show all releases to find and delete the broken one:

➜ helm -n mercury ls -a
NAME                           NAMESPACE   STATUS            CHART          APP VERSION
internal-issue-report-apache   mercury     deployed          apache-8.6.3   2.4.48
internal-issue-report-apiv2    mercury     deployed          nginx-9.5.2    1.21.1
internal-issue-report-app      mercury     deployed          nginx-9.5.0    1.21.1
internal-issue-report-daniel   mercury     pending-install   nginx-9.5.0    1.21.1

➜ helm -n mercury uninstall internal-issue-report-daniel
release "internal-issue-report-daniel" uninstalled

Thank you Helm for making our lives easier! (Till something breaks)
Team Neptune has its own ServiceAccount named neptune-sa-v2 in Namespace neptune. A coworker needs the token from the Secret that belongs to that ServiceAccount. Write the base64 decoded token to file /opt/course/5/token.
Since K8s 1.24, Secrets won't be created automatically for ServiceAccounts any longer. But it's still possible to create a Secret manually and attach it to a ServiceAccount by setting the correct annotation on the Secret. This was done for this task.
k -n neptune get sa # get overview
k -n neptune get secrets # shows all secrets of namespace
k -n neptune get secrets -oyaml | grep annotations -A 1 # shows secrets with first annotation

If a Secret belongs to a ServiceAccount, it'll have the annotation kubernetes.io/service-account.name. Here the Secret we're looking for is neptune-secret-1.

➜ k -n neptune get secret neptune-secret-1 -o yaml
apiVersion: v1
data:
  ...
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNkltNWFaRmRxWkRKMmFHTnZRM0JxV0haT1IxZzFiM3BJY201SlowaEhOV3hUWmt3elFuRmFhVEZhZDJNaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUp1WlhCMGRXNWxJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpXTnlaWFF1Ym1GdFpTSTZJbTVsY0hSMWJtVXRjMkV0ZGpJdGRHOXJaVzR0Wm5FNU1tb2lMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzV1WVcxbElqb2libVZ3ZEhWdVpTMXpZUzEyTWlJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk52ZFc1MExuVnBaQ0k2SWpZMlltUmpOak0yTFRKbFl6TXROREpoWkMwNE9HRTFMV0ZoWXpGbFpqWmxPVFpsTlNJc0luTjFZaUk2SW5ONWMzUmxiVHB6WlhKMmFXTmxZV05qYjNWdWREcHVaWEIwZFc1bE9tNWxjSFIxYm1VdGMyRXRkaklpZlEuVllnYm9NNENUZDBwZENKNzh3alV3bXRhbGgtMnZzS2pBTnlQc2gtNmd1RXdPdFdFcTVGYnc1WkhQdHZBZHJMbFB6cE9IRWJBZTRlVU05NUJSR1diWUlkd2p1Tjk1SjBENFJORmtWVXQ0OHR3b2FrUlY3aC1hUHV3c1FYSGhaWnp5NHlpbUZIRzlVZm1zazVZcjRSVmNHNm4xMzd5LUZIMDhLOHpaaklQQXNLRHFOQlF0eGctbFp2d1ZNaTZ2aUlocnJ6QVFzME1CT1Y4Mk9KWUd5Mm8tV1FWYzBVVWFuQ2Y5NFkzZ1QwWVRpcVF2Y3pZTXM2bno5dXQtWGd3aXRyQlk2VGo5QmdQcHJBOWtfajVxRXhfTFVVWlVwUEFpRU43T3pka0pzSThjdHRoMTBseXBJMUFlRnI0M3Q2QUx5clFvQk0zOWFiRGZxM0Zrc1Itb2NfV013
kind: Secret
...

This shows the base64 encoded token. To get the decoded one we could pipe it manually through base64 -d, or we simply do:

➜ k -n neptune describe secret neptune-secret-1
...
Data
====
token:     eyJhbGciOiJSUzI1NiIsImtpZCI6Im5aZFdqZDJ2aGNvQ3BqWHZOR1g1b3pIcm5JZ0hHNWxTZkwzQnFaaTFad2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJuZXB0dW5lIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im5lcHR1bmUtc2EtdjItdG9rZW4tZnE5MmoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibmVwdHVuZS1zYS12MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY2YmRjNjM2LTJlYzMtNDJhZC04OGE1LWFhYzFlZjZlOTZlNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpuZXB0dW5lOm5lcHR1bmUtc2EtdjIifQ.VYgboM4CTd0pdCJ78wjUwmtalh-2vsKjANyPsh-6guEwOtWEq5Fbw5ZHPtvAdrLlPzpOHEbAe4eUM95BRGWbYIdwjuN95J0D4RNFkVUt48twoakRV7h-aPuwsQXHhZZzy4yimFHG9Ufmsk5Yr4RVcG6n137y-FH08K8zZjIPAsKDqNBQtxg-lZvwVMi6viIhrrzAQs0MBOV82OJYGy2o-WQVc0UUanCf94Y3gT0YTiqQvczYMs6nz9ut-XgwitrBY6Tj9BgPprA9k_j5qEx_LUUZUpPAiEN7OzdkJsI8ctth10lypI1AeFr43t6ALyrQoBM39abDfq3FksR-oc_WMw
ca.crt:    1066 bytes
namespace: 7 bytes

Copy the token (the part under token:) and paste it using vim.

vim /opt/course/5/token

File /opt/course/5/token should contain the token:

# /opt/course/5/token
eyJhbGciOiJSUzI1NiIsImtpZCI6Im5aZFdqZDJ2aGNvQ3BqWHZOR1g1b3pIcm5JZ0hHNWxTZkwzQnFaaTFad2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJuZXB0dW5lIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im5lcHR1bmUtc2EtdjItdG9rZW4tZnE5MmoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibmVwdHVuZS1zYS12MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY2YmRjNjM2LTJlYzMtNDJhZC04OGE1LWFhYzFlZjZlOTZlNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpuZXB0dW5lOm5lcHR1bmUtc2EtdjIifQ.VYgboM4CTd0pdCJ78wjUwmtalh-2vsKjANyPsh-6guEwOtWEq5Fbw5ZHPtvAdrLlPzpOHEbAe4eUM95BRGWbYIdwjuN95J0D4RNFkVUt48twoakRV7h-aPuwsQXHhZZzy4yimFHG9Ufmsk5Yr4RVcG6n137y-FH08K8zZjIPAsKDqNBQtxg-lZvwVMi6viIhrrzAQs0MBOV82OJYGy2o-WQVc0UUanCf94Y3gT0YTiqQvczYMs6nz9ut-XgwitrBY6Tj9BgPprA9k_j5qEx_LUUZUpPAiEN7OzdkJsI8ctth10lypI1AeFr43t6ALyrQoBM39abDfq3FksR-oc_WMw
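The whole step could also be done in one go, assuming the Secret and key names found above; a sketch:

kubectl -n neptune get secret neptune-secret-1 -o jsonpath='{.data.token}' | base64 -d > /opt/course/5/token
# extracts the encoded token and writes the decoded value directly into the file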
Create a single Pod named pod6 in Namespace default of image busybox:1.31.0. The Pod should have a readiness-probe executing cat /tmp/ready. It should initially wait 5 seconds and then check periodically every 10 seconds. This will set the container ready only if the file /tmp/ready exists.
The Pod should run the command touch /tmp/ready && sleep 1d, which creates the necessary file to be ready and then idles. Create the Pod and confirm it starts.
k run pod6 --image=busybox:1.31.0 --dry-run=client -oyaml --command -- sh -c "touch /tmp/ready && sleep 1d" > 6.yaml

vim 6.yaml

Search for a readiness-probe example on https://kubernetes.io/docs, then copy and alter the relevant section for the task:

# 6.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  containers:
  - command:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.0
    name: pod6
    resources: {}
    readinessProbe:              # add
      exec:                      # add
        command:                 # add
        - sh                     # add
        - -c                     # add
        - cat /tmp/ready         # add
      initialDelaySeconds: 5     # add
      periodSeconds: 10          # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Then:

k -f 6.yaml create

Running k get pod pod6 we should see the Pod being created and eventually becoming ready:

➜ k get pod pod6
NAME   READY   STATUS              RESTARTS   AGE
pod6   0/1     ContainerCreating   0          2s

➜ k get pod pod6
NAME   READY   STATUS    RESTARTS   AGE
pod6   0/1     Running   0          7s

➜ k get pod pod6
NAME   READY   STATUS    RESTARTS   AGE
pod6   1/1     Running   0          15s

We see that the Pod is finally ready.
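To inspect the configured probe itself, something like the following could be used (just a verification sketch):

k describe pod pod6 | grep -i readiness
# should show the exec probe with delay=5s and period=10s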
The board of Team Neptune decided to take over control of one e-commerce webserver from Team Saturn. The administrator who once set up this webserver is no longer part of the organisation. All the information you could get was that the e-commerce system is called my-happy-shop.
Search for the correct Pod in Namespace saturn and move it to Namespace neptune. It doesn't matter if you shut it down and spin it up again, it probably doesn't have any customers anyway.
Let's see all those Pods:
➜ k -n saturn get pod
NAME                READY   STATUS    RESTARTS   AGE
webserver-sat-001   1/1     Running   0          111m
webserver-sat-002   1/1     Running   0          111m
webserver-sat-003   1/1     Running   0          111m
webserver-sat-004   1/1     Running   0          111m
webserver-sat-005   1/1     Running   0          111m
webserver-sat-006   1/1     Running   0          111m

The Pod names don't reveal any information. We assume the Pod we are searching for has a label or annotation with the name my-happy-shop, so we search for it:

k -n saturn describe pod # describe all pods, then manually look for it

# or do some filtering like this
k -n saturn get pod -o yaml | grep my-happy-shop -A10

We see the webserver we're looking for is webserver-sat-003.

k -n saturn get pod webserver-sat-003 -o yaml > 7_webserver-sat-003.yaml # export
vim 7_webserver-sat-003.yaml

Change the Namespace to neptune, also remove the status: section, the token volume, the token volumeMount and the nodeName, else the new Pod won't start. The final file could look as clean as this:

# 7_webserver-sat-003.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    description: this is the server for the E-Commerce System my-happy-shop
  labels:
    id: webserver-sat-003
  name: webserver-sat-003
  namespace: neptune            # new namespace here
spec:
  containers:
  - image: nginx:1.16.1-alpine
    imagePullPolicy: IfNotPresent
    name: webserver-sat
  restartPolicy: Always

Then we execute:

k -n neptune create -f 7_webserver-sat-003.yaml

➜ k -n neptune get pod | grep webserver
webserver-sat-003   1/1     Running   0          22s

It seems the server is running in Namespace neptune, so we can delete the old one:

k -n saturn delete pod webserver-sat-003 --force --grace-period=0

Let's confirm only one is running:

➜ k get pod -A | grep webserver-sat-003
neptune   webserver-sat-003   1/1   Running   0   6s

This should list only one Pod called webserver-sat-003 in Namespace neptune, status Running.
There is an existing Deployment named api-new-c32 in Namespace neptune. A developer made an update to the Deployment but the updated version never came online. Check the Deployment history and find a revision that works, then roll back to it. Could you tell Team Neptune what the error was so it doesn't happen again?
k -n neptune get deploy # overview
k -n neptune rollout -h
k -n neptune rollout history -h

➜ k -n neptune rollout history deploy api-new-c32
deployment.extensions/api-new-c32
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl edit deployment api-new-c32 --namespace=neptune
3         kubectl edit deployment api-new-c32 --namespace=neptune
4         kubectl edit deployment api-new-c32 --namespace=neptune
5         kubectl edit deployment api-new-c32 --namespace=neptune

We see 5 revisions, let's check Pod and Deployment status:

➜ k -n neptune get deploy,pod | grep api-new-c32
deployment.extensions/api-new-c32    3/3   1   3   141m
pod/api-new-c32-65d998785d-jtmqq     1/1   Running            0   141m
pod/api-new-c32-686d6f6b65-mj2fp     1/1   Running            0   141m
pod/api-new-c32-6dd45bdb68-2p462     1/1   Running            0   141m
pod/api-new-c32-7d64747c87-zh648     0/1   ImagePullBackOff   0   141m

Let's check the failing Pod for errors:

➜ k -n neptune describe pod api-new-c32-7d64747c87-zh648 | grep -i error
  ...  Error: ImagePullBackOff

➜ k -n neptune describe pod api-new-c32-7d64747c87-zh648 | grep -i image
    Image:          ngnix:1.16.3
    Image ID:
      Reason:       ImagePullBackOff
  Warning  Failed   4m28s (x616 over 144m)  kubelet, gke-s3ef67020-28c5-45f7--default-pool-248abd4f-s010  Error: ImagePullBackOff

Someone seems to have added a new image with a spelling mistake in the name, ngnix:1.16.3. That's the reason we can tell Team Neptune!

Now let's revert to the previous version:

k -n neptune rollout undo deploy api-new-c32

Does this one work?

➜ k -n neptune get deploy api-new-c32
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
api-new-c32   3/3     3            3           146m

Yes! All up-to-date and available.

A fast way to get an overview of the ReplicaSets of a Deployment and their images is:

k -n neptune get rs -o wide | grep api-new-c32
In Namespace pluto there is a single Pod named holy-api. It has been working okay for a while now, but Team Pluto needs it to be more reliable.
Convert the Pod into a Deployment named holy-api with 3 replicas and delete the single Pod once done. The raw Pod template file is available at /opt/course/9/holy-api-pod.yaml.
In addition, the new Deployment should set allowPrivilegeEscalation: false and privileged: false for the security context on container level.
Please create the Deployment and save its yaml under /opt/course/9/holy-api-deployment.yaml.
There are multiple ways to do this, one is to copy a Deployment example from https://kubernetes.io/docs and then merge it with the existing Pod yaml. That's what we will do now:
cp /opt/course/9/holy-api-pod.yaml /opt/course/9/holy-api-deployment.yaml # make a copy!

vim /opt/course/9/holy-api-deployment.yaml

Now copy/use a Deployment example yaml and put the Pod's metadata: and spec: into the Deployment's template: section:

# /opt/course/9/holy-api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: holy-api       # name stays the same
  namespace: pluto     # important
spec:
  replicas: 3          # 3 replicas
  selector:
    matchLabels:
      id: holy-api     # set the correct selector
  template:
    # => from here down it's the same as the Pod's metadata: and spec: sections
    metadata:
      labels:
        id: holy-api
      name: holy-api
    spec:
      containers:
      - env:
        - name: CACHE_KEY_1
          value: b&MTCi0=T66RXm!jO@
        - name: CACHE_KEY_2
          value: PCAILGej5Ld@Q%Q1=#
        - name: CACHE_KEY_3
          value: 2qz-2OJlWDSTn_;RFQ
        image: nginx:1.17.3-alpine
        name: holy-api-container
        securityContext:                    # add
          allowPrivilegeEscalation: false   # add
          privileged: false                 # add
        volumeMounts:
        - mountPath: /cache1
          name: cache-volume1
        - mountPath: /cache2
          name: cache-volume2
        - mountPath: /cache3
          name: cache-volume3
      volumes:
      - emptyDir: {}
        name: cache-volume1
      - emptyDir: {}
        name: cache-volume2
      - emptyDir: {}
        name: cache-volume3

To indent multiple lines using vim you should set the shiftwidth using :set shiftwidth=2. Then mark multiple lines using Shift+v and the up/down keys.
To then indent the marked lines press > or < and to repeat the action press .
Next create the new Deployment:
k -f /opt/course/9/holy-api-deployment.yaml create

and confirm it's running:

➜ k -n pluto get pod | grep holy
NAME                        READY   STATUS    RESTARTS   AGE
holy-api                    1/1     Running   0          19m
holy-api-5dbfdb4569-8qr5x   1/1     Running   0          30s
holy-api-5dbfdb4569-b5clh   1/1     Running   0          30s
holy-api-5dbfdb4569-rj2gz   1/1     Running   0          30s

Finally delete the single Pod:

k -n pluto delete pod holy-api --force --grace-period=0

➜ k -n pluto get pod,deployment | grep holy
pod/holy-api-5dbfdb4569-8qr5x   1/1   Running   0   2m4s
pod/holy-api-5dbfdb4569-b5clh   1/1   Running   0   2m4s
pod/holy-api-5dbfdb4569-rj2gz   1/1   Running   0   2m4s
deployment.extensions/holy-api   3/3   3   3   2m4s
Team Pluto needs a new cluster internal Service. Create a ClusterIP Service named project-plt-6cc-svc in Namespace pluto. This Service should expose a single Pod named project-plt-6cc-api of image nginx:1.17.3-alpine, create that Pod as well. The Pod should be identified by label project: plt-6cc-api. The Service should use tcp port redirection of 3333:80.
Finally use for example curl from a temporary nginx:alpine Pod to get the response from the Service. Write the response into /opt/course/10/service_test.html. Also check if the logs of Pod project-plt-6cc-api show the request and write those into /opt/course/10/service_test.log.
k -n pluto run project-plt-6cc-api --image=nginx:1.17.3-alpine --labels project=plt-6cc-api

This will create the requested Pod. In yaml it would look like this:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    project: plt-6cc-api
  name: project-plt-6cc-api
spec:
  containers:
  - image: nginx:1.17.3-alpine
    name: project-plt-6cc-api
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Next we create the Service:

k -n pluto expose pod -h # help

k -n pluto expose pod project-plt-6cc-api --name project-plt-6cc-svc --port 3333 --target-port 80

Expose will create a Service where everything is already set for our case, no need to change anything:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    project: plt-6cc-api
  name: project-plt-6cc-svc     # good
  namespace: pluto              # great
spec:
  ports:
  - port: 3333                  # awesome
    protocol: TCP
    targetPort: 80              # nice
  selector:
    project: plt-6cc-api        # beautiful
status:
  loadBalancer: {}

We could also use create service, but then we would need to change the yaml afterwards:

k -n pluto create service -h # help
k -n pluto create service clusterip -h # help
k -n pluto create service clusterip project-plt-6cc-svc --tcp 3333:80 --dry-run=client -oyaml
# now we would need to set the correct selector labels

Check the Service is running:

➜ k -n pluto get pod,svc | grep 6cc
pod/project-plt-6cc-api      1/1         Running         0        9m42s
service/project-plt-6cc-svc  ClusterIP   10.31.241.234   <none>   3333/TCP   2m24s

Does the Service have an Endpoint?

➜ k -n pluto describe svc project-plt-6cc-svc
Name:              project-plt-6cc-svc
Namespace:         pluto
Labels:            project=plt-6cc-api
Annotations:       <none>
Selector:          project=plt-6cc-api
Type:              ClusterIP
IP:                10.3.244.240
Port:              <unset>  3333/TCP
TargetPort:        80/TCP
Endpoints:         10.28.2.32:80
Session Affinity:  None
Events:            <none>

Or even shorter:

➜ k -n pluto get ep
NAME                  ENDPOINTS       AGE
project-plt-6cc-svc   10.28.2.32:80   84m

Yes, the Endpoint is there! Finally we check the connection using a temporary Pod:

➜ k run tmp --restart=Never --rm --image=nginx:alpine -i -- curl http://project-plt-6cc-svc.pluto:3333
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0  32210      0 --:--:-- --:--:-- --:--:-- 32210
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
...

Great! Notice that we use Kubernetes Namespace DNS resolution (project-plt-6cc-svc.pluto) here. We could use only the Service name if we spun up the temporary Pod in Namespace pluto as well.

Finally, copy or pipe the html content into /opt/course/10/service_test.html.

# /opt/course/10/service_test.html
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
...
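Instead of copy/paste, the response could also be piped into the file directly from a temporary Pod, for example (a sketch; curl's -s flag suppresses the progress output, and if kubectl's "pod ... deleted" message ends up in the file it would need to be removed manually):

k run tmp --restart=Never --rm --image=nginx:alpine -i -- curl -s http://project-plt-6cc-svc.pluto:3333 > /opt/course/10/service_test.html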
Also the requested logs:
k -n pluto logs project-plt-6cc-api > /opt/course/10/service_test.log

# /opt/course/10/service_test.log
10.44.0.0 - - [22/Jan/2021:23:19:55 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.69.1" "-"
During the last monthly meeting you mentioned your strong expertise in container technology. Now the Build&Release team of department Sun needs your in-depth knowledge. There are files to build a container image located at /opt/course/11/image. The container will run a Golang application which outputs information to stdout. You're asked to perform the following tasks:
NOTE: Make sure to run all commands as user k8s, for docker use sudo docker
Change the Dockerfile. The value of the environment variable SUN_CIPHER_ID should be set to the hardcoded value 5b9c1065-e39d-4a43-a04a-e59bcea3e03f
Build the image using Docker, named registry.killer.sh:5000/sun-cipher, tagged as latest and v1-docker, push these to the registry
Build the image using Podman, named registry.killer.sh:5000/sun-cipher, tagged as v1-podman, push it to the registry
Run a container using Podman, which keeps running in the background, named sun-cipher using image registry.killer.sh:5000/sun-cipher:v1-podman. Run the container from k8s@terminal and not root@terminal
Write the logs your container sun-cipher produced into /opt/course/11/logs. Then write a list of all running Podman containers into /opt/course/11/containers
Dockerfile: list of commands from which an Image can be built
Image: binary file which includes all data/requirements to be run as a Container
Container: running instance of an Image
Registry: place where we can push/pull Images to/from
1.
First we need to change the Dockerfile to:
# build container stage 1
FROM docker.io/library/golang:1.15.15-alpine3.14
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o bin/app .

# app container stage 2
FROM docker.io/library/alpine:3.12.4
COPY --from=0 /src/bin/app app
# CHANGE NEXT LINE
ENV SUN_CIPHER_ID=5b9c1065-e39d-4a43-a04a-e59bcea3e03f
CMD ["./app"]
2.
Then we build the image using Docker:
➜ cd /opt/course/11/image

➜ sudo docker build -t registry.killer.sh:5000/sun-cipher:latest -t registry.killer.sh:5000/sun-cipher:v1-docker .
...
Successfully built 409fde3c5bf9
Successfully tagged registry.killer.sh:5000/sun-cipher:latest
Successfully tagged registry.killer.sh:5000/sun-cipher:v1-docker

➜ sudo docker image ls
REPOSITORY                           TAG         IMAGE ID       CREATED          SIZE
registry.killer.sh:5000/sun-cipher   latest      409fde3c5bf9   24 seconds ago   7.76MB
registry.killer.sh:5000/sun-cipher   v1-docker   409fde3c5bf9   24 seconds ago   7.76MB
...

➜ sudo docker push registry.killer.sh:5000/sun-cipher:latest
The push refers to repository [registry.killer.sh:5000/sun-cipher]
c947fb5eba52: Pushed
33e8713114f8: Pushed
latest: digest: sha256:d216b4136a5b232b738698e826e7d12fccba9921d163b63777be23572250f23d size: 739

➜ sudo docker push registry.killer.sh:5000/sun-cipher:v1-docker
The push refers to repository [registry.killer.sh:5000/sun-cipher]
c947fb5eba52: Layer already exists
33e8713114f8: Layer already exists
v1-docker: digest: sha256:d216b4136a5b232b738698e826e7d12fccba9921d163b63777be23572250f23d size: 739

There we go, built and pushed.
3.
Next we build the image using Podman. Here it's only required to create one tag. The usage of Podman is very similar (for most cases even identical) to Docker:
xxxxxxxxxx➜ cd /opt/course/11/image
➜ podman build -t registry.killer.sh:5000/sun-cipher:v1-podman ....--> 38adc53bd92Successfully tagged registry.killer.sh:5000/sun-cipher:v1-podman38adc53bd92881d91981c4b537f4f1b64f8de1de1b32eacc8479883170cee537
➜ podman image lsREPOSITORY TAG IMAGE ID CREATED SIZEregistry.killer.sh:5000/sun-cipher v1-podman 38adc53bd928 2 minutes ago 8.03 MB...
➜ podman push registry.killer.sh:5000/sun-cipher:v1-podmanGetting image source signaturesCopying blob 4d0d60db9eb6 done Copying blob 33e8713114f8 done Copying config bfa1a225f8 done Writing manifest to image destinationStoring signaturesBuilt and pushed using Podman.
4.
We'll create a container from the previously built image using Podman, which keeps running in the background:

➜ podman run -d --name sun-cipher registry.killer.sh:5000/sun-cipher:v1-podman
f8199cba792f9fd2d1bd4decc9b7a9c0acfb975d95eda35f5f583c9efbf95589
5.
Finally we need to collect some information into files:
➜ podman ps
CONTAINER ID  IMAGE                                         COMMAND  ...
f8199cba792f  registry.killer.sh:5000/sun-cipher:v1-podman  ./app    ...

➜ podman ps > /opt/course/11/containers

➜ podman logs sun-cipher
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8081
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 7887
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 1847
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 4059
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 2081
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 1318
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 4425
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 2540
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 456
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 3300
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 694
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8511
2077/03/13 06:50:44 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8162
2077/03/13 06:50:54 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 5089

➜ podman logs sun-cipher > /opt/course/11/logs

This is looking not too bad at all. Our container skills are back in town!
Create a new PersistentVolume named earth-project-earthflower-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace earth named earth-project-earthflower-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.
Finally create a new Deployment project-earthflower in Namespace earth which mounts that volume at /tmp/project-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
vim 12_pv.yaml

Find an example from https://kubernetes.io/docs and alter it:

# 12_pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: earth-project-earthflower-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"

Then create it:

k -f 12_pv.yaml create

Next the PersistentVolumeClaim:

vim 12_pvc.yaml

Find an example from https://kubernetes.io/docs and alter it:

# 12_pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: earth-project-earthflower-pvc
  namespace: earth
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Then create:

k -f 12_pvc.yaml create

And check that both have the status Bound:

➜ k -n earth get pv,pvc
NAME                                   CAPACITY   ACCESS MODES   ...   STATUS   CLAIM
persistentvolume/...earthflower-pv     2Gi        RWO            ...   Bound    ...er-pvc

NAME                                       STATUS   VOLUME                         CAPACITY
persistentvolumeclaim/...earthflower-pvc   Bound    earth-project-earthflower-pv   2Gi

Next we create a Deployment and mount that volume:

k -n earth create deploy project-earthflower --image=httpd:2.4.41-alpine --dry-run=client -oyaml > 12_dep.yaml

vim 12_dep.yaml

Alter the yaml to mount the volume:

# 12_dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: project-earthflower
  name: project-earthflower
  namespace: earth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: project-earthflower
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: project-earthflower
    spec:
      volumes:                                        # add
      - name: data                                    # add
        persistentVolumeClaim:                        # add
          claimName: earth-project-earthflower-pvc    # add
      containers:
      - image: httpd:2.4.41-alpine
        name: container
        volumeMounts:                                 # add
        - name: data                                  # add
          mountPath: /tmp/project-data                # add

k -f 12_dep.yaml create

We can confirm it's mounting correctly:

➜ k -n earth describe pod project-earthflower-d6887f7c5-pn5wv | grep -A2 Mounts:
    Mounts:
      /tmp/project-data from data (rw) # there it is
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n2sjj (ro)
Team Moonpie, which has the Namespace moon, needs more storage. Create a new PersistentVolumeClaim named moon-pvc-126 in that namespace. This claim should use a new StorageClass moon-retain with the provisioner set to moon-retainer and the reclaimPolicy set to Retain. The claim should request storage of 3Gi, an accessMode of ReadWriteOnce and should use the new StorageClass.
The provisioner moon-retainer will be created by another team, so it's expected that the PVC will not bind yet. Confirm this by writing the log message from the PVC into file /opt/course/13/pvc-126-reason.
vim 13_sc.yaml

Head to https://kubernetes.io/docs, search for "storageclass" and alter the example code to this:

# 13_sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: moon-retain
provisioner: moon-retainer
reclaimPolicy: Retain

k create -f 13_sc.yaml

Now the same for the PersistentVolumeClaim, head to the docs, copy an example and transform it into:

vim 13_pvc.yaml

# 13_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: moon-pvc-126               # name as requested
  namespace: moon                  # important
spec:
  accessModes:
  - ReadWriteOnce                  # RWO
  resources:
    requests:
      storage: 3Gi                 # size
  storageClassName: moon-retain    # uses our new storage class

k -f 13_pvc.yaml create

Next we check the status of the PVC:

➜ k -n moon get pvc
NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
moon-pvc-126   Pending                                      moon-retain    2m57s

➜ k -n moon describe pvc moon-pvc-126
Name:          moon-pvc-126
...
Status:        Pending
...
Events:
...
waiting for a volume to be created, either by external provisioner "moon-retainer" or manually created by system administrator

This confirms that the PVC waits for the provisioner moon-retainer to be created. Finally we copy or write the event message into the requested location:

# /opt/course/13/pvc-126-reason
waiting for a volume to be created, either by external provisioner "moon-retainer" or manually created by system administrator
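The same message can also be found via the Namespace events, for example (a sketch):

k -n moon get events | grep moon-pvc-126
# should surface the same waiting-for-a-volume event for the PVC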
You need to make changes on an existing Pod in Namespace moon called secret-handler. Create a new Secret secret1 which contains user=test and pass=pwd. The Secret's content should be available in Pod secret-handler as environment variables SECRET1_USER and SECRET1_PASS. The yaml for Pod secret-handler is available at /opt/course/14/secret-handler.yaml.
There is existing yaml for another Secret at /opt/course/14/secret2.yaml, create this Secret and mount it inside the same Pod at /tmp/secret2. Your changes should be saved under /opt/course/14/secret-handler-new.yaml. Both Secrets should only be available in Namespace moon.
k -n moon get pod # show pods
k -n moon create secret -h # help
k -n moon create secret generic -h # help
k -n moon create secret generic secret1 --from-literal user=test --from-literal pass=pwd

The last command would generate this yaml:

apiVersion: v1
data:
  pass: cHdk
  user: dGVzdA==
kind: Secret
metadata:
  creationTimestamp: null
  name: secret1
  namespace: moon

Next we create the second Secret from the given location, making sure it'll be created in Namespace moon:

k -n moon -f /opt/course/14/secret2.yaml create

➜ k -n moon get secret
NAME                  TYPE                                  DATA   AGE
default-token-rvzcf   kubernetes.io/service-account-token   3      66m
secret1               Opaque                                2      4m3s
secret2               Opaque                                1      8s

We will now edit the Pod yaml:

cp /opt/course/14/secret-handler.yaml /opt/course/14/secret-handler-new.yaml
vim /opt/course/14/secret-handler-new.yaml

Add the following to the yaml:

# /opt/course/14/secret-handler-new.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    id: secret-handler
    uuid: 1428721e-8d1c-4c09-b5d6-afd79200c56a
    red_ident: 9cf7a7c0-fdb2-4c35-9c13-c2a0bb52b4a9
    type: automatic
  name: secret-handler
  namespace: moon
spec:
  volumes:
  - name: cache-volume1
    emptyDir: {}
  - name: cache-volume2
    emptyDir: {}
  - name: cache-volume3
    emptyDir: {}
  - name: secret2-volume        # add
    secret:                     # add
      secretName: secret2       # add
  containers:
  - name: secret-handler
    image: bash:5.0.11
    args: ['bash', '-c', 'sleep 2d']
    volumeMounts:
    - mountPath: /cache1
      name: cache-volume1
    - mountPath: /cache2
      name: cache-volume2
    - mountPath: /cache3
      name: cache-volume3
    - name: secret2-volume      # add
      mountPath: /tmp/secret2   # add
    env:
    - name: SECRET_KEY_1
      value: ">8$kH#kj..i8}HImQd{"
    - name: SECRET_KEY_2
      value: "IO=a4L/XkRdvN8jM=Y+"
    - name: SECRET_KEY_3
      value: "-7PA0_Z]>{pwa43r)__"
    - name: SECRET1_USER        # add
      valueFrom:                # add
        secretKeyRef:           # add
          name: secret1         # add
          key: user             # add
    - name: SECRET1_PASS        # add
      valueFrom:                # add
        secretKeyRef:           # add
          name: secret1         # add
          key: pass             # add

There is also the possibility to import all keys from a Secret as env variables at once, though the env variable names will then be the same as in the Secret, which doesn't work for the requirements here:

  containers:
  - name: secret-handler
    ...
    envFrom:
    - secretRef:                # also works for configMapRef
        name: secret1

Then we apply the changes:

k -f /opt/course/14/secret-handler.yaml delete --force --grace-period=0
k -f /opt/course/14/secret-handler-new.yaml create

Instead of running delete and create we can also use replace:

k -f /opt/course/14/secret-handler-new.yaml replace --force --grace-period=0

It was not requested directly, but you should always confirm it's working:

➜ k -n moon exec secret-handler -- env | grep SECRET1
SECRET1_USER=test
SECRET1_PASS=pwd

➜ k -n moon exec secret-handler -- find /tmp/secret2
/tmp/secret2
/tmp/secret2/..data
/tmp/secret2/key
/tmp/secret2/..2019_09_11_09_03_08.147048594
/tmp/secret2/..2019_09_11_09_03_08.147048594/key

➜ k -n moon exec secret-handler -- cat /tmp/secret2/key
12345678
Team Moonpie has a nginx server Deployment called web-moon in Namespace moon. Someone started configuring it but it was never completed. To complete please create a ConfigMap called configmap-web-moon-html containing the content of file /opt/course/15/web-moon.html under the data key-name index.html.
The Deployment web-moon is already configured to work with this ConfigMap and serve its content. Test the nginx configuration for example using curl from a temporary nginx:alpine Pod.
Let's check the existing Pods:
➜ k -n moon get pod
NAME                        READY   STATUS              RESTARTS   AGE
secret-handler              1/1     Running             0          55m
web-moon-847496c686-2rzj4   0/1     ContainerCreating   0          33s
web-moon-847496c686-9nwwj   0/1     ContainerCreating   0          33s
web-moon-847496c686-cxdbx   0/1     ContainerCreating   0          33s
web-moon-847496c686-hvqlw   0/1     ContainerCreating   0          33s
web-moon-847496c686-tj7ct   0/1     ContainerCreating   0          33s

➜ k -n moon describe pod web-moon-847496c686-2rzj4
...
Warning  FailedMount  31s (x7 over 63s)  kubelet, gke-test-default-pool-ce83a51a-p6s4  MountVolume.SetUp failed for volume "html-volume" : configmaps "configmap-web-moon-html" not found

Good so far, now let's create the missing ConfigMap:

k -n moon create configmap -h # help

k -n moon create configmap configmap-web-moon-html --from-file=index.html=/opt/course/15/web-moon.html # important to set the index.html key

This should create a ConfigMap with yaml like:

apiVersion: v1
data:
  index.html: | # notice the key index.html, this will be the filename when mounted
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Web Moon Webpage</title>
    </head>
    <body>
    This is some great content.
    </body>
    </html>
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: configmap-web-moon-html
  namespace: moon

After waiting a bit or deleting/recreating the Pods (k -n moon rollout restart deploy web-moon) we should see:

➜ k -n moon get pod
NAME                        READY   STATUS    RESTARTS   AGE
secret-handler              1/1     Running   0          59m
web-moon-847496c686-2rzj4   1/1     Running   0          4m28s
web-moon-847496c686-9nwwj   1/1     Running   0          4m28s
web-moon-847496c686-cxdbx   1/1     Running   0          4m28s
web-moon-847496c686-hvqlw   1/1     Running   0          4m28s
web-moon-847496c686-tj7ct   1/1     Running   0          4m28s

Looking much better. Finally we check if nginx returns the correct content:

k -n moon get pod -o wide # get pod cluster IPs

Then use one IP to test the configuration:

➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.44.0.78
 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100   161  100   161    0     0  80500      0 --:--:-- --:--:-- --:--:--  157k
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Web Moon Webpage</title>
</head>
<body>
This is some great content.
</body>

For debugging or further checks we could find out more about the Pod's volume mounts:

➜ k -n moon describe pod web-moon-c77655cc-dc8v4 | grep -A2 Mounts:
    Mounts:
      /usr/share/nginx/html from html-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rvzcf (ro)

And check the mounted folder content:

➜ k -n moon exec web-moon-c77655cc-dc8v4 find /usr/share/nginx/html
/usr/share/nginx/html
/usr/share/nginx/html/..2019_09_11_10_05_56.336284411
/usr/share/nginx/html/..2019_09_11_10_05_56.336284411/index.html
/usr/share/nginx/html/..data
/usr/share/nginx/html/index.html

Here it was important that the mounted file gets the name index.html and not the original web-moon.html, which is controlled through the ConfigMap data key.
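To double check which data key the ConfigMap actually contains, a quick look like this helps (verification sketch):

k -n moon describe configmap configmap-web-moon-html
# the Data section should list the key index.html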
The Tech Lead of Mercury2D decided it's time for more logging, to finally fight all these missing data incidents. There is an existing container named cleaner-con in Deployment cleaner in Namespace mercury. This container mounts a volume and writes logs into a file called cleaner.log.
The yaml for the existing Deployment is available at /opt/course/16/cleaner.yaml. Persist your changes at /opt/course/16/cleaner-new.yaml but also make sure the Deployment is running.
Create a sidecar container named logger-con, image busybox:1.31.0 , which mounts the same volume and writes the content of cleaner.log to stdout, you can use the tail -f command for this. This way it can be picked up by kubectl logs.
Check if the logs of the new container reveal something about the missing data incidents.
cp /opt/course/16/cleaner.yaml /opt/course/16/cleaner-new.yaml
vim /opt/course/16/cleaner-new.yaml

Add a sidecar container which outputs the log file to stdout:

# /opt/course/16/cleaner-new.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: cleaner
  namespace: mercury
spec:
  replicas: 2
  selector:
    matchLabels:
      id: cleaner
  template:
    metadata:
      labels:
        id: cleaner
    spec:
      volumes:
      - name: logs
        emptyDir: {}
      initContainers:
      - name: init
        image: bash:5.0.11
        command: ['bash', '-c', 'echo init > /var/log/cleaner/cleaner.log']
        volumeMounts:
        - name: logs
          mountPath: /var/log/cleaner
      containers:
      - name: cleaner-con
        image: bash:5.0.11
        args: ['bash', '-c', 'while true; do echo `date`: "remove random file" >> /var/log/cleaner/cleaner.log; sleep 1; done']
        volumeMounts:
        - name: logs
          mountPath: /var/log/cleaner
      - name: logger-con                                                 # add
        image: busybox:1.31.0                                            # add
        command: ["sh", "-c", "tail -f /var/log/cleaner/cleaner.log"]    # add
        volumeMounts:                                                    # add
        - name: logs                                                     # add
          mountPath: /var/log/cleaner                                    # add

Then apply the changes and check the logs of the sidecar:

k -f /opt/course/16/cleaner-new.yaml apply

This will cause a deployment rollout of which we can get more details:

k -n mercury rollout history deploy cleaner
k -n mercury rollout history deploy cleaner --revision 1
k -n mercury rollout history deploy cleaner --revision 2

Check Pod statuses:

➜ k -n mercury get pod
NAME                       READY   STATUS     RESTARTS   AGE
cleaner-86b7758668-9pw6t   2/2     Running    0          6s
cleaner-86b7758668-qgh4v   0/2     Init:0/1   0          1s

➜ k -n mercury get pod
NAME                       READY   STATUS    RESTARTS   AGE
cleaner-86b7758668-9pw6t   2/2     Running   0          14s
cleaner-86b7758668-qgh4v   2/2     Running   0          9s

Finally check the logs of the logging sidecar container:

➜ k -n mercury logs cleaner-576967576c-cqtgx -c logger-con
init
Wed Sep 11 10:45:44 UTC 2099: remove random file
Wed Sep 11 10:45:45 UTC 2099: remove random file
...

Mystery solved, something is removing files at random ;) It's important to understand how containers can communicate with each other using volumes.
Last lunch you told your coworker from department Mars Inc how amazing InitContainers are. Now he would like to see one in action. There is a Deployment yaml at /opt/course/17/test-init-container.yaml. This Deployment spins up a single Pod of image nginx:1.17.3-alpine and serves files from a mounted volume, which is empty right now.
Create an InitContainer named init-con which also mounts that volume and creates a file index.html with content check this out! in the root of the mounted volume. For this test we ignore that it doesn't contain valid html.
The InitContainer should be using image busybox:1.31.0. Test your implementation for example using curl from a temporary nginx:alpine Pod.
cp /opt/course/17/test-init-container.yaml ~/17_test-init-container.yaml

vim 17_test-init-container.yaml

Add the InitContainer:

# 17_test-init-container.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-init-container
  namespace: mars
spec:
  replicas: 1
  selector:
    matchLabels:
      id: test-init-container
  template:
    metadata:
      labels:
        id: test-init-container
    spec:
      volumes:
      - name: web-content
        emptyDir: {}
      initContainers:                 # initContainer start
      - name: init-con
        image: busybox:1.31.0
        command: ['sh', '-c', 'echo "check this out!" > /tmp/web-content/index.html']
        volumeMounts:
        - name: web-content
          mountPath: /tmp/web-content # initContainer end
      containers:
      - image: nginx:1.17.3-alpine
        name: nginx
        volumeMounts:
        - name: web-content
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80

Then we create the Deployment:

k -f 17_test-init-container.yaml create

Finally we test the configuration:

k -n mars get pod -o wide # to get the cluster IP

➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.0.0.67
 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
check this out!

Beautiful.
There seems to be an issue in Namespace mars where the ClusterIP service manager-api-svc should make the Pods of Deployment manager-api-deployment available inside the cluster.
You can test this with curl manager-api-svc.mars:4444 from a temporary nginx:alpine Pod. Check for the misconfiguration and apply a fix.
First let's get an overview:
➜ k -n mars get all
NAME                                         READY   STATUS    RESTARTS   AGE
pod/manager-api-deployment-dbcc6657d-bg2hh   1/1     Running   0          98m
pod/manager-api-deployment-dbcc6657d-f5fv4   1/1     Running   0          98m
pod/manager-api-deployment-dbcc6657d-httjv   1/1     Running   0          98m
pod/manager-api-deployment-dbcc6657d-k98xn   1/1     Running   0          98m
pod/test-init-container-5db7c99857-htx6b     1/1     Running   0          2m19s

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/manager-api-svc   ClusterIP   10.15.241.159   <none>        4444/TCP   99m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/manager-api-deployment   4/4     4            4           98m
deployment.apps/test-init-container      1/1     1            1           2m19s
...

Everything seems to be running, but we can't get a connection:

➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
If you don't see a command prompt, try pressing enter.
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
curl: (28) Connection timed out after 1000 milliseconds
pod "tmp" deleted
pod mars/tmp terminated (Error)

Ok, let's try to connect to one Pod directly:

k -n mars get pod -o wide # get cluster IP

➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 10.0.1.14
 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

The Pods themselves seem to work. Let's investigate the Service a bit:

➜ k -n mars describe service manager-api-svc
Name:              manager-api-svc
Namespace:         mars
Labels:            app=manager-api-svc
...
Endpoints:         <none>
...

Endpoint inspection is also possible using:

k -n mars get ep

No endpoints - no good. We check the Service yaml:

k -n mars edit service manager-api-svc

# k -n mars edit service manager-api-svc
apiVersion: v1
kind: Service
metadata:
  ...
  labels:
    app: manager-api-svc
  name: manager-api-svc
  namespace: mars
  ...
spec:
  clusterIP: 10.3.244.121
  ports:
  - name: 4444-80
    port: 4444
    protocol: TCP
    targetPort: 80
  selector:
    #id: manager-api-deployment    # wrong selector, needs to point to Pod!
    id: manager-api-pod
  sessionAffinity: None
  type: ClusterIP

Though Pods are usually never created without a Deployment or ReplicaSet, Services always select Pods directly. This gives great flexibility because Pods could be created in various customised ways. After saving the new selector we check the Service again for endpoints:

➜ k -n mars get ep
NAME              ENDPOINTS                                            AGE
manager-api-svc   10.0.0.30:80,10.0.1.30:80,10.0.1.31:80 + 1 more...   41m

Endpoints - good! Now we try connecting again:

➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0    99k      0 --:--:-- --:--:-- --:--:--   99k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

And we fixed it. It's also good to know how to use Kubernetes DNS resolution from a different Namespace. Not necessary here, but we could spin up the temporary Pod in the default Namespace:

➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (6) Could not resolve host: manager-api-svc
pod "tmp" deleted
pod default/tmp terminated (Error)

➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc.mars:4444
 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0  68000      0 --:--:-- --:--:-- --:--:-- 68000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

Both the short name manager-api-svc.mars and the long name manager-api-svc.mars.svc.cluster.local work.
In Namespace jupiter you'll find an apache Deployment (with one replica) named jupiter-crew-deploy and a ClusterIP Service called jupiter-crew-svc which exposes it. Change this service to a NodePort one to make it available on all nodes on port 30100.
Test the NodePort Service using the internal IP of all available nodes and the port 30100 using curl, you can reach the internal node IPs directly from your main terminal. On which nodes is the Service reachable? On which node is the Pod running?
First we get an overview:
➜ k -n jupiter get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/jupiter-crew-deploy-8cdf99bc9-klwqt   1/1     Running   0          34m

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/jupiter-crew-svc   ClusterIP   10.100.254.66   <none>        8080/TCP   34m
...

(Optional) Next we check if the ClusterIP Service actually works:

➜ k -n jupiter run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 jupiter-crew-svc:8080
 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100    45  100    45    0     0   5000      0 --:--:-- --:--:-- --:--:--  5000
<html><body><h1>It works!</h1></body></html>

The Service is working great. Next we change the Service type to NodePort and set the port:

k -n jupiter edit service jupiter-crew-svc

# k -n jupiter edit service jupiter-crew-svc
apiVersion: v1
kind: Service
metadata:
  name: jupiter-crew-svc
  namespace: jupiter
  ...
spec:
  clusterIP: 10.3.245.70
  ports:
  - name: 8080-80
    port: 8080
    protocol: TCP
    targetPort: 80
    nodePort: 30100   # add the nodePort
  selector:
    id: jupiter-crew
  sessionAffinity: None
  #type: ClusterIP
  type: NodePort      # change type
status:
  loadBalancer: {}

We check if the Service type was updated:

➜ k -n jupiter get svc
NAME               TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
jupiter-crew-svc   NodePort   10.3.245.70   <none>        8080:30100/TCP   3m52s

(Optional) And we confirm that the Service is still reachable internally:

➜ k -n jupiter run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 jupiter-crew-svc:8080
 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
<html><body><h1>It works!</h1></body></html>

Nice. A NodePort Service kind of lies on top of a ClusterIP one, making the ClusterIP Service reachable on the Node IPs (internal and external). Next we get the internal IPs of all nodes to check the connectivity:

➜ k get nodes -o wide
NAME                     STATUS   ROLES           AGE   VERSION   INTERNAL-IP      ...
cluster1-controlplane1   Ready    control-plane   18h   v1.30.0   192.168.100.11   ...
cluster1-node1           Ready    <none>          18h   v1.30.0   192.168.100.12   ...

On which nodes is the Service reachable?

➜ curl 192.168.100.11:30100
<html><body><h1>It works!</h1></body></html>

➜ curl 192.168.100.12:30100
<html><body><h1>It works!</h1></body></html>

On both, even the controlplane. On which node is the Pod running?

➜ k -n jupiter get pod jupiter-crew-deploy-8cdf99bc9-klwqt -o yaml | grep nodeName
  nodeName: cluster1-node1

➜ k -n jupiter get pod -o wide # or even shorter

In our case on cluster1-node1, but it could be any other worker if more were available. Here we hopefully gained some insight into how a NodePort Service works. Although the Pod is only running on one specific node, the Service makes it available through port 30100 on the internal and external IP addresses of all nodes. This is at least the common/default behaviour, but it can depend on the cluster configuration.
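The assigned nodePort can also be confirmed directly in the Service spec, for example (a verification sketch):

k -n jupiter get svc jupiter-crew-svc -o jsonpath='{.spec.ports[0].nodePort}'
# should print 30100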
In Namespace venus you'll find two Deployments named api and frontend. Both Deployments are exposed inside the cluster using Services. Create a NetworkPolicy named np1 which restricts outgoing tcp connections from Deployment frontend and only allows those going to Deployment api. Make sure the NetworkPolicy still allows outgoing traffic on UDP/TCP ports 53 for DNS resolution.
Test using: wget www.google.com and wget api:2222 from a Pod of Deployment frontend.
INFO: For learning NetworkPolicies check out https://editor.cilium.io. But you're not allowed to use it during the exam.
First we get an overview:
➜ k -n venus get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/api-5979b95578-gktxp        1/1     Running   0          57s
pod/api-5979b95578-lhcl5        1/1     Running   0          57s
pod/frontend-789cbdc677-c9v8h   1/1     Running   0          57s
pod/frontend-789cbdc677-npk2m   1/1     Running   0          57s
pod/frontend-789cbdc677-pl67g   1/1     Running   0          57s
pod/frontend-789cbdc677-rjt5r   1/1     Running   0          57s
pod/frontend-789cbdc677-xgf5n   1/1     Running   0          57s

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/api        ClusterIP   10.3.255.137   <none>        2222/TCP   37s
service/frontend   ClusterIP   10.3.255.135   <none>        80/TCP     57s
...

(Optional) This is not necessary, but we could check if the Services are working inside the cluster:

➜ k -n venus run tmp --restart=Never --rm --image=busybox -i -- wget -O- frontend:80
Connecting to frontend:80 (10.3.245.9:80)
<!DOCTYPE html><html><head><title>Welcome to nginx!</title>
...

➜ k -n venus run tmp --restart=Never --rm --image=busybox -i -- wget -O- api:2222
Connecting to api:2222 (10.3.250.233:2222)
<html><body><h1>It works!</h1></body></html>

Then we use any frontend Pod and check if it can reach external names and the api Service:

➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- www.google.com
Connecting to www.google.com (216.58.205.227:80)
-                    100% |********************************| 12955   0:00:00 ETA
<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head>
...

➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- api:2222
<html><body><h1>It works!</h1></body></html>
Connecting to api:2222 (10.3.255.137:2222)
-                    100% |********************************|    45   0:00:00 ETA
...

We see that the frontend Pods can reach the api Service as well as external names.
vim 20_np1.yaml

Now we head to https://kubernetes.io/docs, search for NetworkPolicy, copy the example code and adjust it to:

# 20_np1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np1
  namespace: venus
spec:
  podSelector:
    matchLabels:
      id: frontend           # label of the pods this policy should be applied on
  policyTypes:
    - Egress                 # we only want to control egress
  egress:
    - to:                    # 1st egress rule
        - podSelector:       # allow egress only to pods with api label
            matchLabels:
              id: api
    - ports:                 # 2nd egress rule
        - port: 53           # allow DNS UDP
          protocol: UDP
        - port: 53           # allow DNS TCP
          protocol: TCP

Notice that we specify two egress rules in the yaml above. If we specify multiple egress rules then these are connected using a logical OR. So in the example above we do:

allow outgoing traffic if
  (destination pod has label id: api) OR ((port is 53 UDP) OR (port is 53 TCP))
Let's have a look at example code which wouldn't work in our case:
# this example does not work in our case
...
  egress:
    - to:                    # 1st AND ONLY egress rule
        - podSelector:       # allow egress only to pods with api label
            matchLabels:
              id: api
      ports:                 # STILL THE SAME RULE but just an additional selector
        - port: 53           # allow DNS UDP
          protocol: UDP
        - port: 53           # allow DNS TCP
          protocol: TCP

In the yaml above we only specify one egress rule with two selectors. It can be translated into:

allow outgoing traffic if
  (destination pod has label id: api) AND ((port is 53 UDP) OR (port is 53 TCP))
Apply the correct policy:
k -f 20_np1.yaml create

And try again; external access is no longer working:

➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- www.google.de
Connecting to www.google.de:2222 (216.58.207.67:80)
^C

➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- -T 5 www.google.de:80
Connecting to www.google.com (172.217.203.104:80)
wget: download timed out
command terminated with exit code 1

Internal connections to api work as before:

➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- api:2222
<html><body><h1>It works!</h1></body></html>
Connecting to api:2222 (10.3.255.137:2222)
-                    100% |********************************|    45   0:00:00 ETA
Team Neptune needs 3 Pods of image httpd:2.4-alpine, create a Deployment named neptune-10ab for this. The containers should be named neptune-pod-10ab.
Each container should have a memory request of 20Mi and a memory limit of 50Mi.
Team Neptune has its own ServiceAccount neptune-sa-v2 under which the Pods should run. The Deployment should be in Namespace neptune.
k -n neptune create deployment -h    # help
k -n neptune create deploy -h        # deploy is short for deployment
k -n neptune create deploy neptune-10ab --image=httpd:2.4-alpine --dry-run=client -oyaml > 21.yaml
vim 21.yaml

Now make the required changes using vim:
# 21.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: neptune-10ab
  name: neptune-10ab
  namespace: neptune
spec:
  replicas: 3                            # change
  selector:
    matchLabels:
      app: neptune-10ab
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: neptune-10ab
    spec:
      serviceAccountName: neptune-sa-v2  # add
      containers:
      - image: httpd:2.4-alpine
        name: neptune-pod-10ab           # change
        resources:                       # add
          limits:                        # add
            memory: 50Mi                 # add
          requests:                      # add
            memory: 20Mi                 # add
status: {}

Then create the yaml:
k create -f 21.yaml    # namespace already set in yaml

To verify all Pods are running we do:

➜ k -n neptune get pod | grep neptune-10ab
neptune-10ab-7d4b8d45b-4nzj5   1/1     Running   0          57s
neptune-10ab-7d4b8d45b-lzwrf   1/1     Running   0          17s
neptune-10ab-7d4b8d45b-z5hcc   1/1     Running   0          17s
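Optionally, we could also confirm that the ServiceAccount and the memory settings really ended up in the Deployment. The jsonpath queries below are just one possible way to do this and not part of the task:

k -n neptune get deploy neptune-10ab -o jsonpath='{.spec.template.spec.serviceAccountName}{"\n"}'
k -n neptune get deploy neptune-10ab -o jsonpath='{.spec.template.spec.containers[0].resources}{"\n"}'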
Team Sunny needs to identify some of their Pods in namespace sun. They ask you to add a new label protected: true to all Pods with an existing label type: worker or type: runner. Also add an annotation protected: do not delete this pod to all Pods having the new label protected: true.
➜ k -n sun get pod --show-labels
NAME           READY   STATUS    RESTARTS   AGE   LABELS
0509649a       1/1     Running   0          25s   type=runner,type_old=messenger
0509649b       1/1     Running   0          24s   type=worker
1428721e       1/1     Running   0          23s   type=worker
1428721f       1/1     Running   0          22s   type=worker
43b9a          1/1     Running   0          22s   type=test
4c09           1/1     Running   0          21s   type=worker
4c35           1/1     Running   0          20s   type=worker
4fe4           1/1     Running   0          19s   type=worker
5555a          1/1     Running   0          19s   type=messenger
86cda          1/1     Running   0          18s   type=runner
8d1c           1/1     Running   0          17s   type=messenger
a004a          1/1     Running   0          16s   type=runner
a94128196      1/1     Running   0          15s   type=runner,type_old=messenger
afd79200c56a   1/1     Running   0          15s   type=worker
b667           1/1     Running   0          14s   type=worker
fdb2           1/1     Running   0          13s   type=worker

If we only want to get Pods with certain labels we can run:
k -n sun get pod -l type=runner    # only pods with label type=runner

We can use this label filtering also with other commands, like setting new labels:

k label -h    # help
k -n sun label pod -l type=runner protected=true    # run for label type=runner
k -n sun label pod -l type=worker protected=true    # run for label type=worker

Or we could run:

k -n sun label pod -l "type in (worker,runner)" protected=true

Let's check the result:
➜ k -n sun get pod --show-labels
NAME           ...   AGE   LABELS
0509649a       ...   56s   protected=true,type=runner,type_old=messenger
0509649b       ...   55s   protected=true,type=worker
1428721e       ...   54s   protected=true,type=worker
1428721f       ...   53s   protected=true,type=worker
43b9a          ...   53s   type=test
4c09           ...   52s   protected=true,type=worker
4c35           ...   51s   protected=true,type=worker
4fe4           ...   50s   protected=true,type=worker
5555a          ...   50s   type=messenger
86cda          ...   49s   protected=true,type=runner
8d1c           ...   48s   type=messenger
a004a          ...   47s   protected=true,type=runner
a94128196      ...   46s   protected=true,type=runner,type_old=messenger
afd79200c56a   ...   46s   protected=true,type=worker
b667           ...   45s   protected=true,type=worker
fdb2           ...   44s   protected=true,type=worker

Looking good. Finally we set the annotation using the newly assigned label protected: true:
k -n sun annotate pod -l protected=true protected="do not delete this pod"

Not requested in the task, but for your own control you could run:
k -n sun get pod -l protected=true -o yaml | grep -A 8 metadata:
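Another optional way to verify is a jsonpath query that prints each protected Pod together with its annotation; this is just one possible approach:

k -n sun get pod -l protected=true \
  -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.annotations.protected}{"\n"}{end}'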
This is a preview of the full CKAD Simulator course content.
The full course contains 22 questions and scenarios which cover all the CKAD areas. The course also provides a browser terminal which is a very close replica of the original one. This is great for getting used to and comfortable with the environment before the real exam. After the test session (120 minutes), or if you stop it early, you'll get access to all questions and their detailed solutions. You'll have 36 hours of cluster access in total, which means that even after the session, once you have the solutions, you can still play around.
The following preview will give you an idea of what the full course provides. These preview questions are not part of the 22 in the full course but come in addition to them. They are, however, part of the same CKAD simulation environment which we set up for you, so with access to the full course you can solve these too.
The answers provided here assume that you did run the initial terminal setup suggestions as provided in the tips section, but especially:
These questions can be solved in the test environment provided through the CKAD Simulator
In Namespace pluto there is a Deployment named project-23-api. It has been working okay for a while but Team Pluto needs it to be more reliable. Implement a liveness-probe which checks that the container is reachable on port 80. The probe should initially wait 10 seconds and then run every 15 seconds.
The original Deployment yaml is available at /opt/course/p1/project-23-api.yaml. Save your changes at /opt/course/p1/project-23-api-new.yaml and apply the changes.
First we get an overview:
➜ k -n pluto get all -o wide
NAME                                  READY   STATUS    ...   IP           ...
pod/holy-api                          1/1     Running   ...   10.12.0.26   ...
pod/project-23-api-784857f54c-dx6h6   1/1     Running   ...   10.12.2.15   ...
pod/project-23-api-784857f54c-sj8df   1/1     Running   ...   10.12.1.18   ...
pod/project-23-api-784857f54c-t4xmh   1/1     Running   ...   10.12.0.23   ...

NAME                             READY   UP-TO-DATE   AVAILABLE   ...
deployment.apps/project-23-api   3/3     3            3           ...

Note: we see another Pod here called holy-api which is part of another section. This is often the case in the provided scenarios, so be careful to only manipulate the resources you need to. Just like in the real world and in the exam.
Next we use nginx:alpine and curl to check if one Pod is accessible on port 80:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 10.12.2.15
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
<!DOCTYPE html><html><head><title>Welcome to nginx!</title>
...

We could also use busybox and wget for this:

➜ k run tmp --restart=Never --rm --image=busybox -i -- wget -O- 10.12.2.15
Connecting to 10.12.2.15 (10.12.2.15:80)
writing to stdout
-                    100% |********************************|   612  0:00:00 ETA
written to stdout
<!DOCTYPE html><html><head><title>Welcome to nginx!</title>

Now that we're sure the Deployment works we can continue with altering the provided yaml:
cp /opt/course/p1/project-23-api.yaml /opt/course/p1/project-23-api-new.yaml
vim /opt/course/p1/project-23-api-new.yaml

Add the liveness-probe to the yaml:
# /opt/course/p1/project-23-api-new.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-23-api
  namespace: pluto
spec:
  replicas: 3
  selector:
    matchLabels:
      app: project-23-api
  template:
    metadata:
      labels:
        app: project-23-api
    spec:
      volumes:
      - name: cache-volume1
        emptyDir: {}
      - name: cache-volume2
        emptyDir: {}
      - name: cache-volume3
        emptyDir: {}
      containers:
      - image: httpd:2.4-alpine
        name: httpd
        volumeMounts:
        - mountPath: /cache1
          name: cache-volume1
        - mountPath: /cache2
          name: cache-volume2
        - mountPath: /cache3
          name: cache-volume3
        env:
        - name: APP_ENV
          value: "prod"
        - name: APP_SECRET_N1
          value: "IO=a4L/XkRdvN8jM=Y+"
        - name: APP_SECRET_P1
          value: "-7PA0_Z]>{pwa43r)__"
        livenessProbe:               # add
          tcpSocket:                 # add
            port: 80                 # add
          initialDelaySeconds: 10    # add
          periodSeconds: 15          # add

Then let's apply the changes:
k -f /opt/course/p1/project-23-api-new.yaml apply

Next we wait 10 seconds and confirm the Pods are still running:

➜ k -n pluto get pod
NAME                              READY   STATUS    RESTARTS   AGE
holy-api                          1/1     Running   0          144m
project-23-api-5b4579fd49-8knh8   1/1     Running   0          90s
project-23-api-5b4579fd49-cbgph   1/1     Running   0          88s
project-23-api-5b4579fd49-tcfq5   1/1     Running   0          86s

We can also check the configured liveness-probe settings on a Pod or the Deployment:

➜ k -n pluto describe pod project-23-api-5b4579fd49-8knh8 | grep Liveness
    Liveness:  tcp-socket :80 delay=10s timeout=1s period=15s #success=1 #failure=3

➜ k -n pluto describe deploy project-23-api | grep Liveness
    Liveness:  tcp-socket :80 delay=10s timeout=1s period=15s #success=1 #failure=3
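If you prefer raw fields over describe output, the probe can also be read straight from the Deployment spec; this jsonpath query is merely one option and not part of the task:

k -n pluto get deploy project-23-api -o jsonpath='{.spec.template.spec.containers[0].livenessProbe}{"\n"}'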
Team Sun needs a new Deployment named sunny with 4 replicas of image nginx:1.17.3-alpine in Namespace sun. The Deployment and its Pods should use the existing ServiceAccount sa-sun-deploy.
Expose the Deployment internally using a ClusterIP Service named sun-srv on port 9999. The nginx containers should run as default on port 80. The management of Team Sun would like to occasionally execute a command to check that all Pods are running. Write that command into file /opt/course/p2/sunny_status_command.sh. The command should use kubectl.
k -n sun create deployment -h    # help
k -n sun create deployment sunny --image=nginx:1.17.3-alpine --dry-run=client -oyaml > p2_sunny.yaml
vim p2_sunny.yaml

Then alter its yaml to include the requirements:
# p2_sunny.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: sunny
  name: sunny
  namespace: sun
spec:
  replicas: 4                           # change
  selector:
    matchLabels:
      app: sunny
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: sunny
    spec:
      serviceAccountName: sa-sun-deploy # add
      containers:
      - image: nginx:1.17.3-alpine
        name: nginx
        resources: {}
status: {}

Now create the yaml and confirm it's running:
➜ k create -f p2_sunny.yaml
deployment.apps/sunny created

➜ k -n sun get pod
NAME                     READY   STATUS    RESTARTS   AGE
0509649a                 1/1     Running   0          149m
0509649b                 1/1     Running   0          149m
1428721e                 1/1     Running   0          149m
...
sunny-64df8dbdbb-9mxbw   1/1     Running   0          10s
sunny-64df8dbdbb-mp5cf   1/1     Running   0          10s
sunny-64df8dbdbb-pggdf   1/1     Running   0          6s
sunny-64df8dbdbb-zvqth   1/1     Running   0          7s

Confirmed. The AGE column is always important information about whether changes were applied. Next we expose the Pods by creating the Service:
k -n sun expose -h    # help
k -n sun expose deployment sunny --name sun-srv --port 9999 --target-port 80

Using expose instead of kubectl create service clusterip is faster because it already sets the correct selector-labels. The previous command would produce this yaml:
# k -n sun expose deployment sunny --name sun-srv --port 9999 --target-port 80
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: sunny
  name: sun-srv         # required by task
spec:
  ports:
  - port: 9999          # service port
    protocol: TCP
    targetPort: 80      # target port
  selector:
    app: sunny          # selector is important
status:
  loadBalancer: {}

Let's test the Service from a temporary Pod:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 sun-srv.sun:9999
Connecting to sun-srv.sun:9999 (10.23.253.120:9999)
<!DOCTYPE html><html><head><title>Welcome to nginx!</title>
...

Because the Service is in a different Namespace than our temporary Pod, it is reachable using the name sun-srv.sun or the fully qualified name sun-srv.sun.svc.cluster.local.
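To see the DNS resolution itself you could additionally run a lookup from a temporary Pod (optional, not part of the task):

k run tmp --restart=Never --rm -i --image=busybox -- nslookup sun-srv.sun.svc.cluster.local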
Finally we need a command which can be executed to check if all Pods are running. This can be done with:
vim /opt/course/p2/sunny_status_command.sh

# /opt/course/p2/sunny_status_command.sh
kubectl -n sun get deployment sunny
To run the command:
➜ sh /opt/course/p2/sunny_status_command.sh
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
sunny   4/4     4            4           13m
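An alternative content for the same file (not required, just another option) would be to read the ready replica count via jsonpath:

# /opt/course/p2/sunny_status_command.sh (alternative)
kubectl -n sun get deployment sunny -o jsonpath='{.status.readyReplicas}/{.spec.replicas} ready{"\n"}'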
Management of EarthAG recorded that one of their Services stopped working. Dirk, the administrator, left already for the long weekend. All the information they could give you is that it was located in Namespace earth and that it stopped working after the latest rollout. All Services of EarthAG should be reachable from inside the cluster.
Find the Service, fix any issues and confirm it's working again. Write the reason for the error into file /opt/course/p3/ticket-654.txt so Dirk knows what the issue was.
First we get an overview of the resources in Namespace earth:
➜ k -n earth get all
NAME                                          READY   STATUS    RESTARTS   AGE
pod/earth-2x3-api-584df69757-ngnwp            1/1     Running   0          116m
pod/earth-2x3-api-584df69757-ps8cs            1/1     Running   0          116m
pod/earth-2x3-api-584df69757-ww9q8            1/1     Running   0          116m
pod/earth-2x3-web-85c5b7986c-48vjt            1/1     Running   0          116m
pod/earth-2x3-web-85c5b7986c-6mqmb            1/1     Running   0          116m
pod/earth-2x3-web-85c5b7986c-6vjll            1/1     Running   0          116m
pod/earth-2x3-web-85c5b7986c-fnkbp            1/1     Running   0          116m
pod/earth-2x3-web-85c5b7986c-pjm5m            1/1     Running   0          116m
pod/earth-2x3-web-85c5b7986c-pwfvj            1/1     Running   0          116m
pod/earth-3cc-runner-6cb6cc6974-8wm5x         1/1     Running   0          116m
pod/earth-3cc-runner-6cb6cc6974-9fx8b         1/1     Running   0          116m
pod/earth-3cc-runner-6cb6cc6974-b9nrv         1/1     Running   0          116m
pod/earth-3cc-runner-heavy-6bf876f46d-b47vq   1/1     Running   0          116m
pod/earth-3cc-runner-heavy-6bf876f46d-mrzqd   1/1     Running   0          116m
pod/earth-3cc-runner-heavy-6bf876f46d-qkd74   1/1     Running   0          116m
pod/earth-3cc-web-6bfdf8b848-f74cj            0/1     Running   0          116m
pod/earth-3cc-web-6bfdf8b848-n4z7z            0/1     Running   0          116m
pod/earth-3cc-web-6bfdf8b848-rcmxs            0/1     Running   0          116m
pod/earth-3cc-web-6bfdf8b848-xl467            0/1     Running   0          116m

NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/earth-2x3-api-svc   ClusterIP   10.3.241.242   <none>        4546/TCP   116m
service/earth-2x3-web-svc   ClusterIP   10.3.250.247   <none>        4545/TCP   116m
service/earth-3cc-web       ClusterIP   10.3.243.24    <none>        6363/TCP   116m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/earth-2x3-api            3/3     3            3           116m
deployment.apps/earth-2x3-web            6/6     6            6           116m
deployment.apps/earth-3cc-runner         3/3     3            3           116m
deployment.apps/earth-3cc-runner-heavy   3/3     3            3           116m
deployment.apps/earth-3cc-web            0/4     4            0           116m
NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/earth-2x3-api-584df69757            3         3         3       116m
replicaset.apps/earth-2x3-web-85c5b7986c            6         6         6       116m
replicaset.apps/earth-3cc-runner-6cb6cc6974         3         3         3       116m
replicaset.apps/earth-3cc-runner-heavy-6bf876f46d   3         3         3       116m
replicaset.apps/earth-3cc-web-6895587dc7            0         0         0       116m
replicaset.apps/earth-3cc-web-6bfdf8b848            4         4         0       116m
replicaset.apps/earth-3cc-web-d49645966             0         0         0       116m

The first impression could be that all Pods are in status Running. But looking closely we see that some of them are not ready, which also matches what we see for one Deployment and one ReplicaSet. This could be the error to investigate further.
Another approach could be to check the Services for missing endpoints:
➜ k -n earth get ep
NAME                ENDPOINTS                                           AGE
earth-2x3-api-svc   10.0.0.10:80,10.0.1.5:80,10.0.2.4:80                116m
earth-2x3-web-svc   10.0.0.11:80,10.0.0.12:80,10.0.1.6:80 + 3 more...   116m
earth-3cc-web

Service earth-3cc-web doesn't have endpoints. This could be a selector/label misconfiguration or the endpoints are actually not available/ready.
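To rule out a selector/label mismatch (optional, just for illustration) we could compare the Service selector with the Pod labels; the label value used here is the one we'll see later in the Deployment template:

k -n earth get svc earth-3cc-web -o jsonpath='{.spec.selector}{"\n"}'   # what the Service selects
k -n earth get pod -l id=earth-3cc-web --show-labels                    # Pods carrying that label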
Checking all Services for connectivity should show the same (this step is optional and just for demonstration):
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-2x3-api-svc.earth:4546
...
<html><body><h1>It works!</h1></body></html>

➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-2x3-web-svc.earth:4545
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    45  100    45    0     0   5000      0 --:--:-- --:--:-- --:--:--  5000
<html><body><h1>It works!</h1></body></html>

➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-3cc-web.earth:6363
If you don't see a command prompt, try pressing enter.
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
curl: (28) Connection timed out after 5000 milliseconds
pod "tmp" deleted
pod default/tmp terminated (Error)

Notice that we use here for example earth-2x3-api-svc.earth. We could also spin up a temporary Pod in Namespace earth and connect directly to earth-2x3-api-svc.
We get no connection to earth-3cc-web.earth:6363. Let's look at the Deployment earth-3cc-web. Here we see that the requested number of replicas is not available/ready:
➜ k -n earth get deploy earth-3cc-web
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
earth-3cc-web   0/4     4            0           7m18s

To continue we check the Deployment yaml for some misconfiguration:
k -n earth edit deploy earth-3cc-web

# k -n earth edit deploy earth-3cc-web
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
...
  generation: 3                  # there have been rollouts
  name: earth-3cc-web
  namespace: earth
...
spec:
...
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: earth-3cc-web
    spec:
      containers:
      - image: nginx:1.16.1-alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 20
          successThreshold: 1
          tcpSocket:
            port: 82             # this port doesn't seem to be right, should be 80
          timeoutSeconds: 1
...

We change the readiness-probe port to 80, save, and check the Pods:
➜ k -n earth get pod -l id=earth-3cc-web
NAME                            READY   STATUS    RESTARTS   AGE
earth-3cc-web-d49645966-52vb9   0/1     Running   0          6s
earth-3cc-web-d49645966-5tts6   0/1     Running   0          6s
earth-3cc-web-d49645966-db5gp   0/1     Running   0          6s
earth-3cc-web-d49645966-mk7gr   0/1     Running   0          6s

Running, but still not in ready state. Wait 10 seconds (the initialDelaySeconds of the readinessProbe) and check again:
➜ k -n earth get pod -l id=earth-3cc-web
NAME                            READY   STATUS    RESTARTS   AGE
earth-3cc-web-d49645966-52vb9   1/1     Running   0          32s
earth-3cc-web-d49645966-5tts6   1/1     Running   0          32s
earth-3cc-web-d49645966-db5gp   1/1     Running   0          32s
earth-3cc-web-d49645966-mk7gr   1/1     Running   0          32s

Let's check the Service again:

➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-3cc-web.earth:6363
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0  55636      0 --:--:-- --:--:-- --:--:-- 55636
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
...

We did it! Finally we write the reason into the requested location:

vim /opt/course/p3/ticket-654.txt

# /opt/course/p3/ticket-654.txt
yo Dirk, wrong port for readinessProbe defined!
In this section we'll provide some tips on how to handle the CKAD exam and browser terminal.
Study all topics as proposed in the curriculum until you feel comfortable with all of them
Learn and Study the in-browser scenarios on https://killercoda.com/killer-shell-ckad
Read this and do all examples: https://kubernetes.io/docs/concepts/cluster-administration/logging
Understand Rolling Update Deployment including maxSurge and maxUnavailable
Do 1 or 2 test sessions with this CKAD Simulator. Understand the solutions and maybe try out other ways to achieve the same result
Be fast and breathe kubectl
Read the Curriculum
https://github.com/cncf/curriculum
Read the Handbook
https://docs.linuxfoundation.org/tc-docs/certification/lf-handbook2
Read the important tips
https://docs.linuxfoundation.org/tc-docs/certification/tips-cka-and-ckad
Read the FAQ
https://docs.linuxfoundation.org/tc-docs/certification/faq-cka-ckad
Get familiar with the Kubernetes documentation and be able to use the search. Allowed links are:
NOTE: Verify the list here
You'll be provided with a browser terminal which uses Ubuntu 20. The standard shells included with a minimal install of Ubuntu 20 will be available, including bash.
Lagging
There could be some lag; definitely make sure you are using a good internet connection, because your webcam and screen are uploading all the time.
Kubectl autocompletion and commands
Autocompletion is configured by default, as well as the k alias and other tools:
kubectl with k alias and Bash autocompletion
yq and jq for YAML/JSON processing (see the sketch after this list)
tmux for terminal multiplexing
curl and wget for testing web services
man and man pages for further documentation
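Since yq and jq are available, here is a tiny usage sketch. The Pod pod1 from earlier is only used as an example, and the yq syntax assumes the Go-based yq v4 (older versions differ):

k get pod pod1 -o json | jq -r '.status.phase'    # extract a field from JSON
k get pod pod1 -o yaml | yq '.metadata.labels'    # extract a field from YAML (yq v4 syntax)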
Copy & Paste
There could be issues copying text (like pod names) from the task information on the left into the terminal. Some users suggested hitting Cmd/Ctrl+C "hard" or holding it for a moment, a few times if needed, for the copy to take effect. Apart from that, copy and paste should just work like in normal terminals.
Score
There are 15-20 questions in the exam. Your results will be automatically checked according to the handbook. If you don't agree with the results you can request a review by contacting the Linux Foundation Support.
Notepad & Skipping Questions
You have access to a simple notepad in the browser which can be used for storing any kind of plain text. It might make sense to use this for saving skipped question numbers. This way it's possible to move some questions to the end.
Contexts
You'll receive access to various clusters and the resources in each. You're provided with the exact command you need to run to connect to another cluster/context. But you should be comfortable working in different namespaces with kubectl.
Starting with PSI Bridge:
The exam will now be taken using the PSI Secure Browser, which can be downloaded using the newest versions of Microsoft Edge, Safari, Chrome, or Firefox
Multiple monitors will no longer be permitted
Use of personal bookmarks will no longer be permitted
The new ExamUI includes improved features such as:
A remote desktop configured with the tools and software needed to complete the tasks
A timer that displays the actual time remaining (in minutes) and provides alerts at 30, 15, and 5 minutes remaining
The content panel remains the same (presented on the Left Hand Side of the ExamUI)
Read more here.
Use the history command to reuse already entered commands, or use the even faster history search via Ctrl+R.
If a command takes some time to execute, like sometimes kubectl delete pod x, you can put the task in the background using Ctrl+Z and pull it back into the foreground by running fg.
You can delete pods fast with:
k delete pod x --grace-period 0 --force
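A common convenience (just an example, nothing the environment sets up for you) is to store those flags in a shell variable and reuse it:

export now="--grace-period 0 --force"    # variable name is arbitrary
k delete pod x $now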
Be great with vim.
Settings
In case vim is not configured properly and you face, for example, issues with pasting copied content, you should be able to configure it via ~/.vimrc or by entering the settings manually in vim:
set tabstop=2
set expandtab
set shiftwidth=2
The expandtab setting makes sure spaces are used instead of tabs.
Toggle vim line numbers
When in vim you can press Esc and type :set number or :set nonumber followed by Enter to toggle line numbers. This can be useful for finding syntax errors based on line numbers, but can be bad when you want to mark and copy text with the mouse. You can also just jump to a line number with Esc :22 + Enter.
Copy&Paste
Get used to copy/paste/cut with vim:
Mark lines: Esc+V (then arrow keys)
Copy marked lines: y
Cut marked lines: d
Paste lines: p or P
Indent multiple lines
To indent multiple lines, press Esc and type :set shiftwidth=2. Then mark multiple lines using Shift+V and the up/down keys. To indent the marked lines press > or <. You can then press . to repeat the action.