1. Take a backup of the etcd cluster and save it to /opt/etcd-backup.db.
- Backup Completed
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#volume-snapshot
controlplane /etc/kubernetes/manifests ➜ cat etcd.yaml |grep file
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
seccompProfile:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
snapshot save <backup-file-location>
controlplane ~ ➜
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /opt/etcd-backup.db
Snapshot saved at /opt/etcd-backup.db
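Before moving on, the backup can be sanity-checked. The `snapshot status` subcommand prints the hash, revision, and total size of the snapshot file (on newer etcd releases `etcdutl snapshot status` is the preferred form):

```shell
# Verify the snapshot taken above is readable and non-empty
ETCDCTL_API=3 etcdctl snapshot status /opt/etcd-backup.db --write-out=table
```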
2. Create a Pod called redis-storage with image: redis:alpine with a Volume of type emptyDir that lasts for the life of the Pod.
Specs below.
- Pod named 'redis-storage' created
- Pod 'redis-storage' uses Volume type of emptyDir
- Pod 'redis-storage' uses volumeMount with mountPath = /data/redis
https://kubernetes.io/docs/concepts/storage/volumes/#emptydir-configuration-example
controlplane ~ ➜ k run redis-storage --image redis:alpine --dry-run=client -o yaml > redis-storage.yaml
controlplane ~ ➜ vim redis-storage.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: redis-storage
  name: redis-storage
spec:
  containers:
  - image: redis:alpine
    name: redis-storage
    resources: {}
    volumeMounts:              # ++
    - mountPath: /data/redis   # ++
      name: cache-volume       # ++
  volumes:                     # ++
  - name: cache-volume         # ++
    emptyDir: {}               # ++
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
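A quick way to confirm the emptyDir mount works (a sketch, assuming the manifest above was saved as redis-storage.yaml):

```shell
kubectl apply -f redis-storage.yaml
kubectl get pod redis-storage
# Write a file into the mount and read it back from inside the container
kubectl exec redis-storage -- sh -c 'echo ok > /data/redis/test && cat /data/redis/test'
```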
Use the command kubectl run and create a pod definition file for redis-storage pod and add volume.
Alternatively, run the command:
kubectl run redis-storage --image=redis:alpine --dry-run=client -oyaml > redis-storage.yaml
and add volume emptyDir in it.
Solution manifest file to create a pod redis-storage as follows:
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: redis-storage
  name: redis-storage
spec:
  containers:
  - image: redis:alpine
    name: redis-storage
    volumeMounts:
    - mountPath: /data/redis
      name: temp-volume
  volumes:
  - name: temp-volume
    emptyDir: {}
3. Create a new pod called super-user-pod with image busybox:1.28.
Allow the pod to be able to set system_time.
The container should sleep for 4800 seconds.
- Pod: super-user-pod
- Container Image: busybox:1.28
- Is SYS_TIME capability set for the container?
controlplane ~ ➜ cat super.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: super-user-pod
  name: super-user-pod
spec:
  containers:
  - command:
    - sleep
    - "4800"
    securityContext:
      capabilities:
        add: ["SYS_TIME"]
    image: busybox:1.28
    name: super-user-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
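To check that the capability actually took effect, exec into the pod and try to set the clock; with SYS_TIME granted, `date -s` should succeed instead of failing with "Operation not permitted" (a sketch, assuming the manifest above was applied from super.yaml):

```shell
kubectl apply -f super.yaml
# busybox's date -s requires the SYS_TIME capability
kubectl exec super-user-pod -- date -s "2024-01-01 00:00:00"
```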
4. A pod definition file is created at /root/CKA/use-pv.yaml.
Make use of this manifest file and mount the persistent volume called pv-1.
Ensure the pod is running and the PV is bound.
- mountPath: /data
- persistentVolumeClaim Name: my-pvc
- persistentVolume Claim configured correctly
- pod using the correct mountPath
- pod using the persistent volume claim?
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
# Incorrect attempt: the pod references a PVC named my-pvc, but no such PVC had been created yet.
controlplane ~/CKA ➜ cat use-pv.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  containers:
  - image: nginx
    name: use-pv
    volumeMounts:
    - mountPath: "/data"
      name: my-pvc
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  - name: my-pvc
    persistentVolumeClaim:
      claimName: my-pvc
status: {}
Add a persistentVolumeClaim definition to pod definition file.
Solution manifest file to create a pvc my-pvc as follows:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
And then, update the pod definition file as follows:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  containers:
  - image: nginx
    name: use-pv
    volumeMounts:
    - mountPath: "/data"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: my-pvc
Finally, create the pod by running:
kubectl create -f /root/CKA/use-pv.yaml
controlplane ~/CKA ➜ cat use-pv.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: use-pv
  name: use-pv
spec:
  containers:
  - image: nginx
    name: use-pv
    resources: {}
    volumeMounts:
    - mountPath: "/data"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: my-pvc
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
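Once both objects exist, the binding can be verified: the PV pv-1 should report STATUS Bound and the pod should reach Running (a sketch, assuming the combined manifest above is the contents of /root/CKA/use-pv.yaml):

```shell
kubectl apply -f /root/CKA/use-pv.yaml
# PV should be Bound to the claim, and the pod Running
kubectl get pv pv-1
kubectl get pvc my-pvc
kubectl get pod use-pv
```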
5. Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica.
Next upgrade the deployment to version 1.17 using rolling update.
- Deployment : nginx-deploy. Image: nginx:1.16
- Image: nginx:1.16
- Task: Upgrade the version of the deployment to 1.17
- Task: Record the changes for the image upgrade
controlplane ~ ➜ k create deployment nginx-deploy --image nginx:1.16 --replicas 1 --dry-run=client -o yaml > nginx-deploy.yaml
controlplane ~ ➜ k apply -f nginx-deploy.yaml
controlplane ~ ➜ k set image deployment/nginx-deploy nginx=nginx:1.17
deployment.apps/nginx-deploy image updated
controlplane ~ ➜ k get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deploy 1/1 1 1 97s
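The task also asks to record the change for the image upgrade. One way (the `--record` flag is deprecated but still accepted on the exam's Kubernetes version) is to pass it to `set image` and then inspect the rollout history:

```shell
# Record the change-cause annotation alongside the image update
kubectl set image deployment/nginx-deploy nginx=nginx:1.17 --record
# The CHANGE-CAUSE column should now show the recorded command
kubectl rollout history deployment/nginx-deploy
```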
6. Create a new user called john. Grant him access to the cluster.
Create a new user called john.
Grant him access to the cluster.
John should have permission to create, list, get, update and delete pods in the development namespace.
The private key exists in the location: /root/CKA/john.key and csr at /root/CKA/john.csr.
Important Note: As of kubernetes 1.19, the CertificateSigningRequest object expects a signerName.
Please refer the documentation to see an example.
The documentation tab is available at the top right of terminal.
- CSR: john-developer Status:Approved
- Role Name: developer, namespace: development, Resource: Pods
- Access: User 'john' has appropriate permissions
Solution manifest file to create a CSR as follows:
---
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john-developer
spec:
  signerName: kubernetes.io/kube-apiserver-client
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZEQ0NBVHdDQVFBd0R6RU5NQXNHQTFVRUF3d0VhbTlvYmpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRApnZ0VQQURDQ0FRb0NnZ0VCQUt2Um1tQ0h2ZjBrTHNldlF3aWVKSzcrVVdRck04ZGtkdzkyYUJTdG1uUVNhMGFPCjV3c3cwbVZyNkNjcEJFRmVreHk5NUVydkgyTHhqQTNiSHVsTVVub2ZkUU9rbjYra1NNY2o3TzdWYlBld2k2OEIKa3JoM2prRFNuZGFvV1NPWXBKOFg1WUZ5c2ZvNUpxby82YU92czFGcEc3bm5SMG1JYWpySTlNVVFEdTVncGw4bgpjakY0TG4vQ3NEb3o3QXNadEgwcVpwc0dXYVpURTBKOWNrQmswZWhiV2tMeDJUK3pEYzlmaDVIMjZsSE4zbHM4CktiSlRuSnY3WDFsNndCeTN5WUFUSXRNclpUR28wZ2c1QS9uREZ4SXdHcXNlMTdLZDRaa1k3RDJIZ3R4UytkMEMKMTNBeHNVdzQyWVZ6ZzhkYXJzVGRMZzcxQ2NaanRxdS9YSmlyQmxVQ0F3RUFBYUFBTUEwR0NTcUdTSWIzRFFFQgpDd1VBQTRJQkFRQ1VKTnNMelBKczB2czlGTTVpUzJ0akMyaVYvdXptcmwxTGNUTStsbXpSODNsS09uL0NoMTZlClNLNHplRlFtbGF0c0hCOGZBU2ZhQnRaOUJ2UnVlMUZnbHk1b2VuTk5LaW9FMnc3TUx1a0oyODBWRWFxUjN2SSsKNzRiNnduNkhYclJsYVhaM25VMTFQVTlsT3RBSGxQeDNYVWpCVk5QaGhlUlBmR3p3TTRselZuQW5mNm96bEtxSgpvT3RORStlZ2FYWDdvc3BvZmdWZWVqc25Yd0RjZ05pSFFTbDgzSkljUCtjOVBHMDJtNyt0NmpJU3VoRllTVjZtCmlqblNucHBKZWhFUGxPMkFNcmJzU0VpaFB1N294Wm9iZDFtdWF4bWtVa0NoSzZLeGV0RjVEdWhRMi80NEMvSDIKOWk1bnpMMlRST3RndGRJZjAveUF5N05COHlOY3FPR0QKLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
  usages:
  - digital signature
  - key encipherment
  - client auth
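The long `request` value is just the base64-encoded contents of the provided CSR file; it can be produced on the exam terminal with:

```shell
# Encode the CSR on a single line for the request: field
cat /root/CKA/john.csr | base64 | tr -d "\n"
```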
To approve this certificate, run: kubectl certificate approve john-developer
Next, create a role developer and rolebinding developer-role-binding, run the command:
$ kubectl create role developer --resource=pods --verb=create,list,get,update,delete --namespace=development
$ kubectl create rolebinding developer-role-binding --role=developer --user=john --namespace=development
To verify the permission from kubectl utility tool:
$ kubectl auth can-i update pods --as=john --namespace=development
7. Create a nginx pod called nginx-resolver using image nginx...
Create a nginx pod called nginx-resolver using image nginx,
expose it internally with a service called nginx-resolver-service.
Test that you are able to look up the service and pod names from within the cluster.
Use the image: busybox:1.28 for dns lookup.
Record results in /root/CKA/nginx.svc and /root/CKA/nginx.pod
- Pod: nginx-resolver created
- Service DNS Resolution recorded correctly
- Pod DNS resolution recorded correctly
controlplane ~ ✖ k run nginx-resolver --image nginx --dry-run=client -o yaml > resolver.yaml
controlplane ~ ➜ k apply -f resolver.yaml
pod/nginx-resolver created
controlplane ~ ✖ k expose pod nginx-resolver --name nginx-resolver-service --port 80
service/nginx-resolver-service exposed
controlplane ~ ➜ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 58m
nginx-resolver-service ClusterIP 10.108.219.201 <none> 80/TCP 7s
controlplane ~ ➜ k run busybox --image=busybox:1.28 -- sleep 4000
# Why the sleep? Without a long-running command, the busybox container exits immediately and there is nothing left to exec nslookup in.
controlplane ~ ➜ k exec busybox -- nslookup nginx-resolver-service > /root/CKA/nginx.svc
controlplane ~ ➜ k get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 0 6m4s 10.244.192.6 node01 <none> <none>
nginx-deploy-58f87d49-nlbvz 1/1 Running 0 35m 10.244.192.5 node01 <none> <none>
nginx-resolver 1/1 Running 0 19m 10.244.192.4 node01 <none> <none>
redis-storage 1/1 Running 0 55m 10.244.192.1 node01 <none> <none>
super-user-pod 1/1 Running 0 52m 10.244.192.2 node01 <none> <none>
use-pv 1/1 Running 0 38m 10.244.192.3 node01 <none> <none>
controlplane ~ ➜ k exec busybox -- nslookup 10.244.192.4
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: 10.244.192.4
Address 1: 10.244.192.4 10-244-192-4.nginx-resolver-service.default.svc.cluster.local
controlplane ~ ➜ k exec busybox -- nslookup 10.244.192.4 > /root/CKA/nginx.pod