The cluster admin asked you to find out the following information about etcd running on cluster2-controlplane1: - Server private key location - Server certificate expiration date - Is client certificate authentication enabled
Write this information into /opt/course/p1/etcd-info.txt. Finally you're asked to save an etcd snapshot at /etc/etcd-snapshot.db on cluster2-controlplane1 and display its status.
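A possible approach (paths assume a default kubeadm etcd setup; verify them in the manifest):
ssh cluster2-controlplane1
cat /etc/kubernetes/manifests/etcd.yaml | grep -E "key-file|client-cert-auth"
# --key-file=/etc/kubernetes/pki/etcd/server.key   -> server private key location
# --client-cert-auth=true                          -> client certificate authentication enabled
openssl x509 -noout -enddate -in /etc/kubernetes/pki/etcd/server.crt   # server certificate expiration date
# write the three answers into /opt/course/p1/etcd-info.txt, then:
ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key
ETCDCTL_API=3 etcdctl snapshot status /etc/etcd-snapshot.db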
You're asked to confirm that kube-proxy is running correctly on all nodes. For this perform the following in Namespace project-hamster: Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31. Make sure the busybox container keeps running for some time. Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80. Find the kube-proxy container on all nodes cluster1-controlplane1, cluster1-node1 and cluster1-node2 and make sure that it's using iptables. Use command crictl for this. Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt. Finally delete the Service and confirm that the iptables rules are gone from all nodes.
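One possible sequence (a sketch; container IDs and node names depend on the environment):
k -n project-hamster run p2-pod --image=nginx:1.21.3-alpine --dry-run=client -o yaml > p2-pod.yaml
# add a second container, image busybox:1.31, command: sh -c "sleep 1d"
k create -f p2-pod.yaml
k -n project-hamster expose pod p2-pod --name p2-service --port 3000 --target-port 80
ssh cluster1-node1                        # repeat for cluster1-controlplane1 and cluster1-node2
crictl ps | grep kube-proxy
crictl logs <kube-proxy-container-id> | grep -i proxier   # should show that the iptables proxier is used
iptables-save | grep p2-service           # collect these rules from all nodes into /opt/course/p2/iptables.txt
k -n project-hamster delete svc p2-service
iptables-save | grep p2-service           # should now return nothing on any node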
Create a Pod named check-ip in Namespace default using image httpd:2.4.41-alpine. Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service. Change the Service CIDR to 11.96.0.0/12 for the cluster. Then create a second Service named check-ip-service2 pointing to the same Pod to check if your settings did take effect. Finally check if the IP of the first Service has changed.
// after exposing the Service, change the Service CIDR in kube-apiserver and kube-controller-manager
vi /etc/kubernetes/manifests/kube-apiserver.yaml
vi /etc/kubernetes/manifests/kube-controller-manager.yaml
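The flag to change in both manifests (kubeadm static Pod setup assumed) should look like this; the Pods restart automatically once the files are saved:
- --service-cluster-ip-range=11.96.0.0/12
# then create check-ip-service2 and compare its ClusterIP with the one of check-ip-service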
Extra Question 1 | Find Pods first to be terminated
Use context: kubectl config use-context k8s-c1-H
Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the nodes run out of resources (cpu or memory) to schedule all Pods. Write the Pod names into /opt/course/e1/pods-not-stable.txt.
k -n project-c13 describe pod | less -p Requests # describe all Pods and highlight Requests
//or
k -n project-c13 describe pod | egrep "^(Name:|    Requests:)" -A1
example from killer.sh
//jsonpath
k -n project-c13 get pod -o jsonpath="{range .items[*]} {.metadata.name}{.spec.containers[*].resources}{'\n'}{end}"
//or
k get pods -n project-c13 -o jsonpath="{range .items[*]}{.metadata.name} {.status.qosClass}{'\n'}{end}"
There is an existing ServiceAccount secret-reader in Namespace project-hamster. Create a Pod of image curlimages/curl:7.65.3 named tmp-api-contact which uses this ServiceAccount. Make sure the container keeps running. Exec into the Pod and use curl to access the Kubernetes Api of that cluster manually, listing all available secrets. You can ignore insecure https connection. Write the command(s) for this into file /opt/course/e4/list-secrets.sh.
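A sketch of one possible solution (the --overrides JSON is just one way to set the ServiceAccount):
k -n project-hamster run tmp-api-contact --image=curlimages/curl:7.65.3 \
  --overrides='{"spec":{"serviceAccountName":"secret-reader"}}' --command -- sh -c 'sleep 1d'
k -n project-hamster exec tmp-api-contact -it -- sh
# inside the container the SA token is mounted automatically:
curl -k https://kubernetes.default/api/v1/secrets \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
# /opt/course/e4/list-secrets.sh
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"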
*Note* - This is a write-up of killer.sh exercises. - The "Use context: ..." requirements were not applied (everything was done in a single cluster with one master node and two worker nodes, where the control-plane node is named master). - Some Udemy labs were used. - There may be errors.
Question 21 | Create a Static Pod and Service
Use context: kubectl config use-context k8s-c3-CCC
Create a Static Pod named my-static-pod in Namespace default on cluster3-controlplane1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory. Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-controlplane1 internal IP address. You can connect to the internal node IPs from your main terminal.
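A sketch (the mirror Pod name gets the node name as suffix, which matters for the expose command):
ssh cluster3-controlplane1
k run my-static-pod --image=nginx:1.16-alpine --dry-run=client -o yaml > /etc/kubernetes/manifests/my-static-pod.yaml
# add resources.requests (cpu: 10m, memory: 20Mi) to the container in that file
k get pod -A | grep my-static-pod        # kubelet creates my-static-pod-cluster3-controlplane1
k expose pod my-static-pod-cluster3-controlplane1 --name static-pod-service --type=NodePort --port 80
k get svc,ep -l run=my-static-pod
curl <cluster3-controlplane1-internal-ip>:<node-port>   # from the main terminal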
Question 22 | Check how long certificates are valid
Use context: kubectl config use-context k8s-c2-AC
Check how long the kube-apiserver server certificate is valid on cluster2-controlplane1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration. Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date. Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.
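A possible approach (file locations assume a default kubeadm setup):
ssh cluster2-controlplane1
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt   # write the notAfter date into /opt/course/22/expiration
kubeadm certs check-expiration | grep apiserver                      # should show the same date
echo "kubeadm certs renew apiserver" > /opt/course/22/kubeadm-renew-certs.sh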
Node cluster2-node1 has been added to the cluster using kubeadm and TLS bootstrapping. Find the "Issuer" and "Extended Key Usage" values of the cluster2-node1:
1. kubelet client certificate, the one used for outgoing connections to the kube-apiserver. 2. kubelet server certificate, the one used for incoming connections from the kube-apiserver.
Write the information into file /opt/course/23/certificate-info.txt. Compare the "Issuer" and "Extended Key Usage" fields of both certificates and make sense of these.
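The kubelet certificates usually live under /var/lib/kubelet/pki, so a sketch could be:
ssh cluster2-node1
# client certificate, used for outgoing connections to the kube-apiserver:
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep -A1 "Issuer:\|Extended Key Usage"
# server certificate, used for incoming connections from the kube-apiserver:
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep -A1 "Issuer:\|Extended Key Usage"
# the client cert should show "TLS Web Client Authentication", the server cert "TLS Web Server Authentication"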
There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod. To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to: - connect to db1-* Pods on port 1111 - connect to db2-* Pods on port 2222
Use the app label of Pods in your policy. After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.
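A sketch of the policy, assuming the Pods carry labels like app=backend, app=db1 and app=db2 (check with k -n project-snake get pod --show-labels):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: db1
      ports:
        - protocol: TCP
          port: 1111
    - to:
        - podSelector:
            matchLabels:
              app: db2
      ports:
        - protocol: TCP
          port: 2222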
Use context: kubectl config use-context k8s-c3-CCC
Make a backup of etcd running on cluster3-controlplane1 and save it on the controlplane node at /tmp/etcd-backup.db. Then create any kind of Pod in the cluster. Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.
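// take the snapshot first (cert paths assume the default kubeadm etcd setup)
ssh cluster3-controlplane1
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key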
// create an arbitrary Pod
kubectl run test --image=nginx
// restore etcd
// stop all controlplane components first
root@controlplane:~# cd /etc/kubernetes/manifests/
root@controlplane:/etc/kubernetes/manifests# mv * ..
// restore the backup into a new data directory
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
--data-dir /var/lib/etcd-backup \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
// edit etcd.yaml to point at the restored data directory
  - hostPath:
      path: /var/lib/etcd-backup                # changed
      type: DirectoryOrCreate
    name: etcd-data
// move the yaml files back into the manifests directory
root@controlplane:/etc/kubernetes/manifests# mv ../*.yaml .
// wait until etcd and the api-server are running again, then check the Pods (the test Pod should be gone)
*Note* - This is a write-up of killer.sh exercises. - The "Use context: ..." requirements were not applied (everything was done in a single cluster with one master node and two worker nodes, where the control-plane node is named master). - Some Udemy labs were used. - There may be errors.
Question 11 | DaemonSet on all Nodes
Use context: kubectl config use-context k8s-c1-H
Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, also controlplanes.
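One way is to generate a Deployment and rework it into a DaemonSet; the fragment below is a sketch of the Pod template additions (the toleration is what allows scheduling on controlplane nodes):
k -n project-tiger create deployment ds-important --image=httpd:2.4-alpine --dry-run=client -o yaml > ds.yaml
# change kind to DaemonSet, remove replicas/strategy/status, set the labels id/uuid, then add to the Pod template:
    spec:
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
      containers:
        - name: ds-important
          image: httpd:2.4-alpine
          resources:
            requests:
              cpu: 10m
              memory: 10Mi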
Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image google/pause. There should only ever be one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-node1 and cluster1-node2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled unless a new worker node is added. Use topologyKey: kubernetes.io/hostname for this. In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.
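The Pod template could use the following podAntiAffinity (a sketch; a topologySpreadConstraint would also work):
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: id
                    operator: In
                    values:
                      - very-important
              topologyKey: kubernetes.io/hostname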
Question 13 | Multi Containers and Pod shared Volume
Use context: kubectl config use-context k8s-c1-H
Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods. Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME. Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this. Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this. Check the logs of container c3 to confirm correct setup.
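A sketch of the Pod manifest, using an emptyDir volume mounted at /vol (the path is an arbitrary choice):
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-playground
spec:
  volumes:
    - name: vol
      emptyDir: {}
  containers:
    - name: c1
      image: nginx:1.17.6-alpine
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumeMounts:
        - name: vol
          mountPath: /vol
    - name: c2
      image: busybox:1.31.1
      command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]
      volumeMounts:
        - name: vol
          mountPath: /vol
    - name: c3
      image: busybox:1.31.1
      command: ["sh", "-c", "tail -f /vol/date.log"]
      volumeMounts:
        - name: vol
          mountPath: /vol
# verify:
k logs multi-container-playground -c c3   # should show the growing date.log output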
You're asked to find out the following information about the cluster k8s-c1-H: How many controlplane nodes are available? How many worker nodes are available? What is the Service CIDR? Which Networking (or CNI Plugin) is configured and where is its config file? Which suffix will static pods have that run on cluster1-node1? Write your answers into file /opt/course/14/cluster-info, structured like this:
k get node
//How many controlplane and worker nodes are available?
ssh cluster1-controlplane1
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range
//What is the Service CIDR?
find /etc/cni/net.d/
cat /etc/cni/net.d/10-weave.conflist
//Which Networking (or CNI Plugin) is configured and where is its config file?
# /opt/course/14/cluster-info
# How many controlplane nodes are available?
1: 1
# How many worker nodes are available?
2: 2
# What is the Service CIDR?
3: 10.96.0.0/12
# Which Networking (or CNI Plugin) is configured and where is its config file?
4: Weave, /etc/cni/net.d/10-weave.conflist
# Which suffix will static pods have that run on cluster1-node1?
5: -cluster1-node1
Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time (metadata.creationTimestamp). Use kubectl for it. Now delete the kube-proxy Pod running on node cluster2-node1 and write the events this caused into /opt/course/15/pod_kill.log. Finally kill the containerd container of the kube-proxy Pod on node cluster2-node1 and write the events into /opt/course/15/container_kill.log. Do you notice differences in the events both actions caused?
# /opt/course/15/cluster_events.sh
kubectl get events -A --sort-by=.metadata.creationTimestamp
k -n kube-system get pod -o wide | grep proxy # find pod running on cluster2-node1
k -n kube-system delete pod kube-proxy-xxxxxxx
# check the cluster events
sh /opt/course/15/cluster_events.sh
# write the events into pod_kill.log
sh /opt/course/15/cluster_events.sh > /opt/course/15/pod_kill.log
ssh cluster2-node1
crictl ps | grep kube-proxy
crictl stop 1e020b43c44xxxx
crictl rm 1e020b43c44xxxx
crictl ps | grep kube-proxy
exit   # back on the main terminal
sh /opt/course/15/cluster_events.sh > /opt/course/15/container_kill.log   # write the events into container_kill.log
Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap...) into /opt/course/16/resources.txt. Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.
k api-resources # shows all
k api-resources -h # help always good
k api-resources --namespaced -o name > /opt/course/16/resources.txt
k get role -h
k -n project-c13 get role --no-headers | wc -l
//No resources found in project-c13 namespace.
//0
k -n project-c14 get role --no-headers | wc -l
//300
k -n project-hamster get role --no-headers | wc -l
//No resources found in project-hamster namespace.
//0
k -n project-snake get role --no-headers | wc -l
//No resources found in project-snake namespace.
//0
k -n project-tiger get role --no-headers | wc -l
//No resources found in project-tiger namespace.
//0
Question 17 | Find Container of Pod and check info
Use context: kubectl config use-context k8s-c1-H
In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod. Using command crictl: 1. Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt 2. Write the logs of the container into /opt/course/17/pod-container.log
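A possible sequence (a sketch; the container ID and node are environment-specific):
k -n project-tiger run tigers-reunite --image=httpd:2.4.41-alpine --labels "pod=container,container=pod"
k -n project-tiger get pod tigers-reunite -o wide     # note the node it was scheduled on
ssh <that-node>
crictl ps | grep tigers-reunite
crictl inspect <container-id> | grep runtimeType      # e.g. io.containerd.runc.v2
crictl logs <container-id>
# write the ID plus runtimeType into /opt/course/17/pod-container.txt and the logs into /opt/course/17/pod-container.log (on the main terminal)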
Use context: kubectl config use-context k8s-c3-CCC
There seems to be an issue with the kubelet not running on cluster3-node1. Fix it and confirm that the cluster has node cluster3-node1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-node1 afterwards. Write the reason of the issue into /opt/course/18/reason.txt.
NAME STATUS ROLES AGE VERSION
cluster3-controlplane1 Ready control-plane 14d v1.30.1
cluster3-node1 NotReady <none> 14d v1.30.1
ssh cluster3-node1
ps aux | grep kubelet
service kubelet start
service kubelet status
/usr/local/bin/kubelet
// /usr/local/bin/kubelet: No such file or directory
whereis kubelet
// /usr/bin/kubelet
vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf # fix the path to the kubelet binary
systemctl daemon-reload
service kubelet restart
service kubelet status # should be running now
echo "wrong path to kubelet binary specified in service config" > /opt/course/18/reason.txt
Use context: kubectl config use-context k8s-c3-CCC
Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time. There is an existing Secret located at /opt/course/19/secret1.yaml, create it in the Namespace secret and mount it readonly into the Pod at /tmp/secret1. Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod's container as environment variables APP_USER and APP_PASS. Confirm everything is working.
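A possible sequence plus a sketch of the Pod manifest (it assumes the Secret in secret1.yaml is named secret1; adjust its namespace before applying):
k create ns secret
vim /opt/course/19/secret1.yaml        # set namespace: secret
k create -f /opt/course/19/secret1.yaml
k -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234
# secret-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
  namespace: secret
spec:
  containers:
    - name: secret-pod
      image: busybox:1.31.1
      command: ["sh", "-c", "sleep 1d"]
      env:
        - name: APP_USER
          valueFrom:
            secretKeyRef:
              name: secret2
              key: user
        - name: APP_PASS
          valueFrom:
            secretKeyRef:
              name: secret2
              key: pass
      volumeMounts:
        - name: secret1
          mountPath: /tmp/secret1
          readOnly: true
  volumes:
    - name: secret1
      secret:
        secretName: secret1         # assumes the name used inside secret1.yaml
# verify:
k -n secret exec secret-pod -- env | grep APP_
k -n secret exec secret-pod -- ls /tmp/secret1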
Question 20 | Update Kubernetes Version and join cluster
Use context: kubectl config use-context k8s-c3-CCC
Your coworker said node cluster3-node2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that's running on cluster3-controlplane1. Then add this node to the cluster. Use kubeadm for this.
k get node
// check the installed versions
ssh cluster3-node2
kubectl version
kubelet --version
kubeadm version
// the next command errors because the node was never initialised (it is not yet part of the cluster)
kubeadm upgrade node
// upgrade the packages
apt update
apt install kubectl=1.30.1-1.1 kubelet=1.30.1-1.1
service kubelet restart
service kubelet status
// add the upgraded node to the cluster
ssh cluster3-controlplane1
kubeadm token create --print-join-command
ssh cluster3-node2
kubeadm join 192.168.100.31:6443 --token zodhba.wlxmtvumtjpgaevg --discovery-token-ca-cert-hash sha256:6819708bfec2336183a68138d680ea7bf81dfbf9a57b3fca1c51bdc2f4fc6e99
service kubelet status
*Note* - This is a write-up of killer.sh exercises. - The "Use context: ..." requirements were not applied (everything was done in a single cluster with one master node and two worker nodes, where the control-plane node is named master). - There may be errors.
Question 1 | Contexts
You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.
Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.
Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.
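A sketch of the three files/commands:
k config get-contexts -o name > /opt/course/1/contexts
echo "kubectl config current-context" > /opt/course/1/context_default_kubectl.sh
echo "cat ~/.kube/config | grep current-context | sed -e 's/current-context: //'" > /opt/course/1/context_default_no_kubectl.sh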
Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on controlplane nodes. Do not add new labels to any nodes.
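A sketch of the Pod spec; the toleration plus the nodeSelector pin it to controlplane nodes without adding any node labels:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: pod1-container
      image: httpd:2.4.41-alpine
  tolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/control-plane
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""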
Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply executes command true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.
Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.
Now the first Pod should be in ready state, confirm that.
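The probe section of ready-if-service-ready could look like this (a sketch), and the second Pod only needs the matching label:
    livenessProbe:
      exec:
        command:
          - 'true'
    readinessProbe:
      exec:
        command:
          - sh
          - -c
          - wget -T2 -O- http://service-am-i-ready:80
# then:
k run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"
k get pod ready-if-service-ready   # should become 1/1 Ready once am-i-ready backs the Service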
There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp). Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.
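The two scripts could simply contain (kubectl sorting as required):
echo "kubectl get pod -A --sort-by=.metadata.creationTimestamp" > /opt/course/5/find_pods.sh
echo "kubectl get pod -A --sort-by=.metadata.uid" > /opt/course/5/find_pods_uid.sh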
Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined. Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly. Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
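Sketches of the PV and PVC; the Deployment then mounts the claim at /tmp/safari-data:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /Volumes/Data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
# in the safari Deployment's Pod template add a volume using persistentVolumeClaim
# claimName: safari-pvc and a volumeMount of that volume at /tmp/safari-data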
The metrics-server has been installed in the cluster. Your colleague would like to know the kubectl commands to: 1. show Nodes resource usage 2. show Pods and their containers resource usage
Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.
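The two scripts could contain (metrics-server must be running for kubectl top to work):
echo "kubectl top node" > /opt/course/7/node.sh
echo "kubectl top pod --containers=true" > /opt/course/7/pod.sh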
Ssh into the controlplane node with ssh cluster1-controlplane1. Check how the controlplane components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the controlplane node. Also find out the name of the DNS application and how it's started/installed on the controlplane node. Write your findings into file /opt/course/8/controlplane-components.txt. The file should be structured like:
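Commands that help to gather this (a sketch):
ssh cluster1-controlplane1
find /etc/kubernetes/manifests/            # kube-apiserver, kube-controller-manager, kube-scheduler and etcd run as static Pods
find /etc/systemd/system/ | grep kube      # the kubelet runs as a systemd service
k -n kube-system get deploy                # coredns is the DNS application, installed as a Deployment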
Ssh into the controlplane node with ssh cluster2-controlplane1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards. Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node. Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-controlplane1. Make sure it's running. Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-node1.
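The first steps, as a sketch: stop the scheduler by moving its manifest away, then schedule the Pod by hand via spec.nodeName (the listing below then brings the scheduler back):
ssh cluster2-controlplane1
mv /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/
k run manual-schedule --image=httpd:2.4-alpine
k get pod manual-schedule -o wide                      # Pending, no node assigned
k get pod manual-schedule -o yaml > manual-schedule.yaml
# add "nodeName: cluster2-controlplane1" under spec, then recreate the Pod:
k replace --force -f manual-schedule.yaml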
# start the kube-scheduler again by moving its manifest back
cd /etc/kubernetes/manifests/
mv ../kube-scheduler.yaml .
kubectl -n kube-system get pod | grep schedule   # running again
k run manual-schedule2 --image=httpd:2.4-alpine
k get pod -o wide | grep schedule
//manual-schedule 1/1 Running ... cluster2-controlplane1
//manual-schedule2 1/1 Running ... cluster2-node1
Question 10 | RBAC ServiceAccount Role RoleBinding
Use context: kubectl config use-context k8s-c1-H
Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.
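A sketch using imperative commands:
k -n project-hamster create sa processor
k -n project-hamster create role processor --verb=create --resource=secret --resource=configmap
k -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor
k -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor   # yes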
#autocompletion
source <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
#alias
alias k=kubectl
complete -o default -F __start_kubectl k
k logs orange init-myservice
#sh: sleeep: not found
#=> checking the Init Containers' Command in k describe pod orange shows
#Command:
# sh
# -c
# sleeep 2;
#so the command is misspelled
#fix the error
k edit pod orange
#correct the misspelled command above (sleeep -> sleep)
#note: save and quit with :wq - the edit of a running Pod is rejected and kubectl stores a copy in a temp file
#delete the existing Pod and recreate it from that file
k replace --force -f <yaml-file-path>