[CKA] killer.sh ( Preview Question 1 ~ 3 )
Preview Question 1 |
Use context: kubectl config use-context k8s-c2-AC
The cluster admin asked you to find out the following information about etcd running on cluster2-controlplane1:
- Server private key location
- Server certificate expiration date
- Is client certificate authentication enabled
Write this information into /opt/course/p1/etcd-info.txt
Finally you're asked to save an etcd snapshot at /etc/etcd-snapshot.db on cluster2-controlplane1 and display its status
Answer:
// grep the configuration from /etc/kubernetes/manifests/etcd.yaml
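A minimal sketch of that grep, assuming the default kubeadm flag names in the etcd manifest:
grep -E 'key-file|client-cert-auth' /etc/kubernetes/manifests/etcd.yaml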
// check the certificate expiration date
openssl x509 -noout -text -in /etc/kubernetes/pki/etcd/server.crt | grep Validity -A2
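The three answers then go into /opt/course/p1/etcd-info.txt; a rough example of the file's layout (key location and client-cert-auth value are the kubeadm defaults, the date is a placeholder to fill in from the openssl output):
Server private key location: /etc/kubernetes/pki/etcd/server.key
Server certificate expiration date: <Not After date from the openssl output>
Is client certificate authentication enabled: yes (--client-cert-auth=true)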
// take the snapshot using etcdctl
ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
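The task also asks to display the snapshot's status; a sketch (on newer etcd releases this subcommand has moved to etcdutl):
ETCDCTL_API=3 etcdctl snapshot status /etc/etcd-snapshot.db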
https://kubernetes.io/docs/tasks/administer-cluster/certificates/
https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/ (Operating etcd clusters for Kubernetes)
★
Preview Question 2 |
Use context: kubectl config use-context k8s-c1-H
You're asked to confirm that kube-proxy is running correctly on all nodes. For this perform the following in Namespace project-hamster:
Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31. Make sure the busybox container keeps running for some time.
Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80.
Find the kube-proxy container on all nodes cluster1-controlplane1, cluster1-node1 and cluster1-node2 and make sure that it's using iptables. Use command crictl for this.
Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt.
Finally delete the Service and confirm that the iptables rules are gone from all nodes.
Answer:
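The Pod and Service themselves can be created first; a minimal sketch (the exact flags and the YAML filename are just one possible way to do it):
k -n project-hamster run p2-pod --image=nginx:1.21.3-alpine --dry-run=client -o yaml > p2-pod.yaml
// add the second container (busybox:1.31 with e.g. command: sh -c "sleep 1d") to the generated YAML, then:
k -n project-hamster apply -f p2-pod.yaml
k -n project-hamster expose pod p2-pod --name p2-service --port 3000 --target-port 80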
k get node
ssh cluster1-controlplane1
root@cluster1-controlplane1:~# crictl ps | grep kube-proxy
root@cluster1-controlplane1:~# crictl logs 2xxxxxx
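The log output shows which proxy mode kube-proxy started with; look for a line mentioning the iptables proxier (the exact wording differs between kube-proxy versions).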
ssh cluster1-controlplane1 iptables-save | grep p2-service
ssh cluster1-node1 iptables-save | grep p2-service
ssh cluster1-node2 iptables-save | grep p2-service
ssh cluster1-controlplane1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
ssh cluster1-node1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
ssh cluster1-node2 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
k -n project-hamster delete svc p2-service
ssh cluster1-controlplane1 iptables-save | grep p2-service
ssh cluster1-node1 iptables-save | grep p2-service
ssh cluster1-node2 iptables-save | grep p2-service
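After deleting the Service, all three greps should return nothing, confirming that kube-proxy removed the corresponding iptables rules on every node.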
Preview Question 3 |
Use context: kubectl config use-context k8s-c2-AC
Create a Pod named check-ip in Namespace default using image httpd:2.4.41-alpine. Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service.
Change the Service CIDR to 11.96.0.0/12 for the cluster.
Then create a second Service named check-ip-service2 pointing to the same Pod to check if your settings did take effect. Finally check if the IP of the first Service has changed.
Answer:
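Creating the Pod and the first Service could look like this (a sketch of one possible command form):
k run check-ip --image=httpd:2.4.41-alpine
k expose pod check-ip --name check-ip-service --port 80
k get svc check-ip-service   // note the ClusterIP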
// after exposing the Service, change the service CIDR in kube-apiserver and kube-controller-manager
vi /etc/kubernetes/manifests/kube-apiserver.yaml
vi /etc/kubernetes/manifests/kube-controller-manager.yaml
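In both manifests the flag to change is --service-cluster-ip-range; a sketch of the edited line (the static Pods restart on their own once the files are saved):
- --service-cluster-ip-range=11.96.0.0/12
Once kube-apiserver and kube-controller-manager are back up, create the second Service and compare the ClusterIPs:
k expose pod check-ip --name check-ip-service2 --port 80
k get svc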
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/