
Preview Question 1 |

Use context: kubectl config use-context k8s-c2-AC

The cluster admin asked you to find out the following information about etcd running on cluster2-controlplane1:
- Server private key location
- Server certificate expiration date
- Is client certificate authentication enabled

Write this information into /opt/course/p1/etcd-info.txt.
Finally you're asked to save an etcd snapshot at /etc/etcd-snapshot.db on cluster2-controlplane1 and display its status.

Answer:

// grep the configuration details from /etc/kubernetes/manifests/etcd.yaml
// check the certificate validity period
openssl x509  -noout -text -in /etc/kubernetes/pki/etcd/server.crt | grep Validity -A2


// create a snapshot using etcdctl
ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
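
// The key location and client-cert-auth setting can be read from the manifest, and the snapshot status displayed. A minimal sketch (the flag names match a default kubeadm setup; the file layout is only an example):
grep -E "key-file|client-cert-auth" /etc/kubernetes/manifests/etcd.yaml

ETCDCTL_API=3 etcdctl snapshot status /etc/etcd-snapshot.db

# /opt/course/p1/etcd-info.txt (example layout)
Server private key location: /etc/kubernetes/pki/etcd/server.key
Server certificate expiration date: <output of the openssl command above>
Is client certificate authentication enabled: yes (--client-cert-auth=true)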

https://kubernetes.io/docs/tasks/administer-cluster/certificates/

 

https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster

Preview Question 2 |

Use context: kubectl config use-context k8s-c1-H
 
You're asked to confirm that kube-proxy is running correctly on all nodes. For this perform the following in Namespace project-hamster:
Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31. Make sure the busybox container keeps running for some time.
Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80.
Find the kube-proxy container on all nodes cluster1-controlplane1, cluster1-node1 and cluster1-node2 and make sure that it's using iptables. Use command crictl for this.
Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt.
Finally delete the Service and confirm that the iptables rules are gone from all nodes.

Answer:
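
// The Pod and Service creation step isn't shown in these notes; a minimal sketch (the second container's name is an assumption):
k -n project-hamster run p2-pod --image=nginx:1.21.3-alpine --dry-run=client -o yaml > p2-pod.yaml
# edit p2-pod.yaml and add the busybox container, e.g.:
#   - name: c2
#     image: busybox:1.31
#     command: ["sh", "-c", "sleep 1d"]
k -f p2-pod.yaml create
k -n project-hamster expose pod p2-pod --name p2-service --port 3000 --target-port 80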

k get node

ssh cluster1-controlplane1

root@cluster1-controlplane1$ crictl ps | grep kube-proxy

root@cluster1-controlplane1~# crictl logs 2xxxxxx


ssh cluster1-controlplane1 iptables-save | grep p2-service

ssh cluster1-node1 iptables-save | grep p2-service

ssh cluster1-node2 iptables-save | grep p2-service


ssh cluster1-controlplane1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
ssh cluster1-node1 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
ssh cluster1-node2 iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
 


k -n project-hamster delete svc p2-service


ssh cluster1-controlplane1 iptables-save | grep p2-service
ssh cluster1-node1 iptables-save | grep p2-service
ssh cluster1-node2 iptables-save | grep p2-service

 


Preview Question 3 |

Use context: kubectl config use-context k8s-c2-AC
 
Create a Pod named check-ip in Namespace default using image httpd:2.4.41-alpine. Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service.
Change the Service CIDR to 11.96.0.0/12 for the cluster.
Then create a second Service named check-ip-service2 pointing to the same Pod to check if your settings did take effect. Finally check if the IP of the first Service has changed.

Answer:

// after exposing the Service, change the Service CIDR in kube-apiserver and kube-controller-manager

vi /etc/kubernetes/manifests/kube-apiserver.yaml

vi /etc/kubernetes/manifests/kube-controller-manager.yaml
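
// The notes above only name the manifests; a fuller sketch (assuming the default kubeadm flag name --service-cluster-ip-range):
k run check-ip --image=httpd:2.4.41-alpine
k expose pod check-ip --name check-ip-service --port 80
k get svc check-ip-service            # note the ClusterIP

# change in both static Pod manifests:
#   - --service-cluster-ip-range=11.96.0.0/12

# after kube-apiserver and kube-controller-manager have restarted:
k expose pod check-ip --name check-ip-service2 --port 80
k get svc                             # the new Service gets an IP from 11.96.0.0/12; the old one keeps its IP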

https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/

 


Extra Question 1 | Find Pods first to be terminated

Use context: kubectl config use-context k8s-c1-H
 
Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the nodes run out of resources (cpu or memory) to schedule all Pods. Write the Pod names into /opt/course/e1/pods-not-stable.txt.

Answer:

k -n project-c13 describe pod | less -p Requests # describe all Pods and highlight Requests

//or
k -n project-c13 describe pod | egrep "^(Name:|    Requests:)" -A1

 

example from killer.sh
//jsonpath
k -n project-c13 get pod -o jsonpath="{range .items[*]} {.metadata.name}{.spec.containers[*].resources}{'\n'}{end}"

//or
k get pods -n project-c13 -o jsonpath="{range .items[*]}{.metadata.name} {.status.qosClass}{'\n'}{end}"

https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/

 

Extra Question 2 | Curl Manually Contact API

Use context: kubectl config use-context k8s-c1-H
 
There is an existing ServiceAccount secret-reader in Namespace project-hamster. Create a Pod of image curlimages/curl:7.65.3 named tmp-api-contact which uses this ServiceAccount. Make sure the container keeps running.
Exec into the Pod and use curl to access the Kubernetes Api of that cluster manually, listing all available secrets. You can ignore insecure https connection. Write the command(s) for this into file /opt/course/e4/list-secrets.sh.

Scenario:

1. Write the Pod YAML with --dry-run=client -o yaml, then add serviceAccountName and the namespace (see the sketch after step 4)

2. Apply the Pod and get a shell into it with k exec -it ... -- sh

3. curl

curl https://kubernetes.default
curl -k https://kubernetes.default # ignore the insecure https connection
curl -k https://kubernetes.default/api/v1/secrets # 403 Forbidden

4. Write the command(s) into the file

# /opt/course/e4/list-secrets.sh
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"
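
// A sketch of steps 1 and 2 (the file name e4_pod.yaml is only an example):
k -n project-hamster run tmp-api-contact --image=curlimages/curl:7.65.3 --dry-run=client -o yaml --command -- sh -c "sleep 1d" > e4_pod.yaml
# add under spec:
#   serviceAccountName: secret-reader
k -f e4_pod.yaml create
k -n project-hamster exec tmp-api-contact -it -- sh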

 

p.s.) Avoiding the insecure connection

// To make a verified (certificate-checked) https connection:
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl --cacert ${CACERT} https://kubernetes.default/api/v1/secrets -H "Authorization: Bearer ${TOKEN}"

https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/

 


Ver. 240629

*Notes*
- Write-up of the killer.sh practice questions
- The "Use context: ~" instructions were not applied
(I worked in a single cluster with one master node and two worker nodes;
the control-plane node is named "master")
- Some Udemy labs were used
- There may be errors

Question 21 | Create a Static Pod and Service

Use context: kubectl config use-context k8s-c3-CCC
 
Create a Static Pod named my-static-pod in Namespace default on cluster3-controlplane1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.
Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-controlplane1 internal IP address. You can connect to the internal node IPs from your main terminal.

Answer:

cd /etc/kubernetes/manifests/

kubectl run my-static-pod \
    --image=nginx:1.16-alpine \
    -o yaml --dry-run=client > my-static-pod.yaml
Then edit the my-static-pod.yaml to add the requested resource requests:

# /etc/kubernetes/manifests/my-static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: my-static-pod
  name: my-static-pod
spec:
  containers:
  - image: nginx:1.16-alpine
    name: my-static-pod
    resources:
      requests:
        cpu: 10m
        memory: 20Mi
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

 

k expose pod my-static-pod --name=static-pod-service --port=80 --type=NodePort --dry-run=client -o yaml

// svc.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: my-static-pod
  name: static-pod-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-static-pod
  type: NodePort
status:
  loadBalancer: {}
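
// Checking Endpoints and reachability (the node IP and NodePort below are placeholders):
k get svc,ep static-pod-service
k get node cluster3-controlplane1 -o wide     # note the INTERNAL-IP
curl <internal-ip>:<node-port>                # should return the nginx welcome page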

https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/

 


https://kubernetes.io/docs/concepts/services-networking/service/

 


 


Question 22 | Check how long certificates are valid

Use context: kubectl config use-context k8s-c2-AC
 
Check how long the kube-apiserver server certificate is valid on cluster2-controlplane1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration.
Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.
Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.

Answer: 

// kube-apiserver
ssh cluster2-controlplane1
find /etc/kubernetes/pki | grep apiserver

// certificate location
/etc/kubernetes/pki/apiserver.crt

//openssl
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep Validity -A2

echo "Jul  3 00:16:25 2025 GMT" > /opt/course/22/expiration
kubeadm certs check-expiration | grep apiserver

# /opt/course/22/kubeadm-renew-certs.sh
kubeadm certs renew apiserver

 

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/

 

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-token/

https://kubernetes.io/docs/tasks/administer-cluster/certificates/


Question 23 | Kubelet client/server cert info

Use context: kubectl config use-context k8s-c2-AC
 
Node cluster2-node1 has been added to the cluster using kubeadm and TLS bootstrapping.
Find the "Issuer" and "Extended Key Usage" values of the cluster2-node1:

1. kubelet client certificate, the one used for outgoing connections to the kube-apiserver.
2. kubelet server certificate, the one used for incoming connections from the kube-apiserver.

Write the information into file /opt/course/23/certificate-info.txt.
Compare the "Issuer" and "Extended Key Usage" fields of both certificates and make sense of these.

Answer:

ssh cluster2-node1

openssl x509  -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep Issuer
        
openssl x509  -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep "Extended Key Usage" -A1

 

openssl x509  -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep Issuer

openssl x509  -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep "Extended Key Usage" -A1

 

/opt/course/23/certificate-info.txt
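
An example of what could go into the file (the exact Issuer values differ per cluster, so treat these as assumptions):

# /opt/course/23/certificate-info.txt (example)
client cert: Issuer CN = kubernetes (the cluster CA), Extended Key Usage = TLS Web Client Authentication
server cert: Issuer CN = <node>-ca@<timestamp> (self-signed by the kubelet), Extended Key Usage = TLS Web Server Authentication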

 

https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/

 


https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/

 


 


Question 24 | NetworkPolicy

Use context: kubectl config use-context k8s-c1-H
 
There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.
To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:
- connect to db1-* Pods on port 1111
- connect to db2-* Pods on port 2222

Use the app label of Pods in your policy.
After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.

Answer:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress                    # policy is only about Egress
  egress:
    -                           # first rule
      to:                           # first condition "to"
      - podSelector:
          matchLabels:
            app: db1
      ports:                        # second condition "port"
      - protocol: TCP
        port: 1111
    -                           # second rule
      to:                           # first condition "to"
      - podSelector:
          matchLabels:
            app: db2
      ports:                        # second condition "port"
      - protocol: TCP
        port: 2222

https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-egress-traffic

 



Question 25 | Etcd Snapshot Save and Restore

Use context: kubectl config use-context k8s-c3-CCC
 
Make a backup of etcd running on cluster3-controlplane1 and save it on the controlplane node at /tmp/etcd-backup.db.
Then create any kind of Pod in the cluster.
Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.

Answer:

root@cluster3-controlplane1:~# ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key

 

<etcd restore> ★★★

// create an arbitrary test Pod
kubectl run test --image=nginx

// restore etcd
// stop all control plane components by moving the static Pod manifests away
root@controlplane:~# cd /etc/kubernetes/manifests/
root@controlplane:/etc/kubernetes/manifests# mv * ..

// restore the backup into a separate data directory
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
--data-dir /var/lib/etcd-backup \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key

// edit etcd.yaml to point at the restored data directory
  - hostPath:
      path: /var/lib/etcd-backup                # changed
      type: DirectoryOrCreate
    name: etcd-data

// move the yaml files back into the manifests directory
root@controlplane:/etc/kubernetes/manifests# mv ../*.yaml .

// wait for etcd and kube-apiserver to come back, then check the Pods
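
// for example (a sketch):
watch crictl ps      # on the controlplane, until etcd and kube-apiserver are running again
k get pod            # the test Pod created before the restore should be gone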

https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster

 


Ver. 240629

*Notes*
- Write-up of the killer.sh practice questions
- The "Use context: ~" instructions were not applied
(I worked in a single cluster with one master node and two worker nodes;
the control-plane node is named "master")
- Some Udemy labs were used
- There may be errors

Question 11 | DaemonSet on all Nodes

Use context: kubectl config use-context k8s-c1-H
 
Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, also controlplanes.

Answer:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger
  labels:
    id: ds-important
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
  selector:
    matchLabels:
      id: ds-important
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
  template:
    metadata:
      labels:
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: ds-important
        image: httpd:2.4-alpine
        resources:
          requests:
            cpu: 10m
            memory: 10Mi

 

 

https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

 



Question 12 | Deployment on all Nodes

Use context: kubectl config use-context k8s-c1-H
 
Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image google/pause.
There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-node1 and cluster1-node2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won't be scheduled, unless a new worker node will be added. Use topologyKey: kubernetes.io/hostname for this.
In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.

Answer:

There are two possible ways: one using podAntiAffinity (shown below) and one using topologySpreadConstraints (sketched after it).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-important
  namespace: project-tiger
  labels:
    id: very-important
spec:
  replicas: 3
  selector:
    matchLabels:
      id: very-important
  template:
    metadata:
      labels:
        id: very-important
    spec:
      containers:
      - name: container1
        image: nginx:1.17.6-alpine
        ports:
        - containerPort: 80
      - name: container2
        image: google/pause
      affinity:                                             
        podAntiAffinity:                                    
          requiredDuringSchedulingIgnoredDuringExecution:   
          - labelSelector:                                  
              matchExpressions:                             
              - key: id                                     
                operator: In                                
                values:                                     
                - very-important                            
            topologyKey: kubernetes.io/hostname

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity

 


 


Question 13 | Multi Containers and Pod shared Volume

Use context: kubectl config use-context k8s-c1-H
 
Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn't be persisted or shared with other Pods.
Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME.
Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.
Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.
Check the logs of container c3 to confirm correct setup.

Answer:

//multi-container.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-playground
  name: multi-container-playground
spec:
  containers:
  - image: nginx:1.17.6-alpine
    name: c1
    resources: {}
    env:
    - name: MY_NODE_NAME                                                          
      valueFrom:                                                                  
        fieldRef:                                                                 
          fieldPath: spec.nodeName                                                
    volumeMounts:                                                                 
    - name: vol                                                                   
      mountPath: /vol                                                             
  - image: busybox:1.31.1                                                         
    name: c2                                                                      
    command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]  
    volumeMounts:                                                                 
    - name: vol                                                                   
      mountPath: /vol
  - image: busybox:1.31.1
    name: c3
    command: ["sh", "-c", "tail -f /vol/date.log"]
    volumeMounts:
    - name: vol
      mountPath: /vol
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  - name: vol
    emptyDir: {}
status: {}
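
// Apply and confirm the setup (the file name is the one used above):
k apply -f multi-container.yaml
k logs multi-container-playground -c c3   # should show a new date line every second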

 

https://kubernetes.io/docs/concepts/workloads/pods/#sidecar-containers

 


https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/

 



Question 14 | Find out Cluster Information

Use context: kubectl config use-context k8s-c1-H
 
You're asked to find out the following information about the cluster k8s-c1-H:
1. How many controlplane nodes are available?
2. How many worker nodes are available?
3. What is the Service CIDR?
4. Which Networking (or CNI Plugin) is configured and where is its config file?
5. Which suffix will static Pods have that run on cluster1-node1?
Write your answers into file /opt/course/14/cluster-info, structured like this:
# /opt/course/14/cluster-info
1: [ANSWER]
2: [ANSWER]
3: [ANSWER]
4: [ANSWER]
5: [ANSWER]

Answer:

k get node
//How many controlplane and worker nodes are available?

ssh cluster1-controlplane1
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range
//What is the Service CIDR?

find /etc/cni/net.d/
cat /etc/cni/net.d/10-weave.conflist
//Which Networking (or CNI Plugin) is configured and where is its config file?

 

# /opt/course/14/cluster-info

# How many controlplane nodes are available?
1: 1

# How many worker nodes are available?
2: 2

# What is the Service CIDR?
3: 10.96.0.0/12

# Which Networking (or CNI Plugin) is configured and where is its config file?
4: Weave, /etc/cni/net.d/10-weave.conflist

# Which suffix will static pods have that run on cluster1-node1?
5: -cluster1-node1

 

https://kubernetes.io/docs/concepts/architecture/nodes/

 

https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/

https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/


Question 15 | Cluster Event Logging

Use context: kubectl config use-context k8s-c2-AC
 
Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time (metadata.creationTimestamp). Use kubectl for it.
Now delete the kube-proxy Pod running on node cluster2-node1 and write the events this caused into /opt/course/15/pod_kill.log.
Finally kill the containerd container of the kube-proxy Pod on node cluster2-node1 and write the events into /opt/course/15/container_kill.log.
Do you notice differences in the events both actions caused?

Answer:

# /opt/course/15/cluster_events.sh
kubectl get events -A --sort-by=.metadata.creationTimestamp

k -n kube-system get pod -o wide | grep proxy # find pod running on cluster2-node1
k -n kube-system delete pod kube-proxy-xxxxxxx

# check the cluster events
sh /opt/course/15/cluster_events.sh

# write the events into pod_kill.log
sh /opt/course/15/cluster_events.sh > /opt/course/15/pod_kill.log

ssh cluster2-node1
crictl ps | grep kube-proxy
crictl rm 1e020b43c44xxxx

crictl ps | grep kube-proxy

sh /opt/course/15/cluster_events.sh
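
// and write the events caused by the container kill into the second file:
sh /opt/course/15/cluster_events.sh > /opt/course/15/container_kill.log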

 

https://kubernetes.io/docs/reference/kubectl/generated/kubectl_events/

 

https://kubernetes.io/docs/reference/kubectl/generated/kubectl_delete/

https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/

 


Question 16 | Namespaces and Api Resources

Use context: kubectl config use-context k8s-c1-H
 
Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap...) into /opt/course/16/resources.txt.
Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.

Answer:

k api-resources    # shows all

k api-resources -h # help always good

k api-resources --namespaced -o name > /opt/course/16/resources.txt

k get role -h

k -n project-c13 get role --no-headers | wc -l
//No resources found in project-c13 namespace.
//0

k -n project-c14 get role --no-headers | wc -l
//300

k -n project-hamster get role --no-headers | wc -l
//No resources found in project-hamster namespace.
//0

k -n project-snake get role --no-headers | wc -l
//No resources found in project-snake namespace.
//0

k -n project-tiger get role --no-headers | wc -l
//No resources found in project-tiger namespace.
//0
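
// project-c14 has the most Roles, so (the count is from this example run):
echo "project-c14 with 300 Roles" > /opt/course/16/crowded-namespace.txt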

 

https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/

 

Question 17 | Find Container of Pod and check info

Use context: kubectl config use-context k8s-c1-H
 
In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.
Using command crictl:
1. Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt
2. Write the logs of the container into /opt/course/17/pod-container.log

Answer:
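
The answer was left empty here; a minimal sketch (node name and container ID are placeholders):

k -n project-tiger run tigers-reunite --image=httpd:2.4.41-alpine --labels "pod=container,container=pod"
k -n project-tiger get pod tigers-reunite -o wide        # note the node

ssh <node>
crictl ps | grep tigers-reunite                          # note the container ID
crictl inspect <container-id> | grep runtimeType

# back on the main terminal, write both values and the logs:
echo "<container-id> <runtimeType>" > /opt/course/17/pod-container.txt
ssh <node> 'crictl logs <container-id>' &> /opt/course/17/pod-container.log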

 


Question 18 | Fix Kubelet

Use context: kubectl config use-context k8s-c3-CCC
 
There seems to be an issue with the kubelet not running on cluster3-node1. Fix it and confirm that cluster has node cluster3-node1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-node1 afterwards.
Write the reason of the issue into /opt/course/18/reason.txt.

Answer:

NAME                     STATUS     ROLES           AGE   VERSION
cluster3-controlplane1   Ready      control-plane   14d   v1.30.1
cluster3-node1           NotReady   <none>          14d   v1.30.1
ssh cluster3-node1

ps aux | grep kubelet

service kubelet start

service kubelet status
 /usr/local/bin/kubelet
// /usr/local/bin/kubelet: No such file or directory

whereis kubelet
// /usr/bin/kubelet

vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf # fix the path to the kubelet binary

systemctl daemon-reload

service kubelet restart

service kubelet status  # should be running now
echo "wrong path to kubelet binary specified in service config" > /opt/course/18/reason.txt

https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/?ref=kubefirst.io

 


 


Question 19 | Create Secret and mount into Pod

Use context: kubectl config use-context k8s-c3-CCC
 
Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time.
There is an existing Secret located at /opt/course/19/secret1.yaml, create it in the Namespace secret and mount it readonly into the Pod at /tmp/secret1.
Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod's container as environment variables APP_USER and APP_PASS.
Confirm everything is working.

 


Answer:

k create ns secret

cp /opt/course/19/secret1.yaml 19_secret1.yaml
vim 19_secret1.yaml

apiVersion: v1
data:
  halt: IyEgL2Jpbi9zaAo...
kind: Secret
metadata:
  creationTimestamp: null
  name: secret1
  namespace: secret  # changed
k -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234

k -n secret run secret-pod --image=busybox:1.31.1 --dry-run=client -o yaml -- sh -c "sleep 1d" > 19.yaml
vim 19.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secret-pod
  name: secret-pod
  namespace: secret  # added
spec:
  containers:
  - args:
    - sh
    - -c
    - sleep 1d
    image: busybox:1.31.1
    name: secret-pod
    resources: {}
    env:  # added
    - name: APP_USER
      valueFrom:
        secretKeyRef:
          name: secret2
          key: user
    - name: APP_PASS
      valueFrom:
        secretKeyRef:
          name: secret2
          key: pass
    volumeMounts:  # added
    - name: secret1
      mountPath: /tmp/secret1
      readOnly: true
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:  # added
  - name: secret1
    secret:
      secretName: secret1
status: {}
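
// Create everything and verify (a sketch):
k -f 19_secret1.yaml create
k -f 19.yaml create
k -n secret exec secret-pod -- env | grep APP
k -n secret exec secret-pod -- ls /tmp/secret1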

https://kubernetes.io/docs/concepts/configuration/secret/

 

https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/


Question 20 | Update Kubernetes Version and join cluster

Use context: kubectl config use-context k8s-c3-CCC
 
Your coworker said node cluster3-node2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that's running on cluster3-controlplane1. Then add this node to the cluster. Use kubeadm for this.

Answer:

k get node

// check the versions
ssh cluster3-node2
kubectl version
kubelet --version
kubeadm version

// this errors because the node has not been initialized (it is not part of the cluster yet)
kubeadm upgrade node

// upgrade
apt update
apt install kubectl=1.30.1-1.1 kubelet=1.30.1-1.1
service kubelet restart
service kubelet status
// add the upgraded node to the cluster
ssh cluster3-controlplane1
kubeadm token create --print-join-command

ssh cluster3-node2
kubeadm join 192.168.100.31:6443 --token zodhba.wlxmtvumtjpgaevg --discovery-token-ca-cert-hash sha256:6819708bfec2336183a68138d680ea7bf81dfbf9a57b3fca1c51bdc2f4fc6e99
service kubelet status

 

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

 

https://kubernetes.io/docs/reference/kubectl/kubectl/

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/

Ver. 240629

*Notes*
- Write-up of the killer.sh practice questions
- The "Use context: ~" instructions were not applied
(I worked in a single cluster with one master node and two worker nodes;
the control-plane node is named "master")
- There may be errors

Question 1 | Contexts

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.

Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.

Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.

Answer:

//alias k='kubectl'
k config get-contexts -o name > /opt/course/1/contexts
echo 'kubectl config current-context' >> /opt/course/1/context_default_kubectl.sh
sh /opt/course/1/context_default_kubectl.sh
echo 'cat ~/.kube/config | grep current' >> /opt/course/1/context_default_no_kubectl.sh
sh /opt/course/1/context_default_no_kubectl.sh

 

https://kubernetes.io/docs/reference/kubectl/quick-reference/

 



Question 2 | Schedule Pod on Controlplane Nodes

Use context: kubectl config use-context k8s-c1-H
 
Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on controlplane nodes. Do not add new labels to any nodes.

Answer:

k run pod1 --image=httpd:2.4.41-alpine --dry-run=client -o yaml
k get node master --show-labels | grep control-plane
//vi pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
k apply -f pod1.yaml
k get po -o wide

https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/

 

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity

 


Question 3 | Scale down StatefulSet

Use context: kubectl config use-context k8s-c1-H
 
There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources.

Answer:

k scale sts o3db -n project-c13 --replicas=1

https://kubernetes.io/docs/tasks/run-application/scale-stateful-set/

 



Question 4 | Pod Ready if Service is reachable

Use context: kubectl config use-context k8s-c1-H
 
Do the following in Namespace default.
Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine.
Configure a LivenessProbe which simply executes command true.
Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this.
Start the Pod and confirm it isn't ready because of the ReadinessProbe.

Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready.
The already existing Service service-am-i-ready should now have that second Pod as endpoint.

Now the first Pod should be in ready state, confirm that.

Answer:

4_pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: ready-if-service-ready
    resources: {}
    livenessProbe:                                      # add from here
      exec:
        command:
        - 'true'
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'   # to here
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

k run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"
k get pod ready-if-service-ready

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

 


 


Question 5 | Kubectl sorting

Use context: kubectl config use-context k8s-c1-H
 
There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).
Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.

Answer:

# /opt/course/5/find_pods.sh
kubectl get pod -A --sort-by=.metadata.creationTimestamp
# /opt/course/5/find_pods_uid.sh
kubectl get pod -A --sort-by=.metadata.uid

https://kubernetes.io/pt-br/docs/reference/kubectl/cheatsheet/

 


 


Question 6 | Storage, PV, PVC, Pod volume

Use context: kubectl config use-context k8s-c1-H
 
Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.
Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.

Answer:

//cat 6_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
spec:
  hostPath:
    path: /Volumes/Data
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce


//cat 6_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
//cat 6_deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: safari
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: safari
    spec:
      containers:
      - image: httpd:2.4.41-alpine
        name: httpd
        volumeMounts:
        - mountPath: "/tmp/safari-data"
          name: mypd
      volumes:
        - name: mypd
          persistentVolumeClaim:
            claimName: safari-pvc
status: {}

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes

 


 


Question 7 | Node and Pod Resource Usage

Use context: kubectl config use-context k8s-c1-H
 
The metrics-server has been installed in the cluster. Your colleague would like to know the kubectl commands to:
1. show Nodes resource usage
2. show Pods and their containers resource usage

Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.

Answer:

# /opt/course/7/node.sh
kubectl top node

# /opt/course/7/pod.sh
kubectl top pod --containers=true

The metrics-server is an official Kubernetes add-on; it is installed by applying its raw manifest URL.
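
For reference, a typical install applies the project's release manifest (verify the URL against the metrics-server repository before relying on it):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml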

 

https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/

 


- metrics-server

https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-api

 


 


Question 8 | Get Controlplane Information

Use context: kubectl config use-context k8s-c1-H
 
Ssh into the controlplane node with ssh cluster1-controlplane1. Check how the controlplane components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the controlplane node. Also find out the name of the DNS application and how it's started/installed on the controlplane node.
Write your findings into file /opt/course/8/controlplane-components.txt. The file should be structured like:
# /opt/course/8/controlplane-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]

 

Choices of [TYPE] are: not-installed, process, static-pod, pod

Answer:

ps aux | grep kubelet # shows kubelet process

find /usr/lib/systemd | grep kube

find /usr/lib/systemd | grep etcd

find /etc/kubernetes/manifests/

kubectl -n kube-system get pod -o wide | grep master

kubectl -n kube-system get ds

kubectl -n kube-system get deploy

 

# /opt/course/8/controlplane-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns

 

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/

 



Question 9 | Kill Scheduler, Manual Scheduling

Use context: kubectl config use-context k8s-c2-AC
 
Ssh into the controlplane node with ssh cluster2-controlplane1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.
Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.
Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-controlplane1. Make sure it's running.
Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-node1.

 


Answer:

kubectl -n kube-system get pod | grep schedule

cd /etc/kubernetes/manifests/

mv kube-scheduler.yaml ..

// kube-scheduler is stopped now

// the Pod will be created but stays Pending

k run manual-schedule --image=httpd:2.4-alpine

//9.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-09-04T15:51:02Z"
  labels:
    run: manual-schedule
  managedFields:
...
    manager: kubectl-run
    operation: Update
    time: "2020-09-04T15:51:02Z"
  name: manual-schedule
  namespace: default
  resourceVersion: "3515"
  selfLink: /api/v1/namespaces/default/pods/manual-schedule
  uid: 8e9d2532-4779-4e63-b5af-feb82c74a935
spec:
  nodeName: cluster2-controlplane1        # add the controlplane node name
  containers:
  - image: httpd:2.4-alpine
    imagePullPolicy: IfNotPresent
    name: manual-schedule
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-nxnc7
      readOnly: true
  dnsPolicy: ClusterFirst
...

k -f 9.yaml replace --force

// the Pod is now Running

cd /etc/kubernetes/manifests/

mv ../kube-scheduler.yaml .

kubectl -n kube-system get pod | grep schedule //running

 

k run manual-schedule2 --image=httpd:2.4-alpine

k get pod -o wide | grep schedule
//manual-schedule    1/1     Running   ...   cluster2-controlplane1
//manual-schedule2   1/1     Running   ...   cluster2-node1

 

https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/

 


 


Question 10 | RBAC ServiceAccount Role RoleBinding

Use context: kubectl config use-context k8s-c1-H
 
Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.

Answer:

k -n project-hamster create sa processor

k -n project-hamster create role processor \
  --verb=create \
  --resource=secret \
  --resource=configmap

k -n project-hamster create rolebinding processor \
  --role processor \
  --serviceaccount project-hamster:processor

 

result

➜ k -n project-hamster auth can-i create secret \
  --as system:serviceaccount:project-hamster:processor
yes

➜ k -n project-hamster auth can-i create configmap \
  --as system:serviceaccount:project-hamster:processor
yes

➜ k -n project-hamster auth can-i create pod \
  --as system:serviceaccount:project-hamster:processor
no

➜ k -n project-hamster auth can-i delete secret \
  --as system:serviceaccount:project-hamster:processor
no

➜ k -n project-hamster auth can-i get configmap \
  --as system:serviceaccount:project-hamster:processor
no

https://kubernetes.io/docs/reference/access-authn-authz/rbac/

 


● The exam environment changed in June 2022

https://itnext.io/cks-cka-ckad-changed-terminal-to-remote-desktop-157a26c1d5e

 

https://killer.sh/faq

https://killer.sh/attendee/f3af607b-5396-4de6-b98f-a0520d3e9dda/tips

 


● There is no bookmark feature, so know your way around the Kubernetes docs site by heart

https://kubernetes.io/docs/reference/kubectl/quick-reference/

 


 


● Copy & paste

Note that the web browser inside PSI is Firefox.


● Autocompletion & alias

https://kubernetes.io/docs/reference/kubectl/quick-reference/

 


# autocompletion
source <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.

# alias
alias k=kubectl
complete -o default -F __start_kubectl k

● Practice with killer.sh

 


docs : https://kubernetes.io/docs/concepts/services-networking/network-policies/

 


# k run curl --image=alpine/curl --rm -it -- sh

#networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: np-test-1
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80


k apply -f networkpolicy.yaml

docs : https://kubernetes.io/docs/tasks/debug/debug-application/debug-init-containers/

 


k logs orange init-myservice
#sh: sleeeep: not found
#=> k describe pod orange shows the init container's Command:
#Command:
#	sh
#	-c
#	sleeep 2;
#=> the command has a typo

# fix the error
k edit pod orange
# correct the wrong command above
# note: this field of a running Pod can't be updated in place; after saving and exiting, kubectl stores the rejected edit in a temp file (the path is shown in its output)

# delete the existing Pod and recreate it from that file
k replace --force -f <path to the yaml file>