Question 1
Create a Pod mc-pod in the mc-namespace namespace with three containers.
The first container should be named mc-pod-1, run the nginx:1-alpine image,
and set an environment variable NODE_NAME to the node name.
The second container should be named mc-pod-2, run the busybox:1 image,
and continuously log the output of the date command
to the file /var/log/shared/date.log every second.
The third container should have the name mc-pod-3, run the image busybox:1,
and print the contents of the date.log file generated
by the second container to stdout. Use a shared, non-persistent volume.
- Build the manifest starting from container 1:
  - k run mc-pod -n mc-namespace --image=nginx:1-alpine --dry-run=client -o yaml > a.yaml
  - vi a.yaml
- Container 1: inject the node name automatically via an environment variable:
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
- Container 2: add a name and image entry below container 1, and append the output of the date command to /var/log/shared/date.log every second:
    - name: mc-pod-2
      image: busybox:1
      command: ["sh", "-c", "while true; do date >> /var/log/shared/date.log; sleep 1; done"]
apiVersion: v1
kind: Pod
metadata:
  name: mc-pod
  namespace: mc-namespace
spec:
  volumes:
  - name: shared-volume
    emptyDir: {}
  containers:
  - image: nginx:1-alpine
    name: mc-pod-1
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
  - name: mc-pod-2
    image: busybox:1
    command:
    - "sh"
    - "-c"
    - "while true; do date >> /var/log/shared/date.log; sleep 1; done"
    volumeMounts:
    - name: shared-volume
      mountPath: /var/log/shared/
  - name: mc-pod-3
    image: busybox:1
    command:
    - "sh"
    - "-c"
    - "tail -f /var/log/shared/date.log"
    volumeMounts:
    - name: shared-volume
      mountPath: /var/log/shared/
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
- Apply:
  - k apply -f a.yaml
- Test:
  - k logs mc-pod -c mc-pod-3 -f -n mc-namespace
Question 2
This question needs to be solved on node node01. To access the node using SSH, use the credentials below:
username: bob
password: caleston123
As an administrator, you need to prepare node01 to install kubernetes.
One of the steps is installing a container runtime.
Install the cri-docker_0.3.16.3-0.debian.deb package located in /root and
ensure that the cri-docker service is running and enabled to start on boot.
- SSH into the node as bob:
  - ssh bob@node01
- Escalate privileges and move into the /root directory:
  - sudo -i
- Install the package:
  - dpkg -i /root/cri-docker_0.3.16.3-0.debian.deb
  - or, if already in /root: dpkg -i cri-docker_0.3.16.3-0.debian.deb
- Start the cri-docker service and enable it to start on boot:
  - systemctl start cri-docker
  - systemctl enable cri-docker
- Confirm it is running:
  - systemctl status cri-docker
Question 3
On controlplane node, identify all CRDs related to VerticalPodAutoscaler
and save their names into the file /root/vpa-crds.txt.
- List the CRDs:
  - k get crd
- Save the VerticalPodAutoscaler-related names into /root/vpa-crds.txt:
  - vi /root/vpa-crds.txt and record the names of the CRDs related to VerticalPodAutoscaler
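The manual vi step can also be done in one pass. This is a sketch assuming the standard VPA CRD naming convention (names containing "verticalpodautoscaler"); inspect the grep hits before trusting the file:

```shell
# List CRDs by name, keep only the VPA-related ones, write them to the target file
kubectl get crd -o name | grep -i verticalpodautoscaler | cut -d/ -f2 > /root/vpa-crds.txt

# Confirm what was written
cat /root/vpa-crds.txt
```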
Question 4
Create a service messaging-service to expose the messaging application
within the cluster on port 6379.
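A minimal imperative approach, assuming the application runs as a pod named messaging listening on 6379 (check the actual resource name and labels in the cluster first):

```shell
# Inspect the existing workload and its labels
kubectl get pods --show-labels | grep -i messaging

# Expose it inside the cluster (ClusterIP is the default service type);
# the pod name "messaging" is an assumption for this lab
kubectl expose pod messaging --name=messaging-service --port=6379
```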
Question 5
Create a deployment named hr-web-app using the image kodekloud/webapp-color with 2 replicas.
- k create deploy hr-web-app --image=kodekloud/webapp-color --replicas=2
Question 6
A new application orange is deployed. There is something wrong with it. Identify and fix the issue.
- Run describe on the failing pod to see where the problem occurs:
  - k describe pod orange
- Dump the broken pod to a YAML file:
  - k get pod orange -o yaml > b.yaml
  - vi b.yaml
- Fix the issue (in this task, the initContainers command is misspelled as sleeeep instead of sleep)
- After editing, delete the existing orange pod and k apply -f b.yaml
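The corrected initContainers section would look roughly like this; the container name, image, and sleep duration below are assumptions for illustration — only the misspelled command matters:

```yaml
  initContainers:
  - name: init-myservice   # name as found in the actual pod spec
    image: busybox
    command:
    - sh
    - -c
    - sleep 2              # was "sleeeep 2" — the typo that broke the pod
```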
Question 7
Expose the hr-web-app created in the previous task as a service named hr-web-app-service,
accessible on port 30082 on the nodes of the cluster.
The web application listens on port 8080.
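One way to solve this: imperative expose cannot pin a specific nodePort, so it has to be edited in afterwards (alternatively, generate the YAML with --dry-run=client and set nodePort before applying):

```shell
# Create a NodePort service for the deployment (8080 is the app port)
kubectl expose deployment hr-web-app --name=hr-web-app-service \
  --type=NodePort --port=8080

# Then set nodePort: 30082 under spec.ports in the service
kubectl edit svc hr-web-app-service
```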
Question 8
Create a Persistent Volume with the given specification: -
Volume name: pv-analytics
Storage: 100Mi
Access mode: ReadWriteMany
Host path: /pv/data-analytics
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-analytics
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /pv/data-analytics
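Save the manifest and apply it, then confirm the PV shows up as Available (the file name pv.yaml is an assumption):

```shell
kubectl apply -f pv.yaml
kubectl get pv pv-analytics
```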
Question 9
Create a Horizontal Pod Autoscaler (HPA) with name webapp-hpa
for the deployment named kkapp-deploy in the default namespace
with the webapp-hpa.yaml file located under the root folder.
Ensure that the HPA scales the deployment based on CPU utilization,
maintaining an average CPU usage of 50% across all pods.
Configure the HPA to cautiously scale down pods by setting a stabilization window of 300 seconds
to prevent rapid fluctuations in pod count.
Note: The kkapp-deploy deployment is created for backend; you can check in the terminal.
- Use the pre-created webapp-hpa.yaml:
  - vi webapp-hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kkapp-deploy
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
- Apply and verify:
- k apply -f webapp-hpa.yaml
- k get hpa
Question 10
Deploy a Vertical Pod Autoscaler (VPA) with name analytics-vpa
for the deployment named analytics-deployment in the default namespace.
The VPA should automatically adjust the CPU and memory requests of the pods to optimize resource utilization.
Ensure that the VPA operates in Auto mode, allowing it to evict and recreate pods
with updated resource requests as needed.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: analytics-vpa
  namespace: default
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: analytics-deployment
  updatePolicy:
    updateMode: "Auto"
- No reference docs available for this; the structure is similar to an HPA
- Change apiVersion, targetRef, and updatePolicy accordingly
- k apply -f e.yaml
Question 11
Create a Kubernetes Gateway resource with the following specifications:
Name: web-gateway
Namespace: nginx-gateway
Gateway Class Name: nginx
Listeners:
Protocol: HTTP
Port: 80
Name: http
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
  namespace: nginx-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    protocol: HTTP
    port: 80
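Save the manifest, apply it, and check that the Gateway is accepted (the file name gateway.yaml is an assumption):

```shell
kubectl apply -f gateway.yaml
kubectl get gateway web-gateway -n nginx-gateway
```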
Question 12
One co-worker deployed an nginx helm chart kk-mock1 in the kk-ns namespace on the cluster.
A new update is pushed to the helm chart,
and the team wants you to update the helm repository to fetch the new changes.
After updating the helm chart, upgrade the helm chart version to 18.1.15.
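A sketch of the steps — the repository and chart names must be read from helm repo list and helm list; "bitnami" and "nginx" below are assumptions typical of this lab:

```shell
# Find the release and the repo/chart it came from
helm list -n kk-ns
helm repo list

# Fetch the latest chart versions from the configured repositories
helm repo update

# Upgrade the release to the requested chart version
helm upgrade kk-mock1 bitnami/nginx --version 18.1.15 -n kk-ns
```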