- What will happen when you delete a pod that was created by a Deployment? (A short demo follows the options.)
  - Pod will be recreated after 30 seconds
  - Pod will be recreated immediately by the ReplicaSet
  - Pod will be gone
  - Pod will be recreated by the Service
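A quick way to see this behaviour: delete one pod that belongs to a Deployment and watch the ReplicaSet replace it almost immediately. This is only a sketch; the Deployment and its app=nginx label are hypothetical.

```bash
# Assumes a hypothetical Deployment whose pods carry the label app=nginx.
kubectl get pods -l app=nginx                # note the current pod names
kubectl delete pod <one-of-the-pod-names>    # delete a single pod
kubectl get pods -l app=nginx --watch        # a replacement pod shows up right away,
                                             # created by the Deployment’s ReplicaSet
```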
- You’ve created the following Service:

  ```yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: my-awesome-service
  spec:
    selector:
      app: shiny-pods
    ports:
      - protocol: TCP
        port: 80
        targetPort: 80
  ```

  However, it doesn’t seem to reach any pods. What would be the first debugging step to find the root cause of this problem? (A sketch of one starting point follows.)
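One reasonable first step, sketched below, is to check whether the Service has any endpoints and whether any pods actually carry the app=shiny-pods label; this assumes everything lives in the default namespace.

```bash
# An empty endpoints list usually means the selector matches no pods
# (or the matching pods are not Ready).
kubectl get endpoints my-awesome-service

# Compare the Service selector with the labels the pods really have.
kubectl get pods -l app=shiny-pods
kubectl get pods --show-labels
```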
- What component is required to automatically scale pods horizontally using the Horizontal Pod Autoscaler (HPA)? (An example manifest follows the options.)
  - Karpenter
  - Cluster Autoscaler
  - A Fluent Bit server to aggregate resource metrics from pods
  - metrics-server
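The HPA controller reads pod CPU and memory usage from the resource metrics API, which is served by metrics-server; without it, the HPA has no metrics to act on. A minimal sketch, assuming a hypothetical Deployment named web whose containers declare CPU requests:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                   # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                     # hypothetical target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization       # measured against the pods’ CPU requests
          averageUtilization: 70
```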
- What will happen when you scale out (add more pods) and the new pods need more resources than the existing nodes can provide, and you’re not using Cluster Autoscaler, Karpenter, or EKS Auto Mode? (The sketch below shows the resulting state.)
  - EKS will create new nodes using an Auto Scaling Group
  - EKS will create new nodes using Karpenter and run the new pods there
  - Pods that exceed the resources available on the running nodes will fail to schedule and will stay in Pending status
  - Scaling the pods will fail and you have to repeat it when additional nodes are available
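Without a node autoscaler, the extra replicas are still created as Pod objects, but the scheduler has nowhere to place them, so they sit in Pending until capacity appears. A sketch of what that looks like, with a hypothetical Deployment named web:

```bash
kubectl scale deployment web --replicas=20   # more pods than the nodes can hold
kubectl get pods                             # some pods remain in Pending status

# The Events section typically shows a FailedScheduling message along the lines of
# "0/3 nodes are available: 3 Insufficient cpu."
kubectl describe pod <pending-pod-name>
```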
- You need to isolate two sets of pods with the labels app=legacyA and app=legacyB so that the two apps cannot communicate with each other. How would you do that? (A sample policy follows the options.)
  - No need to do anything, pods are already isolated
  - Use NACLs to set up ingress and egress rules for each set of pods
  - Use NetworkPolicy
  - Configure security groups on each node where the pods are running
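A hedged sketch of the NetworkPolicy approach, assuming both apps run in the same namespace and the cluster’s CNI actually enforces network policies (on EKS that means the VPC CNI network policy feature or a CNI such as Calico or Cilium). The policy selects the app=legacyA pods and only admits ingress from pods whose app label is not legacyB; a mirrored policy with the labels swapped blocks the other direction.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-legacy-a          # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: legacyA                # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchExpressions:
              - key: app
                operator: NotIn   # allow traffic from any pod except app=legacyB
                values:
                  - legacyB
```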
- You’ve deployed a pod, but a container failed to create. Where would you look first to debug the problem? (An example follows the options.)
  - Pod’s logs using eksctl get-logs pod pod-name
  - Pod’s logs using kubectl logs pod-name
  - Pod’s status and debug info using kubectl describe pod pod-name
  - Node’s logs
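kubectl describe pod surfaces the container states and the Events section, which is where create-time failures (ImagePullBackOff, CreateContainerConfigError, a missing ConfigMap or Secret, and so on) appear before there are any container logs to read. A minimal sketch, with a hypothetical pod name:

```bash
# Check the Containers / State fields and the Events section for the failure reason.
kubectl describe pod my-broken-pod

# kubectl logs only helps once the container has actually started;
# --previous shows logs from an earlier crashed instance if one exists.
kubectl logs my-broken-pod --previous
```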