  1. What will happen when you delete a pod that has been created by a Deployment? (A demonstration follows the options.)
    1. Pod will be recreated after 30 seconds
    2. Pod will be recreated immediately using ReplicaSet
    3. Pod will be gone
    4. Pod will be recreated using Service
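
A minimal sketch of the behavior in question, assuming a hypothetical Deployment named web:

```bash
kubectl create deployment web --image=nginx --replicas=2
kubectl get pods                      # note one of the pod names, e.g. web-7d4b9c-abcde
kubectl delete pod web-7d4b9c-abcde   # hypothetical pod name
kubectl get pods                      # the Deployment's ReplicaSet has already created a replacement
```
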
  2. You’ve created the following Service (a debugging sketch follows):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-awesome-service   # Service names must be lowercase DNS labels; "MyAwesomeService" would be rejected
spec:
  selector:
    app: shiny-pods
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```

But it doesn’t seem to reach any pods. What would be the first debugging step to find the root cause of this problem?

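The Service’s endpoints are the quickest signal: if the selector matches no pods, the endpoints list is empty. A sketch of the first checks, using the Service above:

```bash
kubectl get endpoints my-awesome-service   # no addresses => the selector matches no pods
kubectl get pods -l app=shiny-pods         # do any pods actually carry the selector's label?
kubectl describe service my-awesome-service
```
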
  3. What component is required to automatically scale pods horizontally using the Horizontal Pod Autoscaler (HPA)? (See the sketch below.)
    1. Karpenter
    2. Cluster Autoscaler
    3. A Fluent Bit server to aggregate resource metrics from pods
    4. metrics-server
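
The HPA controller reads pod CPU and memory usage from the resource metrics API, which metrics-server implements; without it the autoscaler has no data to act on. A minimal sketch, assuming the hypothetical web Deployment from above has CPU requests set:

```bash
kubectl top pods   # fails if metrics-server is not installed

# Scale web between 2 and 10 replicas, targeting 50% average CPU utilization:
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=50
kubectl get hpa    # watch TARGETS and REPLICAS as load changes
```
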
  4. What will happen when you scale out (add more pods) and the new pods need more resources than the existing nodes can offer, and you’re not using Cluster Autoscaler, Karpenter, or EKS Auto Mode? (Illustrated below.)
    1. EKS will create new nodes using an Auto Scaling group
    2. EKS will create new nodes using Karpenter and run the new pods there
    3. Pods that exceed the resources available on the running nodes will fail to schedule and will stay in Pending status
    4. Scaling pods will fail and you have to repeat it when additional nodes are available
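
The scheduler leaves unschedulable pods in Pending and records the reason in their events. A sketch, again using the hypothetical web Deployment:

```bash
kubectl scale deployment web --replicas=50
kubectl get pods                        # excess pods show STATUS Pending
kubectl describe pod web-7d4b9c-zzzzz   # hypothetical pod name
# The Events section typically ends with something like:
#   Warning  FailedScheduling  ...  0/3 nodes are available: 3 Insufficient cpu.
```
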
  5. You need to isolate two sets of pods carrying the labels app=legacyA and app=legacyB so that the two apps cannot communicate with each other. How would you do that? (See the sketch after the options.)
    1. No need to do anything, pods are already isolated
    2. Use NACLs to set up ingress and egress rules for each set of pods
    3. Use NetworkPolicy
    4. Configure security groups on each node where the pods are running
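
A minimal sketch of one half of the isolation: an ingress policy on the app=legacyA pods that allows traffic from everything except app=legacyB (a mirror policy on legacyB covers the other direction). NetworkPolicy only takes effect when the cluster’s network plugin enforces it, e.g. the Amazon VPC CNI with network policy support enabled, or Calico. The policy name is hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-legacy-a   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: legacyA         # the pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        # allow ingress from any pod whose app label is not legacyB
        - podSelector:
            matchExpressions:
              - key: app
                operator: NotIn
                values: ["legacyB"]
```
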
  6. You’ve deployed a pod, but a container failed to create. Where would you look first to debug the problem? (Example below.)
    1. Pod’s logs using eksctl get-logs pod pod-name
    2. Pod’s logs using kubectl logs pod-name
    3. Pod’s status and debug info using kubectl describe pod pod-name
    4. Node’s logs
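
When a container never starts, kubectl logs has nothing to show, but the pod’s event stream explains what went wrong. A sketch, assuming a hypothetical pod name:

```bash
kubectl describe pod my-pod
# The Events section at the bottom usually names the cause, e.g.:
#   Warning  Failed   kubelet  Failed to pull image "nginx:nonexistent": ... not found
#   Normal   BackOff  kubelet  Back-off pulling image "nginx:nonexistent"
```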