Linux Foundation KCNA Kubernetes and Cloud Native Associate Exam Dumps and Practice Test Questions Set 1 Q1-15

Question 1

Which kubectl command should you use to create or update a resource declaratively from a manifest file, with Kubernetes managing the changes idempotently?

A) kubectl run -f manifest.yaml
B) kubectl create -f manifest.yaml
C) kubectl apply -f manifest.yaml
D) kubectl replace -f manifest.yaml

Answer: C) kubectl apply -f manifest.yaml

Explanation:

kubectl run -f manifest.yaml is not a standard invocation: kubectl run is an imperative helper historically used to create a single pod (or, in older kubectl versions, a Deployment) quickly from command-line arguments such as --image. It is intended for ad-hoc creation rather than declarative management from a manifest file, and it is not the supported method for managing resources declaratively. The run command instantiates a workload directly and infers defaults; it performs no merge logic against existing server-side configuration and is not idempotent in the way apply is. Repeated invocations can create duplicate resources or fail because the object already exists, depending on the flags used, and run does not store or track the declarative intent of a manifest.

kubectl create -f manifest.yaml will create the resource in the cluster from the manifest file. It is useful for creating a resource for the first time, but it will fail if the resource already exists. There is no built-in merge capability to update an existing resource by applying changes in the file; create is a create-only operation. If you attempt to use create on an existing resource, kubectl returns an error unless you use flags or delete the existing resource first. Because create does not track previous manifests or perform three-way merges, it is not the right tool for iterative, idempotent management where you expect to push updated manifests and have the cluster adopt them cleanly.

kubectl apply -f manifest.yaml is the command designed for declarative management of resources. It performs a three-way strategic merge (client-side or server-side depending on cluster features) between the last-applied-configuration, the current live state, and the new manifest. This allows kubectl to compute what changed and update the resource incrementally without clobbering fields that were changed elsewhere. Apply stores the last-applied-configuration in an annotation, enabling subsequent apply runs to compute diffs. Because of that, repeatedly running apply with the same manifest is idempotent: the first run will create or update the resource to match the manifest, and subsequent identical runs will result in no changes. This behavior makes apply the recommended approach for GitOps-style workflows and for managing Kubernetes objects using declarative manifests.
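
To make the idempotent behavior concrete, here is a minimal sketch (the Deployment name, image, and file name are illustrative): applying the same manifest twice creates the object once and then reports it as unchanged.

  # manifest.yaml (illustrative)
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: web
    template:
      metadata:
        labels:
          app: web
      spec:
        containers:
        - name: web
          image: nginx:1.25

  kubectl apply -f manifest.yaml   # first run: deployment.apps/web created
  kubectl apply -f manifest.yaml   # identical second run: deployment.apps/web unchanged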

kubectl replace -f manifest.yaml will replace an existing resource with the new manifest and fail if the resource does not exist. It performs a full object replace rather than a strategic merge, which can inadvertently remove fields that are not present in the manifest but are present in the live object (for example, some system-managed fields). Replace is useful when you intend to substitute the whole object and you are confident that the manifest contains every necessary field. However, replace is not idempotent in the same way apply is for continuous management because it does not use the last-applied-configuration to compute changes and may clash with concurrent updates.

Reasoning about the correct answer: kubectl apply is explicitly designed for declarative, idempotent management of Kubernetes resources. It saves the last-applied manifest and performs merges so that subsequent runs make only the intended changes, making it ideal for workflows where manifests are the source of truth. Create is only for first-time creation and fails if the object exists, replace substitutes the entire object and risks clobbering fields, and run is an imperative helper for quick workload creation rather than a manifest-based declarative tool. For these reasons, apply is the correct command when you want to manage resources declaratively and idempotently from manifest files.

Question 2

Which Service type exposes an internal cluster-only IP address by default and is not reachable from outside the cluster without additional configuration?

A) ClusterIP
B) NodePort
C) LoadBalancer
D) ExternalName

Answer: A) ClusterIP

Explanation:

ClusterIP provides a virtual IP address that is accessible only within the Kubernetes cluster network. It is the default Service type. When a Service is created with ClusterIP, Kubernetes assigns it a stable internal address that other pods and cluster components can use to reach the service. ClusterIP does not configure any external visibility or listen on nodes’ network interfaces; traffic from outside the cluster cannot reach a ClusterIP Service without additional network plumbing such as an Ingress or a NodePort/LoadBalancer gateway. This makes ClusterIP suitable for internal microservices that communicate within the cluster and for building internal application components that should not be exposed externally.
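
As a sketch, the manifest below (the Service name and port values are illustrative) creates a Service that is reachable only inside the cluster; omitting the type field defaults to ClusterIP.

  apiVersion: v1
  kind: Service
  metadata:
    name: backend
  spec:
    # type: ClusterIP is the default when type is omitted
    selector:
      app: backend
    ports:
    - port: 80          # Service port other pods connect to
      targetPort: 8080  # container port on the selected pods

Other pods can then reach it through cluster DNS (for example backend.<namespace>.svc.cluster.local) or the assigned cluster IP, while nothing outside the cluster can.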

NodePort allocates a port on every node in the cluster and forwards traffic from that node port to the Service. This allows access from outside the cluster by targeting any node’s IP combined with the allocated port. NodePort effectively opens up an external entry point on each node and is more exposed than ClusterIP. It is commonly used for simple external access or as a building block for external load balancers, but it is not the default and does expose the Service externally.

LoadBalancer provisions an external load balancer from the cloud provider (if supported) and maps it to the Service. This results in an externally reachable IP address (or DNS name) provided by the cloud environment and routes traffic into the cluster. LoadBalancer is used when you want a managed external endpoint for your Service. It is clearly reachable from outside and therefore not the correct answer for a cluster-internal-only service type.

ExternalName maps the Service to a DNS name outside of the cluster by returning a CNAME record for the external name. It does not create proxying or IP allocation in the cluster; instead it allows in-cluster clients to resolve a Service name to an external DNS name and reach external endpoints. ExternalName is used to integrate external services transparently, not to limit access to internal cluster traffic.

Reasoning about the correct answer: ClusterIP is the only Service type among these that by default provides cluster-internal-only access, making it the appropriate choice when you want a Service not reachable from outside without additional components. NodePort and LoadBalancer explicitly create external entry points, and ExternalName delegates to external DNS, so ClusterIP is the correct answer for internal-only exposure.

Question 3

A Deployment specification uses a label selector to manage its Pods. Which statement best describes how label selectors are used by a Deployment?

A) The selector defines which nodes the Pods will be scheduled onto.
B) The selector identifies which existing Pods are managed by the Deployment and which new Pods the Deployment should create to match the template.
C) The selector sets security policies applied to the Pods.
D) The selector controls which container images can be pulled into Pods.

Answer: B) The selector identifies which existing Pods are managed by the Deployment and which new Pods the Deployment should create to match the template.

Explanation:

The selector does not determine node scheduling. Node selection is controlled by nodeSelector, nodeAffinity, taints, and tolerations. Those mechanisms influence which nodes a pod may be scheduled onto. A label selector on a Deployment is unrelated to the node selection process and instead pertains to matching Pods for management.

The selector defines the set of Pods that the Deployment, through its ReplicaSet, considers its own. The Deployment uses this selector to identify Pods that match and to create new Pods from the Pod template when replicas are missing. In the apps/v1 API the selector must match the labels in the Pod template; a mismatched selector is rejected by the API server rather than silently producing unmanaged or orphaned Pods. A properly configured selector therefore matches the labels applied to the Pods created from the Pod template, enabling the ReplicaSet controller (and thus the Deployment) to maintain the desired number of replicas and perform rollouts.
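
A brief illustrative fragment (names and image are hypothetical) showing the required alignment between the selector and the Pod template labels:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: api
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: api          # must match the template labels below
    template:
      metadata:
        labels:
          app: api        # Pods created from this template carry this label
      spec:
        containers:
        - name: api
          image: registry.example.com/api:1.0   # illustrative image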

The selector does not set security policies. PodSecurityPolicy (deprecated and removed as of Kubernetes 1.25), Pod Security admission, NetworkPolicy, and RBAC govern security aspects. While labels can be used as selectors by security-related resources (for example, a NetworkPolicy selects pods by label), the selector field in a Deployment is specifically about identification and management of pods, not about applying security policies.

The selector does not control which container images can be pulled. Image selection is defined inside the Pod template’s container image fields. ImagePullSecrets and Admission Controllers may influence image pulling behavior, but the Deployment selector simply matches pods by labels and has no authority over container image restrictions.

Reasoning about the correct answer: A Deployment’s selector is a label-based filter that determines which Pods belong to the Deployment’s ReplicaSet. Accurate label alignment between the selector and the Pod template is essential for the Deployment to manage replicas and perform rolling updates. The other statements confuse selectors with node scheduling, security policies, or image control, which are handled by different Kubernetes features. Therefore the correct description of selector behavior is that it identifies the Pods the Deployment manages and informs the creation of new Pods to satisfy the desired state.

Question 4

What is the primary purpose of etcd in a Kubernetes cluster?

A) Storing and managing container images used by Pods
B) Maintaining the cluster’s configuration and state data in a distributed key-value store
C) Handling the scheduling of workloads to appropriate worker nodes
D) Monitoring node health and restarting failed Pods automatically

Answer: B) Maintaining the cluster’s configuration and state data in a distributed key-value store

Explanation:

The idea that the system responsible for the core operations of Kubernetes would also store entire container images may seem plausible at first but is not accurate. Container images are stored in image registries such as Docker Hub, Amazon ECR, or a private registry. Nodes pull images onto their local storage through a container runtime when needed, but storing full images inside the Kubernetes control plane database is beyond its intended function. Kubernetes would not scale or operate efficiently if its key-value store were burdened with large binary image data. Therefore, an assertion that the cluster data store handles container images does not align with Kubernetes architecture principles that separate registry concerns from orchestration concerns.

The assignment of workloads to nodes is a task handled by a different key control plane component: the scheduler. The Kubernetes scheduler evaluates resource requirements, constraints, affinities, taints, tolerations, and current cluster utilization to determine the best node on which to run a pod. While the data that the scheduler reads is stored within the Kubernetes data store, the responsibility for making scheduling decisions is not performed by that database component. The scheduler consumes cluster state information from elsewhere and writes desired pod placement back into the configuration stored in the control plane’s database, but scheduling logic does not reside in that key-value store system.

Ensuring that pods are restarted when they fail is part of the self-healing features implemented mainly through the kubelet running on each node and the controllers within the control plane. For example, a ReplicaSet controller ensures that a certain number of identical pods are running and requests new pods if any are lost. The kubelet ensures that containers inside pods continue running according to their specifications. Restart operations and health monitoring are not functions of the distributed store component. While the system stores desired state in the key-value store, the actual enforcement mechanisms for keeping pods alive happen through other Kubernetes components.

Maintaining the cluster configuration and state data is the essential purpose of etcd, the core data store of Kubernetes. It stores the entire desired state of the cluster, including information about nodes, pods, secrets, configuration objects, and access control definitions. It behaves as a distributed key-value store, ensuring reliable consistency across multiple control plane instances. The data that underpins controllers and operators is maintained with strong consistency guarantees, so components reading cluster state always rely on the same accurate configuration. Because the data store is crucial for operation, loss of its contents can mean the complete loss of cluster state, making it a critical component that must be protected with backup strategies, redundancy, and encryption. It is also the authoritative source that the control plane uses to drive the cluster toward its declared configuration. The store supports distributed consensus, ensuring that if one control plane node fails, the others still have the correct information needed to keep cluster management operational.
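
Because losing etcd data means losing cluster state, regular snapshots are a common safeguard. A sketch using the etcdctl v3 API, assuming a kubeadm-style certificate layout (the paths shown are illustrative):

  ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key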

Reasoning about the correct answer: Only one of the choices aligns directly with the fundamental architectural role of Kubernetes’ distributed storage component. It stores and maintains the cluster’s control information including the entire representation of desired workloads, configuration data, and resource definitions in a way that is consistent across all control plane nodes. It is not designed for image storage, workload scheduling, or direct container lifecycle management; instead, other parts of the system use the retrieved data to implement those behaviors. For that reason, maintaining configuration and state in a distributed key-value system is the correct and essential description of its role in Kubernetes.

Question 5

Which Cloud Native Computing Foundation (CNCF) project focuses on monitoring and alerting through collection of time-series metrics?

A) Prometheus
B) Fluentd
C) Envoy
D) Harbor

Answer: A) Prometheus

Explanation:

Prometheus is widely regarded as the de-facto standard monitoring solution in Kubernetes environments. It collects time-series metrics from applications and system components, scrapes metrics endpoints using a pull model, supports flexible queries through its purpose-built query language (PromQL), and enables alerting based on metric evaluation. Unlike logging tools that handle unstructured textual data, Prometheus focuses on numerical samples over time to describe a system’s health and performance. Its integration with Kubernetes service discovery makes automatic monitoring possible as workloads dynamically appear and disappear. Its architecture includes a storage engine optimized for time-series data, exporters for collecting infrastructure metrics, and the Alertmanager component for notification routing. Prometheus is central to observability in many cloud native architectures.
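
For illustration, a small Prometheus alerting rule (the rule name, metric, and threshold are examples) that fires when per-pod CPU usage stays high:

  groups:
  - name: example-alerts
    rules:
    - alert: HighPodCPU
      expr: sum(rate(container_cpu_usage_seconds_total[5m])) by (pod) > 0.9
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: "Pod {{ $labels.pod }} has sustained high CPU usage"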

Fluentd is not a monitoring and alerting system based on time-series metrics, but rather a log forwarding and aggregation tool designed to unify the collection of log data from diverse sources. It handles semi-structured or unstructured logs, tags and transforms them, and routes them to storage backends such as Elasticsearch or cloud log analytics platforms. Although valuable for observability, its focus lies in textual log data streams rather than numeric metric samples over time. Therefore, while often used side-by-side with monitoring systems, its primary goal is different.

Envoy is a service proxy and communication middleware that plays a major role within the service mesh ecosystem, providing intelligent routing, load balancing, security features like mTLS, and telemetry proxying for microservices. While it supplies metrics about network traffic as part of its proxy function, those metrics are not its core purpose. Envoy functions as a high-performance data plane rather than a tool designed specifically to collect and store time-series monitoring information. It collaborates with observability solutions rather than serving as one.

Harbor is a cloud native container registry solution supporting image management, scanning for vulnerabilities, controlling access through RBAC, and replicating images across sites. A registry holding container images is not a telemetry instrumentation system and therefore does not align with the functionality described in the question. While it helps secure software supply chains, it does not serve as a monitoring and alerting engine.

Reasoning about the correct answer: Observability in a cloud native environment typically includes metrics, logs, and traces. The project that is explicitly built to manage metrics and generate alerts from those metrics is the monitoring and alerting toolkit that is deeply integrated with Kubernetes service discovery. Fluentd solves logging, Envoy handles traffic proxying and service mesh data plane functionality, and Harbor provides container registry capabilities. Only one is fundamentally a time-series monitoring solution with a fully featured alerting subsystem. Thus, the correct project is Prometheus.

Question 6

What is a key architectural benefit of the microservices approach used in cloud native applications?

A) Scaling always requires scaling the entire system all at once
B) A failure in one component crashes the entire application
C) Independent components can scale, deploy, and update without impacting unrelated parts of the system
D) All functionality must share identical runtime dependencies to maintain compatibility

Answer: C) Independent components can scale, deploy, and update without impacting unrelated parts of the system

Explanation:

The claim that scaling the entire system together is required contradicts the central idea of decomposing an application into separate components. Traditional monolithic architectures often require full-system scaling because every function is part of a single deployable unit. That leads to inefficiency because scaling may be necessary only for a single busy function, but everything else grows unnecessarily alongside it. Microservices explicitly solve this by distributing responsibilities across multiple individually scalable services, enabling resource-optimized scaling behavior. Therefore, imposing full-application scaling is not a feature but a drawback avoided through microservice design.

The idea that a failure in one service should crash the entire application completely misrepresents cloud native design goals. Microservices emphasize resilience and loose coupling. Failures in a single service should not propagate catastrophically. Instead, design patterns such as retries, timeouts, circuit breakers, graceful degradation, and redundancy are used to ensure that services handle failure scenarios without bringing down the entire system. Resilient architecture aims to contain faults and keep unaffected functionalities operational even when individual services experience downtime.

Enforcing identical runtime dependencies across the full application directly contradicts what microservices architecture promotes: autonomy for each service. Each service can choose the runtime, programming language, library versions, and storage models that best match its requirements. They communicate over standard protocols, so their internal implementation choices remain independent. That freedom is one of the advantages microservices bring to technology selection, improving agility and avoiding the rigid dependency alignment required in monoliths.

Independence in deployment, scaling, failure isolation, and technology choice is the defining architectural benefit of adopting microservices. Breaking an application into loosely coupled services means that each component can grow independently, receive updates without a full redeployment of the system, undergo individual test cycles, and fail without collapsing the rest. When user demand spikes for a particular function, scaling only the service responsible for that function improves resource utilization and reduces cost. Development teams can deliver enhancements more rapidly because deployment pipelines for unrelated services do not interfere with one another. Microservices architecture directly enables each sub-system to evolve separately at the pace it requires, leveraging lightweight communication and orchestrators like Kubernetes to manage distributed lifecycle patterns. Cloud native infrastructure and service meshes provide observability and traffic control that allow microservices to interoperate even while independently changing versions, reinforcing the architectural principle that each service remains loosely coupled and independently manageable.
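
As a small example of this independence (the service names checkout and frontend are hypothetical), a single busy service can be scaled without touching the others:

  kubectl scale deployment/checkout --replicas=10   # scale only the overloaded service
  kubectl get deployments checkout frontend         # the other services keep their own replica counts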

Reasoning about the correct answer: The principle of independent change and scalability lies at the heart of the cloud native shift away from large monoliths. Eliminating tight inter-dependencies allows product teams to innovate faster, scale smarter, and build resilient systems that react well to partial disruptions. The incorrect choices contradict core tenets of microservices. The only statement that supports the true design intent of microservices is that components operate independently for deployment, scaling, and updates. Thus, the key architectural benefit described is the correct answer.

Question 7

Which Kubernetes object is responsible for ensuring that a specified number of identical Pod replicas are running at any given time?

A) ConfigMap
B) ReplicaSet
C) Secret
D) PersistentVolume

Answer: B) ReplicaSet

Explanation:

ConfigMap serves a completely different purpose in Kubernetes. It stores non-confidential configuration data that can be consumed by pods and containers at runtime. Its function is to decouple runtime configurations such as environment variables, file content, or application settings from the container images. By doing so, it avoids embedding configuration inside the container and helps maintain portability. However, its role pertains strictly to configuration management and not the management of workload replicas. It does not include any logic to ensure a particular number of pods are running, nor is it involved in monitoring pod status or automatically restoring failed pods. Therefore, it has no direct role in replica control, scaling, or workload lifecycle enforcement.

Secret is intended specifically for storing sensitive data like passwords, tokens, and certificates. The key difference between a Secret and a ConfigMap is the confidential nature of the information handled. Secrets can be mounted inside pods or injected as environment variables, and Kubernetes makes an effort to keep them encrypted and protected depending on configuration. Still, this resource type has no mechanism to monitor or enforce application availability, nor does it maintain pod counts or interact with controllers responsible for replicas. Its value lies entirely in secure configuration, not in ensuring workload continuity.

PersistentVolume is a storage resource in Kubernetes that represents a piece of underlying storage provisioned either manually or dynamically. Its purpose is to retain data beyond the lifecycle of pods, enabling stateful applications that survive restarts or rescheduling. A PersistentVolume stays separate from the application’s execution and has no knowledge or responsibility for the number of pods in the system. It deals with storage provisioning and access modes rather than workload scaling, availability, or replication. Thus, while foundational for persistence, it does not fit the responsibility described in the question.

ReplicaSet is the Kubernetes object that ensures the correct number of pod replicas are running at all times. It continually compares the currently running number of pods that match its defined label selector with the desired number specified in its configuration. If a pod crashes or a node becomes unreachable, the ReplicaSet controller creates new replacements automatically to preserve the target count. Likewise, if excess pods exist, perhaps after scaling down, it will terminate extras to re-align with the declared desired state. ReplicaSet acts as a core availability mechanism, forming a key part of Kubernetes’ self-healing capability. Rather than humans manually checking and correcting pod statuses, the system automates this recovery, ensuring resiliency and continuous service presence. While a newer abstraction like Deployment builds on top of ReplicaSet to provide rolling updates and version management, ReplicaSets remain the fundamental primitive for maintaining stable replica counts.
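
A quick way to observe this self-healing behavior in practice (the ReplicaSet, pod, and label names are illustrative):

  kubectl get replicaset web-rs                 # shows DESIRED, CURRENT, and READY counts
  kubectl delete pod web-rs-7f9c4               # simulate a pod failure (pod name is illustrative)
  kubectl get pods -l app=web --watch           # a replacement pod is created automatically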

Reasoning about the correct answer: In Kubernetes, desired-state management is central. The object designed to preserve consistent numbers of running replicas belongs to the core control plane’s operational logic. ReplicaSet continuously enforces pod scaling objectives and responds dynamically to disruptions. ConfigMap handles configuration non-confidentially, Secret deals with secure data, and PersistentVolume persists storage — none of them deal with workload replication logic. Therefore, the correct answer is the object responsible for ensuring intended numbers of pod replicas remain active: ReplicaSet.

Question 8

What is the role of the kubelet in Kubernetes?

A) Serving as the cluster-wide API server for user requests
B) Managing node-level operations by ensuring containers in Pods are running as expected
C) Acting as the control plane scheduler deciding where Pods should run
D) Running distributed consensus for cluster state storage

Answer: B) Managing node-level operations by ensuring containers in Pods are running as expected

Explanation:

The component at the center of the control plane responsible for processing user and internal Kubernetes operations is not deployed on every node. Instead, the core central API server handles CRUD operations for Kubernetes objects and validates incoming requests. It stores and retrieves resource definitions from the cluster’s data store and communicates desired states to controllers and schedulers. That centralized API server role is not fulfilled by the node agent. Each node does not handle incoming user requests for the cluster; therefore, this interpretation does not reflect the purpose of kubelet.

Scheduling pods to appropriate nodes based on resource requirements, constraints, and affinity rules is handled by a distinct control plane component called the scheduler. It evaluates available cluster resources and determines optimal placement. The kubelet may reject pods if node conditions make them un-runnable, but it does not select pod placement. The decision-making for scheduling comes from elsewhere in the control plane, and kubelet receives work assignments rather than producing them. That makes it clear that its role is not that of a scheduler.

Running distributed consensus for state management is another separate function. Kubernetes relies on an external distributed key-value store that implements consensus algorithms for storing configuration and state consistently across control plane nodes. That technology ensures strong consistency guarantees about cluster data but is unrelated to node-level container lifecycle functions. The node agent plays no role in consensus participation; it merely interacts with the state decisions derived from the control plane.

Kubelet is the node-level agent responsible for making sure that containers for assigned pods are running and remain healthy. It monitors containers according to pod specifications, communicates with the container runtime (like containerd or CRI-O), reports node and pod status back to the API server, and takes action if containers crash or go unhealthy. The kubelet ensures that the state of the node tries to conform to the desired pod state declared in configuration and enforced by controllers. It registers the node with the control plane and updates node metadata such as resource capacity. Without kubelet, a node could not meaningfully participate in the cluster, because it is the link between orchestration directives and physical execution. It forms the core enforcement mechanism for Kubernetes workloads on each node.
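
A few commands that surface the kubelet's node-level role (the node name worker-1 is illustrative, and the systemctl check assumes a systemd-managed kubelet):

  kubectl get nodes                                        # every Ready node has a kubelet registering it
  kubectl describe node worker-1 | grep -A 6 Conditions    # node conditions reported by the kubelet
  systemctl status kubelet                                 # run on the node itself to inspect the agent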

Reasoning about the correct answer: Understanding Kubernetes architecture requires distinguishing responsibilities in the control plane versus the data plane. Kubelet is definitely part of the node-level infrastructure, not the centralized scheduler, API server, or state consensus engine. It ensures the containers run correctly, reflecting assigned workloads into actual running processes. Since that role aligns precisely with managing node-level operations, B is the correct statement.

Question 9

Which concept in Kubernetes networking ensures that every Pod in a cluster can directly communicate with every other Pod without NAT?

A) NetworkPolicy
B) Container Runtime Interface
C) Flat network model
D) ServiceAccount

Answer: C) Flat network model

Explanation:

NetworkPolicy is a mechanism for restricting allowed communication between workloads. By applying rules, administrators can define which applications are allowed to talk to which others. Instead of guaranteeing connectivity, it intentionally removes connectivity under controlled conditions to enforce security and segmentation. It works by selecting pods using labels and then specifying permissible traffic. However, policies do not define the fundamental networking topology or connectivity model; they only alter allowed communications within that model. Thus, NetworkPolicy is more about filtering and enforcement, not guaranteeing universal pod reachability.

Container Runtime Interface is an abstraction layer that allows Kubernetes to communicate with different container runtime implementations such as containerd or CRI-O. This interface standardizes how Kubernetes launches and manages containers but does not participate in networking topology or direct pod-to-pod communication decisions. While the runtime handles container isolation, it does not impose a networking model. Therefore, it cannot be considered the entity that ensures every pod can communicate with every other pod without translation.

ServiceAccount relates to authentication and identity for pods accessing the Kubernetes API. It assigns credentials so that pods can interact with the cluster securely. ServiceAccount involvement concerns RBAC, secrets, and secure API access rather than pod connectivity. While important for security and authorization, this does not configure or influence the baseline connectivity assumptions underpinning Kubernetes networking. ServiceAccount defines who a workload is from a security perspective but does not ensure network visibility across pods.

Kubernetes assumes a flat network model where each pod receives a unique IP address and can communicate freely with any other pod across the environment without needing NAT. It allows consistent addressing and makes distributed load balancing and inter-service communication straightforward. This assumption simplifies application development because pods can treat communication just like contacting any other host without the complexity of port mapping or address translation. Cluster network plugins implementing Container Network Interface enforce this model across nodes so that pods remain globally reachable under the same addressing scheme. The universality of routing between pods underpins service abstraction and DNS-based discovery. By ensuring there is no need for network address translation, the model supports observability and traceability, improves resilience, and provides predictable connectivity semantics across the cluster.
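
A small sketch of what the flat model means in practice (the pod name, IP address, and port are hypothetical, and the example assumes curl is available in the container): every pod gets its own routable cluster IP that other pods can reach directly.

  kubectl get pods -o wide                                                  # shows each pod's cluster-wide IP
  kubectl exec frontend-7d4b9 -- curl -s http://10.244.1.23:8080/healthz   # contact another pod's IP directly, no NAT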

Reasoning about the correct answer: Kubernetes foundational design presumes universal pod connectivity based on a flat networking environment. It does not rely on ServiceAccounts, runtimes, or policy enforcements to establish reachability. Policies can reduce reachability but not enable it. Runtimes provide execution environments but not the network topology. The networking principle ensuring communication between pods without NAT is indeed the flat network model. Therefore, C is correct.

Question 10

What is the primary function of a Kubernetes Ingress resource?

A) Persisting data volumes across pod restarts
B) Controlling external access to services in a cluster, typically via HTTP/HTTPS
C) Assigning CPU and memory limits to pods
D) Monitoring pod health and restarting failed containers

Answer: B) Controlling external access to services in a cluster, typically via HTTP/HTTPS

Explanation:

Persisting data volumes across pod restarts is the responsibility of PersistentVolume and PersistentVolumeClaim objects. Kubernetes provides these resources to decouple storage from pod lifecycle, allowing data to survive pod termination, node failures, or rolling updates. Ingress does not interact with storage or data persistence; it is unrelated to how state is managed for applications. While a pod’s network endpoint might point to an application using persistent data, the Ingress itself neither defines nor stores data volumes. Using it for storage management would be a misunderstanding of its intended role.

Assigning CPU and memory limits to pods is handled through resource requests and limits in pod or container specifications. Kubernetes allows control over how much CPU and memory a container can consume to ensure fair scheduling and prevent resource contention. Ingress has no control over CPU, memory, or other resource allocations. Its role is focused on network traffic, not system resource management.

Monitoring pod health and restarting failed containers is the responsibility of the kubelet and controllers such as ReplicaSet or Deployment. Liveness probes and readiness probes are configured at the pod level to determine if containers are healthy or ready to serve traffic. Ingress does not monitor pod health directly; instead, it can be configured to route traffic only to pods that are ready based on the underlying service definition. While Ingress interacts indirectly with healthy pods by routing requests, it does not perform monitoring or restart itself.

Ingress is the Kubernetes abstraction for managing external access to services, typically using HTTP and HTTPS. It defines rules to map incoming URLs or hostnames to services within the cluster. Ingress can handle features such as path-based routing, host-based routing, TLS termination, and load balancing. Unlike NodePort or LoadBalancer Services, which provide lower-level external access, Ingress centralizes traffic management and provides more granular control. It allows clusters to expose multiple services through a single external endpoint or IP address. Traffic routing decisions are enforced by an Ingress controller, which implements the rules specified in the Ingress resource. The Ingress controller watches for changes in Ingress objects and updates the underlying proxy or load balancer configuration accordingly. This makes Ingress a key tool in cloud native environments for controlling and managing external service exposure.
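
An illustrative Ingress (the hostname, Service name, and TLS Secret are assumptions, and an Ingress controller must be installed for the rules to take effect):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: web-ingress
  spec:
    tls:
    - hosts:
      - app.example.com
      secretName: web-tls            # TLS termination at the Ingress
    rules:
    - host: app.example.com
      http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: web              # in-cluster Service receiving the traffic
              port:
                number: 80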

Reasoning about the correct answer: Ingress is designed to manage and direct external traffic to services inside the cluster, often providing features like TLS termination, URL path routing, and virtual host management. It does not handle storage, resource allocation, or container health management, which are managed by other Kubernetes primitives. Therefore, the correct answer is controlling external access to services in a cluster, typically via HTTP/HTTPS.

Question 11

Which Kubernetes object allows you to inject sensitive information like passwords or API keys into pods in a secure way?

A) ConfigMap
B) Secret
C) ServiceAccount
D) PersistentVolumeClaim

Answer: B) Secret

Explanation:

ConfigMap allows for the injection of configuration data, such as environment variables or configuration files, into pods. ConfigMaps store non-confidential data and can be consumed as environment variables, command-line arguments, or mounted files. They are not designed for sensitive data because they are stored unencrypted by default in etcd. Exposing passwords or tokens via ConfigMap would risk security, as the data can be accessed by anyone with API server access. Therefore, ConfigMap is inappropriate for storing confidential information.

ServiceAccount is designed to provide an identity for pods to interact with the Kubernetes API. While ServiceAccounts can reference Secrets containing credentials like tokens, their role is primarily for authentication and authorization. ServiceAccounts themselves are not directly used to store arbitrary secrets like database passwords or API keys; instead, they consume such secrets to facilitate secure API access. Therefore, this is not the primary mechanism for injecting sensitive data into pods.

PersistentVolumeClaim is a request for storage resources in a Kubernetes cluster. While PVCs enable pods to persist data across restarts, they do not manage sensitive information for pod consumption. Data stored in a PersistentVolume can be encrypted or restricted, but PVCs themselves are not intended for secure injection of secrets. Using PVCs to distribute credentials would be indirect and insecure compared to Secrets.

Secret is the Kubernetes object explicitly designed to store sensitive information such as passwords, OAuth tokens, or SSH keys. Secrets can be mounted as files inside a container, injected as environment variables, or referenced by ServiceAccounts. Secret data is stored base64-encoded (which is an encoding, not encryption) and can optionally be encrypted at rest in etcd, with access further restricted through RBAC. This keeps sensitive data handled separately from non-confidential configuration. By providing a centralized way to manage confidential information, Secrets allow applications to remain portable and follow best practices for credential management. They also facilitate updates: pods that mount a Secret as a volume eventually receive the updated value, while pods that consume it through environment variables must be restarted to pick up new credentials.
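
A minimal sketch (the Secret name, key, and value are placeholders) showing a Secret and a container consuming it as an environment variable:

  apiVersion: v1
  kind: Secret
  metadata:
    name: db-credentials
  type: Opaque
  stringData:                        # written as plain text, stored base64-encoded by the API server
    DB_PASSWORD: example-password

  # inside a Pod or Deployment container spec:
  env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: DB_PASSWORD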

Reasoning about the correct answer: Kubernetes Secrets are purpose-built for securely managing sensitive data. ConfigMaps handle general configuration, ServiceAccounts provide identity and API authentication, and PersistentVolumeClaims manage storage. Only Secrets offers a secure mechanism to inject sensitive information directly into pods for application use.

Question 12

Which Kubernetes object allows declarative specification of multiple replica pods and automatic rolling updates?

A) Pod
B) Deployment
C) StatefulSet
D) DaemonSet

Answer: B) Deployment

Explanation:

Pod is the smallest deployable unit in Kubernetes and represents a single or tightly coupled set of containers. While Pods can be created directly, they do not manage replication or rolling updates automatically. If a pod fails, it must be recreated manually or through a controller like a ReplicaSet. Pods alone do not provide high availability or automated update features.

StatefulSet is used for workloads requiring stable identities, persistent storage, and ordered deployment/termination. While it can manage multiple replicas, its primary focus is on stateful applications that need consistent naming and storage. StatefulSets also provide ordered updates, but they are more complex than Deployments and are not typically used for stateless applications that require simple replication and rolling updates.

DaemonSet ensures that a copy of a pod runs on each node (or a subset of nodes) in the cluster. Its purpose is to run cluster-wide or node-specific services like monitoring agents or logging collectors. A DaemonSet's pod count is driven by the number of matching nodes rather than a declared replica count, and although DaemonSets do support a rolling update strategy, they are not the object for declaratively managing an arbitrary number of replicas of a stateless application.

Deployment is the Kubernetes abstraction that enables declarative management of stateless applications. Deployments specify a desired number of replicas for a pod template, manage ReplicaSets internally, and provide features like rolling updates, rollbacks, and pause/resume of updates. The Deployment controller continuously monitors the cluster and ensures the actual state matches the declared desired state. Rolling updates allow zero-downtime deployment by incrementally updating pods to a new version while maintaining availability. Deployment is suitable for stateless workloads where scaling, update management, and automated recovery from failures are essential. It simplifies application lifecycle management, providing declarative, predictable, and automated behavior.
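
A few commands that exercise these Deployment features (the Deployment and container name web continue the earlier illustrative example):

  kubectl set image deployment/web web=nginx:1.26   # triggers a rolling update to a new image
  kubectl rollout status deployment/web             # follow the incremental rollout
  kubectl rollout undo deployment/web               # roll back to the previous ReplicaSet if needed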

Reasoning about the correct answer: While Pods are the basic unit, they do not handle replication or updates automatically. StatefulSets and DaemonSets serve specialized roles, but Deployments provide the combination of replication, declarative configuration, and automated rolling updates that fits the scenario described. Therefore, Deployment is the correct object.

Question 13

Which Kubernetes object allows running a single pod per node, often used for cluster-wide services like monitoring or logging agents?

A) Deployment
B) DaemonSet
C) StatefulSet
D) ReplicaSet

Answer: B) DaemonSet

Explanation:

Deployment is used to manage stateless applications with multiple replicas. It ensures that a desired number of pod replicas are running, and it supports rolling updates and rollbacks. However, Deployments are not intended to run one pod per node. They focus on scalable replicas distributed across the cluster without guaranteeing placement on every node. While Deployments can target nodes using node selectors or affinities, the behavior is not automatically enforced per node, and they are not suitable for cluster-wide services like logging agents or monitoring daemons.

StatefulSet manages pods with stable identities and persistent storage, ensuring ordered deployment, scaling, and termination. It is ideal for stateful workloads such as databases or message queues, where unique network identities and consistent storage are essential. StatefulSets can create multiple replicas, but they do not guarantee one pod per node. Their focus is stateful workloads rather than node-level services, making them unsuitable for cluster-wide agents.

ReplicaSet ensures a specific number of pod replicas are running in the cluster. While it guarantees the desired count, it does not provide control over distributing one pod per node. It is designed to maintain the total replica count rather than enforce node-specific placement. Therefore, ReplicaSet cannot guarantee that a pod runs on each node for cluster-wide tasks.

DaemonSet ensures that exactly one pod runs on each node (or a subset of nodes based on labels or taints). This makes it ideal for deploying cluster-wide services such as monitoring agents, logging collectors, or networking components that must exist on every node. The DaemonSet controller watches the cluster for node additions and automatically schedules pods on new nodes, maintaining full coverage. When nodes are removed, the corresponding pods are cleaned up. DaemonSets can also be configured with node selectors, tolerations, or affinities to control which nodes receive pods, providing flexibility while enforcing the per-node requirement.
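
An illustrative DaemonSet for a node-level log agent (the name and image are placeholders; the toleration lets the pod also run on control plane nodes):

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: log-agent
  spec:
    selector:
      matchLabels:
        app: log-agent
    template:
      metadata:
        labels:
          app: log-agent
      spec:
        tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        containers:
        - name: agent
          image: registry.example.com/log-agent:1.0   # illustrative image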

Reasoning about the correct answer: Only DaemonSet guarantees that a pod runs on every node automatically, which is crucial for cluster-wide agents. Deployments focus on scalable replicas, StatefulSets manage stateful workloads, and ReplicaSets ensure a total count of pods without node-level enforcement. Therefore, DaemonSet is the correct object for one pod per node services.

Question 14

Which command is used to view all resources, including pods, services, and deployments, in a Kubernetes cluster?

A) kubectl describe <resource>
B) kubectl get all
C) kubectl logs <pod>
D) kubectl exec -it <pod> -- /bin/bash

Answer: B) kubectl get all

Explanation:

kubectl describe <resource> provides detailed information about a specific resource, including events, status, labels, and annotations. It is useful for debugging a particular pod, service, or deployment, but it requires specifying the resource type and name. It does not provide a comprehensive list of all resources in the cluster at once. While describe can provide deep insight into a single resource, it is not suitable for quickly listing all objects in the cluster.

kubectl logs <pod> fetches the container logs for a specific pod. It is primarily used for troubleshooting application behavior or debugging errors, but it does not display resources such as deployments, services, or ReplicaSets. Logs show runtime information for a single container, not the cluster-wide resource inventory.

kubectl exec -it <pod> -- /bin/bash allows executing a command inside a running container, often providing an interactive shell. It is valuable for inspecting container internals or running diagnostics inside the pod but is unrelated to viewing the cluster’s resources. It operates at the container level rather than the cluster level.

kubectl get all lists the most commonly used resources in the current namespace, including pods, services, ReplicaSets, and deployments. It provides a quick overview of the cluster state and is especially useful when you want to see the current state of your applications, their replicas, and service endpoints. Although it does not show every possible resource type, it effectively consolidates the key workload and service objects into a single view. Additional flags, like -A or --all-namespaces, can expand the scope to the entire cluster. The output includes columns such as NAME, READY, STATUS, RESTARTS, and AGE, which gives a comprehensive snapshot of workload health and activity.
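
Typical invocations (the namespace name is just an example):

  kubectl get all                      # workloads and services in the current namespace
  kubectl get all -n kube-system       # a specific namespace
  kubectl get all -A                   # every namespace (shorthand for --all-namespaces)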

Reasoning about the correct answer: Among the commands listed, kubectl get all uniquely provides a concise overview of multiple resource types in a cluster. Describe, logs, and exec focus on individual resources or pods and provide detailed or runtime information but do not offer a cluster-wide summary. Therefore, kubectl get all is the correct choice for viewing all resources.

Question 15

Which feature allows Kubernetes workloads to automatically adjust the number of pod replicas based on CPU or memory utilization?

A) Horizontal Pod Autoscaler (HPA)
B) Vertical Pod Autoscaler (VPA)
C) ReplicaSet
D) Deployment

Answer: A) Horizontal Pod Autoscaler (HPA)

Explanation:

Vertical Pod Autoscaler adjusts the CPU and memory limits of individual pods based on observed usage. While VPA can resize containers to optimize resource allocation, it does not change the number of replicas. Its focus is vertical scaling (adjusting resources for a single pod) rather than horizontal scaling (adding or removing replicas), so it does not satisfy the requirement of automatically adjusting pod count.

ReplicaSet ensures a fixed number of pod replicas are running according to its specification. While it can maintain the desired number of pods, it does not dynamically adjust that number based on resource utilization. ReplicaSet reacts only to the difference between desired and actual pod counts, not to CPU or memory metrics.

Deployment provides declarative management of pods and replicas and can perform rolling updates. However, a Deployment alone does not dynamically adjust the number of replicas based on metrics. Deployments rely on controllers like HPA to implement automatic scaling based on observed resource utilization.

Horizontal Pod Autoscaler monitors workload metrics such as CPU, memory, or custom metrics and adjusts the number of pod replicas to maintain the target utilization. For example, if CPU utilization exceeds the defined threshold, HPA will increase replicas to distribute the load. Conversely, when utilization drops, it reduces replicas to conserve resources. HPA integrates with the Metrics Server or custom metrics APIs to obtain real-time data and communicates with the Deployment or ReplicaSet controlling the pods to scale up or down. This enables cloud native workloads to automatically adapt to changing demand, maintaining performance while optimizing resource use. HPA supports scaling limits and stabilization windows to avoid rapid oscillations and provides declarative configuration through YAML manifests, allowing full integration into GitOps and automation workflows.
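
An illustrative autoscaling/v2 HorizontalPodAutoscaler (the Deployment name and limits are placeholders) that keeps average CPU utilization near 80 percent; the imperative shortcut kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80 achieves the same effect.

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: web-hpa
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: web
    minReplicas: 2
    maxReplicas: 10
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80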

Reasoning about the correct answer: Horizontal Pod Autoscaler is specifically designed for dynamic horizontal scaling based on resource metrics. Vertical Pod Autoscaler adjusts pod resources rather than replica count. ReplicaSet and Deployment manage pods statically unless combined with HPA. Therefore, for automatically adjusting the number of pod replicas based on CPU or memory, HPA is the correct feature.