What is CRD?
Custom Resource Definition
Custom Restricted Definition
Customized RUST Definition
Custom RUST Definition
A CRD is a CustomResourceDefinition, making A correct. Kubernetes is built around an API-driven model: resources like Pods, Services, and Deployments are all objects served by the Kubernetes API. CRDs allow you to extend the Kubernetes API by defining your own resource types. Once a CRD is installed, the API server can store and serve custom objects (Custom Resources) of that new type, and Kubernetes tooling (kubectl, RBAC, admission, watch mechanisms) can interact with them just like built-in resources.
CRDs are a core building block of the Kubernetes ecosystem because they enable operators and platform extensions. A typical pattern is: define a CRD that represents the desired state of some higher-level concept (for example, a database cluster, a certificate request, an application release), and then run a controller (often called an “operator”) that watches those custom resources and reconciles the cluster to match. That controller may create Deployments, StatefulSets, Services, Secrets, or cloud resources to implement the desired state encoded in the custom resource.
The incorrect answers are made-up expansions. CRDs are not related to Rust in Kubernetes terminology, and “custom restricted definition” is not the standard meaning.
So the verified meaning is: CRD = CustomResourceDefinition, used to extend Kubernetes APIs and enable Kubernetes-native automation via controllers/operators.
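As a concrete illustration, a minimal CRD manifest could look like the sketch below; the example.com group and the Database kind are invented for this example and not taken from any real project.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com     # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
  versions:
    - name: v1alpha1
      served: true                # this version is served by the API
      storage: true               # and used as the storage version
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer

Once this CRD is applied, the API server serves the new Database type and commands like kubectl get databases work just as they do for built-in resources; an operator would then watch those objects and reconcile them.

=========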
What component enables end users, different parts of the Kubernetes cluster, and external components to communicate with one another?
kubectl
AWS Management Console
Kubernetes API
Google Cloud SDK
The Kubernetes API is the central interface that enables communication between users, controllers, nodes, and external integrations, so C is correct. Kubernetes is fundamentally an API-driven system: all cluster state is represented as API objects, and all operations—create, update, delete, watch—flow through the API server.
End users typically interact with the Kubernetes API using tools like kubectl, client libraries, or dashboards. But those tools are clients; the shared communication “hub” is the API itself. Inside the cluster, core control plane components (controllers, scheduler) continuously watch the API for desired state and write status updates back. Worker nodes (via kubelet) also communicate with the API server to receive Pod specs, report node health, and update Pod statuses. External systems—cloud provider integrations, CI/CD pipelines, GitOps controllers, monitoring and policy engines—also integrate primarily through the Kubernetes API.
Option A (kubectl) is a CLI that talks to the Kubernetes API; it is not the underlying component that all parts use to communicate. Options B and D are cloud-provider tools and are not universal to Kubernetes clusters. Kubernetes runs across many environments, and the consistent interoperability layer is the Kubernetes API.
This API-centric architecture is what enables Kubernetes’ declarative model: you submit desired state to the API, and controllers reconcile actual state to match. It also enables extensibility: CRDs and admission webhooks expand what the API can represent and enforce. Therefore, the correct answer is C: Kubernetes API.
=========
Kubernetes ___ allows you to automatically manage the number of nodes in your cluster to meet demand.
Node Autoscaler
Cluster Autoscaler
Horizontal Pod Autoscaler
Vertical Pod Autoscaler
Kubernetes supports multiple autoscaling mechanisms, but they operate at different layers. The question asks specifically about automatically managing the number of nodes in the cluster, which is the role of the Cluster Autoscaler—therefore B is correct.
Cluster Autoscaler monitors the scheduling state of the cluster. When Pods are pending because there are not enough resources (CPU/memory) available on existing nodes—meaning the scheduler cannot place them—Cluster Autoscaler can request that the underlying infrastructure (typically a cloud provider node group / autoscaling group) add nodes. Conversely, when nodes are underutilized and Pods can be rescheduled elsewhere, Cluster Autoscaler can drain those nodes (respecting disruption constraints like PodDisruptionBudgets) and then remove them to reduce cost. This aligns with cloud-native elasticity: scale infrastructure up and down automatically based on workload needs.
The other options are different: Horizontal Pod Autoscaler (HPA) changes the number of Pod replicas for a workload (like a Deployment) based on metrics (CPU utilization, memory, or custom metrics). It scales the application layer, not the node layer. Vertical Pod Autoscaler (VPA) changes resource requests/limits (CPU/memory) for Pods, effectively “scaling up/down” the size of individual Pods. It also does not directly change node count, though its adjustments can influence scheduling pressure. “Node Autoscaler” is not the canonical Kubernetes component name used in standard terminology; the widely referenced upstream component for node count is Cluster Autoscaler.
In real systems, these autoscalers often work together: HPA increases replicas when traffic rises; that may cause Pods to go Pending if nodes are full; Cluster Autoscaler then adds nodes; scheduling proceeds; later, traffic drops, HPA reduces replicas and Cluster Autoscaler removes nodes. This layered approach provides both performance and cost efficiency.
=========
Which of the following is a primary use case of Istio in a Kubernetes cluster?
To manage and control the versions of container runtimes used on nodes.
To provide secure built-in database management features for application workloads.
To provision and manage persistent storage volumes for stateful applications.
To provide service mesh capabilities such as traffic management, observability, and security between services.
Istio is a widely adopted service mesh for Kubernetes that focuses on managing service-to-service communication in distributed, microservices-based architectures. Its primary use case is to provide advanced traffic management, observability, and security capabilities between services, making option D the correct answer.
In a Kubernetes cluster, applications often consist of many independent services that communicate over the network. Managing this communication using application code alone becomes complex and error-prone as systems scale. Istio addresses this challenge by inserting a transparent data plane—typically based on Envoy proxies—alongside application workloads. These proxies intercept all inbound and outbound traffic, enabling consistent policy enforcement without requiring code changes.
Istio’s traffic management features include fine-grained routing, retries, timeouts, circuit breaking, fault injection, and canary or blue–green deployments. These capabilities allow operators to control how traffic flows between services, test new versions safely, and improve overall system resilience. For observability, Istio provides detailed telemetry such as metrics, logs, and distributed traces, giving deep insight into service performance and behavior. On the security front, Istio enables mutual TLS (mTLS) for service-to-service communication, strong identity, and access policies to secure traffic within the cluster.
Option A is incorrect because container runtime management is handled at the node and cluster level by Kubernetes and the underlying operating system, not by Istio. Option B is incorrect because Istio does not provide database management functionality. Option C is incorrect because persistent storage provisioning is handled by Kubernetes storage APIs and CSI drivers, not by service meshes.
By abstracting networking concerns away from application code, Istio helps teams operate complex microservices environments more safely and efficiently. Therefore, the correct and verified answer is Option D, which accurately reflects Istio’s core purpose and documented use cases in Kubernetes ecosystems.
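To make the traffic-management capability concrete, the sketch below shows an Istio VirtualService splitting traffic between two versions of a service; the reviews host and the v1/v2 subsets are illustrative, and the DestinationRule that defines the subsets is omitted.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews                 # the Kubernetes Service receiving the traffic
  http:
    - route:
        - destination:
            host: reviews
            subset: v1        # subset defined in a DestinationRule (not shown)
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10          # canary: send 10% of traffic to the new version

Shifting the weights over time is how canary and blue-green rollouts are performed without changing application code.

=========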
What are the 3 pillars of Observability?
Metrics, Logs, and Traces
Metrics, Logs, and Spans
Metrics, Data, and Traces
Resources, Logs, and Tracing
The correct answer is A: Metrics, Logs, and Traces. These are widely recognized as the “three pillars” because together they provide complementary views into system behavior:
Metrics are numeric time series collected over time (CPU usage, request rate, error rate, latency percentiles). They are best for dashboards, alerting, and capacity planning because they are structured and aggregatable. In Kubernetes, metrics underpin autoscaling and operational visibility (node/pod resource usage, cluster health signals).
Logs are discrete event records (often text) emitted by applications and infrastructure components. Logs provide detailed context for debugging: error messages, stack traces, warnings, and business events. In Kubernetes, logs are commonly collected from container stdout/stderr and aggregated centrally for search and correlation.
Traces capture the end-to-end journey of a request through a distributed system, breaking it into spans. Tracing is crucial in microservices because a single user request may cross many services; traces show where latency accumulates and which dependency fails. Tracing also enables root cause analysis when metrics indicate degradation but don’t pinpoint the culprit.
Why the other options are wrong: a span is a component within tracing, not a top-level pillar; “data” is too generic; and “resources” are not an observability signal category. The pillars are defined by signal type and how they’re used operationally.
In cloud-native practice, these pillars are often unified via correlation IDs and shared context: metrics alerts link to logs and traces for the same timeframe/request. Tooling like Prometheus (metrics), log aggregators (e.g., Loki/Elastic), and tracing systems (Jaeger/Tempo/OpenTelemetry) work together to provide a complete observability story.
Therefore, the verified correct answer is A.
=========
Which of the following is a valid PromQL query?
SELECT * from http_requests_total WHERE job=apiserver
http_requests_total WHERE (job="apiserver")
SELECT * from http_requests_total
http_requests_total(job="apiserver")
Prometheus Query Language (PromQL) uses a function-and-selector syntax, not SQL. A valid query typically starts with a metric name and optionally includes label matchers in curly braces. Among the options given, the one closest to PromQL's metric-and-label-matcher style is D: http_requests_total(job="apiserver"), so D is correct.
Conceptually, what this query means is “select time series for the metric http_requests_total where the job label equals apiserver.” In standard PromQL formatting you most often see this as: http_requests_total{job="apiserver"}. Many training questions abbreviate braces and focus on the idea of filtering by labels; the key is that PromQL uses label matchers rather than SQL WHERE clauses.
Options A and C are invalid because they use SQL (SELECT * FROM ...) which is not PromQL. Option B is also invalid because PromQL does not use the keyword WHERE. PromQL filtering is done by applying label matchers directly to the metric selector.
In Kubernetes observability, PromQL is central to building dashboards and alerts from cluster metrics. For example, you might compute rates from counters: rate(http_requests_total{job="apiserver"}[5m]), aggregate by labels: sum by (code) (...), or alert on error ratios. Understanding the selector and label-matcher model is foundational because Prometheus metrics are multi-dimensional—labels define the slices you can filter and aggregate on.
So, within the provided options, D is the only one that follows PromQL’s metric+label-filter style and therefore is the verified correct answer.
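For reference, here are a few queries in standard PromQL form, using the same illustrative metric and label as above:

http_requests_total{job="apiserver"}                               # all series for the metric where job=apiserver
rate(http_requests_total{job="apiserver"}[5m])                     # per-second request rate over the last 5 minutes
sum by (code) (rate(http_requests_total{job="apiserver"}[5m]))     # request rate aggregated by HTTP status code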
=========
Which kubectl command is useful for collecting information about any type of resource that is active in a Kubernetes cluster?
describe
list
expose
explain
The correct answer is A (describe), used as kubectl describe <resource-type> <name>. It prints a detailed, human-readable view of a resource’s configuration and current status, including recent events, which makes it the most useful single command for collecting information about any type of active resource.
kubectl get (not listed) is typically used for listing objects and their summary fields, but kubectl describe goes deeper: for a Pod it will show container images, resource requests/limits, probes, mounted volumes, node assignment, IPs, conditions, and recent scheduling/pulling/starting events. For a Node it shows capacity/allocatable resources, labels/taints, conditions, and node events. Those event details often explain why something is Pending, failing to pull images, failing readiness checks, or being evicted.
Option B (“list”) is not a standard kubectl subcommand for retrieving resource information (you would use get for listing). Option C (expose) is for creating a Service to expose a resource (like a Deployment). Option D (explain) is for viewing API schema/field documentation (e.g., kubectl explain deployment.spec.replicas) and does not report what is currently happening in the cluster.
So, for gathering detailed live diagnostics about a resource in the cluster, the best kubectl command is kubectl describe, which corresponds to option A.
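Typical usage looks like the following; the resource names are placeholders:

kubectl describe pod my-app-7d9c5b6f4-x2k8q      # containers, probes, volumes, conditions, recent events
kubectl describe node worker-1                   # capacity, allocatable, taints, conditions, node events
kubectl describe deployment my-app               # strategy, replica status, conditions, related events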
=========
Which of the following is a correct definition of a Helm chart?
A Helm chart is a collection of YAML files bundled in a tar.gz file and can be applied without decompressing it.
A Helm chart is a collection of JSON files and contains all the resource definitions to run an application on Kubernetes.
A Helm chart is a collection of YAML files that can be applied on Kubernetes by using the kubectl tool.
A Helm chart is similar to a package and contains all the resource definitions to run an application on Kubernetes.
A Helm chart is best described as a package for Kubernetes applications, containing the resource definitions (as templates) and metadata needed to install and manage an application—so D is correct. Helm is a package manager for Kubernetes; the chart is the packaging format. Charts include a Chart.yaml (metadata), a values.yaml (default configuration values), and a templates/ directory containing Kubernetes manifests written as templates. When you install a chart, Helm renders those templates into concrete Kubernetes YAML manifests by substituting values, then applies them to the cluster.
Option A is misleading/incomplete. While charts are often distributed as a compressed tarball (.tgz), the defining feature is not “YAML bundled in tar.gz” but the packaging and templating model that supports install/upgrade/rollback. Option B is incorrect because Helm charts are not “collections of JSON files” by definition; Kubernetes resources can be expressed as YAML or JSON, but Helm charts overwhelmingly use templated YAML. Option C is incorrect because charts are not simply YAML applied by kubectl; Helm manages releases, tracks installed resources, and supports upgrades and rollbacks. Helm uses Kubernetes APIs under the hood, but the value of Helm is the lifecycle and packaging system, not “kubectl apply.”
In cloud-native application delivery, Helm helps standardize deployments across environments (dev/stage/prod) by externalizing configuration through values. It reduces copy/paste and supports reuse via dependencies and subcharts. Helm also supports versioning of application packages, allowing teams to upgrade predictably and roll back if needed—critical for production change management.
So, the correct and verified definition is D: a Helm chart is like a package containing the resource definitions needed to run an application on Kubernetes.
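A typical chart layout and the release lifecycle commands look roughly like this; the chart and release names are illustrative, and replicaCount is just a conventional values key:

mychart/
  Chart.yaml          # chart name, version, and metadata
  values.yaml         # default configuration values
  templates/          # templated Kubernetes manifests
    deployment.yaml
    service.yaml

helm install my-release ./mychart --set replicaCount=3    # render templates and install as a release
helm upgrade my-release ./mychart                          # roll out a new chart or values version
helm rollback my-release 1                                 # roll back to revision 1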
=========
Which statement best describes the role of kubelet on a Kubernetes worker node?
kubelet manages the container runtime and ensures that all Pods scheduled to the node are running as expected.
kubelet configures networking rules on each node to handle traffic routing for Services in the cluster.
kubelet monitors cluster-wide resource usage and assigns Pods to the most suitable nodes for execution.
kubelet acts as the primary API component that stores and manages cluster state information.
The kubelet is the primary node-level agent in Kubernetes and is responsible for ensuring that workloads assigned to a worker node are executed correctly. Its core function is to manage container execution on the node and ensure that all Pods scheduled to that node are running as expected, which makes option A the correct answer.
Once the Kubernetes scheduler assigns a Pod to a node, the kubelet on that node takes over responsibility for running the Pod. It continuously watches the API server for Pod specifications that target its node and then interacts with the container runtime (such as containerd or CRI-O) through the Container Runtime Interface (CRI). The kubelet starts, stops, and restarts containers to match the desired state defined in the Pod specification.
In addition to lifecycle management, the kubelet performs ongoing health monitoring. It executes liveness, readiness, and startup probes, reports Pod and node status back to the API server, and enforces resource limits defined in the Pod specification. If a container crashes or becomes unhealthy, the kubelet initiates recovery actions such as restarting the container.
Option B is incorrect because configuring Service traffic routing is the responsibility of kube-proxy and the cluster’s networking layer, not the kubelet. Option C is incorrect because cluster-wide resource monitoring and Pod placement decisions are handled by the kube-scheduler. Option D is incorrect because cluster state is managed by the API server and stored in etcd, not by the kubelet.
In summary, the kubelet acts as the executor and supervisor of Pods on each worker node. It bridges the Kubernetes control plane and the actual runtime environment, ensuring that containers are running, healthy, and aligned with the declared configuration. Therefore, Option A is the correct and verified answer.
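The probes the kubelet executes are declared in the Pod spec. A minimal sketch, with an illustrative image, paths, and port:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.27
      ports:
        - containerPort: 80
      livenessProbe:            # kubelet restarts the container if this fails
        httpGet:
          path: /healthz
          port: 80
        periodSeconds: 10
      readinessProbe:           # kubelet marks the Pod not-ready if this fails
        httpGet:
          path: /ready
          port: 80
        periodSeconds: 5

=========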
How many hosts are required to set up a highly available Kubernetes cluster when using an external etcd topology?
Four hosts. Two for control plane nodes and two for etcd nodes.
Four hosts. One for a control plane node and three for etcd nodes.
Three hosts. The control plane nodes and etcd nodes share the same host.
Six hosts. Three for control plane nodes and three for etcd nodes.
In a highly available (HA) Kubernetes control plane using an external etcd topology, you typically run three control plane nodes and three separate etcd nodes, totaling six hosts, making D correct. HA design relies on quorum-based consensus: etcd uses Raft and requires a majority of members available to make progress. Running three etcd members is the common minimum for HA because it tolerates one member failure while maintaining quorum (2/3).
In the external etcd topology, etcd is decoupled from the control plane nodes. This separation improves fault isolation: if a control plane node fails or is replaced, etcd remains stable and independent; likewise, etcd maintenance can be handled separately. Kubernetes API servers (often multiple instances behind a load balancer) talk to the external etcd cluster for storage of cluster state.
Options A and B propose four hosts, but they break common HA/quorum best practices. Two etcd nodes do not form a robust quorum configuration (a two-member etcd cluster cannot tolerate a single failure without losing quorum). One control plane node is not HA for the API server/scheduler/controller-manager components. Option C describes a stacked etcd topology (control plane + etcd on same hosts), which can be HA with three hosts, but the question explicitly says external etcd, not stacked. In stacked topology, you often use three control plane nodes each running an etcd member. In external topology, you use three control plane + three etcd.
Operationally, external etcd topology is often used when you want dedicated resources, separate lifecycle management, or stronger isolation for the datastore. It can reduce blast radius but increases infrastructure footprint and operational complexity (TLS, backup/restore, networking). Still, for the canonical HA external-etcd pattern, the expected answer is six hosts: 3 control plane + 3 etcd.
=========
What's the most adopted way of conflict resolution and decision-making for the open-source projects under the CNCF umbrella?
Financial Analysis
Discussion and Voting
Flipism Technique
Project Founder Say
B (Discussion and Voting) is correct. CNCF-hosted open-source projects generally operate with open governance practices that emphasize transparency, community participation, and documented decision-making. While each project can have its own governance model (maintainers, technical steering committees, SIGs, TOC interactions, etc.), a very common and widely adopted approach to resolving disagreements and making decisions is to first pursue discussion (often on GitHub issues/PRs, mailing lists, or community meetings) and then use voting/consensus mechanisms when needed.
This approach is important because open-source communities are made up of diverse contributors across companies and geographies. “Project Founder Say” (D) is not a sustainable or typical CNCF governance norm for mature projects; CNCF explicitly encourages neutral, community-led governance rather than single-person control. “Financial Analysis” (A) is not a conflict resolution mechanism for technical decisions, and “Flipism Technique” (C) is not a real governance practice.
In Kubernetes specifically, community decisions are often made within structured groups (e.g., SIGs) using discussion and consensus-building, sometimes followed by formal votes where governance requires it. The goal is to ensure decisions are fair, recorded, and aligned with the project’s mission and contributor expectations. This also reduces risk of vendor capture and builds trust: anyone can review the rationale in meeting notes, issues, or PR threads, and decisions can be revisited with new evidence.
Therefore, the most adopted conflict resolution and decision-making method across CNCF open-source projects is discussion and voting, making B the verified correct answer.
=========
What is a sidecar container?
A Pod that runs next to another container within the same Pod.
A container that runs next to another Pod within the same namespace.
A container that runs next to another container within the same Pod.
A Pod that runs next to another Pod within the same namespace.
A sidecar container is an additional container that runs alongside the main application container within the same Pod, sharing network and storage context. That matches option C, so C is correct. The sidecar pattern is used to add supporting capabilities to an application without modifying the application code. Because both containers are in the same Pod, the sidecar can communicate with the main container over localhost and share volumes for files, sockets, or logs.
Common sidecar examples include: log forwarders that tail application logs and ship them to a logging system, proxies (service mesh sidecars like Envoy) that handle mTLS and routing policy, config reloaders that watch ConfigMaps and signal the main process, and local caching agents. Sidecars are especially powerful in cloud-native systems because they standardize cross-cutting concerns—security, observability, traffic policy—across many workloads.
Options A and D incorrectly describe “a Pod running next to …” which is not how sidecars work; sidecars are containers, not separate Pods. Running separate Pods “next to” each other in a namespace does not give the same shared network namespace and tightly coupled lifecycle. Option B is also incorrect for the same reason: a sidecar is not a separate Pod; it is a container in the same Pod.
Operationally, sidecars share the Pod lifecycle: they are scheduled together, scaled together, and generally terminated together. This is both a benefit (co-location guarantees) and a responsibility (resource requests/limits should include the sidecar’s needs, and failure modes should be understood). Kubernetes is increasingly formalizing sidecar behavior (e.g., sidecar containers with ordered startup semantics), but the core definition remains: a helper container in the same Pod.
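A minimal sketch of the pattern: one Pod containing an application container and a log-shipping sidecar that share a volume. The image names and paths are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}              # shared scratch space for both containers
  containers:
    - name: app
      image: example/app:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper         # the sidecar: ships what the app writes
      image: example/log-shipper:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true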
=========
What is the Kubernetes object used for running a recurring workload?
Job
Batch
DaemonSet
CronJob
A recurring workload in Kubernetes is implemented with a CronJob, so the correct choice is D. A CronJob is a controller that creates Jobs on a schedule defined in standard cron format (minute, hour, day of month, month, day of week). This makes CronJobs ideal for periodic tasks like backups, report generation, log rotation, and cleanup tasks.
A Job (option A) is run-to-completion but is typically a one-time execution; it ensures that a specified number of Pods successfully terminate. You can use a Job repeatedly, but something else must create it each time—CronJob is that built-in scheduler. Option B (“Batch”) is not a standard workload resource type (batch is an API group, not the object name used here). Option C (DaemonSet) ensures one Pod runs on every node (or selected nodes), which is not “recurring,” it’s “always present per node.”
CronJobs include operational controls that matter in real clusters. For example, concurrencyPolicy controls what happens if a scheduled run overlaps with a previous run (Allow, Forbid, Replace). startingDeadlineSeconds can handle missed schedules (e.g., if the controller was down). History limits (successfulJobsHistoryLimit, failedJobsHistoryLimit) help manage cleanup and troubleshooting. Each scheduled execution results in a Job with its own Pods, which can be inspected with kubectl get jobs and kubectl logs.
So the correct Kubernetes object for a recurring workload is CronJob (D): it provides native scheduling and creates Jobs automatically according to the defined cadence.
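A minimal CronJob sketch for a nightly cleanup task; the schedule, image, and command are illustrative:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup
spec:
  schedule: "0 2 * * *"             # every day at 02:00
  concurrencyPolicy: Forbid         # skip a run if the previous one is still active
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: example/cleanup:1.0
              command: ["/bin/cleanup", "--older-than=7d"]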
=========
In Kubernetes, what is the primary responsibility of the kubelet running on each worker node?
To allocate persistent storage volumes and manage distributed data replication for Pods.
To manage cluster state information and handle all scheduling decisions for workloads.
To ensure that containers defined in Pod specifications are running and remain healthy on the node.
To provide internal DNS resolution and route service traffic between Pods and nodes.
The kubelet is a critical Kubernetes component that runs on every worker node and acts as the primary execution agent for Pods. Its core responsibility is to ensure that the containers defined in Pod specifications are running and remain healthy on the node, making option C the correct answer.
Once the Kubernetes scheduler assigns a Pod to a specific node, the kubelet on that node becomes responsible for carrying out the desired state described in the Pod specification. It continuously watches the API server for Pods assigned to its node and communicates with the container runtime (such as containerd or CRI-O) to start, stop, and restart containers as needed. The kubelet does not make scheduling decisions; it simply executes them.
Health management is another key responsibility of the kubelet. It runs liveness, readiness, and startup probes as defined in the Pod specification. If a container fails a liveness probe, the kubelet restarts it. If a readiness probe fails, the kubelet marks the Pod as not ready, preventing traffic from being routed to it. The kubelet also reports detailed Pod and node status information back to the API server, enabling controllers to take corrective actions when necessary.
Option A is incorrect because persistent volume provisioning and data replication are handled by storage systems, CSI drivers, and controllers—not by the kubelet. Option B is incorrect because cluster state management and scheduling are responsibilities of control plane components such as the API server, controller manager, and kube-scheduler. Option D is incorrect because DNS resolution and service traffic routing are handled by components like CoreDNS and kube-proxy.
In summary, the kubelet serves as the node-level guardian of Kubernetes workloads. By ensuring containers are running exactly as specified and continuously reporting their health and status, the kubelet forms the essential bridge between Kubernetes’ declarative control plane and the actual execution of applications on worker nodes.
=========
Which statement about Ingress is correct?
Ingress provides a simple way to track network endpoints within a cluster.
Ingress is a Service type like NodePort and ClusterIP.
Ingress is a construct that allows you to specify how a Pod is allowed to communicate.
Ingress exposes routes from outside the cluster to Services in the cluster.
Ingress is the Kubernetes API resource for defining external HTTP/HTTPS routing into the cluster, so D is correct. An Ingress object specifies rules such as hostnames (e.g., app.example.com), URL paths (e.g., /api), and TLS configuration, mapping those routes to Kubernetes Services. This provides Layer 7 routing capabilities beyond what a basic Service offers.
Ingress is not a Service type (so B is wrong). Service types (ClusterIP, NodePort, LoadBalancer, ExternalName) are part of the Service API and operate at Layer 4. Ingress is a separate API object that depends on an Ingress Controller to actually implement routing. The controller watches Ingress resources and configures a reverse proxy/load balancer (like NGINX, HAProxy, or a cloud load balancer integration) to enforce the desired routing. Without an Ingress Controller, creating an Ingress object alone will not route traffic.
Option A describes endpoint tracking (that’s closer to Endpoints/EndpointSlice). Option C describes NetworkPolicy, which controls allowed network flows between Pods/namespaces. Ingress is about exposing and routing incoming application traffic from outside the cluster to internal Services.
So the verified correct statement is D: Ingress exposes routes from outside the cluster to Services in the cluster.
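A sketch of an Ingress that routes a hostname and path to a Service; the hostname, Service name, and ingress class are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx            # selects which Ingress Controller implements this object
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service    # requests for app.example.com/api go to this Service
                port:
                  number: 80

=========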
A Kubernetes _____ is an abstraction that defines a logical set of Pods and a policy by which to access them.
Selector
Controller
Service
Job
A Kubernetes Service is the abstraction that defines a logical set of Pods and the policy for accessing them, so C is correct. Pods are ephemeral: their IPs change as they are recreated, rescheduled, or scaled. A Service solves this by providing a stable endpoint (DNS name and virtual IP) and routing rules that send traffic to the current healthy Pods backing the Service.
A Service typically uses a label selector to identify which Pods belong to it. Kubernetes then maintains endpoint data (Endpoints/EndpointSlice) for those Pods and uses the cluster dataplane (kube-proxy or eBPF-based implementations) to forward traffic from the Service IP/port to one of the backend Pod IPs. This is what the question means by “logical set of Pods” and “policy by which to access them” (for example, round-robin-like distribution depending on dataplane, session affinity options, and how ports map via targetPort).
Option A (Selector) is only the query mechanism used by Services and controllers; it is not itself the access abstraction. Option B (Controller) is too generic; controllers reconcile desired state but do not provide stable network access policies. Option D (Job) manages run-to-completion tasks and is unrelated to network access abstraction.
Services can be exposed in different ways: ClusterIP (internal), NodePort, LoadBalancer, and ExternalName. Regardless of type, the core Service concept remains: stable access to a dynamic set of Pods. This is foundational to Kubernetes networking and microservice communication, and it is why Service discovery via DNS works effectively across rolling updates and scaling events.
Thus, the correct answer is Service (C).
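A minimal Service sketch: the selector defines the logical set of Pods, and the port mapping is part of the access policy. Names and ports are illustrative.

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP            # stable virtual IP reachable inside the cluster
  selector:
    app: web                 # any ready Pod with this label becomes a backend
  ports:
    - port: 80               # port clients connect to on the Service
      targetPort: 8080       # port the container actually listens on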
=========
Which of the following options includes valid API versions?
alpha1v1, beta3v3, v2
alpha1, beta3, v2
v1alpha1, v2beta3, v2
v1alpha1, v2beta3, 2.0
Kubernetes API versions follow a consistent naming pattern that indicates stability level and versioning. The valid forms include stable versions like v1, and pre-release versions such as v1alpha1, v1beta1, etc. Option C contains valid-looking Kubernetes version strings—v1alpha1, v2beta3, v2—so C is correct.
In Kubernetes, the “v” prefix is part of the standard for API versions. A stable API uses v1, v2, etc. Pre-release APIs include a stability marker: alpha (earliest, most changeable) and beta (more stable but still may change). The numeric suffix (e.g., alpha1, beta3) indicates iteration within that stability stage.
Option A is invalid because strings like alpha1v1 and beta3v3 do not match Kubernetes conventions (the v comes first, and alpha/beta are qualifiers after the version: v1alpha1). Option B is invalid because alpha1 and beta3 are missing the leading version prefix; Kubernetes API versions are not just “alpha1.” Option D includes 2.0, which looks like semantic versioning but is not the Kubernetes API version format. Kubernetes uses v2, not 2.0, for API versions.
Understanding this matters because API versions signal compatibility guarantees. Stable APIs are supported for a defined deprecation window, while alpha/beta APIs may change in incompatible ways and can be removed more easily. When authoring manifests, selecting the correct apiVersion ensures the API server accepts your resource and that controllers interpret fields correctly.
Therefore, among the choices, C is the only option comprised of valid Kubernetes-style API version strings.
=========
In Kubernetes, what is the primary responsibility of the kubelet running on each worker node?
To allocate persistent storage volumes and manage distributed data replication for Pods.
To manage cluster state information and handle all scheduling decisions for workloads.
To ensure that containers defined in Pod specifications are running and remain healthy on the node.
To provide internal DNS resolution and route service traffic between Pods and nodes.
The kubelet is the primary node-level agent in Kubernetes and plays a critical role in ensuring that workloads run correctly on each worker node. Its main responsibility is to ensure that the containers described in Pod specifications are running and remain healthy on that node, which makes option C the correct answer.
Once the Kubernetes scheduler assigns a Pod to a node, the kubelet on that node takes over execution responsibilities. It watches the API server for Pod specifications that are scheduled to its node and then interacts with the container runtime to start, stop, and manage the containers defined in those Pods. The kubelet continuously monitors container health and reports Pod and node status back to the API server, enabling Kubernetes to make informed decisions about restarts, rescheduling, or remediation.
Health checks are another key responsibility of the kubelet. It executes liveness, readiness, and startup probes as defined in the Pod specification. Based on probe results, the kubelet may restart containers or update Pod status to reflect whether the application is ready to receive traffic. This behavior directly supports Kubernetes’ self-healing capabilities.
Option A is incorrect because persistent storage allocation and data replication are handled by storage systems, CSI drivers, and controllers—not by the kubelet itself. Option B is incorrect because cluster state management and scheduling decisions are the responsibility of control plane components such as the API server, controller manager, and kube-scheduler. Option D is incorrect because DNS resolution and service traffic routing are handled by components like CoreDNS and kube-proxy.
In summary, the kubelet acts as the “node supervisor” for Kubernetes workloads. By ensuring containers are running as specified and continuously reporting their status, the kubelet forms the essential link between the Kubernetes control plane and the actual execution of applications on worker nodes. This clearly aligns with Option C as the correct and verified answer.
=========
Which statement about Secrets is correct?
A Secret is part of a Pod specification.
Secret data is encrypted with the cluster private key by default.
Secret data is base64 encoded and stored unencrypted by default.
A Secret can only be used for confidential data.
The correct answer is C. By default, Kubernetes Secrets store their data as base64-encoded values in the API (backed by etcd). Base64 is an encoding mechanism, not encryption, so this does not provide confidentiality. Unless you explicitly configure encryption at rest for etcd (via the API server encryption provider configuration) and secure access controls, Secret contents should be treated as potentially readable by anyone with sufficient API access or access to etcd backups.
Option A is misleading: a Secret is its own Kubernetes resource (kind: Secret). While Pods can reference Secrets (as environment variables or mounted volumes), the Secret itself is not “part of the Pod spec” as an embedded object. Option B is incorrect because Kubernetes does not automatically encrypt Secret data with a cluster private key by default; encryption at rest is optional and must be enabled. Option D is incorrect because Secrets can store a range of sensitive or semi-sensitive data (tokens, certs, passwords), but Kubernetes does not enforce “only confidential data” semantics; it’s a storage mechanism with size and format constraints.
Operationally, best practices include: enabling encryption at rest, limiting access via RBAC, avoiding broad “list/get secrets” permissions, using dedicated service accounts, auditing access, and considering external secrets managers (Vault, cloud KMS-backed solutions) for higher assurance. Also, don’t confuse “Secret” with “secure by default.” The default protection is mainly about avoiding accidental plaintext exposure in manifests, not about cryptographic security.
So the only correct statement in the options is C.
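Two commands that make the “encoded, not encrypted” point concrete; the Secret name and value are illustrative:

kubectl create secret generic db-creds --from-literal=password=s3cr3t
kubectl get secret db-creds -o jsonpath='{.data.password}' | base64 -d    # prints s3cr3t, since base64 is trivially reversible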
=========
What is the purpose of the CRI?
To provide runtime integration control when multiple runtimes are used.
Support container replication and scaling on nodes.
Provide an interface allowing Kubernetes to support pluggable container runtimes.
Allow the definition of dynamic resource criteria across containers.
The Container Runtime Interface (CRI) exists so Kubernetes can support pluggable container runtimes behind a stable interface, which makes C correct. In Kubernetes, the kubelet is responsible for managing Pods on a node, but it does not implement container execution itself. Instead, it delegates container lifecycle operations (pull images, create pod sandbox, start/stop containers, fetch logs, exec/attach streaming) to a container runtime through a well-defined API. CRI is that API contract.
Because of CRI, Kubernetes can run with different container runtimes—commonly containerd or CRI-O—without changing kubelet core logic. This improves portability and keeps Kubernetes modular: runtime innovation can happen independently while Kubernetes retains a consistent operational model. CRI is accessed via gRPC and defines the services and message formats kubelet uses to communicate with runtimes.
Option B is incorrect because replication and scaling are handled by controllers (Deployments/ReplicaSets) and schedulers, not by CRI. Option D is incorrect because resource criteria (requests/limits) are expressed in Pod specs and enforced via OS mechanisms (cgroups) and kubelet/runtime behavior, but CRI is not “for defining dynamic resource criteria.” Option A is vague and not the primary statement; while CRI enables runtime integration, its key purpose is explicitly to make runtimes pluggable and interoperable.
This design became even more important as Kubernetes moved away from Docker Engine integration (dockershim removal from kubelet). With CRI, Kubernetes focuses on orchestrating Pods, while runtimes focus on executing containers. That separation of responsibilities is a core container orchestration principle and is exactly what the question is testing.
So the verified answer is C.
=========
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Flux and ArgoCD
GitOps Toolkit
Linkerd and Istio
Helm and Kustomize
Tools that continuously reconcile cluster state to match a Git repository’s desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
Option B (“GitOps Toolkit”) is related—Flux v2 is built from the GitOps Toolkit components—but the question asks for a “tool” that keeps clusters in sync; the recognized tools in this list are Flux and Argo CD. Option C lists service meshes (traffic/security/telemetry), not deployment synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
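As one concrete example of the GitOps model, an Argo CD Application resource pointing a cluster at a Git path might look roughly like the sketch below; the repository URL, path, and namespaces are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config.git
    targetRevision: main
    path: deploy/production
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true         # delete resources that were removed from Git
      selfHeal: true      # revert manual changes back to the Git-defined state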
=========
What methods can you use to scale a Deployment?
With kubectl edit deployment exclusively.
With kubectl scale-up deployment exclusively.
With kubectl scale deployment and kubectl edit deployment.
With kubectl scale deployment exclusively.
A Deployment’s replica count is controlled by spec.replicas. You can scale a Deployment by changing that field—either directly editing the object or using kubectl’s scaling helper. Therefore C is correct: you can scale using kubectl scale and also via kubectl edit.
For example:
kubectl scale deployment <deployment-name> --replicas=5
kubectl edit deployment <deployment-name>    # then change spec.replicas in the editor and save
Option B is invalid because kubectl scale-up deployment is not a standard kubectl command. Option A is incorrect because kubectl edit is not the only method; scaling is commonly done with kubectl scale. Option D is also incorrect because while kubectl scale is a primary method, kubectl edit is also a valid method to change replicas.
In production, you often scale with autoscalers (HPA/VPA), but the question is asking about kubectl methods. The key Kubernetes concept is that scaling is achieved by updating desired state (spec.replicas), and controllers reconcile Pods to match.
=========
In a cloud native environment, who is usually responsible for maintaining the workloads running across the different platforms?
The cloud provider.
The Site Reliability Engineering (SRE) team.
The team of developers.
The Support Engineering team (SE).
B (the Site Reliability Engineering team) is correct. In cloud-native organizations, SREs are commonly responsible for the reliability, availability, and operational health of workloads across platforms (multiple clusters, regions, clouds, and supporting services). While responsibilities vary by company, the classic SRE charter is to apply software engineering to operations: build automation, standardize runbooks, manage incident response, define SLOs/SLIs, and continuously improve system reliability.
Maintaining workloads “across different platforms” implies cross-cutting operational ownership: deployments need to behave consistently, rollouts must be safe, monitoring and alerting must be uniform, and incident practices must work across environments. SRE teams typically own or heavily influence the observability stack (metrics/logs/traces), operational readiness, capacity planning, and reliability guardrails (error budgets, progressive delivery, automated rollback triggers). They also collaborate closely with platform engineering and application teams, but SRE is often the group that ensures production workloads meet reliability targets.
Why other options are less correct:
The cloud provider (A) maintains the underlying cloud services, but not your application workloads’ correctness, SLOs, or operational processes.
Developers (C) do maintain application code and may own on-call in some models, but the question asks “usually” in cloud-native environments; SRE is the widely recognized function for workload reliability across platforms.
Support Engineering (D) typically focuses on customer support and troubleshooting from a user perspective, not maintaining platform workload reliability at scale.
So, the best and verified answer is B: SRE teams commonly maintain and ensure reliability of workloads across cloud-native platforms.
=========
What service account does a Pod use in a given namespace when the service account is not specified?
admin
sysadmin
root
default
D (default) is correct. In Kubernetes, if you create a Pod (or a controller creates Pods) without specifying spec.serviceAccountName, Kubernetes assigns the Pod the default ServiceAccount in that namespace. The ServiceAccount determines what identity the Pod uses when accessing the Kubernetes API (for example, via the in-cluster token mounted into the Pod, when token automounting is enabled).
Every namespace typically has a default ServiceAccount created automatically. The permissions associated with that ServiceAccount are determined by RBAC bindings. In many clusters, the default ServiceAccount has minimal permissions (or none) as a security best practice, because leaving it overly privileged would allow any Pod to access sensitive cluster APIs.
Why the other options are wrong: Kubernetes does not automatically choose “admin,” “sysadmin,” or “root” service accounts. Those are not standard implicit identities, and automatically granting admin privileges would be insecure. Instead, Kubernetes follows a predictable, least-privilege-friendly default: use the namespace’s default ServiceAccount unless you explicitly request a different one.
Operationally, this matters for security and troubleshooting. If an application in a Pod is failing with “forbidden” errors when calling the API, it often means it’s using the default ServiceAccount without the necessary RBAC permissions. The correct fix is usually to create a dedicated ServiceAccount and bind only the required roles, then set serviceAccountName in the Pod template. Conversely, if you’re hardening a cluster, you often disable automounting of service account tokens for Pods that don’t need API access.
Therefore, the verified correct answer is D: default.
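A common remediation sketch: create a dedicated ServiceAccount, grant it only the access it needs, and reference it from the Pod spec. The names, namespace, and chosen Role below are illustrative.

kubectl create serviceaccount app-reader -n my-namespace
kubectl create role pod-reader -n my-namespace --verb=get,list,watch --resource=pods
kubectl create rolebinding app-reader-binding -n my-namespace --role=pod-reader --serviceaccount=my-namespace:app-reader

Then, in the Pod template:

spec:
  serviceAccountName: app-reader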
=========
Which one of the following is an open source runtime security tool?
lxd
containerd
falco
gVisor
The correct answer is C: Falco. Falco is a widely used open-source runtime security tool (originally created by Sysdig and now a CNCF project) designed to detect suspicious behavior at runtime by monitoring system calls and other kernel-level signals. In Kubernetes environments, Falco helps identify threats such as unexpected shell access in containers, privilege escalation attempts, access to sensitive files, anomalous network tooling, crypto-mining patterns, and other behaviors that indicate compromise or policy violations.
The other options are not primarily “runtime security tools” in the detection/alerting sense:
containerd is a container runtime responsible for executing containers; it’s not a security detection tool.
lxd is a system container and VM manager; again, not a runtime threat detection tool.
gVisor is a sandboxed container runtime that improves isolation by interposing a user-space kernel; it’s a security mechanism, but the question asks for a runtime security tool (monitoring/detection). Falco fits that definition best.
In cloud-native security practice, Falco typically runs as a DaemonSet so it can observe activity on every node. It uses rules to define what “bad” looks like and can emit alerts to SIEM systems, logging backends, or incident response workflows. This complements preventative controls like RBAC, Pod Security Admission, seccomp, and least privilege configurations. Preventative controls reduce risk; Falco provides visibility and detection when something slips through.
Therefore, among the provided choices, the verified runtime security tool is Falco (C).
=========
Which API object is the recommended way to run a scalable, stateless application on your cluster?
ReplicaSet
Deployment
DaemonSet
Pod
For a scalable, stateless application, Kubernetes recommends using a Deployment (option B) because it provides a higher-level, declarative management layer over Pods. A Deployment doesn’t just “run replicas”; it manages the entire lifecycle of rolling out new versions, scaling up/down, and recovering from failures by continuously reconciling the current cluster state to the desired state you define. Under the hood, a Deployment typically creates and manages a ReplicaSet, and that ReplicaSet ensures a specified number of Pod replicas are running at all times. This layering is the key: you get ReplicaSet’s self-healing replica maintenance plus Deployment’s rollout/rollback strategies and revision history.
Why not the other options? A Pod is the smallest deployable unit, but it’s not a scalable controller—if a Pod dies, nothing automatically replaces it unless a controller owns it. A ReplicaSet can maintain N replicas, but it does not provide the full rollout orchestration (rolling updates, pause/resume, rollbacks, and revision tracking) that you typically want for stateless apps that ship frequent releases. A DaemonSet is for node-scoped workloads (one Pod per node or subset of nodes), like log shippers or node agents, not for “scale by replicas.”
For stateless applications, the Deployment model is especially appropriate because individual replicas are interchangeable; the application does not require stable network identities or persistent storage per replica. Kubernetes can freely replace or reschedule Pods to maintain availability. Deployment strategies (like RollingUpdate) allow you to upgrade without downtime by gradually replacing old replicas with new ones while keeping the Service endpoints healthy. That combination—declarative desired state, self-healing, and controlled updates—makes Deployment the recommended object for scalable stateless workloads.
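A minimal Deployment sketch for a stateless application; the name, image, and replica count are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # scale horizontally by changing this number
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate            # replace old Pods gradually during upgrades
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.2.0
          ports:
            - containerPort: 8080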
=========
What are the most important resources to guarantee the performance of an etcd cluster?
CPU and disk capacity.
Network throughput and disk I/O.
CPU and RAM memory.
Network throughput and CPU.
etcd is the strongly consistent key-value store backing Kubernetes cluster state. Its performance directly affects the entire control plane because most API operations require reads/writes to etcd. The most critical resources for etcd performance are disk I/O (especially latency) and network throughput/latency between etcd members and API servers—so B is correct.
etcd is write-ahead-log (WAL) based and relies heavily on stable, low-latency storage. Slow disks increase commit latency, which slows down object updates, watches, and controller loops. In busy clusters, poor disk performance can cause request backlogs and timeouts, showing up as slow kubectl operations and delayed controller reconciliation. That’s why production guidance commonly emphasizes fast SSD-backed storage and careful monitoring of fsync latency.
Network performance matters because etcd uses the Raft consensus protocol. Writes must be replicated to a quorum of members, and leader-follower communication is continuous. High network latency or low throughput can slow replication and increase the time to commit writes. Unreliable networking can also cause leader elections or cluster instability, further degrading performance and availability.
CPU and memory are still relevant, but they are usually not the first bottleneck compared to disk and network. CPU affects request processing and encryption overhead if enabled, while memory affects caching and compaction behavior. Disk “capacity” alone (size) is less relevant than disk I/O characteristics (latency, IOPS), because etcd performance is sensitive to fsync and write latency.
In Kubernetes operations, ensuring etcd health includes: using dedicated fast disks, keeping network stable, enabling regular compaction/defragmentation strategies where appropriate, sizing correctly (typically odd-numbered members for quorum), and monitoring key metrics (commit latency, fsync duration, leader changes). Because etcd is the persistence layer of the API, disk I/O and network quality are the primary determinants of control-plane responsiveness—hence B.
=========
Which of the following best describes horizontally scaling an application deployment?
The act of adding/removing node instances to the cluster to meet demand.
The act of adding/removing applications to meet demand.
The act of adding/removing application instances of the same application to meet demand.
The act of adding/removing resources to application instances to meet demand.
Horizontal scaling means changing how many instances of an application are running, not changing how big each instance is. Therefore, the best description is C: adding/removing application instances of the same application to meet demand. In Kubernetes, “instances” typically correspond to Pod replicas managed by a controller like a Deployment. When you scale horizontally, you increase or decrease the replica count, which increases or decreases total throughput and resilience by distributing load across more Pods.
Option A is about cluster/node scaling (adding or removing nodes), which is infrastructure scaling typically handled by a cluster autoscaler in cloud environments. Node scaling can enable more Pods to be scheduled, but it’s not the definition of horizontal application scaling itself. Option D describes vertical scaling—adding/removing CPU or memory resources to a given instance (Pod/container) by changing requests/limits or using VPA. Option B is vague and not the standard definition.
Horizontal scaling is a core cloud-native pattern because it improves availability and elasticity. If one Pod fails, other replicas continue serving traffic. In Kubernetes, scaling can be manual (kubectl scale deployment ... --replicas=N) or automatic using the Horizontal Pod Autoscaler (HPA). HPA adjusts replicas based on observed metrics like CPU utilization, memory, or custom/external metrics (for example, request rate or queue length). This creates responsive systems that can handle variable traffic.
From an architecture perspective, designing for horizontal scaling often means ensuring your application is stateless (or manages state externally), uses idempotent request handling, and supports multiple concurrent instances. Stateful workloads can also scale horizontally, but usually with additional constraints (StatefulSets, sharding, quorum membership, stable identity).
So the verified definition and correct choice is C.
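Both forms of horizontal scaling, in command form; the Deployment name and thresholds are illustrative:

kubectl scale deployment web --replicas=10                           # manual horizontal scaling
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70   # create an HPA targeting 70% average CPU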
=========
What feature must a CNI support to control specific traffic flows for workloads running in Kubernetes?
Border Gateway Protocol
IP Address Management
Pod Security Policy
Network Policies
To control which workloads can communicate with which other workloads in Kubernetes, you use NetworkPolicy resources—but enforcement depends on the cluster’s networking implementation. Therefore, for traffic-flow control, the CNI/plugin must support Network Policies, making D correct.
Kubernetes defines the NetworkPolicy API as a declarative way to specify allowed ingress and egress traffic based on selectors (Pod labels, namespaces, IP blocks) and ports/protocols. However, Kubernetes itself does not enforce NetworkPolicy rules; enforcement is provided by the network plugin (or associated dataplane components). If your CNI does not implement NetworkPolicy, the objects may exist in the API but have no effect—Pods will communicate freely by default.
Option B (IP Address Management) is often part of CNI responsibilities, but IPAM is about assigning addresses, not enforcing L3/L4 security policy. Option A (BGP) is used by some CNIs to advertise routes (for example, in certain Calico deployments), but BGP is not the general requirement for policy enforcement. Option C (Pod Security Policy) is a deprecated/removed Kubernetes admission feature related to Pod security settings, not network flow control.
From a Kubernetes security standpoint, NetworkPolicies are a key tool for implementing least privilege at the network layer—limiting lateral movement, reducing blast radius, and segmenting environments. But they only work when the chosen CNI supports them. Thus, the correct answer is D: Network Policies.
=========
What is the difference between a Deployment and a ReplicaSet?
With a Deployment, you can’t control the number of pod replicas.
A ReplicaSet does not guarantee a stable set of replica pods running.
A Deployment is basically the same as a ReplicaSet with annotations.
A Deployment is a higher-level concept that manages ReplicaSets.
A Deployment is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, so D is correct. A ReplicaSet’s primary job is to ensure that a specified number of Pod replicas are running at any time, based on a label selector and Pod template. It’s a fundamental “keep N Pods alive” controller.
Deployments build on that by managing the lifecycle of ReplicaSets over time. When you update a Deployment (for example, changing the container image tag or environment variables), Kubernetes creates a new ReplicaSet for the new Pod template and gradually shifts replicas from the old ReplicaSet to the new one according to the rollout strategy (RollingUpdate by default). Deployments also retain revision history, making it possible to roll back to a previous ReplicaSet if a rollout fails.
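A minimal Deployment sketch (names and image are illustrative) shows the Pod template from which ReplicaSets are generated, with the rollout commands in comments:
# Illustrative Deployment: each change to the Pod template below creates
# a new ReplicaSet; old ReplicaSets are kept as rollback history.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # changing this tag triggers a new ReplicaSet
# kubectl rollout status deployment/web
# kubectl rollout history deployment/web
# kubectl rollout undo deployment/web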
Why the other options are incorrect:
A is false: Deployments absolutely control the number of replicas via spec.replicas and can also be controlled by HPA.
B is false: ReplicaSets do guarantee that a stable number of replicas is running (that is their core purpose).
C is false: a Deployment is not “a ReplicaSet with annotations.” It is a distinct API resource with additional controller logic for declarative updates, rollouts, and revision tracking.
Operationally, most teams create Deployments rather than ReplicaSets directly because Deployments are safer and more feature-complete for application delivery. ReplicaSets still appear in real clusters because Deployments create them automatically; you’ll commonly see multiple ReplicaSets during rollout transitions. Understanding the hierarchy is crucial for troubleshooting: if Pods aren’t behaving as expected, you often trace from Deployment → ReplicaSet → Pod, checking selectors, events, and rollout status.
So the key difference is: ReplicaSet maintains replica count; Deployment manages ReplicaSets and orchestrates updates. Therefore, D is the verified answer.
=========
What are the characteristics for building every cloud-native application?
Resiliency, Operability, Observability, Availability
Resiliency, Containerd, Observability, Agility
Kubernetes, Operability, Observability, Availability
Resiliency, Agility, Operability, Observability
Cloud-native applications are typically designed to thrive in dynamic, distributed environments where infrastructure is elastic and failures are expected. The best set of characteristics listed is Resiliency, Agility, Operability, Observability, making D correct.
Resiliency means the application and its supporting platform can tolerate failures and continue providing service. In Kubernetes terms, resiliency is supported through self-healing controllers, replica management, health probes, and safe rollout mechanisms, but the application must also be designed to handle transient failures, retries, and graceful degradation.
Agility reflects the ability to deliver changes quickly and safely. Cloud-native systems emphasize automation, CI/CD, declarative configuration, and small, frequent releases—often enabled by Kubernetes primitives like Deployments and rollout strategies. Agility is about reducing the friction to ship improvements while maintaining reliability.
Operability is how manageable the system is in production: clear configuration, predictable deployments, safe scaling, and automation-friendly operations. Kubernetes encourages operability through consistent APIs, controllers, and standardized patterns for configuration and lifecycle.
Observability means you can understand what’s happening inside the system using telemetry—metrics, logs, and traces—so you can troubleshoot issues, measure SLOs, and improve performance. Kubernetes provides many integration points for observability, but cloud-native apps must also emit meaningful signals.
Options B and C include items that are not “characteristics” (containerd is a runtime; Kubernetes is a platform). Option A includes “availability,” which is important, but the canonical cloud-native framing in this question emphasizes the four qualities in D as the foundational build characteristics.
=========
What framework does Kubernetes use to authenticate users with JSON Web Tokens?
OpenID Connect
OpenID Container
OpenID Cluster
OpenID CNCF
Kubernetes commonly authenticates users using OpenID Connect (OIDC) when JSON Web Tokens (JWTs) are involved, so A is correct. OIDC is an identity layer on top of OAuth 2.0 that standardizes how clients obtain identity information and how JWTs are issued and validated.
In Kubernetes, authentication happens at the API server. When OIDC is configured, the API server validates incoming bearer tokens (JWTs) by checking token signature and claims against the configured OIDC issuer and client settings. Kubernetes can use OIDC claims (such as sub, email, groups) to map the authenticated identity to Kubernetes RBAC subjects. This is how enterprises integrate clusters with identity providers such as Okta, Dex, Azure AD, or other OIDC-compliant IdPs.
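As a hedged sketch only: on kubeadm-style clusters the API server runs as a static Pod, and OIDC can be enabled with flags along these lines (issuer URL, client ID, and claim names are placeholders; exact wiring varies by distribution):
# Fragment of a kube-apiserver static Pod spec (illustrative values only).
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --oidc-issuer-url=https://idp.example.com
        - --oidc-client-id=kubernetes
        - --oidc-username-claim=email
        - --oidc-groups-claim=groups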
Options B, C, and D are fabricated phrases and not real frameworks. Kubernetes documentation explicitly references OIDC as a supported method for token-based user authentication (alongside client certificates, bearer tokens, static token files, and webhook authentication). The key point is that Kubernetes does not “invent” JWT auth; it integrates with standard identity providers through OIDC so clusters can participate in centralized SSO and group-based authorization.
Operationally, OIDC authentication is typically paired with:
RBAC for authorization (“what you can do”)
Audit logging for traceability
Short-lived tokens and rotation practices for security
Group claim mapping to simplify permission management
So, the verified framework Kubernetes uses with JWTs for user authentication is OpenID Connect.
=========
How is application data maintained in containers?
Store data into data folders.
Store data in separate folders.
Store data into sidecar containers.
Store data into volumes.
Container filesystems are ephemeral: the writable layer is tied to the container lifecycle and can be lost when containers are recreated. Therefore, maintaining application data correctly means storing it in volumes, making D the correct answer. In Kubernetes, volumes provide durable or shareable storage that is mounted into containers at specific paths. Depending on the volume type, the data can persist across container restarts and even Pod rescheduling.
Kubernetes supports many volume patterns. For transient scratch data you might use emptyDir (ephemeral for the Pod’s lifetime). For durable state, you typically use PersistentVolumes consumed by PersistentVolumeClaims (PVCs), backed by storage systems via CSI drivers (cloud disks, SAN/NAS, distributed storage). This decouples the application container image from its state and enables rolling updates, rescheduling, and scaling without losing data.
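A minimal sketch of that pattern (names, image, and size are illustrative): a PVC plus a Pod that mounts it, so data written to /data lives in the provisioned volume rather than the container's writable layer:
# Illustrative PVC + Pod: the container writes to /data, which is backed
# by a PersistentVolume provisioned for the claim, not the writable layer.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo hello >> /data/log.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data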
Options A and B (“folders”) are incomplete because folders inside the container filesystem do not guarantee persistence. A folder is only as durable as the underlying storage; without a mounted volume, it lives in the container’s writable layer and will disappear when the container is replaced. Option C is incorrect because “sidecar containers” are not a data durability mechanism; sidecars can help ship logs or sync data, but persistent data should still be stored on volumes (or external services like managed databases).
From an application delivery standpoint, the principle is: containers should be immutable and disposable, and state should be externalized. Volumes (and external managed services) make this possible. In Kubernetes, this is a foundational pattern enabling safe rollouts, self-healing, and portability: the platform can kill and recreate Pods freely because data is maintained independently via volumes.
Therefore, the verified correct choice is D: Store data into volumes.
=========
What is the API that exposes resource metrics from the metrics-server?
custom.k8s.io
resources.k8s.io
metrics.k8s.io
cadvisor.k8s.io
The correct answer is C: metrics.k8s.io. Kubernetes’ metrics-server is the standard component that provides resource metrics (primarily CPU and memory) for nodes and pods. It aggregates this information (sourced from kubelet/cAdvisor) and serves it through the Kubernetes aggregated API under the group metrics.k8s.io. This is what enables commands like kubectl top nodes and kubectl top pods, and it is also a key data source for autoscaling with the Horizontal Pod Autoscaler (HPA) when scaling on CPU/memory utilization.
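To make the API group concrete, the registration looks roughly like the APIService below (values mirror common metrics-server installs and may differ in yours), and the commented kubectl calls read from it:
# Illustrative APIService registration for the resource metrics API
# (real installs may use a caBundle instead of skipping TLS verification).
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  service:
    name: metrics-server
    namespace: kube-system
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
# kubectl top nodes
# kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes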
Why the other options are wrong:
custom.k8s.io is not the standard API group for metrics-server resource metrics. Custom metrics are typically served through the custom metrics API (commonly custom.metrics.k8s.io) via adapters (e.g., Prometheus Adapter), not metrics-server.
resources.k8s.io is not the metrics-server API group.
cadvisor.k8s.io is not exposed as a Kubernetes aggregated metrics API. cAdvisor is a component integrated into kubelet that provides container stats, but metrics-server is the thing that exposes the aggregated Kubernetes metrics API, and the canonical group is metrics.k8s.io.
Operationally, it’s important to understand the boundary: metrics-server provides basic resource metrics suitable for core autoscaling and “top” views, but it is not a full observability system (it does not store long-term metrics history like Prometheus). For richer metrics (SLOs, application metrics, long-term trending), teams typically deploy Prometheus or a managed monitoring backend. Still, when the question asks specifically which API exposes metrics-server data, the answer is definitively metrics.k8s.io.
=========
What is Flux constructed with?
GitLab Environment Toolkit
GitOps Toolkit
Helm Toolkit
GitHub Actions Toolkit
The correct answer is B: GitOps Toolkit. Flux is a GitOps solution for Kubernetes, and in Flux v2 the project is built as a set of Kubernetes controllers and supporting components collectively referred to as the GitOps Toolkit. This toolkit provides the building blocks for implementing GitOps reconciliation: sourcing artifacts (Git repositories, Helm repositories, OCI artifacts), applying manifests (Kustomize/Helm), and continuously reconciling cluster state to match the desired state declared in Git.
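To illustrate the toolkit's building blocks (repository URL and path are placeholders), a GitRepository source paired with a Kustomization that reconciles manifests from it might look like:
# Illustrative GitOps Toolkit objects: a source and a reconciler.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-repo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/app-manifests
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: app-repo
  path: ./deploy
  prune: true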
This construction matters because it reflects Flux’s modular architecture. Instead of being a single monolithic daemon, Flux is composed of controllers that each handle a part of the GitOps workflow: fetching sources, rendering configuration, and applying changes. This makes it more Kubernetes-native: everything is declarative, runs in the cluster, and can be managed like other workloads (RBAC, namespaces, upgrades, observability).
Why the other options are wrong:
“GitLab Environment Toolkit” and “GitHub Actions Toolkit” are not what Flux is built from. Flux can integrate with many SCM providers and CI systems, but it is not “constructed with” those.
“Helm Toolkit” is not the named foundational set Flux is built upon. Flux can deploy Helm charts, but that’s a capability, not its underlying construction.
In cloud-native delivery, Flux implements the key GitOps control loop: detect changes in Git (or other declared sources), compute desired Kubernetes state, and apply it while continuously checking for drift. The GitOps Toolkit is the set of controllers enabling that loop.
Therefore, the verified correct answer is B.
=========
What best describes cloud native service discovery?
It's a mechanism for applications and microservices to locate each other on a network.
It's a procedure for discovering a MAC address, associated with a given IP address.
It's used for automatically assigning IP addresses to devices connected to the network.
It's a protocol that turns human-readable domain names into IP addresses on the Internet.
Cloud native service discovery is fundamentally about how services and microservices find and connect to each other reliably in a dynamic environment, so A is correct. In cloud native systems (especially Kubernetes), instances are ephemeral: Pods can be created, destroyed, rescheduled, and scaled at any time. Hardcoding IPs breaks quickly. Service discovery provides stable names and lookup mechanisms so that one component can locate another even as underlying endpoints change.
In Kubernetes, service discovery is commonly achieved through Services (stable virtual IP + DNS name) and cluster DNS (CoreDNS). A Service selects a group of Pods via labels, and Kubernetes maintains the set of endpoints behind that Service. Clients connect to the Service name (DNS) and Kubernetes routes traffic to the current healthy Pods. For some workloads, headless Services provide DNS records that map directly to Pod IPs for per-instance discovery.
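A minimal sketch of that pattern (names and ports are illustrative): a ClusterIP Service selecting app=backend Pods, which clients reach by the stable DNS name backend:
# Illustrative Service: clients use the stable DNS name "backend";
# CoreDNS resolves it and traffic is routed to the selected Pods.
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80          # port clients connect to on the Service
      targetPort: 8080  # container port on the backend Pods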
The other options describe different networking concepts: B is ARP (MAC discovery), C is DHCP (IP assignment), and D is DNS in a general internet sense. DNS is often used as a mechanism for service discovery, but cloud native service discovery is broader: it’s the overall mechanism enabling dynamic location of services, often implemented via DNS and/or environment variables and sometimes enhanced by service meshes.
So the best description remains A: a mechanism that allows applications and microservices to locate each other on a network in a dynamic environment.
=========
What are the two essential operations that the kube-scheduler normally performs?
Pod eviction or starting
Resource monitoring and reporting
Filtering and scoring nodes
Starting and terminating containers
The kube-scheduler is a core control plane component in Kubernetes responsible for assigning newly created Pods to appropriate nodes. Its primary responsibility is decision-making, not execution. To make an informed scheduling decision, the kube-scheduler performs two essential operations: filtering and scoring nodes.
The scheduling process begins when a Pod is created without a node assignment. The scheduler first evaluates all available nodes and applies a set of filtering rules. During this phase, nodes that do not meet the Pod’s requirements are eliminated. Filtering criteria include resource availability (CPU and memory requests), node selectors, node affinity rules, taints and tolerations, volume constraints, and other policy-based conditions. Any node that fails one or more of these checks is excluded from consideration.
Once filtering is complete, the scheduler moves on to the scoring phase. In this step, each remaining eligible node is assigned a score based on a collection of scoring plugins. These plugins evaluate factors such as resource utilization balance, affinity preferences, topology spread constraints, and custom scheduling policies. The purpose of scoring is to rank nodes according to how well they satisfy the Pod’s placement preferences. The node with the highest total score is selected as the best candidate.
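For illustration (values are hypothetical), the filtering inputs come from Pod spec fields like these: resource requests, a nodeSelector, and tolerations narrow the candidate nodes, and scoring then ranks whatever remains:
# Illustrative Pod scheduling constraints used during filtering.
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: "500m"
          memory: 256Mi
  nodeSelector:
    disktype: ssd
  tolerations:
    - key: dedicated
      operator: Equal
      value: batch
      effect: NoSchedule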
Option A is incorrect because Pod eviction is handled by other components such as the kubelet and controllers, and starting Pods is the responsibility of the kubelet. Option B is incorrect because resource monitoring and reporting are performed by components like metrics-server, not the scheduler. Option D is also incorrect because starting and terminating containers is entirely handled by the kubelet and the container runtime.
By separating filtering (eligibility) from scoring (preference), the kube-scheduler provides a flexible, extensible, and policy-driven scheduling mechanism. This design allows Kubernetes to support diverse workloads and advanced placement strategies while maintaining predictable scheduling behavior.
Therefore, the correct and verified answer is Option C: Filtering and scoring nodes, as documented in Kubernetes scheduling architecture.
=========
Services and Pods in Kubernetes are ______ objects.
JSON
YAML
Java
REST
In Kubernetes, resources like Pods and Services are represented as API objects that you create, read, update, delete, and watch via the Kubernetes RESTful API. That makes D (REST) the correct answer.
Kubernetes is fundamentally API-driven: the API server exposes endpoints for each resource type (for example, /api/v1/namespaces/{ns}/pods and /api/v1/namespaces/{ns}/services). Clients such as kubectl, controllers, operators, and external systems interact with these resources by making REST-style calls using HTTP verbs (GET, POST, PUT/PATCH, DELETE) and using watch streams for event-driven updates. This API-first design is what enables Kubernetes’ declarative model—users submit desired state to the API server, and controllers reconcile the cluster to that desired state.
Options A and B (JSON and YAML) are common serialization formats used to represent Kubernetes objects, but they are not what the objects “are.” Kubernetes objects are logical API resources; they can be encoded as JSON (what the API uses) and often authored as YAML for human convenience. YAML is a superset of JSON, and clients such as kubectl convert YAML manifests to JSON before submitting them to the API server. The underlying API object model remains the same regardless of whether you wrote YAML or JSON. Option C (Java) is unrelated; Java is a programming language that can interact with Kubernetes via client libraries, but Kubernetes objects are not “Java objects” in the platform’s definition.
So the accurate statement is: Pods and Services are Kubernetes REST API objects (resources) exposed and managed through the Kubernetes API server, which is why REST is the correct fill-in.
=========
When modifying an existing Helm release to apply new configuration values, which approach is the best practice?
Use helm upgrade with the --set flag to apply new values while preserving the release history.
Use kubectl edit to modify the live release configuration and apply the updated resource values.
Delete the release and reinstall it with the desired configuration to force an updated deployment.
Edit the Helm chart source files directly and reapply them to push the updated configuration values.
Helm is a package manager for Kubernetes that provides a declarative and versioned approach to application deployment and lifecycle management. When updating configuration values for an existing Helm release, the recommended and best-practice approach is to use helm upgrade, optionally with the --set flag or a values file, to apply the new configuration while preserving the release’s history.
Option A is correct because helm upgrade updates an existing release in a controlled and auditable manner. Helm stores each revision of a release, allowing teams to inspect past configurations and roll back to a previous known-good state if needed. Using --set enables quick overrides of individual values, while using -f values.yaml supports more complex or repeatable configurations. This approach aligns with GitOps and infrastructure-as-code principles, ensuring consistency and traceability.
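As a hedged sketch (release name, chart, and keys are placeholders), the two styles look like this, and every upgrade produces a new revision that helm rollback can return to:
# values.yaml (illustrative override file for the release)
replicaCount: 3
image:
  tag: "1.2.3"
# Apply with either of the following (release and chart names are placeholders):
# helm upgrade my-release my-chart -f values.yaml
# helm upgrade my-release my-chart --set replicaCount=3
# helm history my-release
# helm rollback my-release 1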
Option B is incorrect because modifying Helm-managed resources directly with kubectl edit breaks Helm’s state tracking. Helm maintains a record of the desired state for each release, and manual edits can cause configuration drift, making future upgrades unpredictable or unsafe. Kubernetes documentation and Helm guidance strongly discourage modifying Helm-managed resources outside of Helm itself.
Option C is incorrect because deleting and reinstalling a release discards the release history and may cause unnecessary downtime or data loss, especially for stateful applications. Helm’s upgrade mechanism is specifically designed to avoid this disruption while still applying configuration changes safely.
Option D is also incorrect because editing chart source files directly and reapplying them bypasses Helm’s release management model. While chart changes are appropriate during development, applying them directly to a running release without helm upgrade undermines versioning, rollback, and repeatability.
According to Helm documentation, helm upgrade is the standard and supported method for modifying deployed applications. It ensures controlled updates, preserves operational history, and enables safe rollbacks, making option A the correct and fully verified best practice.
=========
What is the primary mechanism to identify grouped objects in a Kubernetes cluster?
Custom Resources
Labels
Label Selector
Pod
Kubernetes groups and organizes objects primarily using labels, so B is correct. Labels are key-value pairs attached to objects (Pods, Deployments, Services, Nodes, etc.) and are intended to be used for identifying, selecting, and grouping resources in a flexible, user-defined way.
Labels enable many core Kubernetes behaviors. For example, a Service selects the Pods that should receive traffic by matching a label selector against Pod labels. A Deployment’s ReplicaSet similarly uses label selectors to determine which Pods belong to the replica set. Operators and platform tooling also rely on labels to group resources by application, environment, team, or cost center. This is why labeling is considered foundational Kubernetes hygiene: consistent labels make automation, troubleshooting, and governance easier.
A “label selector” (option C) is how you query/group objects based on labels, but the underlying primary mechanism is still the labels themselves. Without labels applied to objects, selectors have nothing to match. Custom Resources (option A) extend the API with new kinds, but they are not the primary grouping mechanism across the cluster. “Pod” (option D) is a workload unit, not a grouping mechanism.
Practically, Kubernetes recommends common label keys like app.kubernetes.io/name, app.kubernetes.io/instance, and app.kubernetes.io/part-of to standardize grouping. Those conventions improve interoperability with dashboards, GitOps tooling, and policy engines.
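A small sketch of that convention (values are illustrative), with label-selector queries shown as comments:
# Illustrative recommended labels on a Deployment and its Pod template.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  labels:
    app.kubernetes.io/name: checkout
    app.kubernetes.io/instance: checkout-prod
    app.kubernetes.io/part-of: shop
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: checkout
      app.kubernetes.io/instance: checkout-prod
  template:
    metadata:
      labels:
        app.kubernetes.io/name: checkout
        app.kubernetes.io/instance: checkout-prod
        app.kubernetes.io/part-of: shop
    spec:
      containers:
        - name: checkout
          image: nginx:1.27
# kubectl get pods -l app.kubernetes.io/part-of=shop
# kubectl get all -l app.kubernetes.io/instance=checkout-prod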
So, when the question asks for the primary mechanism used to identify grouped objects in Kubernetes, the most accurate answer is Labels (B)—they are the universal metadata primitive used to group and select resources.
=========
What is the goal of load balancing?
Automatically measure request performance across instances of an application.
Automatically distribute requests across different versions of an application.
Automatically distribute instances of an application across the cluster.
Automatically distribute requests across instances of an application.
The core goal of load balancing is to distribute incoming requests across multiple instances of a service so that no single instance becomes overloaded and so that the overall service is more available and responsive. That matches option D, which is the correct answer.
In Kubernetes, load balancing commonly appears through the Service abstraction. A Service selects a set of Pods using labels and provides stable access via a virtual IP (ClusterIP) and DNS name. Traffic sent to the Service is then forwarded to one of the healthy backend Pods. This spreads load across replicas and provides resilience: if one Pod fails, it is removed from endpoints (or becomes NotReady) and traffic shifts to remaining replicas. The actual traffic distribution mechanism depends on the networking implementation (kube-proxy using iptables/IPVS or an eBPF dataplane), but the intent remains consistent: distribute requests across multiple backends.
Option A describes monitoring/observability, not load balancing. Option B describes progressive delivery patterns like canary or A/B routing; that can be implemented with advanced routing layers (Ingress controllers, service meshes), but it’s not the general definition of load balancing. Option C describes scheduling/placement of instances (Pods) across cluster nodes, which is the role of the scheduler and controllers, not load balancing.
In cloud environments, load balancing may also be implemented by external load balancers (cloud LBs) in front of the cluster, then forwarded to NodePorts or ingress endpoints, and again balanced internally to Pods. At each layer, the objective is the same: spread request traffic across multiple service instances to improve performance and availability.
=========
What is the default eviction timeout when the Ready condition of a node is Unknown or False?
Thirty seconds.
Thirty minutes.
One minute.
Five minutes.
The verified correct answer is D (Five minutes). In Kubernetes, node health is continuously monitored. When a node stops reporting status (heartbeats from the kubelet) or is otherwise considered unreachable, the Node controller updates the Node’s Ready condition to Unknown (or it can become False). From that point, Kubernetes has to balance two risks: acting too quickly might cause unnecessary disruption (e.g., transient network hiccups), but acting too slowly prolongs outage for workloads that were running on the failed node.
The “default eviction timeout” refers to the control plane behavior that determines how long Kubernetes waits before evicting Pods from a node that appears unhealthy/unreachable. After this timeout elapses, Kubernetes begins eviction of Pods so controllers (like Deployments) can recreate them on healthy nodes, restoring the desired replica count and availability.
This is tightly connected to high availability and self-healing: Kubernetes does not “move” Pods from a dead node; it replaces them. The eviction timeout gives the cluster time to confirm the node is truly unavailable, avoiding flapping in unstable networks. Once eviction begins, replacement Pods can be scheduled elsewhere (assuming capacity exists), which is the normal recovery path for stateless workloads.
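In current clusters this window is implemented through taint-based eviction: the DefaultTolerationSeconds admission plugin adds tolerations like the following to Pods that do not set their own, so Pods tolerate a not-ready or unreachable node for 300 seconds before eviction begins (an excerpt of what appears on the Pod spec):
# Excerpt of a Pod spec showing the default tolerations (300s = 5 minutes).
tolerations:
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300
  - key: node.kubernetes.io/unreachable
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 300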
It’s also worth noting that graceful operational handling can be influenced by PodDisruptionBudgets (for voluntary disruptions) and by workload design (replicas across nodes/zones). But the question is testing the default timer value, which is five minutes in this context.
Therefore, among the choices provided, the correct answer is D.
=========
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called:
Namespaces
Containers
Hypervisors
cgroups
Kubernetes provides “virtual clusters” within a single physical cluster primarily through Namespaces, so A is correct. Namespaces are a logical partitioning mechanism that scopes many Kubernetes resources (Pods, Services, Deployments, ConfigMaps, Secrets, etc.) into separate environments. This enables multiple teams, applications, or environments (dev/test/prod) to share a cluster while keeping their resource names and access controls separated.
Namespaces are often described as “soft multi-tenancy.” They don’t provide full isolation like separate clusters, but they do allow administrators to apply controls per namespace:
RBAC rules can grant different permissions per namespace (who can read Secrets, who can deploy workloads, etc.).
ResourceQuotas and LimitRanges can enforce fair usage and prevent one namespace from consuming all cluster resources.
NetworkPolicies can isolate traffic between namespaces (depending on the CNI).
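A minimal sketch tying together the namespace scoping and quota controls just listed (names and limits are illustrative):
# Illustrative namespace with a quota capping its aggregate resource use.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"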
Containers are runtime units inside Pods and are not “virtual clusters.” Hypervisors are virtualization components for VMs, not Kubernetes partitioning constructs. cgroups are Linux kernel primitives for resource control, not Kubernetes virtual cluster constructs.
While there are other “virtual cluster” approaches (like vcluster projects) that create stronger virtualized control planes, the built-in Kubernetes mechanism referenced by this question is namespaces. Therefore, the correct answer is A: Namespaces.
=========
Which Kubernetes resource workload ensures that all (or some) nodes run a copy of a Pod?
DaemonSet
StatefulSet
kubectl
Deployment
A DaemonSet is the workload controller that ensures a Pod runs on all nodes or on a selected subset of nodes, so A is correct. DaemonSets are used for node-level agents and infrastructure components that must be present everywhere—examples include log collectors, monitoring agents, storage daemons, CNI components, and node security tools.
The DaemonSet controller watches for node additions/removals. When a new node joins the cluster, Kubernetes automatically schedules a new DaemonSet Pod onto that node (subject to constraints such as node selectors, affinities, and taints/tolerations). When a node is removed, its DaemonSet Pod naturally disappears with it. This creates the “one per node” behavior that differentiates DaemonSets from other workload types.
A Deployment manages a replica count across the cluster, not “one per node.” A StatefulSet manages stable identity and ordered operations for stateful replicas; it does not inherently map one Pod to every node. kubectl is a CLI tool and not a workload resource.
DaemonSets can also be scoped: by using node selectors, node affinity, and tolerations, you can ensure Pods run only on GPU nodes, only on Linux nodes, only in certain zones, or only on nodes with a particular label. That’s why the question says “all (or some) nodes.”
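A minimal DaemonSet sketch (image and labels are illustrative) that runs a node agent on every Linux node, including control-plane nodes via a toleration:
# Illustrative DaemonSet: one Pod per matching node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: agent
          image: busybox:1.36
          command: ["sh", "-c", "while true; do date; sleep 60; done"]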
Therefore, the correct and verified answer is DaemonSet (A).
=========
A Kubernetes Pod is returning a CrashLoopBackOff status. What is the most likely reason for this behavior?
There are insufficient resources allocated for the Pod.
The application inside the container crashed after starting.
The container’s image is missing or cannot be pulled.
The Pod is unable to communicate with the Kubernetes API server.
A CrashLoopBackOff status in Kubernetes indicates that a container within a Pod is repeatedly starting, crashing, and being restarted by Kubernetes. This behavior occurs when the container process exits shortly after starting and Kubernetes applies an increasing back-off delay between restart attempts to prevent excessive restarts.
Option B is the correct answer because CrashLoopBackOff most commonly occurs when the application inside the container crashes after it has started. Typical causes include application runtime errors, misconfigured environment variables, missing configuration files, invalid command or entrypoint definitions, failed dependencies, or unhandled exceptions during application startup. Kubernetes itself is functioning as expected by restarting the container according to the Pod’s restart policy.
Option A is incorrect because insufficient resources usually lead to different symptoms. For example, if a container exceeds its memory limit, it may be terminated with an OOMKilled status rather than repeatedly crashing immediately. While resource constraints can indirectly cause crashes, they are not the defining reason for a CrashLoopBackOff state.
Option C is incorrect because an image that cannot be pulled results in statuses such as ImagePullBackOff or ErrImagePull, not CrashLoopBackOff. In those cases, the container never successfully starts.
Option D is incorrect because Pods do not need to communicate directly with the Kubernetes API server for normal application execution. Issues with API server communication affect control plane components or scheduling, not container restart behavior.
From a troubleshooting perspective, Kubernetes documentation recommends inspecting container logs using kubectl logs and reviewing Pod events with kubectl describe pod to identify the root cause of the crash. Fixing the underlying application error typically resolves the CrashLoopBackOff condition.
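To see the behavior and those checks in practice (illustrative only), the Pod below exits non-zero immediately, so it cycles through CrashLoopBackOff with increasing back-off delays:
# Illustrative Pod that reproduces CrashLoopBackOff on purpose.
apiVersion: v1
kind: Pod
metadata:
  name: crashy
spec:
  restartPolicy: Always
  containers:
    - name: crashy
      image: busybox:1.36
      command: ["sh", "-c", "echo 'boom'; exit 1"]
# kubectl logs crashy --previous   # logs of the last crashed attempt
# kubectl describe pod crashy      # events and back-off details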
In summary, CrashLoopBackOff is a protective mechanism that signals a repeatedly failing container process. The most likely and verified cause is that the application inside the container is crashing after startup, making option B the correct answer.
=========
Which component of the Kubernetes architecture is responsible for integration with the CRI container runtime?
kubeadm
kubelet
kube-apiserver
kubectl
The correct answer is B: kubelet. The Container Runtime Interface (CRI) defines how Kubernetes interacts with container runtimes in a consistent, pluggable way. The component that speaks CRI is the kubelet, the node agent responsible for running Pods on each node. When the kube-scheduler assigns a Pod to a node, the kubelet reads the PodSpec and makes the runtime calls needed to realize that desired state—pull images, create a Pod sandbox, start containers, stop containers, and retrieve status and logs. Those calls are made via CRI to a CRI-compliant runtime such as containerd or CRI-O.
Why not the others:
kubeadm bootstraps clusters (init/join/upgrade workflows) but does not run containers or speak CRI for workload execution.
kube-apiserver is the control plane API frontend; it stores and serves cluster state and does not directly integrate with runtimes.
kubectl is just a client tool that sends API requests; it is not involved in runtime integration on nodes.
This distinction matters operationally. If the runtime is misconfigured or CRI endpoints are unreachable, kubelet will report errors and Pods can get stuck in ContainerCreating, image pull failures, or runtime errors. Debugging often involves checking kubelet logs and runtime service health, because kubelet is the integration point bridging Kubernetes scheduling/state with actual container execution.
So, the node-level component responsible for CRI integration is the kubelet—option B.
=========
What is Serverless computing?
A computing method of providing backend services on an as-used basis.
A computing method of providing services for AI and ML operating systems.
A computing method of providing services for quantum computing operating systems.
A computing method of providing services for cloud computing operating systems.
Serverless computing is a cloud execution model where the provider manages infrastructure concerns and you consume compute as a service, typically billed based on actual usage (requests, execution time, memory), which matches A. In other words, you deploy code (functions) or sometimes containers, configure triggers (HTTP events, queues, schedules), and the platform automatically provisions capacity, scales it up/down, and handles much of availability and fault tolerance behind the scenes.
From a cloud-native architecture standpoint, “serverless” doesn’t mean there are no servers; it means developers don’t manage servers. The platform abstracts away node provisioning, OS patching, and much of runtime scaling logic. This aligns with the “as-used basis” phrasing: you pay for what you run rather than maintaining always-on capacity.
It’s also useful to distinguish serverless from Kubernetes. Kubernetes automates orchestration (scheduling, self-healing, scaling), but operating Kubernetes still involves cluster-level capacity decisions, node pools, upgrades, networking baseline, and policy. With serverless, those responsibilities are pushed further toward the provider/platform. Kubernetes can enable serverless experiences (for example, event-driven autoscaling frameworks), but serverless as a model is about a higher level of abstraction than “orchestrate containers yourself.”
Options B, C, and D are incorrect because they describe specialized or vague “operating system” services rather than the commonly accepted definition. Serverless is not specifically about AI/ML OSs or quantum OSs; it’s a general compute delivery model that can host many kinds of workloads.
Therefore, the correct definition in this question is A: providing backend services on an as-used basis.
=========
When a Kubernetes Secret is created, how is the data stored by default in etcd?
As Base64-encoded strings that provide simple encoding but no actual encryption.
As plain text values that are directly stored without any obfuscation or additional encoding.
As compressed binary objects that are optimized for space but not secured against access.
As encrypted records automatically protected using the Kubernetes control plane master key.
By default, Kubernetes Secrets are stored in etcd as Base64-encoded values, which makes option A the correct answer. This is a common point of confusion because Base64 encoding is often mistaken for encryption, but in reality, it provides no security—only a reversible text encoding.
When a Secret is defined in a Kubernetes manifest or created via kubectl, its data fields are Base64-encoded before being persisted in etcd. This encoding ensures that binary data (such as certificates or keys) can be safely represented in JSON and YAML formats, which require text-based values. However, anyone with access to etcd or the Secret object via the Kubernetes API can easily decode these values.
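For example (values are illustrative), the data fields below are plain Base64 text and can be decoded by anyone who can read the object:
# Illustrative Secret: "YWRtaW4=" and "cGFzc3dvcmQ=" are just Base64 for
# "admin" and "password"; decode with: echo cGFzc3dvcmQ= | base64 -d
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzc3dvcmQ=
# kubectl also accepts plain text under stringData and encodes it for you.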
Option B is incorrect because Secrets are not stored as raw plaintext; they are encoded using Base64 before storage. Option C is incorrect because Kubernetes does not compress Secret data by default. Option D is incorrect because Secrets are not encrypted at rest by default. Encryption at rest must be explicitly configured using an encryption provider configuration in the Kubernetes API server.
Because of this default behavior, Kubernetes strongly recommends additional security measures when handling Secrets. These include enabling encryption at rest for etcd, restricting access to Secrets using RBAC, using short-lived ServiceAccount tokens, and integrating with external secret management systems such as HashiCorp Vault or cloud provider key management services.
Understanding how Secrets are stored is critical for designing secure Kubernetes clusters. While Secrets provide a convenient abstraction for handling sensitive data, they rely on cluster-level security controls to ensure confidentiality. Without encryption at rest and proper access restrictions, Secret data remains vulnerable to unauthorized access.
Therefore, the correct and verified answer is Option A: Kubernetes stores Secrets as Base64-encoded strings in etcd by default, which offers encoding but not encryption.
=========
How many different Kubernetes service types can you define?
2
3
4
5
Kubernetes defines four primary Service types, which is why C (4) is correct. The commonly recognized Service spec.type values are:
ClusterIP: The default type. Exposes the Service on an internal virtual IP reachable only within the cluster. This supports typical east-west traffic between workloads.
NodePort: Exposes the Service on a static port on each node. Traffic sent to that port on any node is forwarded to the Service and on to its backend Pods; the port is allocated from the node port range (30000-32767 by default).
LoadBalancer: Integrates with a cloud provider (or load balancer implementation) to provision an external load balancer and route traffic to the Service. This is common in managed Kubernetes.
ExternalName: Maps the Service name to an external DNS name via a CNAME record, allowing in-cluster clients to use a consistent Service DNS name to reach an external dependency.
Some people also talk about “Headless Services,” but headless is not a separate type; it’s a behavior achieved by setting clusterIP: None. Headless Services still use the Service API object but change DNS and virtual-IP behavior to return endpoint IPs directly rather than a ClusterIP. That’s why the canonical count of “Service types” is four.
This question tests understanding of the Service abstraction: Service type controls how a stable service identity is exposed (internal VIP, node port, external LB, or DNS alias), while selectors/endpoints control where traffic goes (the backend Pods). Different environments will favor different types: ClusterIP for internal microservices, LoadBalancer for external exposure in cloud, NodePort for bare-metal or simple access, ExternalName for bridging to outside services.
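For illustration, only spec.type changes between exposure modes; a NodePort variant of an otherwise ordinary Service might look like this (30080 is an example value from the default 30000-32767 node port range):
# Illustrative NodePort Service: reachable on <any-node-IP>:30080.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080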
Therefore, the verified answer is C (4).
=========
Which of these is a valid container restart policy?
On login
On update
On start
On failure
The correct answer is D: On failure. In Kubernetes, restart behavior is controlled by the Pod-level field spec.restartPolicy, with valid values Always, OnFailure, and Never. The option presented here (“On failure”) maps to Kubernetes’ OnFailure policy. This setting determines what the kubelet should do when containers exit:
Always: restart containers whenever they exit (typical for long-running services)
OnFailure: restart containers only if they exit with a non-zero status (common for batch workloads)
Never: do not restart containers (fail and leave it terminated)
So “On failure” is a valid restart policy concept and the only one in the list that matches Kubernetes semantics.
The other options are not Kubernetes restart policies. “On login,” “On update,” and “On start” are not recognized values and don’t align with how Kubernetes models container lifecycle. Kubernetes is declarative and event-driven: it reacts to container exit codes and controller intent, not user “logins.”
Operationally, choosing the right restart policy is important. For example, Jobs typically use restartPolicy: OnFailure or Never because the goal is completion, not continuous uptime. Deployments usually imply “Always” because the workload should keep serving traffic, and a crashed container should be restarted. Also note that controllers interact with restarts: a Deployment may recreate Pods if they fail readiness, while a Job counts completions and failures based on Pod termination behavior.
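A minimal Job sketch (image and command are illustrative) using restartPolicy: OnFailure, so the kubelet retries the container only when it exits non-zero:
# Illustrative Job: retries the container on non-zero exit codes.
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: task
          image: busybox:1.36
          command: ["sh", "-c", "echo running task && exit 0"]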
Therefore, among the options, the only valid (Kubernetes-aligned) restart policy is D.
=========
Which of the following options include resources cleaned by the Kubernetes garbage collection mechanism?
Stale or expired CertificateSigningRequests (CSRs) and old deployments.
Nodes deleted by a cloud controller manager and obsolete logs from the kubelet.
Unused container and container images, and obsolete logs from the kubelet.
Terminated pods, completed jobs, and objects without owner references.
Kubernetes garbage collection (GC) is about cleaning up API objects and related resources that are no longer needed, so the correct answer is D. Two big categories it targets are (1) objects that have finished their lifecycle (like terminated Pods and completed Jobs, depending on controllers and TTL policies), and (2) “dangling” objects that are no longer referenced properly—often described as objects without owner references (or where owners are gone), which can happen when a higher-level controller is deleted or when dependent resources are left behind.
A key Kubernetes concept here is OwnerReferences: many resources are created “owned” by a controller (e.g., a ReplicaSet owned by a Deployment, Pods owned by a ReplicaSet). When an owning object is deleted, Kubernetes’ garbage collector can remove dependent objects based on deletion propagation policies (foreground/background/orphan). This prevents resource leaks and keeps the cluster tidy and performant.
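What this looks like in practice (the uid below is a made-up placeholder): a Pod created by a ReplicaSet carries an ownerReferences entry like the following, and deleting the owner lets the garbage collector remove the Pod according to the chosen propagation policy:
# Illustrative excerpt of a Pod's metadata (uid is a placeholder).
metadata:
  name: web-5d9c7b6f4d-abcde
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: web-5d9c7b6f4d
      uid: 00000000-0000-0000-0000-000000000000
      controller: true
      blockOwnerDeletion: true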
The other options each pair a plausible item with cleanup that falls outside Kubernetes GC’s scope. Kubelet logs (B/C) are node-level files whose rotation is handled by node and runtime configuration, not by the Kubernetes garbage collector. Unused container images (C) are reclaimed by the kubelet’s image garbage collection and disk-pressure handling on each node rather than through API object cleanup. Nodes deleted by a cloud controller manager (B) are a node-lifecycle action handled by controllers and cloud integrations, not a generic GC cleanup category like ownerRef-based object deletion. And while expired CSRs are indeed cleaned up automatically, option A pairs them with “old deployments,” which are only removed when a user or controller deletes them.
So, when the question asks specifically about “resources cleaned by Kubernetes garbage collection,” it’s pointing to Kubernetes object lifecycle cleanup: terminated Pods, completed Jobs, and orphaned objects—exactly what option D states.
=========
In a Kubernetes cluster, what is the primary role of the Kubernetes scheduler?
To manage the lifecycle of the Pods by restarting them when they fail.
To monitor the health of the nodes and Pods in the cluster.
To handle network traffic between services within the cluster.
To distribute Pods across nodes based on resource availability and constraints.
The Kubernetes scheduler is a core control plane component responsible for deciding where Pods should run within a cluster. Its primary role is to assign newly created Pods that do not yet have a node assigned to an appropriate node based on a variety of factors such as resource availability, scheduling constraints, and policies.
When a Pod is created, it enters a Pending state until the scheduler selects a suitable node. The scheduler evaluates all available nodes and filters out those that do not meet the Pod’s requirements. These requirements may include CPU and memory requests, node selectors, node affinity rules, taints and tolerations, topology spread constraints, and other scheduling policies. After filtering, the scheduler scores the remaining nodes to determine the best placement for the Pod and then binds the Pod to the selected node.
Option A is incorrect because restarting failed Pods is handled by other components such as the kubelet and higher-level controllers like Deployments, ReplicaSets, or StatefulSets—not the scheduler. Option B is incorrect because monitoring node and Pod health is primarily the responsibility of the kubelet and the Kubernetes controller manager, which reacts to node failures and ensures desired state. Option C is incorrect because handling network traffic is managed by Services, kube-proxy, and the cluster’s networking implementation, not the scheduler.
Option D correctly describes the scheduler’s purpose. By distributing Pods across nodes based on resource availability and constraints, the scheduler helps ensure efficient resource utilization, high availability, and workload isolation. This intelligent placement is essential for maintaining cluster stability and performance, especially in large-scale or multi-tenant environments.
According to Kubernetes documentation, the scheduler’s responsibility is strictly focused on Pod placement decisions. Once a Pod is scheduled, the scheduler’s job is complete for that Pod, making option D the accurate and fully verified answer.
=========
What helps an organization to deliver software more securely at a higher velocity?
Kubernetes
apt-get
Docker Images
CI/CD Pipeline
A CI/CD pipeline is a core practice/tooling approach that enables organizations to deliver software faster and more securely, so D is correct. CI (Continuous Integration) automates building and testing code changes frequently, reducing integration risk and catching defects early. CD (Continuous Delivery/Deployment) automates releasing validated builds into environments using consistent, repeatable steps—reducing manual errors and enabling rapid iteration.
Security improves because automation enables standardized checks on every change: static analysis, dependency scanning, container image scanning, policy validation, and signing/verification steps can be integrated into the pipeline. Instead of relying on ad-hoc human processes, security controls become repeatable gates. In Kubernetes environments, pipelines commonly build container images, run tests, publish artifacts to registries, and then deploy via manifests, Helm, or GitOps controllers—keeping deployments consistent and auditable.
Option A (Kubernetes) is a platform that helps run and manage workloads, but by itself it doesn’t guarantee secure high-velocity delivery. It provides primitives (rollouts, declarative config, RBAC), yet the delivery workflow still needs automation. Option B (apt-get) is a package manager for Debian-based systems and is not a delivery pipeline. Option C (Docker Images) are artifacts; they improve portability and repeatability, but they don’t provide the end-to-end automation of building, testing, promoting, and deploying across environments.
In cloud-native application delivery, the pipeline is the “engine” that turns code changes into safe production releases. Combined with Kubernetes’ declarative deployment model (Deployments, rolling updates, health probes), a CI/CD pipeline supports frequent releases with controlled rollouts, fast rollback, and strong auditability. That is exactly what the question is targeting. Therefore, the verified answer is D.
=========
The Container Runtime Interface (CRI) defines the protocol for the communication between:
The kubelet and the container runtime.
The container runtime and etcd.
The kube-apiserver and the kubelet.
The container runtime and the image registry.
The CRI (Container Runtime Interface) defines how the kubelet talks to the container runtime, so A is correct. The kubelet is the node agent responsible for ensuring containers are running in Pods on that node. It needs a standardized way to request operations such as: create a Pod sandbox, pull an image, start/stop containers, execute commands, attach streams, and retrieve logs. CRI provides that contract so kubelet does not need runtime-specific integrations.
This interface is a key part of Kubernetes’ modular design. Different container runtimes implement the CRI, allowing Kubernetes to run with containerd, CRI-O, and other CRI-compliant runtimes. This separation of concerns lets Kubernetes focus on orchestration, while runtimes focus on executing containers according to the OCI runtime spec, managing images, and handling low-level container lifecycle.
Why the other options are incorrect:
etcd is the control plane datastore; container runtimes do not communicate with etcd via CRI.
kube-apiserver and kubelet communicate using Kubernetes APIs, but CRI is not their protocol; CRI is specifically kubelet ↔ runtime.
container runtime and image registry communicate using registry protocols (image pull/push APIs), but that is not CRI. CRI may trigger image pulls via runtime requests, yet the actual registry communication is separate.
Operationally, this distinction matters when debugging node issues. If Pods are stuck in “ContainerCreating” due to image pull failures or runtime errors, you often investigate kubelet logs and the runtime (containerd/CRI-O) logs. Kubernetes administrators also care about CRI streaming (exec/attach/logs streaming), runtime configuration, and compatibility across Kubernetes versions.
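As a hedged example of that debugging path (the socket path assumes containerd's default location), crictl reads a small config file and then talks to the same CRI endpoint the kubelet uses:
# /etc/crictl.yaml (illustrative; socket path assumes containerd defaults)
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
# crictl pods     # list Pod sandboxes known to the runtime
# crictl ps -a    # list containers, including exited ones
# crictl images   # list images pulled on this node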
So, the verified answer is A: the kubelet and the container runtime.
=========
Manual reclamation policy of a PV resource is known as:
claimRef
Delete
Retain
Recycle
The correct answer is C: Retain. In Kubernetes persistent storage, a PersistentVolume (PV) has a persistentVolumeReclaimPolicy that determines what happens to the underlying storage asset after its PersistentVolumeClaim (PVC) is deleted. The reclaim policy options historically include Delete and Retain (and Recycle, which is deprecated/removed in many modern contexts). “Manual reclamation” refers to the administrator having to manually clean up and/or rebind the storage after the claim is released—this behavior corresponds to Retain.
With Retain, when the PVC is deleted, the PV moves to a “Released” state, but the actual storage resource (cloud disk, NFS path, etc.) is not deleted automatically. Kubernetes will not automatically make that PV available for a new claim until an administrator takes action—typically cleaning the data, removing the old claim reference, and/or creating a new PV/PVC binding flow. This is important for data safety: you don’t want to automatically delete sensitive or valuable data just because a claim was removed.
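A small sketch (the NFS server and export path are placeholders) of a statically provisioned PV with the Retain policy; once its claim is deleted, the PV shows Released and an administrator must clean it up or re-enable binding manually:
# Illustrative PersistentVolume with manual (Retain) reclamation.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: reports-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.internal
    path: /exports/reports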
By contrast, Delete means Kubernetes (via the storage provisioner/CSI driver) will delete the underlying storage asset when the claim is deleted—useful for dynamic provisioning and disposable environments. Recycle used to scrub the volume contents and make it available again, but it’s not the recommended modern approach and has been phased out in favor of dynamic provisioning and explicit workflows.
So, the policy that implies manual intervention and manual cleanup/reuse is Retain, which is option C.
=========
How do you deploy a workload to Kubernetes without additional tools?
Create a Bash script and run it on a worker node.
Create a Helm Chart and install it with helm.
Create a manifest and apply it with kubectl.
Create a Python script and run it with kubectl.
The standard way to deploy workloads to Kubernetes using only built-in tooling is to create Kubernetes manifests (YAML/JSON definitions of API objects) and apply them with kubectl, so C is correct. Kubernetes is a declarative system: you describe the desired state of resources (e.g., a Deployment, Service, ConfigMap, Ingress) in a manifest file, then submit that desired state to the API server. Controllers reconcile the actual cluster state to match what you declared.
A manifest typically includes mandatory fields like apiVersion, kind, and metadata, and then a spec describing desired behavior. For example, a Deployment manifest declares replicas and the Pod template (containers, images, ports, probes, resources). Applying the manifest with kubectl apply -f <file> submits that desired state to the API server, and the relevant controllers then create or update objects until the cluster’s actual state matches it.
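A minimal end-to-end sketch (file name, names, and image are illustrative):
# app.yaml (illustrative): a minimal Deployment manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.27
# kubectl apply -f app.yaml
# kubectl get deployments,pods -l app=hello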
Option B (Helm) is indeed a popular deployment tool, but Helm is explicitly an “additional tool” beyond kubectl and the Kubernetes API. The question asks “without additional tools,” so Helm is excluded by definition. Option A (running Bash scripts on worker nodes) bypasses Kubernetes’ desired-state control and is not how Kubernetes workload deployment is intended; it also breaks portability and operational safety. Option D is not a standard Kubernetes deployment mechanism; kubectl does not “run Python scripts” to deploy workloads (though scripts can automate kubectl, that’s still not the primary mechanism).
From a cloud native delivery standpoint, manifests support GitOps, reviewable changes, and repeatable deployments across environments. The Kubernetes-native approach is: declare resources in manifests and apply them to the cluster. Therefore, C is the verified correct answer.
=========
Which authorization-mode allows granular control over the operations that different entities can perform on different objects in a Kubernetes cluster?
Webhook Mode Authorization Control
Role Based Access Control
Node Authorization Access Control
Attribute Based Access Control
Role Based Access Control (RBAC) is the standard Kubernetes authorization mode that provides granular control over what users and service accounts can do to which resources, so B is correct. RBAC works by defining Roles (namespaced) and ClusterRoles (cluster-wide) that contain sets of rules. Each rule specifies API groups, resource types, resource names (optional), and allowed verbs such as get, list, watch, create, update, patch, and delete. You then attach these roles to identities using RoleBindings or ClusterRoleBindings.
This gives fine-grained, auditable access control. For example, you can allow a CI service account to create and patch Deployments only in a specific namespace, while restricting it from reading Secrets. You can allow developers to view Pods and logs but prevent them from changing cluster-wide networking resources. This is exactly the “granular control over operations on objects” described by the question.
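A minimal sketch of that kind of granularity (namespace, names, and subject are illustrative): a Role allowing read-only access to Pods and their logs, bound to a single ServiceAccount:
# Illustrative RBAC: read-only Pod access for one ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-pod-reader
  namespace: dev
subjects:
  - kind: ServiceAccount
    name: ci-bot
    namespace: dev
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io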
Why other options are not the best answer: “Webhook mode” is an authorization mechanism where Kubernetes calls an external service to decide authorization. While it can be granular depending on the external system, Kubernetes’ common built-in answer for granular object-level control is RBAC. “Node authorization” is a specialized authorizer for kubelets/nodes to access resources they need; it’s not the general-purpose system for all cluster entities. ABAC (Attribute-Based Access Control) is an older mechanism and is not the primary recommended authorization model; it can be expressive but is less commonly used and not the default best-practice for Kubernetes authorization today.
In Kubernetes security practice, RBAC is typically paired with authentication (certs/OIDC), admission controls, and namespaces to build a defense-in-depth security posture. RBAC policy is also central to least privilege: granting only what is necessary for a workload or user role to function. This reduces blast radius if credentials are compromised.
Therefore, the verified answer is B: Role Based Access Control.
=========
What edge and service proxy tool is designed to be integrated with cloud native applications?
CoreDNS
CNI
gRPC
Envoy
The correct answer is D: Envoy. Envoy is a high-performance edge and service proxy designed for cloud-native environments. It is commonly used as the data plane in service meshes and modern API gateways because it provides consistent traffic management, observability, and security features across microservices without requiring every application to implement those capabilities directly.
Envoy operates at Layer 7 (application-aware) and supports protocols like HTTP/1.1, HTTP/2, gRPC, and more. It can handle routing, load balancing, retries, timeouts, circuit breaking, rate limiting, TLS termination, and mutual TLS (mTLS). Envoy also emits rich telemetry (metrics, access logs, tracing) that integrates well with cloud-native observability stacks.
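As a rough, hypothetical sketch of what that looks like in practice (the port, cluster name, and backend address are placeholders; real deployments usually receive configuration dynamically from a control plane such as a service mesh), a minimal static Envoy config that routes HTTP traffic with a timeout and retries could be:

static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: service_backend
                  timeout: 5s                                  # per-request timeout
                  retry_policy: { retry_on: "5xx", num_retries: 2 }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: service_backend
    type: STRICT_DNS                                           # resolve the backend by DNS
    connect_timeout: 1s
    load_assignment:
      cluster_name: service_backend
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: backend.default.svc.cluster.local, port_value: 80 }

The key point is that retries, timeouts, and load balancing live in the proxy configuration rather than in application code.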
Why the other options are incorrect:
CoreDNS (A) provides DNS-based service discovery within Kubernetes; it is not an edge/service proxy.
CNI (B) is a specification and plugin ecosystem for container networking (Pod networking), not a proxy.
gRPC (C) is an RPC protocol/framework used by applications; it’s not a proxy tool. (Envoy can proxy gRPC traffic, but gRPC itself isn’t the proxy.)
In Kubernetes architectures, Envoy often appears in two places: (1) at the edge as part of an ingress/gateway layer, and (2) as a sidecar proxy alongside Pods in a service mesh (such as Istio) to standardize service-to-service communication controls and telemetry. This is why it is described as “designed to be integrated with cloud native applications”: it is purpose-built for dynamic service discovery, resilient routing, and operational visibility in distributed systems.
So the verified correct choice is D (Envoy).
=========
A platform engineer wants to ensure that a new microservice is automatically deployed to every cluster registered in Argo CD. Which configuration best achieves this goal?
Set up a Kubernetes CronJob that redeploys the microservice to all registered clusters on a schedule.
Manually configure every registered cluster with the deployment YAML for installing the microservice.
Create an Argo CD ApplicationSet that uses a Git repository containing the microservice manifests.
Use a Helm chart to package the microservice and manage it with a single Application defined in Argo CD.
Argo CD is a declarative GitOps continuous delivery tool designed to manage Kubernetes applications across one or many clusters. When the requirement is to automatically deploy a microservice to every cluster registered in Argo CD, the most appropriate and scalable solution is to use an ApplicationSet.
The ApplicationSet controller extends Argo CD by enabling the dynamic generation of multiple Argo CD Applications from a single template. One of its most powerful features is the cluster generator, which automatically discovers all clusters registered with Argo CD and creates an Application for each of them. By combining this generator with a Git repository containing the microservice manifests, the platform engineer ensures that the microservice is consistently deployed to all existing clusters—and any new clusters added in the future—without manual intervention.
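A minimal sketch of such an ApplicationSet (the repository URL, path, and names are placeholders for illustration) could look like this:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-microservice
  namespace: argocd
spec:
  generators:
  - clusters: {}                           # one Application per cluster registered in Argo CD
  template:
    metadata:
      name: 'my-microservice-{{name}}'     # {{name}} is the registered cluster's name
    spec:
      project: default
      source:
        repoURL: https://github.com/example/my-microservice-config.git
        targetRevision: main
        path: deploy
      destination:
        server: '{{server}}'               # {{server}} is that cluster's API endpoint
        namespace: my-microservice
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

Because the cluster generator re-evaluates as clusters are added or removed, registering a new cluster automatically produces a new Application for it with no further configuration.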
This approach aligns perfectly with GitOps principles. The desired state of the microservice is defined once in Git, and Argo CD continuously reconciles that state across all target clusters. Any updates to the microservice manifests are automatically rolled out everywhere in a controlled and auditable manner. This provides strong guarantees around consistency, scalability, and operational simplicity.
Option A is incorrect because a CronJob introduces imperative redeployment logic and does not integrate with Argo CD’s reconciliation model. Option B is not scalable or maintainable, as it requires manual configuration for each cluster and increases the risk of configuration drift. Option D, while useful for packaging applications, still results in a single Application object and does not natively handle multi-cluster fan-out by itself.
Therefore, the correct and verified answer is Option C: creating an Argo CD ApplicationSet backed by a Git repository, which is the recommended and documented solution for multi-cluster application delivery in Argo CD.
=========
Which of the following actions is supported when working with Pods in Kubernetes?
Managing static Pods directly through the API server.
Guaranteeing Pods always stay on the same node once scheduled.
Renaming containers in a Pod using kubectl patch.
Creating Pods through workload resources like Deployments.
In Kubernetes, Pods are the smallest deployable units and represent one or more containers that share networking and storage. While Pods can be created directly, Kubernetes strongly encourages users to manage Pods indirectly through higher-level workload resources. Among the options provided, creating Pods through workload resources like Deployments is a fully supported and recommended practice.
Workload resources such as Deployments, ReplicaSets, StatefulSets, and Jobs are designed to manage Pods declaratively. A Deployment, for example, defines a desired state—such as the number of replicas and the Pod template—and Kubernetes continuously works to maintain that state. If a Pod crashes, is deleted, or a node fails, the Deployment automatically creates a replacement Pod. This model provides self-healing, scalability, rolling updates, and rollback capabilities, which are not available when managing standalone Pods.
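For example, a minimal Deployment (the names and image tag are placeholders) declares the replica count and the Pod template, and Kubernetes keeps that many Pods running:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # desired number of Pods
  selector:
    matchLabels:
      app: web
  template:                      # Pod template used to (re)create Pods
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
        ports:
        - containerPort: 80

If one of these Pods is deleted or its node fails, the Deployment’s ReplicaSet creates a replacement from the same template.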
Option A is incorrect because static Pods are not managed through the API server. Static Pods are created and managed directly by the kubelet on a specific node using manifest files placed on disk. Although the API server becomes aware of static Pods, they cannot be created, modified, or deleted through it.
Option B is incorrect because Kubernetes does not guarantee that Pods will always remain on the same node. If a node becomes unhealthy or a Pod is evicted, the scheduler may place a replacement Pod on a different node. Only certain workload patterns, such as StatefulSets with persistent storage, attempt to preserve identity—not node placement.
Option C is also incorrect because container names within a Pod are immutable. Kubernetes does not allow renaming containers using kubectl patch or any other mechanism after the Pod has been created.
Therefore, the correct and verified answer is option D: creating Pods through workload resources like Deployments, which aligns with Kubernetes design principles and official documentation.
What is the minimum number of etcd members that are required for a highly available Kubernetes cluster?
Two etcd members.
Five etcd members.
Six etcd members.
Three etcd members.
D (three etcd members) is correct. etcd is a distributed key-value store that uses the Raft consensus algorithm. High availability in consensus systems depends on maintaining a quorum (a strict majority, that is, floor(n/2) + 1 members) to continue serving writes reliably. With 3 members, quorum is 2, so the cluster can tolerate 1 failure and still make progress.
Two members is a common trap: with 2, a single failure leaves 1/2, which is not a majority, so the cluster cannot safely make progress. That means 2-member etcd is not HA; it is fragile and can be taken down by one node loss, network partition, or maintenance event. Five members can tolerate 2 failures and is a valid HA configuration, but it is not the minimum. Six is even-sized and generally discouraged for consensus because it doesn’t improve failure tolerance compared to five (quorum still requires 4), while increasing coordination overhead.
In Kubernetes, etcd reliability directly affects the API server and the entire control plane because etcd stores cluster state: object specs, status, controller state, and more. If etcd loses quorum, the API server will be unable to persist or reliably read/write state, leading to cluster management outages. That’s why the minimum HA baseline is three etcd members, often across distinct failure domains (nodes/AZs), with strong disk performance and consistent low-latency networking.
So, the smallest etcd topology that provides true fault tolerance is 3 members, which corresponds to option D.
=========
Kubernetes ___ protect you against voluntary interruptions (such as deleting Pods, draining nodes) to run applications in a highly available manner.
Pod Topology Spread Constraints
Pod Disruption Budgets
Taints and Tolerations
Resource Limits and Requests
The correct answer is B: Pod Disruption Budgets (PDBs). A PDB is a policy object that limits how many Pods of an application can be voluntarily disrupted at the same time. “Voluntary disruptions” include actions such as draining a node for maintenance (kubectl drain), cluster upgrades, or an administrator deleting Pods. The core purpose is to preserve availability by ensuring that a minimum number (or percentage) of replicas remain running and ready while those planned disruptions occur.
A PDB is typically defined with either minAvailable (e.g., “at least 3 Pods must remain available”) or maxUnavailable (e.g., “no more than 1 Pod can be unavailable”). Kubernetes uses this budget when performing eviction operations. If evicting a Pod would violate the PDB, the eviction is blocked (or delayed), which forces maintenance workflows to proceed more safely—either by draining more slowly, scaling up first, or scheduling maintenance in stages.
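A minimal sketch (the label and numbers are illustrative) looks like this:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2                # at least 2 matching Pods must stay available
  selector:
    matchLabels:
      app: web

With this budget in place, a node drain can evict the matching Pods only as long as at least two remain available; otherwise the eviction is refused until more Pods become ready.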
Why the other options are not correct: topology spread constraints (A) influence scheduling distribution across failure domains but don’t directly protect against voluntary disruptions. Taints and tolerations (C) control where Pods can schedule, not how many can be disrupted. Resource requests/limits (D) control CPU/memory allocation and do not guard availability during drains or deletions.
PDBs also work best when paired with Deployments/StatefulSets that maintain replicas and with readiness probes that accurately represent whether a Pod can serve traffic. PDBs do not prevent involuntary disruptions (node crashes), but they materially reduce risk during planned operations—exactly what the question is targeting.
=========
What does CNCF stand for?
Cloud Native Community Foundation
Cloud Native Computing Foundation
Cloud Neutral Computing Foundation
Cloud Neutral Community Foundation
CNCF stands for the Cloud Native Computing Foundation, making B correct. CNCF is the foundation that hosts and sustains many cloud-native open source projects, including Kubernetes, and provides governance, neutral stewardship, and community infrastructure to help projects grow and remain vendor-neutral.
CNCF’s scope includes not only Kubernetes but also a broad ecosystem of projects across observability, networking, service meshes, runtime security, CI/CD, and application delivery. The foundation defines processes for project incubation and graduation, promotes best practices, organizes community events, and supports interoperability and adoption through reference architectures and education.
In the Kubernetes context, CNCF’s role matters because Kubernetes is a massive multi-vendor project. Neutral governance reduces the risk that any single company can unilaterally control direction. This fosters broad contribution and adoption across cloud providers and enterprises. CNCF also supports the broader “cloud native” definition, often associated with containerization, microservices, declarative APIs, automation, and resilience principles.
The incorrect options are close-sounding but not accurate expansions. “Cloud Native Community Foundation” and the “Cloud Neutral …” variants are not the recognized meaning. The correct official name is Cloud Native Computing Foundation.
So, the verified answer is B, and understanding CNCF helps connect Kubernetes to its broader ecosystem of standardized, interoperable cloud-native tooling.
=========
Which of the following is a good habit for cloud native cost efficiency?
Follow an automated approach to cost optimization, including visibility and forecasting.
Follow manual processes for cost analysis, including visibility and forecasting.
Use only one cloud provider to simplify the cost analysis.
Keep your legacy workloads unchanged, to avoid cloud costs.
The correct answer is A. In cloud-native environments, costs are highly dynamic: autoscaling changes compute footprint, ephemeral environments come and go, and usage-based billing applies to storage, network egress, load balancers, and observability tooling. Because of this variability, automation is the most sustainable way to achieve cost efficiency. Automated visibility (dashboards, chargeback/showback), anomaly detection, and forecasting help teams understand where spend is coming from and how it changes over time. Automated optimization actions can include right-sizing requests/limits, enforcing TTLs on preview environments, scaling down idle clusters, and cleaning unused resources.
Manual processes (B) don’t scale as complexity grows. By the time someone reviews a spreadsheet or dashboard weekly, cost spikes may have already occurred. Automation enables fast feedback loops and guardrails, which is essential for preventing runaway spend caused by misconfiguration (e.g., excessive log ingestion, unbounded autoscaling, oversized node pools).
Option C is not a cost-efficiency “habit.” Single-provider strategies may simplify some billing views, but they can also reduce leverage and may not be feasible for resilience/compliance; it’s a business choice, not a best practice for cloud-native cost management. Option D is counterproductive: keeping legacy workloads unchanged often wastes money because cloud efficiency typically requires adapting workloads—right-sizing, adopting autoscaling, and using managed services appropriately.
In Kubernetes specifically, cost efficiency is tightly linked to resource management: accurate CPU/memory requests, limits where appropriate, cluster autoscaler tuning, and avoiding overprovisioning. Observability also matters because you can’t optimize what you can’t measure. Therefore, the best habit is an automated cost optimization approach with strong visibility and forecasting—A.
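As a small illustration (the values are placeholders that should be tuned from observed usage), requests and limits are declared per container:

apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
  - name: api
    image: example/api:1.2.3
    resources:
      requests:                  # what the scheduler reserves; drives node sizing and cost
        cpu: "250m"
        memory: "256Mi"
      limits:                    # upper bound on usage, set where appropriate
        cpu: "500m"
        memory: "512Mi"

Right-sizing these values, ideally informed by metrics, is one of the most direct levers for reducing overprovisioned nodes.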
=========
Imagine you're releasing open-source software for the first time. Which of the following is a valid semantic version?
1.0
2021-10-11
0.1.0-rc
v1beta1
Semantic Versioning (SemVer) follows the pattern MAJOR.MINOR.PATCH with optional pre-release identifiers (e.g., -rc, -alpha.1) and build metadata. Among the options, 0.1.0-rc matches SemVer rules, so C is correct.
0.1.0-rc breaks down as: MAJOR=0, MINOR=1, PATCH=0, and -rc indicates a pre-release (“release candidate”). Pre-release versions are valid SemVer and are explicitly allowed to denote versions that are not yet considered stable. For a first-time open-source release, 0.x.y is common because it signals the API may still change in backward-incompatible ways before reaching 1.0.0.
Why the other options are not correct SemVer as written:
1.0 is missing the PATCH segment; SemVer requires three numeric components (e.g., 1.0.0).
2021-10-11 is a date string, not MAJOR.MINOR.PATCH.
v1beta1 resembles Kubernetes API versioning conventions, not SemVer.
In cloud-native delivery and Kubernetes ecosystems, SemVer matters because it communicates compatibility. Incrementing MAJOR indicates breaking changes, MINOR indicates backward-compatible feature additions, and PATCH indicates backward-compatible bug fixes. Pre-release tags allow releasing candidates for testing without claiming full stability. This is especially useful for open-source consumers and automation systems that need consistent version comparison and upgrade planning.
So, the only valid semantic version in the choices is 0.1.0-rc, option C.
=========
What are the advantages of adopting a GitOps approach for your deployments?
Reduce failed deployments, operational costs, and fragile release processes.
Reduce failed deployments, configuration drift, and fragile release processes.
Reduce failed deployments, operational costs, and learn git.
Reduce failed deployments, configuration drift and improve your reputation.
The correct answer is B: GitOps helps reduce failed deployments, reduce configuration drift, and reduce fragile release processes. GitOps is an operating model where Git is the source of truth for declarative configuration (Kubernetes manifests, Helm releases, Kustomize overlays). A GitOps controller (like Flux or Argo CD) continuously reconciles the cluster’s actual state to match what’s declared in Git. This creates a stable, repeatable deployment pipeline and minimizes “snowflake” environments.
Reducing failed deployments: changes go through pull requests, code review, automated checks, and controlled merges. Deployments become predictable because the controller applies known-good, versioned configuration rather than ad-hoc manual commands. Rollbacks are also simpler—reverting a Git commit returns the cluster to the prior desired state.
Reducing configuration drift: without GitOps, clusters often drift because humans apply hotfixes directly in production or because different environments diverge over time. With GitOps, the controller detects drift and either alerts or automatically corrects it, restoring alignment with Git.
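As an illustrative sketch with Argo CD (the repository URL and names are placeholders), this drift-correction behavior is expressed by enabling automated sync with self-healing on an Application:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-service-config.git
    targetRevision: main
    path: deploy/production
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true                # remove resources that were deleted from Git
      selfHeal: true             # revert manual changes made directly in the cluster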
Reducing fragile release processes: releases become standardized and auditable. Git history provides an immutable record of who changed what and when. Promotion between environments becomes systematic (merge/branch/tag), and the same declarative artifacts are used consistently.
The other options include items that are either not the primary GitOps promise (like “learn git”) or subjective (“improve your reputation”). Operational cost reduction can happen indirectly through fewer incidents and more automation, but the most canonical and direct GitOps advantages in Kubernetes delivery are reliability and drift control—captured precisely in B.
=========
What function does kube-proxy provide to a cluster?
Implementing the Ingress resource type for application traffic.
Forwarding data to the correct endpoints for Services.
Managing data egress from the cluster nodes to the network.
Managing access to the Kubernetes API.
kube-proxy is a node-level networking component that helps implement the Kubernetes Service abstraction. Services provide a stable virtual IP and DNS name that route traffic to a set of Pods (endpoints). kube-proxy watches the API for Service and EndpointSlice/Endpoints changes and then programs the node’s networking rules so that traffic sent to a Service is forwarded (load-balanced) to one of the correct backend Pod IPs. This is why B is correct.
Conceptually, kube-proxy turns the declarative Service configuration into concrete dataplane behavior. Depending on the mode, it may use iptables rules, IPVS, or integrate with eBPF-capable networking stacks (sometimes kube-proxy is replaced or bypassed by CNI implementations, but the classic kube-proxy role remains the canonical answer). In iptables mode, kube-proxy creates NAT rules that rewrite traffic from the Service virtual IP to one of the Pod endpoints. In IPVS mode, it programs kernel load-balancing tables for more scalable service routing. In all cases, the job is to connect “Service IP/port” to “Pod IP/port endpoints.”
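For example, given a Service like the following sketch (names and ports are placeholders), kube-proxy programs each node so that traffic sent to the Service’s port 80 is forwarded to port 8080 on one of the Pods matching the selector:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                     # Pods with this label become the Service's endpoints
  ports:
  - port: 80                     # Service (cluster IP) port
    targetPort: 8080             # container port on the backend Pods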
Option A is incorrect because Ingress is a separate API resource and requires an Ingress Controller (like NGINX Ingress, HAProxy, Traefik, etc.) to implement HTTP routing, TLS termination, and host/path rules. kube-proxy is not an Ingress controller. Option C is incorrect because general node egress management is not kube-proxy’s responsibility; egress behavior typically depends on the CNI plugin, NAT configuration, and network policies. Option D is incorrect because API access control is handled by the API server’s authentication/authorization layers (RBAC, webhooks, etc.), not kube-proxy.
So kube-proxy’s essential function is: keep node networking rules in sync so that Service traffic reaches the right Pods. It is one of the key components that makes Services “just work” across nodes without clients needing to know individual Pod IPs.
=========
Which option represents best practices when building container images?
Use multi-stage builds, use the latest tag for image version, and only install necessary packages.
Use multi-stage builds, pin the base image version to a specific digest, and install extra packages just in case.
Use multi-stage builds, pin the base image version to a specific digest, and only install necessary packages.
Avoid multi-stage builds, use the latest tag for image version, and install extra packages just in case.
Building secure, efficient, and reproducible container images is a core principle of cloud native application delivery. Kubernetes documentation and container security best practices emphasize minimizing image size, reducing attack surface, and ensuring deterministic builds. Option C fully aligns with these principles, making it the correct answer.
Multi-stage builds allow developers to separate the build environment from the runtime environment. Dependencies such as compilers, build tools, and temporary artifacts are used only in intermediate stages and excluded from the final image. This significantly reduces image size and limits the presence of unnecessary tools that could be exploited at runtime.
Pinning the base image to a specific digest ensures immutability and reproducibility. Tags such as latest can change over time, potentially introducing breaking changes or vulnerabilities without notice. By using a digest, teams guarantee that the same base image is used every time the image is built, which is essential for predictable behavior, security auditing, and reliable rollbacks.
Installing only necessary packages further reduces the attack surface. Every additional package increases the risk of vulnerabilities and expands the maintenance burden. Minimal images are faster to pull, quicker to start, and easier to scan for vulnerabilities. Kubernetes security guidance consistently recommends keeping container images as small and purpose-built as possible.
Option A is incorrect because using the latest tag undermines build determinism and traceability. Option B is incorrect because installing extra packages “just in case” contradicts the principle of minimalism and increases security risk. Option D is incorrect because avoiding multi-stage builds and installing unnecessary packages leads to larger, less secure images and is explicitly discouraged in cloud native best practices.
According to Kubernetes and CNCF security guidance, combining multi-stage builds, immutable image references, and minimal dependencies results in more secure, reliable, and maintainable container images. Therefore, option C represents the best and fully verified approach when building container images.