
CCDAK Confluent Certified Developer for Apache Kafka Certification Examination Questions and Answers

Question 4

Match the testing tool with the type of test it is typically used to perform.

Options:

Question 5

You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:

Topic name: DLQ-Topic

Headers containing error context must be added to the messages.

Which three configuration parameters are necessary? (Select three.)

Options:

A.

errors.tolerance=all

B.

errors.deadletterqueue.topic.name=DLQ-Topic

C.

errors.deadletterqueue.context.headers.enable=true

D.

errors.tolerance=none

E.

errors.log.enable=true

F.

errors.log.include.messages=true
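
For context, here is a minimal sketch of registering a sink connector with such dead letter queue settings, assuming a hypothetical Connect worker at localhost:8083; the connector name, class, source topic, and connection URL are illustrative placeholders, not part of the question:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterDlqSinkConnector {
    public static void main(String[] args) throws Exception {
        // Illustrative connector definition. errors.tolerance=all keeps the
        // task running on bad records, the topic.name setting routes failures
        // to DLQ-Topic, and context.headers.enable attaches error context
        // headers to the dead-lettered messages.
        String body = """
                {
                  "name": "example-sink",
                  "config": {
                    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
                    "topics": "input-topic",
                    "connection.url": "jdbc:mysql://mysql:3306/db",
                    "errors.tolerance": "all",
                    "errors.deadletterqueue.topic.name": "DLQ-Topic",
                    "errors.deadletterqueue.context.headers.enable": "true"
                  }
                }""";
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}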

Question 6

Which configuration determines how many bytes of data are collected before sending messages to the Kafka broker?

Options:

A.

batch.size

B.

max.block.size

C.

buffer.memory

D.

send.buffer.bytes
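
As background, the producer accumulates records into per-partition batches before sending them to the broker. A minimal sketch of the settings involved, with a placeholder broker address and topic:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // batch.size caps how many bytes a per-partition batch may collect
        // before it is sent; linger.ms adds a small wait to let batches fill.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));
        }
    }
}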

Question 7

You deploy a Kafka Streams application with five application instances.

Kafka Streams stores application metadata using internal topics.

Auto-topic creation is disabled in the Kafka cluster.

Which statement about this scenario is true?

Options:

A.

The application will continue to work and internal topics will be created, even if auto-topic creation is disabled.

B.

The application will terminate with a non-retriable exception.

C.

The application will work, but application metadata will not be stored.

D.

The application will be on hold until internal topics are created manually.

Question 8

An S3 source connector named s3-connector stopped running.

You use the Kafka Connect REST API to query the connector and task status.

One of the three tasks has failed.

You need to restart the connector and all currently running tasks.

Which REST request will restart the connector instance and all its tasks?

Options:

A.

POST /connectors/s3-connector/restart?includeTasks=true

B.

POST /connectors/s3-connector/restart?includeTasks=true&onlyFailed=true

C.

POST /connectors/s3-connector/restart

D.

POST /connectors/s3-connector/tasks/0/restart
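
For reference, a sketch of issuing one of these restart calls against a hypothetical Connect worker at localhost:8083 (the includeTasks and onlyFailed query parameters were added by KIP-745 in Apache Kafka 3.0):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestartConnector {
    public static void main(String[] args) throws Exception {
        // includeTasks=true asks the worker to restart the connector instance
        // and its tasks, not just the Connector object itself.
        URI uri = URI.create(
                "http://localhost:8083/connectors/s3-connector/restart?includeTasks=true");
        HttpRequest request = HttpRequest.newBuilder(uri)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}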

Question 9

You create a topic named stream-logs with:

A replication factor of 3

Four partitions

Messages that are plain logs without a key

How will messages be distributed across partitions?

Options:

A.

The first message will always be written to partition 0.

B.

Messages will be distributed round-robin among all the topic partitions.

C.

All messages will be written to the same log segment.

D.

Messages will be distributed among all the topic partitions with strict ordering.
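
As background, when a record has no key the partition is chosen by the producer's default partitioner: older clients spread keyless records round-robin across partitions, while clients since Apache Kafka 2.4 use a sticky partitioner that fills a batch for one partition before moving to the next. A minimal sketch of such a keyless send, with a placeholder broker address:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeylessLogProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // No key: the partitioner, not a key hash, decides the partition.
            producer.send(new ProducerRecord<>("stream-logs", "plain log line"));
        }
    }
}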

Question 10

You are building real-time streaming applications using Kafka Streams.

Your application has a custom transformation.

You need to define custom processors in Kafka Streams.

Which tool should you use?

Options:

A.

TopologyTestDriver

B.

Processor API

C.

Kafka Streams Domain Specific Language (DSL)

D.

Kafka Streams Custom Transformation Language
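
As an illustration, a bare-bones custom processor wired into a topology with the low-level API; the topic names and the upper-casing transformation are placeholders:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

public class CustomProcessorApp {
    // A custom processor: upper-cases each value, then forwards it downstream.
    static class UpperCaseProcessor implements Processor<String, String, String, String> {
        private ProcessorContext<String, String> context;

        @Override
        public void init(ProcessorContext<String, String> context) {
            this.context = context;
        }

        @Override
        public void process(Record<String, String> record) {
            context.forward(record.withValue(record.value().toUpperCase()));
        }
    }

    public static void main(String[] args) {
        Topology topology = new Topology();
        topology.addSource("source", "input-topic")
                .addProcessor("upper-case", UpperCaseProcessor::new, "source")
                .addSink("sink", "output-topic", "upper-case");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "custom-processor-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
        new KafkaStreams(topology, props).start();
    }
}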

Question 11

You are experiencing low throughput from a Java producer.

Metrics show low I/O thread ratio and low I/O thread wait ratio.

What is the most likely cause of the slow producer performance?

Options:

A.

Compression is enabled.

B.

The producer is sending large batches of messages.

C.

There is a bad data link layer (layer 2) connection from the producer to the cluster.

D.

The producer code has an expensive callback function.

Question 12

The producer code below features a Callback class with a method called onCompletion().

In the onCompletion() method, when the request is completed successfully, what does the value metadata.offset() represent?

Options:

A.

The sequential ID of the message committed into a partition

B.

Its position in the producer’s batch of messages

C.

The number of bytes that overflowed beyond a producer batch of messages

D.

The ID of the partition to which the message was committed
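
The question's original code listing is not reproduced here; for context, a stand-in sketch of a producer send with a callback of the same shape, printing the metadata returned on success (broker address and topic are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CallbackProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception == null) {
                            // metadata.offset() is the offset the record was
                            // assigned inside metadata.partition().
                            System.out.printf("partition=%d offset=%d%n",
                                    metadata.partition(), metadata.offset());
                        } else {
                            exception.printStackTrace();
                        }
                    });
        }
    }
}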

Question 13

You have a topic with four partitions. The application reading this topic is using a consumer group with two consumers.

Throughput is smoothly distributed among partitions, but application lag is increasing.

Application monitoring shows that message processing is consuming all available CPU resources.

Which action should you take to resolve this issue?

Options:

A.

Add more partitions to the topic to increase the level of parallelism of the processing.

B.

Increase the max.poll.records property of consumers.

C.

Add more consumers to increase the level of parallelism of the processing.

D.

Decrease the max.poll.records property of consumers.

Question 14

You want to enrich the content of a topic by joining it with key records from a second topic.

The two topics have a different number of partitions.

Which two solutions can you use? (Select two.)

Options:

A.

Use a GlobalKTable for one of the topics where data does not change frequently and use a KStream–GlobalKTable join.

B.

Repartition one topic to a new topic with the same number of partitions as the other topic (co-partitioning constraint) and use a KStream–KTable join.

C.

Create as many Kafka Streams application instances as the maximum number of partitions of the two topics and use a KStream–KTable join.

D.

Use a KStream–KTable join; Kafka Streams will automatically repartition the topics to satisfy the co-partitioning constraint.
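
For illustration, a sketch of a KStream–GlobalKTable join, which has no co-partitioning requirement because each application instance materializes a full copy of the global table; topic names and the join logic are placeholders:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;

public class EnrichmentTopology {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");
        // Slowly changing reference data fits a GlobalKTable well.
        GlobalKTable<String, String> customers = builder.globalTable("customers");

        // The key selector maps each stream record to the table key to look
        // up; the value joiner combines both sides into the enriched value.
        KStream<String, String> enriched = orders.join(
                customers,
                (orderKey, orderValue) -> orderKey,
                (orderValue, customerValue) -> orderValue + " | " + customerValue);
        enriched.to("orders-enriched");
        System.out.println(builder.build().describe());
    }
}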

Question 15

You are working on a Kafka cluster with three nodes. You create a topic named orders with:

replication.factor = 3

min.insync.replicas = 2

acks = all

What exception will be generated if two brokers are down due to network delay?

Options:

A.

NotEnoughReplicasException

B.

NetworkException

C.

NotCoordinatorException

D.

NotLeaderForPartitionException
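
As background, with acks=all a write must be acknowledged by at least min.insync.replicas in-sync replicas, and the producer's send future fails once the ISR shrinks below that threshold. A sketch of surfacing that failure (broker address is a placeholder):

import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrdersProducer {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "payload")).get();
        } catch (ExecutionException e) {
            // With two of three replicas down, fewer than min.insync.replicas=2
            // replicas are in sync, and the failure cause reported here is a
            // NotEnoughReplicasException.
            System.err.println("send failed: " + e.getCause());
        }
    }
}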

Question 16

Which two statements about Kafka Connect Single Message Transforms (SMTs) are correct?

(Select two.)

Options:

A.

Multiple SMTs can be chained together and act on source or sink messages.

B.

SMTs are often used to join multiple records from a source data system into a single Kafka record.

C.

Masking data is a good example of an SMT.

D.

SMT functionality is included within Kafka Connect converters.
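
As an example of the masking use case, a connector configuration fragment declaring a single MaskField transform (which ships with Apache Kafka Connect), expressed here as a Java map; the transform alias and field name are illustrative:

import java.util.Map;

public class SmtExample {
    public static void main(String[] args) {
        // Connector config fragment: SMTs are declared in the transforms list
        // and applied in order to each record as it passes through Connect.
        Map<String, String> config = Map.of(
                "transforms", "mask",
                "transforms.mask.type", "org.apache.kafka.connect.transforms.MaskField$Value",
                "transforms.mask.fields", "ssn");
        config.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}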

Question 17

You have a topic t1 with six partitions. You use Kafka Connect to send data from topic t1 in your Kafka cluster to Amazon S3. Kafka Connect is configured for two tasks.

How many partitions will each task process?

Options:

A.

2

B.

3

C.

6

D.

12

Question 18

You are writing to a topic with acks=all.

The producer receives acknowledgments but you notice duplicate messages.

You find that timeouts due to network delay are causing resends.

Which configuration should you use to prevent duplicates?

Options:

A.

enable.auto.commit=true

B.

retries=2147483647
max.in.flight.requests.per.connection=5
enable.idempotence=true

C.

retries=0
max.in.flight.requests.per.connection=5
enable.idempotence=true

D.

retries=2147483647
max.in.flight.requests.per.connection=1
enable.idempotence=false
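
For context, the idempotent producer lets the broker deduplicate retried sends using producer IDs and sequence numbers. A sketch of a configuration along those lines (broker address is a placeholder):

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class IdempotentProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Idempotence requires acks=all, retries > 0, and at most 5 in-flight
        // requests per connection; resends then no longer create duplicates.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 5);
        return props;
    }

    public static void main(String[] args) {
        build().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}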

Question 19

Which is true about topic compaction?

Options:

A.

When a client produces a new event with an existing key, the old value is overwritten with the new value in the compacted log segment.

B.

When a client produces a new event with an existing key, the broker immediately deletes the offset of the existing event.

C.

Topic compaction does not remove old events; instead, when clients consume events from a compacted topic, they store events in a hashmap that maintains the latest value.

D.

Compaction will keep exactly one message per key after compaction of inactive log segments.
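
As background, compaction is enabled per topic through cleanup.policy, and it operates on inactive log segments, so more than one value per key can remain readable at any given moment. A sketch of creating a compacted topic with the admin client; topic name and sizing are placeholders:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // cleanup.policy=compact keeps the latest value per key once the
            // cleaner has processed the closed segments.
            NewTopic topic = new NewTopic("user-profiles", 4, (short) 3)
                    .configs(Map.of("cleanup.policy", "compact"));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}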

Question 20

Which configuration is valid for deploying a JDBC Source Connector to read all rows from the orders table and write them to the dbl-orders topic?

Options:

A.

{"name": "orders-connect","connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl","topic.whitelist": "orders","auto.create": "true"}

B.

{"name": "dbl-orders","connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&password=pas","topic.prefix": "dbl-","table.blacklist": "ord*"}

C.

{"name": "jdbc-source","connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&useAutoAuth=true","topic.prefix": "dbl-","table.whitelist": "orders"}

D.

{"name": "jdbc-source","connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&password=pas","topic.prefix": "dbl-","table.whitelist": "orders"}

Question 21

You want to connect with username and password to a secured Kafka cluster that has SSL encryption.

Which properties must your client include?

Options:

A.

security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

B.

security.protocol=SSL
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

C.

security.protocol=SASL_PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='myUser' password='myPassword';

D.

security.protocol=PLAINTEXT
sasl.jaas.config=org.apache.kafka.common.security.ssl.TlsLoginModule required username='myUser' password='myPassword';

Question 22

Match each configuration parameter with the correct option.

To answer, choose a match for each option from the drop-down. Partial credit is given for each correct answer.

Options:

Question 23

You have a topic with four partitions. The application reads from it using two consumers in a single consumer group.

Processing is CPU-bound, and lag is increasing.

What should you do?

Options:

A.

Add more consumers to increase the level of parallelism of the processing.

B.

Add more partitions to the topic to increase the level of parallelism of the processing.

C.

Increase the max.poll.records property of consumers.

D.

Decrease the max.poll.records property of consumers.

Question 24

You need to explain the best reason to implement the consumer callback interface ConsumerRebalanceListener, whose methods are invoked around a consumer group rebalance.

Which statement is correct?

Options:

A.

Partitions assigned to a consumer may change.

B.

Previous log files are deleted.

C.

Offsets are compacted.

D.

Partition leaders may change.
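
For illustration, a listener that commits offsets before partitions are revoked, useful because the partitions assigned to a consumer may change during a rebalance; broker address, group, and topic are placeholders:

import java.time.Duration;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RebalanceAwareConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(List.of("example-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Commit progress before ownership of these partitions moves.
                consumer.commitSync();
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // New assignment; seek to externally stored offsets here if needed.
            }
        });
        while (true) {
            consumer.poll(Duration.ofMillis(500)).forEach(r ->
                    System.out.println(r.key() + "=" + r.value()));
            consumer.commitSync();
        }
    }
}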

Question 25

Which tool can you use to modify the replication factor of an existing topic?

Options:

A.

kafka-reassign-partitions.sh

B.

kafka-recreate-topic.sh

C.

kafka-topics.sh

D.

kafka-reassign-topics.sh

Question 26

A consumer application needs to use an at-most-once delivery semantic.

What is the best consumer configuration and code skeleton to avoid duplicate messages being read?

Options:

A.

auto.offset.reset=latest and enable.auto.commit=true

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    for (var record : records) {
        // Any processing
    }
    consumer.commitAsync();
}

B.

auto.offset.reset=earliest and enable.auto.commit=false

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    consumer.commitAsync();
    for (var record : records) {
        // Any processing
    }
}

C.

auto.offset.reset=earliest and enable.auto.commit=false

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    for (var record : records) {
        // Any processing
    }
    consumer.commitAsync();
}

D.

auto.offset.reset=earliest and enable.auto.commit=true

while (true) {
    final var records = consumer.poll(POLL_TIMEOUT);
    consumer.commitAsync();
    for (var record : records) {
        // Any processing
    }
}

Question 27

You have a Kafka Connect cluster with multiple connectors.

One connector is not working as expected.

How can you find logs related to that specific connector?

Options:

A.

Modify the log4j.properties file to enable connector context.

B.

Modify the log4j.properties file to add a dedicated log appender for the connector.

C.

Change the log level to DEBUG to have connector context information in logs.

D.

Make no change; there is no way to find the logs other than by stopping all the other connectors.

Exam Code: CCDAK
Exam Name: Confluent Certified Developer for Apache Kafka Certification Examination
Last Update: Feb 19, 2026
Questions: 90