Confluent Cloud ETL Target

Target Configuration

Configure the target parameters that are explained below.

Connection Name

Connections are the service identifiers. Select a connection name from the list if you have already created and saved connection details for Confluent Cloud, or create one as explained in the topic Confluent Cloud Connection →
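As a rough illustration, a Confluent Cloud connection typically resolves to a bootstrap server endpoint plus a cluster API key and secret. The sketch below shows how such details map onto a producer configuration with the confluent-kafka Python client; the endpoint and credentials are placeholders, not values from this product.

```python
# Minimal sketch of the connection details a Confluent Cloud target relies on.
# The bootstrap server, API key, and secret below are placeholders.
from confluent_kafka import Producer

producer_conf = {
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",  # placeholder endpoint
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<CLUSTER_API_KEY>",     # placeholder credential
    "sasl.password": "<CLUSTER_API_SECRET>",  # placeholder credential
}

producer = Producer(producer_conf)
```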


Topic Name

A topic name is a category or feed name to which messages will be published.

The specified topic should be set up with a value schema by providing a sample of the required schema format.

To know more about the steps required to create a topic value schema, click here.
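The exact steps are covered in the linked topic. As a hedged sketch, registering an Avro value schema for a topic with the Confluent Schema Registry Python client might look like the following; the registry URL, credentials, subject name, and schema definition are illustrative placeholders.

```python
# Illustrative only: registering an Avro value schema for a topic.
# Registry URL, credentials, and the schema itself are placeholders.
from confluent_kafka.schema_registry import SchemaRegistryClient, Schema

registry = SchemaRegistryClient({
    "url": "https://psrc-xxxxx.us-east-2.aws.confluent.cloud",  # placeholder
    "basic.auth.user.info": "<SR_API_KEY>:<SR_API_SECRET>",     # placeholder
})

value_schema = Schema(
    schema_str="""{
        "type": "record",
        "name": "Order",
        "fields": [
            {"name": "order_id", "type": "string"},
            {"name": "amount",   "type": "double"}
        ]
    }""",
    schema_type="AVRO",
)

# By convention, value schemas are registered under the "<topic>-value" subject.
schema_id = registry.register_schema("orders-value", value_schema)
```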


Output Format

The data type format of the output messages.


Output Fields

Fields in the message that need to be a part of the output data.


Message Key

In Confluent Cloud, message keys determine how data is organized and identified within Kafka topics. Each message key option is explained below (see the sketch after this list):

  • Field Value: The message key is derived from a specific field in the message payload, allowing you to use a particular data field as the identifier.

  • Field Value Hash: The message key is generated by hashing the value of a selected field in the message, providing a consistent identifier based on that field’s content.

  • UUID: Universally Unique Identifier (UUID) is used as the message key, ensuring each message has a unique identifier across the system.

  • Static: A static message key is a fixed value that remains the same for all messages, often used when you don’t need message-specific keys.

  • None: No specific message key is set, meaning messages don’t have a distinct key and are treated as independent events within the topic.
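The sketch below is not this tool's internal implementation; it only illustrates how each key option could translate into producer calls with the confluent-kafka Python client. The topic name, payload fields, and static key value are placeholders.

```python
# Illustrative mapping of the message key options onto producer calls.
# Topic name, payload fields, and the static key are placeholders.
import hashlib
import json
import uuid

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder
record = {"order_id": "A-1001", "amount": 42.5}
value = json.dumps(record).encode("utf-8")

# Field Value: the key is taken directly from a chosen field.
producer.produce("orders", key=record["order_id"], value=value)

# Field Value Hash: the key is a hash of the chosen field's content.
key_hash = hashlib.md5(record["order_id"].encode("utf-8")).hexdigest()
producer.produce("orders", key=key_hash, value=value)

# UUID: a freshly generated unique identifier per message.
producer.produce("orders", key=str(uuid.uuid4()), value=value)

# Static: the same fixed key for every message.
producer.produce("orders", key="orders-static-key", value=value)

# None: no key is set, so messages are spread across partitions.
producer.produce("orders", value=value)

producer.flush()
```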


Field Values

Fields in the message whose values are used as the message key. Applicable when the Field Value or Field Value Hash option is selected.

Static Value

Provide a static value to be used as the key.


Output Mode

Output Mode is used to specify what data will be written to a streaming sink when there is new data available.

Append: Emits only newly added records to the result of a streaming query, ideal for continuously growing datasets.

Complete: Sends the entire result of the streaming query whenever there is a change, including inserts, updates, and deletions, suitable for maintaining a complete view of the data.

Update: Emits only the changed records (new and updated) since the last emission, excluding removed records, often used for tracking changes in a dataset over time.
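These modes mirror the output modes of Spark Structured Streaming. As a hedged sketch only (the source, servers, topic, and checkpoint path are placeholders), a streaming aggregate written to a Kafka sink in update mode might look like this:

```python
# Hypothetical sketch: streaming counts written to a Kafka sink in "update" mode.
# The rate source, bootstrap servers, topic, and checkpoint path are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("confluent-target-sketch").getOrCreate()

# Placeholder source that emits test rows; a real pipeline reads its own input.
events = (spark.readStream
          .format("rate")
          .option("rowsPerSecond", 10)
          .load())

# A running count per value; aggregations pair naturally with "update" or "complete".
counts = events.groupBy("value").count()

query = (counts.selectExpr("CAST(value AS STRING) AS key",
                           "CAST(`count` AS STRING) AS value")
         .writeStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092")  # placeholder
         .option("topic", "orders")                                                          # placeholder
         .option("checkpointLocation", "/tmp/checkpoints/orders")
         .outputMode("update")   # "complete" also works here; "append" suits queries without aggregation
         .start())
```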


Enable Trigger

Trigger defines how frequently a streaming query should be executed.

If the trigger is enabled, provide the processing time.

Processing Time

The time interval or conditions set to determine when streaming data results are emitted or processed.
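In Spark Structured Streaming terms, the processing time is the micro-batch interval. A hedged, self-contained sketch with placeholder source, servers, topic, and checkpoint path:

```python
# Illustrative: a processing-time trigger fires a micro-batch every 30 seconds.
# The rate source, bootstrap servers, topic, and checkpoint path are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("trigger-sketch").getOrCreate()

stream = (spark.readStream
          .format("rate")           # placeholder source
          .option("rowsPerSecond", 5)
          .load())

query = (stream.selectExpr("CAST(value AS STRING) AS value")
         .writeStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092")  # placeholder
         .option("topic", "orders")                                                          # placeholder
         .option("checkpointLocation", "/tmp/checkpoints/orders-trigger")
         .outputMode("append")
         .trigger(processingTime="30 seconds")   # a micro-batch is processed every 30 seconds
         .start())
```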


Add Configuration: To add additional custom Confluent Cloud properties as key-value pairs.
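For instance, such key-value pairs often correspond to extra producer settings. The property names below are standard Kafka producer options; the values are placeholders chosen for illustration.

```python
# Hypothetical extra key-value properties layered on top of the base connection.
extra_properties = {
    "compression.type": "snappy",  # compress batches before sending
    "linger.ms": "20",             # wait up to 20 ms to fill a batch
    "acks": "all",                 # require acknowledgement from all in-sync replicas
}
```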


Notes

Optionally, enter notes in the Notes → tab and save the configuration.
