Attunity Data Source
To add an Attunity Data Source to your pipeline, drag the Data Source onto the canvas and right-click it to configure.
Under the Schema Type tab, select Fetch From Source or Upload Data File. Edit the schema if required, then click Next to configure Attunity.
Configuring Attunity Data Source
Field | Description |
---|---|
Connection Name | Connections are service identifiers. Select the connection from which you want to read data from the list of available connections. |
Capture | Data: flag to capture data. Metadata: flag to capture metadata. You can select both options. |
Define Offset | This configuration works like a Kafka consumer offset. • Latest: the query starts from the latest offset. • Earliest: the query starts from the first offset. |
Connection Retries | The number of retries for the component connection. Possible values are -1, 0, or any positive number; a value of -1 retries indefinitely. |
Delay Between Connection Retries | The delay between connection retry attempts, in milliseconds. |
Add Configuration | Add custom Kafka properties as key-value pairs. |
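To make the retry settings concrete, the sketch below shows how Connection Retries and Delay Between Connection Retries typically interact. This is an illustrative stand-in, not Gathr's actual implementation; the `connect_with_retries` helper and `flaky` connection are hypothetical.

```python
import time

def connect_with_retries(connect, retries, delay_ms):
    """Attempt connect() until it succeeds.

    retries: -1 retries indefinitely; 0 means a single attempt;
             a positive number allows that many retries after the first try.
    delay_ms: wait between attempts, in milliseconds.
    """
    attempt = 0
    while True:
        try:
            return connect()
        except ConnectionError:
            if retries != -1 and attempt >= retries:
                raise  # retry budget exhausted
            attempt += 1
            time.sleep(delay_ms / 1000.0)

# Hypothetical flaky connection that succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("not yet")
    return "connected"

print(connect_with_retries(flaky, retries=5, delay_ms=10))  # connected
```

With `retries=-1` the loop never gives up, matching the infinite-retry behavior described for a value of -1.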
Click the Add Notes tab and enter notes in the space provided.
Configuring the Data Topics and Metadata Topic tabs
Choose the data topic names; each chosen topic's fields are populated and can be edited. You can choose any number of data topics.
Choose the metadata topic; its fields are populated and can be edited. You can choose the metadata of only one topic.
Field | Description |
---|---|
Topic Name | Topic name from which the consumer will read messages. |
ZK ID | ZooKeeper path where the offset value is stored on a per-consumer basis. An offset is the position of the consumer in the log. |
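To illustrate the ZK ID's role, the sketch below stores a consumer's offset under a per-consumer path. The path layout and function names are hypothetical stand-ins for ZooKeeper-backed storage, not Gathr's actual schema.

```python
# In-memory stand-in for a ZooKeeper-style offset store.
offset_store = {}

def commit_offset(zk_id, topic, offset):
    """Record the consumer's position in the log for this topic."""
    offset_store[f"{zk_id}/{topic}"] = offset

def last_offset(zk_id, topic, default="earliest"):
    """Resume from the stored offset, or fall back to a default
    (compare the Latest/Earliest choices under Define Offset)."""
    return offset_store.get(f"{zk_id}/{topic}", default)

commit_offset("consumers/pipeline-1", "orders", 42)
print(last_offset("consumers/pipeline-1", "orders"))    # 42
print(last_offset("consumers/pipeline-1", "payments"))  # earliest
```

Because the offset is keyed by the ZK ID, two pipelines with different ZK IDs can consume the same topic independently, each resuming from its own position.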
Click Done to save the configuration.
If you have any feedback on Gathr documentation, please email us!