JDBC Container Processor
The JDBC Container processor lets you perform Read, Aggregation, Merge, and Write operations over an Oracle connection.

A JDBC Container also allows you to implement a retention policy, which cleanses/deletes unwanted data after a retention period.

You can also apply a checksum during aggregation to avoid duplication of records.
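Conceptually, duplicate avoidance works by hashing the selected fields of each record and skipping any record whose hash has already been seen. Below is a minimal, self-contained Java sketch of that idea; the `Record` type, the choice of fields, and the SHA-256 digest are illustrative assumptions, not the processor's internals. In the processor itself you only select the fields; the bookkeeping is handled for you.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Sketch of checksum-based duplicate handling, similar in spirit to the
 * processor's Data Duplication Handling option. Names are hypothetical.
 */
public class ChecksumDedup {

    /** Illustrative record type; real pipelines carry schema-driven rows. */
    record Record(String id, String city, double amount) {}

    private final Set<String> seenChecksums = new HashSet<>();

    /** Concatenate the selected fields and hash them with SHA-256. */
    private static String checksum(Record r) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        String key = r.id() + "|" + r.city(); // the "selected fields"
        byte[] digest = md.digest(key.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    /** Returns true only for records whose selected-field checksum is new. */
    public boolean isNew(Record r) throws Exception {
        return seenChecksums.add(checksum(r));
    }

    public static void main(String[] args) throws Exception {
        ChecksumDedup dedup = new ChecksumDedup();
        List<Record> batch = List.of(
                new Record("1", "Pune", 10.0),
                new Record("1", "Pune", 10.0), // duplicate on (id, city)
                new Record("2", "Delhi", 5.0));
        for (Record r : batch) {
            System.out.println((dedup.isNew(r) ? "process " : "skip duplicate ") + r);
        }
    }
}
```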
JDBC-Container Processor Configuration
To add a JDBC Container Processor to your pipeline, drag the processor onto the canvas and right-click on it to configure.
Field | Description |
---|---|
Connection Name | Provide an Oracle connection name for creating a connection. |
Table Name | Name of an existing table in the specified database. |
Output Mode | Output mode specifies what data is written to the streaming sink when new data is available. With Upsert, an item with an existing ID is updated; if it does not exist, it is created, i.e., an Insert (sketched after this table). |
Enable Retention | When selected, each newly created item lives for the duration specified by the retention policy. After the expiration time is reached, the item is deleted by the server. |
Retention Policy | Number of days/months for which data is to be retained. Select a number and choose either DAYS or MONTHS as the unit. |
Retention Column | Field on which the table's retention policy is applied (sketched after this table). |
Record Limit | Enable this limit to cap the maximum number of records kept in the container. |
Maximum Number of Records | Maximum number of records to be retained for each group, depending on the grouping field criteria. |
Grouping Field | Field on which Group By is applied to limit the number of records retained per group (sketched after this table). |
Write Data | Choose whether raw or aggregated data is written to the table. |
Fields | Select the fields for aggregated data, i.e., Function, Input Fields, and Output Fields (an aggregation sketch follows this table). |
Group By | Field of the selected message on which Group By is applied. |
Data Duplication Handling | When enabled, already processed data is not processed again, based on the selected fields. |
Fields | Fields on the basis of which data duplication handling is performed. |
Schema Results | |
Table Column Name | Name of the column populated from the selected Table. |
Mapping Value | Map a corresponding value to the column. |
Database Data Type | Data type of the Mapped Value. |
Ignore All | Select the Ignore All check box to ignore all the Schema Results, or select the check box adjacent to a column to ignore only that column. Use Ignore All or individually selected fields while pushing data to the emitter; such a field is added as a partition field while creating the table. |
Auto Fill | Auto Fill automatically populates and maps all incoming schema fields to the fetched table columns. The left side shows the table columns and the right side shows the incoming schema fields. If no incoming schema field matches a table column, the first field is selected by default. |
Download Mapping | Downloads the mapping of schema fields and table columns to a file. |
Upload Mapping | Uploading a mapping file automatically populates the table columns and schema fields. |
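To make the Upsert output mode concrete, here is a hedged Java/JDBC sketch that gets update-or-insert semantics from an Oracle MERGE statement. The connection URL, credentials, and the `events` table with columns `id`, `city`, and `amount` are hypothetical; this illustrates the semantics only, not the processor's actual implementation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

/** Upsert sketch against Oracle; requires the ojdbc driver on the classpath. */
public class UpsertExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1"; // assumed
        // MERGE updates the row when the id already exists (update)
        // and inserts it otherwise (insert), i.e., an upsert.
        String merge =
            "MERGE INTO events t " +
            "USING (SELECT ? AS id, ? AS city, ? AS amount FROM dual) s " +
            "ON (t.id = s.id) " +
            "WHEN MATCHED THEN UPDATE SET t.city = s.city, t.amount = s.amount " +
            "WHEN NOT MATCHED THEN INSERT (id, city, amount) " +
            "VALUES (s.id, s.city, s.amount)";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(merge)) {
            ps.setString(1, "42");
            ps.setString(2, "Pune");
            ps.setDouble(3, 10.0);
            ps.executeUpdate();
        }
    }
}
```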
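The retention options can be pictured as a periodic sweep that deletes rows whose retention column is older than the configured period. A minimal sketch, assuming a hypothetical `events` table with `event_time` as the Retention Column and a 30 DAYS policy:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

/** Retention-sweep sketch; connection, table, and column names are hypothetical. */
public class RetentionSweep {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1"; // assumed
        // Delete rows whose retention column is older than the retention period.
        String sql =
            "DELETE FROM events " +
            "WHERE event_time < SYSTIMESTAMP - NUMTODSINTERVAL(?, 'DAY')";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, 30); // Retention Policy: 30 DAYS
            System.out.println("purged " + ps.executeUpdate() + " expired rows");
        }
    }
}
```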
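Similarly, the record limit can be pictured as ranking rows within each group and trimming everything beyond the maximum. A minimal sketch, assuming the same hypothetical table, `city` as the Grouping Field, and a limit of 100 records per group:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

/** Record-limit sketch: keep at most N rows per group, dropping the oldest extras. */
public class RecordLimitSweep {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1"; // assumed
        // Rank rows within each group (newest first) and delete everything
        // beyond the configured maximum.
        String sql =
            "DELETE FROM events WHERE rowid IN (" +
            "  SELECT rid FROM (" +
            "    SELECT rowid AS rid, " +
            "           ROW_NUMBER() OVER (PARTITION BY city ORDER BY event_time DESC) AS rn " +
            "    FROM events) " +
            "  WHERE rn > ?)";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, 100); // Maximum Number of records per group
            System.out.println("trimmed " + ps.executeUpdate() + " rows");
        }
    }
}
```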
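Finally, writing aggregated data amounts to applying a Function to an Input Field, grouped by the Group By field, and emitting the result as an Output Field. A hedged sketch with `SUM(amount)` grouped by `city`, producing `total_amount` (all names hypothetical):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Aggregation sketch: Function = SUM, Input Field = amount, Output Field = total_amount. */
public class AggregationExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@//db-host:1521/ORCLPDB1"; // assumed
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT city, SUM(amount) AS total_amount FROM events GROUP BY city")) {
            while (rs.next()) {
                System.out.println(rs.getString("city") + " -> " + rs.getDouble("total_amount"));
            }
        }
    }
}
```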
After completing the configuration, the Mapping page is displayed, where you can map the incoming schema fields to the fetched table columns.

Once the configuration is finalized, click Done to save it.