Greenplum ETL Source

In Gathr, Greenplum can be added as a channel to fetch customers’ and prospects’ data and transform it as needed before storing it in the desired data warehouse for further analytics.


Schema Type

See the topic Provide Schema for ETL Source → to learn how schema details can be provided for data sources.

After providing schema type details, the next step is to configure the data source.


Data Source Configuration

Configure the data source parameters as explained below.

Connection Name

Connections are the service identifiers. Select a connection name from the list if you have previously created and saved connection details for Greenplum, or create one as explained in the topic Greenplum Connection →

Use the Test Connection option to ensure that the connection with the Greenplum channel is established successfully.

A success message confirms that the connection is available. If the test connection fails, edit the connection to resolve the issue before proceeding.


Schema Name

Name of the source schema whose tables will be listed.


Table Name

Name of the source table whose metadata you want to view.


Query

Hive-compatible SQL query to be executed in the component.
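
For illustration, a query of the following form could be entered here; the schema, table, and column names are placeholders:

    -- Hypothetical example: read selected customer columns
    SELECT customer_id, customer_name, region, total_spend
    FROM sales.customers
    WHERE region = 'EMEA';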


Design Time Query

Query used to fetch a limited set of records during application design. It is used only during schema detection and inspection.
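
A common pattern is to reuse the main query with a LIMIT clause so that only a small sample of rows is read at design time (the table and column names are placeholders):

    -- Hypothetical example: bound the design-time read to 100 rows
    SELECT customer_id, customer_name, region, total_spend
    FROM sales.customers
    LIMIT 100;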


Enable Query Partitioning

Enables parallel reading of data from the table. It is disabled by default.

If this check-box is enabled, the table is read in partitions and the following additional fields are displayed.

No. of Partitions

Specifies the number of parallel threads to be invoked to partition the table while reading the data.

Partition on Column

The column to be used to partition the data. It must be a numeric column, on which Spark performs the partitioning to read data in parallel.

Lower Bound

Value of the lower bound for the partitioning column. This value is used to decide the partition boundaries; the entire dataset is distributed into multiple chunks based on these values.

Upper Bound

Value of the upper bound for the partitioning column. This value is used to decide the partition boundaries; the entire dataset is distributed into multiple chunks based on these values.
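
As an illustration of how these fields work together, suppose No. of Partitions = 4, Partition on Column = id, Lower Bound = 0, and Upper Bound = 1000. Spark computes a stride of (1000 - 0) / 4 = 250 and issues roughly the following queries in parallel (the table name is a placeholder; note that rows outside the bounds are not filtered out, they fall into the first and last partitions):

    -- Partition 1
    SELECT * FROM sales.customers WHERE id < 250 OR id IS NULL;
    -- Partition 2
    SELECT * FROM sales.customers WHERE id >= 250 AND id < 500;
    -- Partition 3
    SELECT * FROM sales.customers WHERE id >= 500 AND id < 750;
    -- Partition 4
    SELECT * FROM sales.customers WHERE id >= 750;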

If Enable Query Partitioning is disabled, proceed by updating the following field.


Fetch Size

The fetch size determines the number of rows to be fetched per database round trip. The default value is 1000. For example, with a fetch size of 1000, reading 10,000 rows takes 10 round trips.


Add Configuration: Additional properties can be added as key-value pairs using this option.


Detect Schema

Check the populated schema details. For more details, see Schema Preview →


Pre Action

To understand how to provide SQL queries or stored procedures that will be executed during a pipeline run, see Pre-Actions →.
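
For illustration, a pre-action is typically a statement such as the following, executed against the source before the read begins (the table and function names are placeholders):

    -- Clear a staging table before the pipeline reads fresh data
    TRUNCATE TABLE staging.customer_load;

    -- Or invoke a function that prepares the snapshot to be read
    SELECT prepare_customer_snapshot();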


Notes

Optionally, enter notes in the Notes → tab and save the configuration.
