Hive Data Source

To use a Hive Data Source, select the connection and specify a warehouse directory path.
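In Spark terms, the warehouse directory is the location where Hive-managed table data is stored. As a point of reference, the sketch below shows a Hive-enabled Spark session pointed at a warehouse directory; the path is a hypothetical example, and in practice the component derives these settings from the selected connection.

```python
from pyspark.sql import SparkSession

# A minimal sketch of what the connection plus warehouse directory
# amount to in Spark terms; the path is a hypothetical example.
spark = (
    SparkSession.builder
    .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
    .enableHiveSupport()
    .getOrCreate()
)
```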

To add a Hive Data Source to your pipeline, drag the Data Source to the canvas and right-click it to configure.

Under the Detect Schema Type tab, select Fetch From Source or Upload Data File.

Configuring Hive Data Source

Connection Name

Connections are the service identifiers. Select the connection name from the available list of connections from which you want to read the data.

Override Credentials

Check this option to override the connection credentials for user-specific actions.

Username

The name of the user through which the Hadoop services run. This field is available once you check the Override Credentials option.

KeyTab Select Option

Select one of the options for uploading the keytab: KeyTab File or Specify KeyTab File Path.

Query

Provide the Hive-compatible SQL query to be executed by the component.

Inspect Query

Provide the same query as in Query, but with a limit on the record count; this limited query is used only during inspect and schema detection, as shown in the sketch below.
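To illustrate the relationship between the two fields, here is a minimal sketch using the Spark session from the example above; the database, table, and column names are hypothetical.

```python
# Query: the full query executed by the component at run time.
run_df = spark.sql(
    "SELECT id, name, amount FROM sales_db.orders WHERE year = 2023"
)

# Inspect Query: the same query with a record-count limit, used only
# during inspect and schema detection.
inspect_df = spark.sql(
    "SELECT id, name, amount FROM sales_db.orders WHERE year = 2023 LIMIT 100"
)
```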
Refresh Table Metadata

Spark's Hive integration caches Parquet table metadata and partition information to improve performance. This option lets you refresh the table cache so that the latest information is available during inspect. It is most helpful when there are multiple update and fetch events in an inspect session.

The Refresh Table option also repairs and syncs partition values into the Hive metastore, so the latest values are processed while data is fetched during inspect or run.

Table Names

Specify one or more table names to be refreshed.
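In Spark SQL terms, the refresh corresponds roughly to the statements below. This is a minimal sketch continuing the session from the first example; the table names are hypothetical stand-ins for the Table Names field.

```python
# Hypothetical table names; in the component these come from Table Names.
for table in ["sales_db.orders", "sales_db.returns"]:
    # Invalidate the cached Parquet metadata so the latest data is visible.
    spark.sql(f"REFRESH TABLE {table}")
    # Repair and sync partition values into the Hive metastore.
    spark.sql(f"MSCK REPAIR TABLE {table}")
```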

After the query runs, the Describe Table output is populated with the corresponding table metadata, partition information, and serialization (SerDe) information.
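A sketch of retrieving the same details directly with Spark SQL, again using the hypothetical table from the earlier examples:

```python
# DESCRIBE FORMATTED returns the column schema, partition information,
# storage details, and SerDe library for the table.
spark.sql("DESCRIBE FORMATTED sales_db.orders").show(truncate=False)
```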

Make sure that the query you run matches the schema created with Upload Data File or Fetch From Source.

Click Done to save the configuration.

Configure Pre-Action in Source →
