Functions Processor
The Functions processor lets you apply Spark SQL functions to a dataset.
The Functions processor supports:

- MVEL expressions
- Lookup functions
- Date functions
- String functions
- Math functions
- Miscellaneous functions
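The function categories above can be illustrated with a minimal pure-Python sketch. The actual processor evaluates Spark SQL functions and MVEL expressions on the dataset; the field names, lookup table, and stand-in functions below are illustrative assumptions, not the processor's API:

```python
# Stand-in for the supported function categories. The real processor
# evaluates Spark SQL functions / MVEL expressions; these plain-Python
# equivalents are illustrative assumptions only.
country_codes = {"IN": "India", "US": "United States"}  # lookup table (assumed)

def apply_functions(row):
    return {
        "name_upper": row["name"].upper(),                    # String function
        "amount_abs": abs(row["amount"]),                     # Math function
        "country": country_codes.get(row["cc"], "Unknown"),   # Lookup function
    }

print(apply_functions({"name": "gathr", "amount": -42.5, "cc": "IN"}))
```

Each output key plays the role of a new field holding the result of the applied function, mirroring how the processor writes function results into existing or newly created fields.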
Functions Processor Configuration
To add a Functions processor to your pipeline, drag the processor onto the canvas and right-click it to configure it, as explained below:
Field | Description |
---|---|
Config Fields | Config fields are used to create local variables. |
Add Config Fields | Additional config fields can be added by clicking the plus (+) sign. |
Transform/Add Fields | Select the schema field to which the function is to be applied. Additional fields can be added by clicking the plus (+) sign. On the right side, select the function to apply. You can select an existing field or create a new field to hold the output of the applied function. |
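The interaction between Config Fields and Transform/Add Fields can be sketched as follows: config fields act as local variables that a transform expression can reference when it writes an existing or new output field. The field names and the tax-rate calculation below are hypothetical, chosen only to illustrate the pattern:

```python
# Sketch of the Config Fields / Transform Fields interaction.
# Config fields behave like local variables available to transforms;
# the "tax_rate" variable and "total_with_tax" output field are
# illustrative assumptions, not built-in names.
config = {"tax_rate": 0.18}  # local variable (Config Field)

def transform(row, config):
    out = dict(row)  # keep the existing schema fields
    # New output field holding the result of the applied function:
    out["total_with_tax"] = row["total"] * (1 + config["tax_rate"])
    return out

print(transform({"total": 100.0}, config))
```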
Click the NEXT button. The fields generated from the data source are displayed, and you can provide a new output format for date fields. For example, if the input date format was yyyy-dd-yy, specify the new date format as yyyy/MM/dd.
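The date-reformatting step can be sketched with Python's datetime module. The Java-style output pattern yyyy/MM/dd maps to strftime codes as yyyy → %Y, MM → %m, dd → %d; an input format of yyyy-MM-dd is assumed here purely for illustration:

```python
from datetime import datetime

# Convert a date string from an assumed input format (yyyy-MM-dd) to the
# new output format yyyy/MM/dd, mirroring the processor's date reformatting.
def reformat_date(value, in_fmt="%Y-%m-%d", out_fmt="%Y/%m/%d"):
    return datetime.strptime(value, in_fmt).strftime(out_fmt)

print(reformat_date("2023-01-15"))  # -> 2023/01/15
```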
Click NEXT after changing the date format, then enter any notes in the space provided.

Click SAVE to save the configuration details.
If you have any feedback on Gathr documentation, please email us!