RabbitMQ Emitter
The RabbitMQ emitter is used when you want to write data to a RabbitMQ cluster.
Supported data formats are JSON and DELIMITED (CSV, TSV, PSV, etc.).
RabbitMQ Emitter Configuration
To add a RabbitMQ emitter to your pipeline, drag the RabbitMQ emitter onto the canvas and connect it to a Data Source or processor. Right-click on the emitter to configure it as explained below:
Field | Description |
---|---|
Connection Name | All RabbitMQ connections will be listed here. Select a connection for connecting to the RabbitMQ server. |
Exchange Name | RabbitMQ exchange name. |
Exchange Type | Specifies how messages are routed through the exchange. Direct: a message goes to the queue(s) whose binding key exactly matches the message's routing key. Fanout: a received message is copied and routed to all queues bound to the exchange, regardless of routing keys or pattern matching; any keys provided are simply ignored. Topic: messages are routed to one or more queues based on wildcard matches between the message's routing key and the routing pattern specified by the queue binding. |
Exchange Durable | Specifies whether the exchange survives a server restart. TRUE: the exchange is not deleted when the RabbitMQ server restarts. FALSE: the exchange is deleted when the RabbitMQ server restarts. |
Routing Key | Select the RabbitMQ routing key with which data will be published. |
Queue Name | RabbitMQ queue name where data will be published. |
Queue Durable | Specifies whether the queue survives a server restart. TRUE: the queue is not deleted when the RabbitMQ server restarts. FALSE: the queue is deleted when the RabbitMQ server restarts. |
Output Format | Select the data format in which RabbitMQ should write the data. |
Output Fields | Select the fields which should be a part of the output data. |
Enable Message TTL | RabbitMQ allows you to set a TTL (time to live) for messages. Check this option to enable it. |
Message TTL | Time to live in seconds, after which the message is routed to the specified TTL exchange. |
TTL Exchange | Name of the exchange to which the message is sent once its time to live expires. |
TTL Queue | Name of the queue to which the message is sent once its time to live expires. |
TTL Routing Key | Routing key used to bind the TTL queue to the TTL exchange. |
Checkpoint Storage Location | Select the checkpointing storage location. Available options are HDFS, S3, and EFS. |
Checkpoint Connections | Select the connection. Connections are listed corresponding to the selected storage location. |
Checkpoint Directory | The path where the Spark application stores the checkpointing data. For HDFS and EFS, enter a relative path like /user/hadoop/checkpointingDir; the system will add a suitable prefix by itself. For S3, enter an absolute path like S3://BucketName/checkpointingDir. |
Time-Based Check Point | Select the checkbox to enable a time-based checkpoint on each pipeline run, i.e., on every run the checkpoint location provided above is suffixed with the current time in milliseconds. |
Output Mode | Output mode to be used while writing the data to the streaming sink. Select one of the three options: Append Mode: only the new rows in the streaming data are written to the sink. Complete Mode: all the rows in the streaming data are written to the sink every time there are updates. Update Mode: only the rows that were updated in the streaming data are written to the sink every time there are updates. |
Enable Trigger | Trigger defines how frequently a streaming query should be executed. |
Processing Time | Trigger time interval in minutes or seconds. |
Add Configuration | Enables configuring additional RabbitMQ properties. |
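The direct, fanout, and topic routing rules described in the Exchange Type field can be illustrated with a small, broker-free sketch of topic matching, where `*` matches exactly one dot-separated word and `#` matches zero or more words. This is plain Python for illustration only, not part of Gathr or the RabbitMQ client library:

```python
def topic_matches(pattern: str, key: str) -> bool:
    """Return True if a routing key matches a topic binding pattern.

    RabbitMQ topic patterns are dot-separated words where
    '*' matches exactly one word and '#' matches zero or more words.
    """
    def match(p, k):
        if not p:
            return not k
        if p[0] == "#":
            # '#' may consume zero or more of the remaining words
            return any(match(p[1:], k[i:]) for i in range(len(k) + 1))
        if not k:
            return False
        if p[0] == "*" or p[0] == k[0]:
            return match(p[1:], k[1:])
        return False

    return match(pattern.split("."), key.split("."))
```

A direct exchange is the special case of exact string equality between binding key and routing key, while a fanout exchange ignores the key entirely.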
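At the protocol level, the TTL fields above correspond to optional arguments on a queue declaration. A hedged sketch of what those arguments might look like (the names `ttl_exchange` and `ttl_key` are placeholders, and note that RabbitMQ's native `x-message-ttl` argument is expressed in milliseconds, while the Message TTL field above takes seconds):

```python
message_ttl_seconds = 60  # illustrative value for the "Message TTL" field

# RabbitMQ queue arguments implementing TTL plus dead-lettering:
# an expired message is re-published to the TTL exchange with the
# TTL routing key, which binds it to the TTL queue.
queue_arguments = {
    "x-message-ttl": message_ttl_seconds * 1000,   # native unit is milliseconds
    "x-dead-letter-exchange": "ttl_exchange",      # "TTL Exchange" field
    "x-dead-letter-routing-key": "ttl_key",        # "TTL Routing Key" field
}
```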
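The Time-Based Check Point option can be pictured as appending the current epoch time in milliseconds to the configured checkpoint location on each run, so every run gets a fresh directory. A minimal sketch under that assumption (the path is illustrative):

```python
import time

def timestamped_checkpoint_dir(base_dir: str) -> str:
    """Suffix the checkpoint location with the current time in millis
    so each pipeline run checkpoints to a fresh directory."""
    return f"{base_dir.rstrip('/')}_{int(time.time() * 1000)}"

run_dir = timestamped_checkpoint_dir("/user/hadoop/checkpointingDir")
```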
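The difference between Complete and Update modes can be simulated without Spark using a running word count over micro-batches. Append mode is omitted here because, for aggregations, it only emits rows that are finalized (which requires watermarking); this is a plain-Python sketch, not Gathr or Spark code:

```python
def simulate(batches, mode):
    """Simulate a streaming word-count sink in 'complete' or 'update' mode."""
    counts, outputs = {}, []
    for batch in batches:
        changed = {}
        for word in batch:
            counts[word] = counts.get(word, 0) + 1
            changed[word] = counts[word]
        if mode == "complete":
            outputs.append(dict(counts))   # every row, on every trigger
        elif mode == "update":
            outputs.append(changed)        # only rows changed this trigger
    return outputs
```

For batches `[["a", "b"], ["a"]]`, complete mode re-emits both counts on the second trigger, while update mode emits only the changed row for `"a"`.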
Click the NEXT button and enter notes in the space provided.
Click SAVE to save the configuration details.