SQS Emitter

The SQS emitter allows you to emit data in a streaming manner into Amazon SQS queues.

SQS Emitter Configuration

To add an SQS emitter to your pipeline, drag the emitter onto the canvas and connect it to a Data Source or processor.

The configuration settings are as follows:

Connection Name: All SQS connections will be listed here. Select a connection for connecting to Amazon SQS.
Queue Type: Select the type of queue, Standard or FIFO.
Queue Name: Name of the queue where data will be published.
Content-Based Deduplication: Enable content-based deduplication for the queue (each of your messages has a unique body). When enabled, the producer can omit the message deduplication ID.
Message Group ID For FIFO Queue: The tag that specifies that a message belongs to a specific message group. Messages that belong to the same message group are guaranteed to be processed in FIFO order.
Message Deduplication ID For FIFO Queue: The token used for deduplication of messages sent within the deduplication interval.
Visibility Timeout (in seconds): The length of time (in seconds) for which a message received from the queue remains invisible to other receiving components.
Message Retention Period (in seconds): The amount of time for which Amazon SQS retains a message if it does not get deleted.
Maximum Message Size (in bytes): The maximum message size (in bytes) accepted by Amazon SQS.
Receive Message Wait Time (in seconds): The maximum amount of time that a long-polling receive call waits for a message to become available before returning an empty response.
Delivery Delay (in seconds): The amount of time by which to delay the first delivery of all messages added to this queue. (The FIFO fields and queue attributes above are illustrated in the sketch after this list.)
Output Format: Select the data format in which the emitter should write the data.
Output Fields: Select the fields that should be part of the output data.
Checkpoint Storage Location: Select the checkpointing storage location. Available options are HDFS, S3, and EFS.
Checkpoint Connections: Select the connection. Connections are listed corresponding to the selected storage location.
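The FIFO-specific fields and the queue attributes above map directly onto the Amazon SQS API. The snippet below is a minimal boto3 sketch of that correspondence; the region, queue name, attribute values, and message contents are illustrative assumptions, not values used by the emitter itself.

```python
import boto3

# Region, queue name, and attribute values are assumptions for illustration only.
sqs = boto3.client("sqs", region_name="us-east-1")

# Create a FIFO queue with the attributes described above.
response = sqs.create_queue(
    QueueName="pipeline-output.fifo",             # FIFO queue names must end in .fifo
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",      # Content-Based Deduplication
        "VisibilityTimeout": "30",                # Visibility Timeout (in seconds)
        "MessageRetentionPeriod": "345600",       # Message Retention Period (in seconds)
        "MaximumMessageSize": "262144",           # Maximum Message Size (in bytes)
        "ReceiveMessageWaitTimeSeconds": "10",    # Receive Message Wait Time (in seconds)
        "DelaySeconds": "0",                      # Delivery Delay (in seconds)
    },
)
queue_url = response["QueueUrl"]

# Publish one record the way an emitter would for a FIFO queue.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"id": 1, "status": "ok"}',      # record serialized in the selected Output Format
    MessageGroupId="pipeline-group-1",            # Message Group ID For FIFO Queue
    MessageDeduplicationId="record-1",            # Message Deduplication ID; optional when
                                                  # content-based deduplication is enabled
)
```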
Checkpoint Directory: The path where the Spark application stores the checkpointing data.

For HDFS and EFS, enter a relative path such as /user/hadoop/checkpointingDir; the system will add a suitable prefix by itself.

For S3, enter an absolute path such as s3://BucketName/checkpointingDir.
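For reference, the checkpoint directory corresponds to Spark Structured Streaming's checkpointLocation option. The following is a minimal PySpark sketch, not the emitter's actual code; the rate source and console sink are assumptions used only to keep the example self-contained, and the paths are the example values mentioned above.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("checkpoint-location-example").getOrCreate()

# 'rate' is used only so the sketch is runnable; the real source is your pipeline's data.
stream_df = spark.readStream.format("rate").load()

query = (
    stream_df.writeStream
    .format("console")  # stand-in for the SQS sink
    # HDFS/EFS: relative path; the system adds a suitable prefix.
    .option("checkpointLocation", "/user/hadoop/checkpointingDir")
    # S3: absolute path, e.g. "s3://BucketName/checkpointingDir"
    .start()
)
```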

Time-Based Check Point: Select the checkbox to enable a time-based checkpoint on each pipeline run, i.e. in each pipeline run the checkpoint location provided above will be appended with the current time in milliseconds.
Output Mode: The output mode to be used while writing the data to the streaming sink. Select the output mode from the given three options:

Append Mode: Only the new rows in the streaming data will be written to the sink.

Complete Mode: All the rows in the streaming data will be written to the sink every time there are updates.

Update Mode: Only the rows that were updated in the streaming data will be written to the sink every time there are updates.
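These three modes correspond to Spark Structured Streaming's outputMode setting. Below is a minimal PySpark sketch of the difference, continuing from the stream_df defined in the earlier sketch; the console sink, the aggregation column, and the checkpoint paths are assumptions used only to make the example self-contained.

```python
# Append mode: only new rows are written to the sink (typical for non-aggregated streams).
append_query = (
    stream_df.writeStream
    .outputMode("append")
    .format("console")
    .option("checkpointLocation", "/tmp/chk-append")
    .start()
)

# Complete and update modes apply to aggregated streams.
counts = stream_df.groupBy("value").count()
update_query = (
    counts.writeStream
    .outputMode("update")  # "complete" would rewrite the full result table on every update
    .format("console")
    .option("checkpointLocation", "/tmp/chk-update")
    .start()
)
```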

Enable Trigger: The trigger defines how frequently a streaming query should be executed.
Processing Time: Trigger time interval in minutes or seconds (illustrated in the sketch below).
Add Configuration: Enables you to configure additional SQS properties.
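The trigger and processing time settings correspond to Spark Structured Streaming's trigger(processingTime=...) option. A minimal PySpark sketch follows, continuing from the stream_df defined earlier; the 10-second interval and checkpoint path are assumed example values.

```python
# Run the streaming query on a fixed processing-time interval.
triggered_query = (
    stream_df.writeStream
    .format("console")                        # stand-in for the SQS sink
    .trigger(processingTime="10 seconds")     # Processing Time: execute the query every 10 seconds
    .option("checkpointLocation", "/tmp/chk-trigger")
    .start()
)
```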