Mongo Emitter

To add a Mongo emitter to your pipeline, drag the emitter onto the canvas, connect it to a Data Source or processor, and click it to configure.

Mongo Emitter Configuration

Connection Name: Select a connection name from the drop-down list of saved connections.
Database Name: Select the database to which data will be written.
Collection Name: Select the database collection to which data will be written.
Output Fields: Select the fields from the drop-down list that need to be included in the output data.
Extended BSON Types: Checked by default; enables extended BSON types while writing data to MongoDB.
Replace Document: Checked by default; when saving datasets that contain an _id field, the whole document is replaced. If unchecked, only the fields in the document that match the fields in the dataset are updated.
Local Threshold: The threshold value (in milliseconds) for choosing a server from multiple MongoDB servers.
Max Batch Size: The maximum batch size for bulk operations when saving data. The default value is 512.
Write Concern W: The w option requests acknowledgment that the write operation has propagated to a specified number of mongod instances or to mongod instances with specified tags.
Write Concern Timeout: The wtimeout value (in milliseconds) after which the query times out if the write concern cannot be enforced. Applicable for w values greater than 1.
Shard Key: The shard key value. MongoDB partitions data in the collection using ranges of shard key values. The field should be indexed and contain unique values.
Force Insert: Check this option to force inserts even if the datasets contain _id fields.
Ordered: Checked by default; sets the ordered property of bulk operations.
Save Mode: Specifies the expected behavior of saving data to the data sink (see the sketch after this table).
  ErrorIfExists: When persisting data, if the data already exists, an exception is expected to be thrown.
  Append: When persisting data, if the data/table already exists, the contents of the dataset are expected to be appended to the existing data.
  Overwrite: When persisting data, if the data/table already exists, the existing data is expected to be overwritten by the contents of the dataset.
  Ignore: When persisting data, if the data/table already exists, the save operation is expected to neither save the contents of the dataset nor change the existing data. This is similar to CREATE TABLE IF NOT EXISTS in SQL.

ADD CONFIGURATION: Further configurations can be added.
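
The fields above mirror the write options exposed by the MongoDB Spark Connector. The sketch below is illustrative only: it assumes the emitter is backed by that connector (2.x option names, data source format "mongo") and uses PySpark; the URI, database, collection, and column names are hypothetical placeholders, not values generated by the emitter.

    # Minimal sketch, assuming the MongoDB Spark Connector 2.x is on the
    # Spark classpath (e.g. via --packages). All URI/database/collection/
    # column values below are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("mongo-emitter-sketch").getOrCreate()

    # Stand-in for the pipeline output; "customer_id" and "name" play the
    # role of the selected Output Fields.
    df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["customer_id", "name"])

    (df.write
       .format("mongo")                                     # MongoDB Spark Connector data source
       .mode("append")                                      # Save Mode
       .option("uri", "mongodb://host1:27017,host2:27017")  # resolved from Connection Name
       .option("database", "sales")                         # Database Name
       .option("collection", "orders")                      # Collection Name
       .option("extendedBsonTypes", "true")                 # Extended BSON Types
       .option("replaceDocument", "true")                   # Replace Document
       .option("localThreshold", "15")                      # Local Threshold (ms)
       .option("maxBatchSize", "512")                       # Max Batch Size
       .option("writeConcern.w", "majority")                # Write Concern W
       .option("writeConcern.wTimeoutMS", "5000")           # Write Concern Timeout (ms)
       .option("shardKey", "{customer_id: 1}")              # Shard Key
       .option("forceInsert", "false")                      # Force Insert
       .option("ordered", "true")                           # Ordered
       .save())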
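
The save mode descriptions above correspond to Spark's standard DataFrameWriter save modes. A minimal illustration, assuming "writer" stands for the configured df.write chain from the previous sketch; each line shows one mode in isolation, and in practice exactly one mode is chosen per write.

    # Hypothetical "writer" = the df.write chain configured as above.
    writer.mode("errorifexists").save()  # ErrorIfExists: throw if data already exists
    writer.mode("append").save()         # Append: add the dataset to the existing data
    writer.mode("overwrite").save()      # Overwrite: replace the existing data with the dataset
    writer.mode("ignore").save()         # Ignore: keep existing data, write nothing new
                                         # (similar to CREATE TABLE IF NOT EXISTS in SQL)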