Bedrock Image Processor

The Bedrock Image Processor can be used to interpret the content of base64-encoded images.
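For context, the processor's work is conceptually similar to the following boto3 sketch, which sends a base64-encoded image and a text prompt to a Claude model on Bedrock. The model ID, region, and file name are illustrative assumptions, not Gathr's internal implementation:

```python
import base64
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Encode a local image file to base64 (PNG assumed here).
with open("invoice.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/png",
                        "data": image_b64}},
            {"type": "text", "text": "Describe the content of this image."},
        ],
    }],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```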

Processor Configuration

Configure the processor parameters as explained below.

Connection Name

Select a connection name from the list if you have previously created and saved connection details for Bedrock, or create one as explained in the topic - Bedrock Connection →


Select Provider

Pick a provider to access foundation models (FMs) from AI companies. The provider parameters and available FMs are updated based on your selection.


Prompt

A prompt is a concise instruction or query in natural language provided to the Bedrock Image Processor to guide its actions or responses.

In the Prompts section, you have the flexibility to:

  • Predefined Sample Prompts: Choose from a set of ready-to-use sample prompts to kickstart your interactions with the Bedrock Image Processor.

  • Configuration Options: Customize prompts to suit your specific needs.

  • Save Prompts: Store your preferred prompts for future use.

  • Delete Prompts: Remove prompts that are no longer necessary.

  • Reset Prompts: Clear the details in the prompt field to restore it to its default state.

System

Provide high-level instructions and context for the AI model, guiding its behavior and setting the overall tone or role it should play in generating responses.

Note: <|endoftext|> is a document separator that the model sees during training, so if a prompt is not specified, the model will generate a response from the beginning of a new document.

The placeholder {some_key} represents a variable that can be replaced with specific column data. You can map this key to a column in the next section using “EXTRACT INPUTS FROM PROMPT”.

User

The user prompt is a specific instruction or question provided by the user to the AI model, directing it to perform a particular task or provide a response based on the user’s request.

Note: <|endoftext|> is a document separator that the model sees during training, so if a prompt is not specified, the model will generate a response from the beginning of a new document.

The placeholder {some_key} represents a variable that can be replaced with specific column data. You can map this key to a column in the next section using “EXTRACT INPUTS FROM PROMPT”.


Input

The placeholders {__} provided in the prompt can be mapped to columns so that each placeholder key is replaced with the corresponding column's value.

Input from prompt

All the placeholders {__} provided in the fields above are extracted here so that you can map each one to a column.

Input column

Select the column whose values will replace the placeholder key.
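To make the mapping concrete, here is a minimal sketch of the substitution the processor performs; the placeholder key, column name, and row data are hypothetical:

```python
# The key {product_name} in the prompt is mapped to the "product" column,
# so each row's value replaces the placeholder before the request is sent.
prompt_template = "List three likely defects visible on a {product_name}."

row = {"product": "circuit board", "image": "<base64...>"}
input_mapping = {"product_name": "product"}  # placeholder key -> column name

prompt = prompt_template.format(
    **{key: row[col] for key, col in input_mapping.items()}
)
print(prompt)  # List three likely defects visible on a circuit board.
```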


Input Image Columns

You can add rows as needed to specify the column(s) containing images to process along with additional configurations.

Please ensure that the image format is compatible with the base models supported by Bedrock.

Error: Could not process image.

Suggestion: If you encounter this message, it may be due to an incorrect image format. Double-check that the image format aligns with Bedrock’s supported formats.

Input column

Select the name of the column containing the input images.

Type

The format of the input images; the available option is Base64-encoded image content.

Drop Column (checkbox)

Option to remove the original input column after processing. This feature helps streamline data inspection and enhances performance by reducing unnecessary columns.
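As a reference point, here is a small sketch of preparing a Base64 image column before feeding data to the processor; the file path and column names are hypothetical. Claude 3 models accept JPEG, PNG, GIF, and WebP images, so encoding one of those formats helps avoid the error above:

```python
import base64

def encode_image(path: str) -> str:
    """Return the file's bytes as a Base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Hypothetical rows with paths to image files.
rows = [{"id": 1, "image_path": "scans/receipt_001.png"}]

for row in rows:
    row["image_b64"] = encode_image(row["image_path"])  # Base64 input column
    del row["image_path"]  # analogous to the Drop Column option
```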


Output

Configure how the model's response is emitted. Prompts can reference input columns via placeholders, and the output can be emitted either into a single named column or parsed as JSON into multiple columns.

Process Response

Select the response format: Assign to Column or Parse as JSON.

JSON Key in Response

Add the JSON keys that the model is instructed to return, and map each key to its corresponding output column name.

Output Column as JSON

Type the column name for the data corresponding to each JSON key that the model is instructed to return.

Output Column as TEXT

Type the column name into which the response data will be emitted.
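To illustrate the Parse as JSON flow, here is a minimal sketch in which the prompt instructs the model to return specific keys and the response is parsed into output columns; the key and column names are hypothetical:

```python
import json

# Prompt instructing the model to return only JSON with known keys.
prompt = (
    "Examine the image and respond with JSON only, "
    'using the keys "caption" and "object_count".'
)

# A response the model might return for such a prompt.
model_response = '{"caption": "Two cars parked on a street", "object_count": 2}'

# Map each instructed JSON key to an output column name.
key_to_column = {"caption": "image_caption", "object_count": "num_objects"}

parsed = json.loads(model_response)
output_row = {col: parsed.get(key) for key, col in key_to_column.items()}
print(output_row)  # {'image_caption': 'Two cars parked...', 'num_objects': 2}
```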


RATE CONTROL

Choose how to utilize Bedrock’s services:

Make Concurrent Requests with Token Limit: You can specify the number of simultaneous requests to be made to Bedrock, and each request can use up to the number of tokens you provide.

This option is suitable for scenarios where you need larger text input for fewer simultaneous requests.

OR

Rate-Limited Requests: Alternatively, you can make a specified total number of requests within a 60-second window.

This option is useful when you require a high volume of requests within a specified time frame, each potentially processing smaller amounts of text.
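The difference between the two options can be sketched as follows, assuming a hypothetical send_request() helper that wraps the Bedrock call (this is an illustration, not Gathr's implementation):

```python
import threading
import time

def send_request(payload, max_tokens=None):
    """Hypothetical helper that wraps the Bedrock call; stubbed here."""
    ...

# Option 1: Make Concurrent Requests with Token Limit.
MAX_CONCURRENT = 4                       # simultaneous requests
MAX_TOKENS_PER_REQUEST = 2000            # token budget per request
semaphore = threading.Semaphore(MAX_CONCURRENT)

def concurrent_call(payload):
    with semaphore:                      # at most MAX_CONCURRENT in flight
        return send_request(payload, max_tokens=MAX_TOKENS_PER_REQUEST)

# Option 2: Rate-Limited Requests within a 60-second window.
REQUESTS_PER_MINUTE = 60
INTERVAL = 60.0 / REQUESTS_PER_MINUTE    # seconds between requests

def rate_limited_calls(payloads):
    for payload in payloads:
        send_request(payload)
        time.sleep(INTERVAL)             # spreads calls across the window
```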


Anthropic Parameters

The parameters described below are configuration settings that govern the behavior and performance of Bedrock models, influencing how they respond to prompts and generate outputs.

Choose a model

Select the ID of the model; this determines the AI’s capabilities and language style.

Gathr supports the models below:

  • Claude 3 Sonnet

    Image to text & code, multilingual conversation, complex reasoning & analysis.

  • Claude 3 Haiku

    Image to text, conversation, chat optimized.

Randomness and Diversity

Influence the variation in generated responses by limiting the outputs to more likely outcomes or by changing the shape of the probability distribution of outputs.

Temperature

The sampling temperature to use, between 0 and 1. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic.

It is generally recommended to alter this or top_p but not both.

Top P

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So, 0.1 means only the tokens comprising the top 10% probability mass are considered.

It is generally recommended to alter this or temperature, but not both.

Top K

Limits sampling to the K most likely candidates for each subsequent token. Lower values remove long-tail, low-probability tokens from consideration; higher values allow more diverse outputs.

Length

Limit the response by specifying the maximum length or character sequences that end response generation.

Maximum Length

Maximum number of tokens to generate. Responses are not guaranteed to fill up to the maximum desired length.

Stop Sequence

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
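For reference, here is a sketch of where these parameters land in a Bedrock request body for a Claude 3 model via the Messages API; the values are illustrative, not recommendations:

```python
import json

# Illustrative values only; tune per the guidance above.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,            # Maximum Length
    "temperature": 0.2,            # low value = focused, deterministic output
    "top_p": 0.9,                  # alter this or temperature, not both
    "top_k": 250,                  # sample only from the 250 most likely tokens
    "stop_sequences": ["###"],     # up to 4 sequences that end generation
    "messages": [
        {"role": "user", "content": "Describe the attached image."},
    ],
})
```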


Additional Information

Gathr does not make actual calls to the AI endpoint to detect the schema for any of the AI processors. Instead, you can validate the data; all records are processed during the inspection call.
