Basic Information about Amazon Bedrock with API Examples - Model Features, Pricing, How to Use, Explanation of Tokens and Inference Parameters
First Published: 2023-10-02
Last Updated: 2023-10-08
Amazon Bedrock Basic Information
Amazon Bedrock Reference Materials & Learning Resources
The main reference materials and learning resources that can help in understanding Amazon Bedrock are as follows. The content of this article is based on the information from these reference materials and learning resources.
- What's New: Amazon Bedrock is now generally available
- AWS Blog: Amazon Bedrock Is Now Generally Available – Build and Scale Generative AI Applications with Foundation Models
- Price: Amazon Bedrock Pricing
- Workshop: GitHub - aws-samples/amazon-bedrock-workshop: This is a workshop designed for Amazon Bedrock, a foundational model service.
- AWS Documentation (User Guide): What is Amazon Bedrock? - Amazon Bedrock
- AWS Documentation (API Reference): Bedrock API Reference - Amazon Bedrock
- AWS SDK for Python (Boto3) Documentation (Bedrock): Bedrock - Boto3 documentation
- AWS SDK for Python (Boto3) Documentation (BedrockRuntime): BedrockRuntime - Boto3 documentation
- AWS CLI Command Reference (bedrock): bedrock — AWS CLI Command Reference
- AWS CLI Command Reference (bedrock-runtime): bedrock-runtime — AWS CLI Command Reference
- AWS Management Console (Amazon Bedrock Model Providers): Amazon Bedrock Model Providers - AWS Management Console
What is Amazon Bedrock?
Amazon Bedrock is a service that provides API access to Foundation Models (FMs) such as AI21 Labs' Jurassic-2, Amazon's Titan, Anthropic's Claude, Cohere's Command, Meta's Llama 2, and Stability AI's Stable Diffusion, as well as features to customize FMs privately using your own data. You can choose a foundation model based on use cases like text generation, chatbots, search, text summarization, image generation, and personalized recommendations to build and scale Generative AI applications.
Tokens in Generative AI for Text Handling
Before looking at the list of models and pricing for Amazon Bedrock, let me briefly explain tokens, which serve as the units for limits and billing. Please note that this description may differ from the strict definition, as I prioritize ease of understanding here.
In Generative AI that handles text, tokens are the units into which text is split so it can be processed in meaningful parts.
While tokens can correspond to words, they don't necessarily equate to words; text can also be split into characters, subwords, and so on.
For instance, if we tokenize the string "Amazon Bedrock is amazing!" based on words, it would look like this:
["Amazon", "Bedrock", "is", "amazing", "!"]
However, a non-word-based tokenization method might also include spaces, like this:
["Amazon", " ", "Bedrock", " ", "is", " ", "amazing", "!"]
There are more advanced tokenization methods beyond word-based splitting, such as Unigram Tokenization, WordPiece, SentencePiece, and Byte Pair Encoding (BPE). Different models adopt different methods, so it's essential to be aware of which one your model uses.
Especially when fees are calculated on a token basis, it's best to determine the number of tokens using the model's own tokenization method, under conditions close to actual usage.
Personally, when considering the monthly budget for a Generative AI service and I don't want to spend time and effort estimating the exact number of tokens, I either use Generative AI itself for the calculation or deliberately overestimate by assuming 1 character = 1 token, as in the sketch below.
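To make that overestimation approach concrete, here is a minimal Python sketch. Note that both the whitespace split and the 1 character = 1 token assumption are deliberate simplifications for budgeting, not real tokenizers:

```python
# Rough token-count estimates for budgeting only; real models use their
# own tokenizers (e.g., BPE), so actual billed token counts will differ.

def estimate_tokens_by_words(text: str) -> int:
    """Naive estimate: one token per whitespace-separated word."""
    return len(text.split())

def estimate_tokens_upper_bound(text: str) -> int:
    """Deliberate overestimate: assume 1 character = 1 token."""
    return len(text)

text = "Amazon Bedrock is amazing!"
print(estimate_tokens_by_words(text))     # 4 ("amazing!" stays one word here)
print(estimate_tokens_upper_bound(text))  # 26 (a safe upper bound for cost estimates)
```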
List and Features of Available Models
Based on the product page of Amazon Bedrock – AWS and the AWS Management Console's Amazon Bedrock Model Providers, I compiled the data below as of the time of writing this article.
* According to Amazon Bedrock Is Now Generally Available – Build and Scale Generative AI Applications with Foundation Models, Meta Llama-2-13b-chat and Meta Llama-2-70b-chat are set to be released soon.
* Amazon Titan Text G1 - Express supports the creation of custom models fine-tuned with unique data based on the base model.
* Amazon Titan Embeddings G1 - Text is a model that converts text input (words, phrases, larger units of text, etc.) into a numerical representation (embedding) that captures the semantic content of the text.
| Model Provider | Model | Model ID | Max tokens | Modality (Data Type) | Languages | Supported use cases |
|---|---|---|---|---|---|---|
| AI21 Labs | Jurassic-2 Ultra (v1) | ai21.j2-ultra-v1 | 8191 | Text | English, Spanish, French, German, Portuguese, Italian, Dutch | Open book question answering, summarization, draft generation, information extraction, ideation |
| AI21 Labs | Jurassic-2 Mid (v1) | ai21.j2-mid-v1 | 8191 | Text | English, Spanish, French, German, Portuguese, Italian, Dutch | Open book question answering, summarization, draft generation, information extraction, ideation |
| Amazon | Titan Embeddings G1 - Text (v1.2) | amazon.titan-embed-text-v1 | 8k | Embedding | 100+ languages | Translating text inputs (words, phrases, or possibly large units of text) into numerical representations (embeddings) that contain the semantic meaning of the text |
| Amazon | Titan Text G1 - Express (v1 - preview) | amazon.titan-text-express-v1 | 8k | Text | English | Open-ended text generation, brainstorming, summarization, code generation, table creation, data formatting, paraphrasing, chain of thought, rewriting, extraction, Q&A, chat |
| Anthropic | Claude v1.3 | anthropic.claude-v1 | 100k | Text | English and multiple other languages | Question answering, information extraction, removing PII, content generation, multiple choice classification, roleplay, comparing text, summarization, document Q&A with citation |
| Anthropic | Claude v2 | anthropic.claude-v2 | 100k | Text | English and multiple other languages | Question answering, information extraction, removing PII, content generation, multiple choice classification, roleplay, comparing text, summarization, document Q&A with citation |
| Anthropic | Claude Instant v1.2 | anthropic.claude-instant-v1 | 100k | Text | English and multiple other languages | Question answering, information extraction, removing PII, content generation, multiple choice classification, roleplay, comparing text, summarization, document Q&A with citation |
| Cohere | Command (v14.6) | cohere.command-text-v14 | 4096 | Text | English | Text generation, text summarization |
| Stability AI | Stable Diffusion XL (v0.8 - preview) | stability.stable-diffusion-xl-v0 | 8192 | Image | English | Image generation, image editing |
| Meta | Llama-2-13b-chat (* Coming Soon) | (* Unknown) | 4k | Text | English | Assistant-like chat |
| Meta | Llama-2-70b-chat (* Coming Soon) | (* Unknown) | 4k | Text | English | Assistant-like chat |
Model Pricing
Based on the Amazon Bedrock Pricing page, I have summarized the data available at the time of writing this article. If no pricing is listed for a model, it indicates that the pricing option is not offered, or the functionality to customize the model is not supported.
* Meta Llama-2-13b-chat and Meta Llama-2-70b-chat are expected to be released soon, and there was no pricing information available at the time I wrote this article.
Text Model Pricing
The pricing for text-based models is set based on the following criteria (a worked cost example follows the table below):
- On-Demand: Pricing is calculated per 1,000 input tokens and per 1,000 output tokens (it's not based on time).
- Provisioned Throughput: You commit to a time-based payment for a specified period, ensuring sufficient throughput for large-scale use and other requirements. Commitment options include none, 1 month, and 6 months, with longer durations offering discounts.
- Model customization (Fine-tuning): When creating a custom model using Fine-tuning, training fees are incurred per 1,000 tokens, and there is a monthly storage fee for each custom model.
| Model Provider | Model | On-Demand (per 1,000 input tokens) | On-Demand (per 1,000 output tokens) | Provisioned Throughput (per hour per model) | Model customization through Fine-tuning |
|---|---|---|---|---|---|
| AI21 Labs | Jurassic-2 Ultra | 0.0188 USD | 0.0188 USD | - | - |
| AI21 Labs | Jurassic-2 Mid | 0.0125 USD | 0.0125 USD | - | - |
| Amazon | Titan Embeddings G1 - Text | 0.0001 USD | N/A | no commitment: N/A, 1-month commitment: 6.40 USD, 6-month commitment: 5.10 USD | - |
| Amazon | Titan Text G1 - Express | 0.0013 USD | 0.0017 USD | no commitment: 20.50 USD, 1-month commitment: 18.40 USD, 6-month commitment: 14.80 USD | Train (per 1,000 tokens): 0.0008 USD, Store each custom model (per month): 1.95 USD |
| Anthropic | Claude v1.3 | 0.01102 USD | 0.03268 USD | no commitment: N/A, 1-month commitment: 63.00 USD, 6-month commitment: 35.00 USD | - |
| Anthropic | Claude v2 | 0.01102 USD | 0.03268 USD | no commitment: N/A, 1-month commitment: 63.00 USD, 6-month commitment: 35.00 USD | - |
| Anthropic | Claude Instant v1.2 | 0.00163 USD | 0.00551 USD | no commitment: N/A, 1-month commitment: 39.60 USD, 6-month commitment: 22.00 USD | - |
| Cohere | Command | 0.0015 USD | 0.0020 USD | - | - |
| Meta | Llama-2-13b-chat (* Coming Soon) | (* Unknown) | (* Unknown) | (* Unknown) | (* Unknown) |
| Meta | Llama-2-70b-chat (* Coming Soon) | (* Unknown) | (* Unknown) | (* Unknown) | (* Unknown) |
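As a quick worked example of On-Demand pricing, the sketch below applies the Claude Instant v1.2 rates from the table above to a hypothetical request of 2,000 input tokens and 1,000 output tokens (the token counts are made up for illustration):

```python
# Claude Instant v1.2 On-Demand rates from the pricing table above (USD per 1,000 tokens).
INPUT_RATE_PER_1K = 0.00163
OUTPUT_RATE_PER_1K = 0.00551

# Hypothetical request size, for illustration only.
input_tokens = 2000
output_tokens = 1000

# On-Demand cost: input and output token counts are each billed per 1,000 tokens.
cost = (input_tokens / 1000) * INPUT_RATE_PER_1K + (output_tokens / 1000) * OUTPUT_RATE_PER_1K
print(f"{cost:.5f} USD per request")  # 0.00326 + 0.00551 = 0.00877 USD
```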
Image Model Pricing
Stability AI's Stable Diffusion XL, which handles images, is priced per image based on image quality and resolution for v0.8, and with Provisioned Throughput for v1.0.

| Model Provider | Model | Standard quality (<51 steps) (per image) | Premium quality (>51 steps) (per image) | Provisioned Throughput (per hour per model) | Model customization through Fine-tuning |
|---|---|---|---|---|---|
| Stability AI | Stable Diffusion XL (v0.8) | 512x512 or smaller: 0.018 USD, larger than 512x512: 0.036 USD | 512x512 or smaller: 0.036 USD, larger than 512x512: 0.072 USD | - | - |
| Stability AI | Stable Diffusion XL (v1.0) | - | - | no commitment: N/A, 1-month commitment: 49.86 USD, 6-month commitment: 46.18 USD | - |
Basic Usage of Amazon Bedrock
Getting Started & Preparation for Amazon Bedrock
To get started with Amazon Bedrock, go to the Model access screen of Amazon Bedrock in the AWS Management Console, click "Edit", select the models you want to use, and request access by clicking "Save changes".
(Amazon Bedrock > Model access - AWS Management Console)
Please note that for Anthropic models, you need to enter company information and the intended use case when making the request.
Once the request is approved, you can access and use the model.
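Once access has been granted, one way to check which foundation models are available to your account is the Bedrock API's ListFoundationModels operation. Below is a minimal Boto3 sketch, assuming a Boto3 version recent enough to include the bedrock client (as noted in the Lambda example later in this article, older bundled versions did not have it):

```python
import boto3

# The "bedrock" client covers control-plane operations such as listing models;
# invoking a model uses the separate "bedrock-runtime" client shown later.
bedrock_client = boto3.client('bedrock', region_name='us-east-1')

response = bedrock_client.list_foundation_models()
for summary in response['modelSummaries']:
    print(summary['modelId'], '-', summary['providerName'])
```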
Amazon Bedrock Runtime API's InvokeModel, InvokeModelWithResponseStream, and Parameters
Here, I will explain the APIs needed to actually use Amazon Bedrock. There are mainly two types of APIs related to Amazon Bedrock: the Bedrock API and the Bedrock Runtime API.
The Bedrock API is used for operations like creating custom models through Fine-tuning or purchasing Provisioned Throughput for models.
On the other hand, the Bedrock Runtime API is used for the actual execution, where you specify the base or custom model, request input data (Prompt), and obtain output data (Completions) from the response.
The Amazon Bedrock Runtime API includes InvokeModel and InvokeModelWithResponseStream for actually invoking and using the model.
The InvokeModel of Amazon Bedrock Runtime API is an API that obtains all the contents of the response to a request at once.
Meanwhile, the InvokeModelWithResponseStream of the Amazon Bedrock Runtime API is an API that obtains the contents of the response to a request gradually, in small chunks of text, as a stream.
If you've used a chat-style Generative AI service before, you might have seen the results for a Prompt being displayed a few characters at a time. InvokeModelWithResponseStream can be used for this type of display.
The requests to InvokeModel and InvokeModelWithResponseStream of the Amazon Bedrock Runtime API share the following parameters:
- accept: MIME type of the inference body of the response. (Default: application/json)
- contentType: MIME type of the input data of the request. (Default: application/json)
- modelId: [Required] Identifier of the model. (e.g., ai21.j2-ultra-v1)
- body: [Required] Input data in the format specified by contentType. Specify the format of the body field according to the inference parameters supported by each model (see the example request bodies after the inference parameter explanations below).
Meaning of Common Inference Parameters
In the following, I will introduce examples of executing the Amazon Bedrock Runtime API, but before that, let me briefly explain the common inference parameters frequently used in the body of a model request. Please be aware that, for the sake of clarity, this explanation may not be strictly aligned with exact definitions.
- temperature: Adjusts the randomness and diversity of the model's output probability distribution. A high value tends to return answers with higher randomness and diversity, while a low value is more likely to return answers that are estimated with higher probability. The typical range is 0 - 1, but some models accept values exceeding 1. For instance, between temperature=1.0 and temperature=0.1, temperature=1.0 is inclined to provide answers with higher randomness and diversity, whereas temperature=0.1 tends to return more probable answers.
- topK: Adjusts randomness and diversity by limiting the candidates to the top K most probable tokens. The optimal range varies depending on the model used. For example, with topK=10, the model considers only the 10 highest-probability tokens when generating each output token. Put simply, topK limits the range of selectable tokens by count, thus adjusting diversity as well.
- topP: Adjusts randomness and diversity by sampling from the set of tokens whose cumulative probability doesn't exceed a specified P. The usual range is 0 - 1. For instance, with topP=0.9, the model considers tokens in decreasing order of probability until the cumulative probability exceeds 0.9. In simpler terms, topP limits the range of selectable tokens based on cumulative probability, and adjusts randomness and diversity accordingly.
- maxTokens: Limits the maximum number of tokens generated, to control the length of the produced text. For example, with maxTokens=800, the model ensures that the output doesn't exceed 800 tokens.
For detailed inference parameters of each model available in Amazon Bedrock, please refer to "Inference parameters for foundation models - Amazon Bedrock".
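Note also that each model provider uses its own field names for these parameters in the request body. As an illustration, here is how roughly equivalent settings looked for AI21 Jurassic-2 and Anthropic Claude at the time of writing (a sketch based on the inference parameter reference above; treat the exact field names as subject to change and verify against the current documentation):

```python
import json

# AI21 Jurassic-2 models take camelCase parameter names.
ai21_body = json.dumps({
    "prompt": "Please tell us all the states in the U.S.",
    "maxTokens": 800,
    "temperature": 0.7,
    "topP": 0.95
})

# Anthropic Claude models take snake_case names, expect a
# Human/Assistant-style prompt, and call the output limit
# "max_tokens_to_sample" rather than "maxTokens".
claude_body = json.dumps({
    "prompt": "\n\nHuman: Please tell us all the states in the U.S.\n\nAssistant:",
    "max_tokens_to_sample": 800,
    "temperature": 0.7,
    "top_k": 250,
    "top_p": 0.95
})
```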
Example of invoking Amazon Bedrock Runtime using AWS SDK for Python (Boto3)
Here, I introduce an example where I executed the Amazon Bedrock Runtime's invoke_model using the AWS SDK for Python (Boto3) in an AWS Lambda function.
At the time of writing this article, the default AWS SDK for Python (Boto3) bundled with AWS Lambda functions did not yet support the bedrock and bedrock-runtime clients. Therefore, the following example uses the bedrock-runtime client after adding the latest AWS SDK for Python (Boto3) to a Lambda Layer.
Execution Example (AWS Lambda function):
```python
import boto3
import json
import os

# Create the Bedrock Runtime client once, outside the handler,
# so it is reused across Lambda invocations.
region = os.environ.get('AWS_REGION')
bedrock_runtime_client = boto3.client('bedrock-runtime', region_name=region)

def lambda_handler(event, context):
    modelId = 'ai21.j2-ultra-v1'
    contentType = 'application/json'
    accept = 'application/json'
    # Request body in the AI21 Jurassic-2 inference parameter format.
    body = json.dumps({
        "prompt": "Please tell us all the states in the U.S.",
        "maxTokens": 800,
        "temperature": 0.7,
        "topP": 0.95
    })
    response = bedrock_runtime_client.invoke_model(
        modelId=modelId,
        contentType=contentType,
        accept=accept,
        body=body
    )
    # The response body is a stream; read and parse it as JSON.
    response_body = json.loads(response.get('body').read())
    return response_body
```

Execution Result Example (Return value of the above AWS Lambda function):
```json
{
  "id": 1234,
  "prompt": {
    "text": "Please tell us all the states in the U.S.",
    "tokens": [ ... ]
  },
  "completions": [
    {
      "data": {
        "text": "\nUnited States of America is a federal republic consisting of 50 states, a federal district (Washington, D.C., the capital city of the United States), five major territories, and various minor islands. The 50 states are Alabama, Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, Washington, West Virginia, Wisconsin, and Wyoming.",
        "tokens": [ ... ]
      },
      "finishReason": {
        "reason": "endoftext"
      }
    }
  ]
}
```
Note: As of the time I wrote this article, the latest AWS SDK for Python (Boto3) provides the invoke_model_with_response_stream command for Amazon Bedrock Runtime.
However, I plan to explain the details in another article, so I will omit it in this article.
AWS CLI Implementation Example for Amazon Bedrock Runtime's invoke-model
In this section, I introduce an implementation example of Amazon Bedrock Runtime's invoke-model using the AWS CLI. As of the time of writing this article, the Amazon Bedrock Runtime API was not yet supported by AWS CLI Version 2.
Therefore, the following example was executed by separately installing AWS CLI Version 1, which supported the Amazon Bedrock Runtime API.
Format:
```bash
aws bedrock-runtime invoke-model \
    --region [Region] \
    --model-id "[modelId]" \
    --content-type "[contentType]" \
    --accept "[accept]" \
    --body "[body]" [Output FileName]
```
Implementation Example:
```bash
aws bedrock-runtime invoke-model \
    --region us-east-1 \
    --model-id "ai21.j2-ultra-v1" \
    --content-type "application/json" \
    --accept "application/json" \
    --body "{\"prompt\": \"Please tell us all the states in the U.S.\", \"maxTokens\": 800, \"temperature\": 0.7, \"topP\": 0.95}" invoke-model-output.txt
```
Response Example:
* Displayed on screen:
```json
{"contentType": "application/json"}
```
* File Content (invoke-model-output.txt):
```json
{"id": 1234, "prompt": {"text": "Please tell us all the states in the U.S.", "tokens": [ ... ]}, "completions": [{"data": {"text": "\nUnited States of America is a federal republic consisting of 50 states, a federal district (Washington, D.C., the capital city of the United States), five major territories, and various minor islands. The 50 states are Alabama, Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, Washington, West Virginia, Wisconsin, and Wyoming.", "tokens": [ ... ]}, "finishReason": {"reason": "endoftext"}}]}
```
Note: As of the time of writing this article, the AWS CLI does not have an invoke-model-with-response-stream command for Amazon Bedrock Runtime.
Summary
In this article, I introduced reference materials for Amazon Bedrock, model features, pricing, basic usage, explanations of terms like tokens and inference parameters, and examples of the Runtime API. While compiling this information, I realized that with Amazon Bedrock you can choose from a variety of models according to use case and call them via the AWS SDK or AWS CLI, interfaces that are highly compatible with other AWS services. I plan to continue monitoring Amazon Bedrock for updates, implementation methods, and its integration with other services in the future.
Written by Hidekazu Konishi