Amazon Bedrock FAQs
General
What is Amazon Bedrock?
Which FMs are available in Amazon Bedrock?
Amazon Bedrock customers can choose from some of the most cutting-edge FMs available today. This includes models from:
AI21 Labs
Amazon
Anthropic
Cohere
DeepSeek
Luma AI
Meta
Mistral AI
poolside (coming soon)
Stability AI
TwelveLabs (coming soon)
See here for supported foundation models from each provider:
https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html
Why should I use Amazon Bedrock?
There are five reasons to use Amazon Bedrock for building generative AI applications.
Choice of leading FMs: Amazon Bedrock offers an easy-to-use developer experience to work with a broad range of high-performing FMs from Amazon and leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI. You can quickly experiment with a variety of FMs in the playground, and use a single API for inference regardless of the models you choose, giving you the flexibility to use FMs from different providers and keep up to date with the latest model versions with minimal code changes (a sketch of this single-API pattern follows this list).
Easy model customization with your data: Privately customize FMs with your own data through a visual interface without writing any code. Simply select the training and validation data sets stored in Amazon Simple Storage Service (Amazon S3) and, if required, adjust the hyperparameters to achieve the best possible model performance.
Fully managed agents that can invoke APIs dynamically to execute tasks: Build agents that execute complex business tasks—from booking travel and processing insurance claims to creating ad campaigns, preparing tax filings, and managing your inventory—by dynamically calling your company systems and APIs. Fully managed agents for Amazon Bedrock extend the reasoning capabilities of FMs to break down tasks, create an orchestration plan, and execute it.
Native support for RAG to extend the power of FMs with proprietary data: With Amazon Bedrock Knowledge Bases, you can securely connect FMs to your data sources for retrieval augmentation—from within the managed service—extending the FM’s already powerful capabilities and making it more knowledgeable about your specific domain and organization.
Data security and compliance certifications: Amazon Bedrock offers several capabilities to support security and privacy requirements. Amazon Bedrock is in scope for common compliance standards such as Service and Organization Control (SOC) and International Organization for Standardization (ISO), is Health Insurance Portability and Accountability Act (HIPAA) eligible, and can be used in compliance with the General Data Protection Regulation (GDPR). Amazon Bedrock is CSA Security Trust Assurance and Risk (STAR) Level 2 certified, which validates the use of best practices and the security posture of AWS cloud offerings. With Amazon Bedrock, your content is not used to improve the base models and is not shared with any model providers. Your data in Amazon Bedrock is always encrypted in transit and at rest, and you can optionally encrypt the data using your own keys. You can use AWS PrivateLink with Amazon Bedrock to establish private connectivity between your FMs and your Amazon Virtual Private Cloud (Amazon VPC) without exposing your traffic to the internet.
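As an illustration of the single-API pattern mentioned above, here is a minimal sketch using the AWS SDK for Python (boto3) and the Converse API. The model IDs are illustrative; check the Bedrock console for the IDs available in your account and Region.

```python
# Minimal sketch: the same inference call shape across models from different providers.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = [{"role": "user", "content": [{"text": "Summarize the benefits of RAG in two sentences."}]}]

# Illustrative model IDs; actual availability varies by account and Region.
for model_id in (
    "anthropic.claude-3-5-haiku-20241022-v1:0",  # Anthropic
    "meta.llama3-1-70b-instruct-v1:0",           # Meta
):
    response = bedrock_runtime.converse(
        modelId=model_id,
        messages=prompt,
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    print(model_id, "->", response["output"]["message"]["content"][0]["text"])
```

Because only the modelId changes between calls, switching providers or adopting a newer model version typically requires no other code changes.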
How can I get started with Amazon Bedrock?
See the Amazon Bedrock getting started course and the Amazon Bedrock user guide.
What are the most common use cases for Amazon Bedrock?
You can quickly get started with use cases such as the following:
Create new pieces of original content, such as short stories, essays, social media posts, and web page copy.
Search, find, and synthesize information to answer questions from a large corpus of data.
Create realistic and artistic images of various subjects, environments, and scenes from language prompts.
Help customers find what they’re looking for with product recommendations that are more relevant and contextual than word matching.
Get a summary of textual content such as articles, blog posts, books, and documents to get the gist without having to read the full content.
Suggest products that match shopper preferences and past purchases.
Explore more generative AI use cases.
What is Amazon Bedrock Playground?
In which AWS Regions is Amazon Bedrock available?
How do I customize a model on Amazon Bedrock?
Can I train a model and deploy it on Amazon Bedrock?
What is latency-optimized inference in Amazon Bedrock?
Available in public preview, latency-optimized inference in Amazon Bedrock offers reduced latency without compromising accuracy. As verified by Anthropic, with latency-optimized inference on Amazon Bedrock, Claude 3.5 Haiku runs faster on AWS than anywhere else. Additionally, with latency-optimized inference in Bedrock, Llama 3.1 70B and 405B run faster on AWS than on any other major cloud provider. Using purpose-built AI chips like AWS Trainium2 and advanced software optimizations in Amazon Bedrock, customers can access more options to optimize their inference for a particular use case.
Key Features:
Reduces response times for foundation model interactions
Maintains accuracy while improving speed
Requires no additional setup or model fine-tuning
Supported Models: Anthropic's Claude 3.5 Haiku and Meta's Llama 3.1 models 405B and 70B
Availability: The US East (Ohio) Region via cross-region inference
To get started, visit the Amazon Bedrock console. For more information visit the Amazon Bedrock documentation.
How do we get started with latency-optimized inference in Amazon Bedrock?
Accessing latency-optimized inference in Amazon Bedrock requires no additional setup or model fine-tuning, allowing for immediate enhancement of existing generative AI applications with faster response times. You can toggle on the “Latency optimized” parameter while invoking the Bedrock inference API.
To get started, visit the Amazon Bedrock console. For more information visit the Amazon Bedrock documentation.
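A minimal sketch of that toggle, assuming the Converse API’s performanceConfig parameter; the cross-Region inference profile ID is illustrative, and model support for latency optimization varies by Region.

```python
# Minimal sketch: requesting latency-optimized inference through the Converse API.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-2")  # US East (Ohio)

response = bedrock_runtime.converse(
    modelId="us.anthropic.claude-3-5-haiku-20241022-v1:0",  # illustrative inference profile ID
    messages=[{"role": "user", "content": [{"text": "Give me a one-line status update."}]}],
    performanceConfig={"latency": "optimized"},  # default is "standard"
)
print(response["output"]["message"]["content"][0]["text"])
```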
Agents
What are Amazon Bedrock Agents?
How can I connect FMs to my company data sources?
What are some use cases for Amazon Bedrock Agents?
How do Amazon Bedrock Agents help improve developer productivity?
With agents, developers have seamless support for monitoring, encryption, user permissions, versioning, and API invocation management without writing custom code. Amazon Bedrock Agents automate the prompt engineering and orchestration of user-requested tasks. Developers can use the agent-created prompt template as a baseline and refine it further for an enhanced user experience. They can update the user input, orchestration plan, and the FM response. With access to the prompt template, developers have better control over the agent orchestration.
With fully managed agents, you don’t have to worry about provisioning or managing infrastructure and can take applications to production faster.
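As an illustration, a minimal sketch of calling an existing agent with the AWS SDK for Python (boto3); the agent ID and alias ID are placeholders for resources you have already created and configured with your own action groups.

```python
# Minimal sketch: invoking an existing Bedrock agent alias.
import uuid
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.invoke_agent(
    agentId="AGENT_ID",             # placeholder
    agentAliasId="AGENT_ALIAS_ID",  # placeholder
    sessionId=str(uuid.uuid4()),    # reuse the same ID to keep multi-turn context
    inputText="Book a rental car in Seattle for next Tuesday.",
)

# The agent's reply is returned as an event stream of chunks.
answer = "".join(
    event["chunk"]["bytes"].decode("utf-8")
    for event in response["completion"]
    if "chunk" in event
)
print(answer)
```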
Security
Is the content processed by Amazon Bedrock moved outside the AWS Region where I am using Amazon Bedrock?
Are user inputs and model outputs made available to third-party model providers?
What security and compliance standards does Amazon Bedrock support?
Will AWS and third-party model providers use customer inputs to or outputs from Amazon Bedrock to train Amazon Nova, Amazon Titan or any third-party models?
SDK
What SDKs are supported for Amazon Bedrock?
What SDKs support streaming functionality?
Billing and support
How much does Amazon Bedrock cost?
What support is provided for Amazon Bedrock?
How can I track the input and output tokens?
Why do I see a billing entry for AWS Marketplace for my usage of Amazon Bedrock?
Customization
How can I securely use my data to customize FMs available through Amazon Bedrock?
How does Amazon Bedrock ensure my data used in fine-tuning remains private and confidential?
Does Amazon Bedrock support continued pretraining?
Why should I use continued pretraining in Amazon Bedrock?
How does the continued pretraining feature relate to other AWS services?
How do I use continued pre-training?
Amazon Titan
What are Amazon Titan models?
Where can I learn more about the data processed to develop and train Amazon Titan FMs?
Knowledge Bases / RAG
Which data sources can I connect to Amazon Bedrock Knowledge Bases?
How does Amazon Bedrock Knowledge Base retrieve data from structured data sources?
Does Amazon Bedrock Knowledge Bases support multi-turn conversations?
Does Amazon Bedrock Knowledge Bases provide source attribution for retrieved information?
What multi-modal capabilities does Amazon Bedrock Knowledge Bases offer?
What multi-modal data formats does Amazon Bedrock Knowledge Bases support?
What are the different parsing options available in Amazon Bedrock Knowledge Bases?
How does Amazon Bedrock Knowledge Bases ensure data security and manage workflow complexities?
Model evaluation
What is Model Evaluation on Amazon Bedrock?
Against what metrics can I evaluate FMs?
What is the difference between human-based and automatic evaluations?
How does automatic evaluation work?
How does human evaluation work?
Responsible AI with Amazon Bedrock Guardrails
What is Amazon Bedrock Guardrails?
What are the safeguards available in Amazon Bedrock Guardrails?
Guardrails help you define a set of six policies to safeguard your generative AI applications. You can configure the following policies in Amazon Bedrock Guardrails (a configuration sketch follows this list):
Multimodal content filters – Configure thresholds to detect and filter harmful text and/or image content across categories including hate, insults, sexual, violence, misconduct, and prompt attacks.
Denied topics – Define a set of topics that are undesirable in the context of your application. The filter will help block them if detected in user queries or model responses.
Word filters – Configure filters to help block undesirable words, phrases, and profanity (exact match). Such words can include offensive terms, competitor names, etc.
Sensitive information filters – Configure filters to help block or mask sensitive information, such as personally identifiable information (PII), or custom regex patterns in user inputs and model responses. Blocking or masking is based on probabilistic detection of sensitive information in standard formats, such as Social Security numbers, dates of birth, and addresses. You can also configure regular-expression-based detection of patterns for custom identifiers.
Contextual grounding check – Helps detect and filter hallucinations when responses are not grounded in the source information (for example, factually inaccurate or new information) or are irrelevant to the user’s query or instruction.
Automated Reasoning checks – Help detect factual inaccuracies in generated content, suggest corrections, and explain why responses are accurate by checking against a structured, mathematical representation of knowledge called an Automated Reasoning Policy.
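As an illustration, a minimal sketch of how several of these policies might be combined in one CreateGuardrail call using the AWS SDK for Python (boto3). The topic definition, thresholds, PII action, and messages below are illustrative choices, not recommended values.

```python
# Minimal sketch: configuring several guardrail policies in a single guardrail.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

guardrail = bedrock.create_guardrail(
    name="support-app-guardrail",
    description="Example guardrail combining content, topic, word, PII, and grounding policies.",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "InvestmentAdvice",  # illustrative denied topic
                "definition": "Recommendations about specific stocks, funds, or other investments.",
                "type": "DENY",
            }
        ]
    },
    wordPolicyConfig={"managedWordListsConfig": [{"type": "PROFANITY"}]},
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [{"type": "US_SOCIAL_SECURITY_NUMBER", "action": "ANONYMIZE"}]
    },
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.75},
            {"type": "RELEVANCE", "threshold": 0.75},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
print(guardrail["guardrailId"], guardrail["version"])
```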
What modalities are supported with Bedrock Guardrails?
Can I use Guardrails with all available FMs and tools on Amazon Bedrock?
What languages are supported with Bedrock Guardrails?
Do you have a list of off-the-shelf (built-in) guardrails, and what can be customized?
There are five guardrail policies, each with different off-the-shelf protections; a sketch of attaching a configured guardrail to an inference request follows this list:
Content filters – Six off-the-shelf categories (hate, insults, sexual, violence, misconduct (including criminal activity), and prompt attacks (jailbreak and prompt injection)). Each category's filtering threshold can be further customized (low/medium/high) for both text and image content.
Denied topics – Custom topics that customers can define using a simple natural language description.
Sensitive information filters – Come with 30+ off-the-shelf PII types and can be further customized by adding proprietary information that the customer considers sensitive.
Word filters – Come with off-the-shelf profanity filtering and can be further customized with custom words.
Contextual grounding checks – Help detect hallucinations for RAG, summarization, and conversational applications, where source information can be used as a reference to validate the model response.
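Once configured (off the shelf or customized as described above), a guardrail is referenced at inference time. A minimal sketch, assuming the Converse API's guardrailConfig parameter; the guardrail and model identifiers are placeholders.

```python
# Minimal sketch: attaching an existing guardrail to an inference request.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-5-haiku-20241022-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": [{"text": "What is our refund policy?"}]}],
    guardrailConfig={
        "guardrailIdentifier": "GUARDRAIL_ID",  # placeholder
        "guardrailVersion": "1",
        "trace": "enabled",  # returns trace details about which policies intervened
    },
)
print(response["stopReason"])  # "guardrail_intervened" when a policy blocks content
print(response["output"]["message"]["content"][0]["text"])
```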
How can I enforce Guardrails across my organization?
Does AWS offer an intellectual property indemnity covering copyright claims for its generative AI services?
Do default Guardrails automatically detect social security numbers or phone numbers?
What is the pricing model for using Amazon Bedrock Guardrails?
Are customers able to run automated tests on the effectiveness of the Guardrails they set? Is there a “test case builder” for ongoing monitoring?
How is validation using Automated Reasoning checks different from Contextual Grounding checks?
What image formats are supported for multimodal content?
Marketplace
What is Amazon Bedrock Marketplace?
Why should I use Amazon Bedrock Marketplace?
How do I get started with Amazon Bedrock Marketplace?
Can I fine-tune Amazon Bedrock Marketplace models?
Data Automation
What is Bedrock Data Automation?
Why should I use Bedrock Data Automation?
What does Amazon Bedrock Data Automation manage on my behalf?
What is a blueprint?
What features and file formats are supported per modality by Amazon Bedrock Data Automation?
Documents
Bedrock Data Automation (BDA) supports both standard output and custom output for documents.
Standard output provides extraction of text from documents and generative output such as document summaries and captions for tables/figures/diagrams. Output is returned in reading order and can optionally be grouped by layout element, including headers/footers/titles/tables/figures/diagrams. Standard output is used for BDA integration with Bedrock Knowledge Bases.
Custom output leverages blueprints, which specify output requirements using natural language or a schema editor. Blueprints include a list of fields to extract and a data format for each field.
Bedrock Data Automation supports PDF, PNG, JPG, and TIFF, a maximum of 1,500 pages, and a maximum file size of 500 MB per API request. By default, BDA supports 50 concurrent jobs and 10 transactions per second per customer.
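For illustration, a minimal sketch of submitting a document for asynchronous processing. The client, operation, and parameter names reflect the Bedrock Data Automation runtime API as best understood here, and the ARNs and S3 URIs are placeholders; verify them against the current BDA API reference.

```python
# Minimal sketch: asynchronous document processing with Bedrock Data Automation.
import boto3

bda_runtime = boto3.client("bedrock-data-automation-runtime", region_name="us-west-2")

job = bda_runtime.invoke_data_automation_async(
    inputConfiguration={"s3Uri": "s3://my-bucket/input/contract.pdf"},  # placeholder
    outputConfiguration={"s3Uri": "s3://my-bucket/output/"},            # placeholder
    dataAutomationConfiguration={
        # Placeholder project ARN pointing at a BDA project (standard and/or custom output).
        "dataAutomationProjectArn": "arn:aws:bedrock:us-west-2:111122223333:data-automation-project/EXAMPLE",
        "stage": "LIVE",
    },
    # Placeholder cross-Region data automation profile ARN.
    dataAutomationProfileArn="arn:aws:bedrock:us-west-2:111122223333:data-automation-profile/us.data-automation-v1",
)

# Poll for completion; results are written to the configured output S3 location.
status = bda_runtime.get_data_automation_status(invocationArn=job["invocationArn"])
print(status["status"])
```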
Images
Bedrock Data Automation supports both standard output and custom output for images.
Standard output will provide summarization, detected explicit content, detected text, logo detection, and Interactive Advertising Bureau (IAB) ad taxonomy for images. Standard output will be used for BDA integration with Bedrock Knowledge Bases.
Custom Output leverages blueprints, which specify output requirements using natural language or a schema editor. Blueprints include a list of fields to extract and a data format for each field.
Bedrock Data Automation supports JPG, PNG, a max resolution of 4K, and a max file size of 5 MB per API request. By default, BDA supports a max concurrency of 20 images at 10 transactions per second (TPS) per customer.
Videos
Bedrock Data Automation supports standard output for videos.
Standard output will provide a full video summary, chapter segmentation, chapter summaries, full audio transcription, speaker identification, detected explicit content, detected text, logo detection, and Interactive Advertising Bureau (IAB) taxonomy for videos. Full video summary is optimized for content with descriptive dialogue such as product overviews, trainings, newscasts, and documentaries.
Bedrock Data Automation supports MOV and MKV with H.264, VP8, VP9, a max video duration of 4 hours, and a max file size of 2 GB per API request. By default, BDA supports a max concurrency of 20 videos at 10 transactions per second (TPS) per customer.
Audio
Bedrock Data Automation supports standard output for audio.
Standard output will provide summarization including chapter summarization, full transcription, and explicit content detection for audio files.
Bedrock Data Automation supports FLAC, M4A, MP3, MP4, Ogg, WebM, WAV, a max audio duration of 4 hours, and a max file size of 2 GB per API request.
In which AWS regions is Amazon Bedrock Data Automation available?
What languages does Amazon Bedrock Data Automation support?
Amazon Bedrock in SageMaker Unified Studio
What is Amazon Bedrock in SageMaker Unified Studio?
How do I access Amazon Bedrock's capabilities in Amazon SageMaker Unified Studio?
To access Amazon Bedrock's capabilities within Amazon SageMaker Unified Studio, developers and their admins will need to follow these steps:
Create a new domain in Amazon SageMaker Unified Studio.
Enable the Gen AI application development project profile.
Access Amazon Bedrock through the Generative AI Playground (Discover) and Generative AI App Development (Build) sections, using their company's single sign-on (SSO) credentials within Amazon SageMaker Unified Studio.
What are the key features and capabilities of Amazon Bedrock in Amazon SageMaker Unified Studio? How is it different from Amazon Bedrock Studio and Amazon Bedrock IDE?
New features include a model hub for side-by-side AI model comparison, an expanded playground supporting chat, image, and video interactions, and improved Knowledge Base creation with web crawling. It introduces Agent creation for more complex chat applications and simplifies sharing of AI apps and prompts within organizations. It also offers access to underlying application code and the ability to export chat apps as CloudFormation templates. By managing AWS infrastructure details, it enables users of various skill levels to create AI applications more efficiently, making it a more versatile and powerful tool than its predecessor.
Amazon Bedrock IDE was renamed to better represent the core capability of Amazon Bedrock being accessed through Amazon SageMaker Unified Studio's governed environment.
How does Amazon Bedrock in SageMaker Unified Studio enable collaboration among teams within an organization?
Why is Amazon Bedrock being integrated into Amazon SageMaker Unified Studio?
The unified environment allows seamless collaboration among developers of various skill levels throughout the development lifecycle - from data preparation to model development and generative AI application building. Teams can access integrated tools for knowledge base creation, guardrail configuration, and high-performing generative AI application development, all within a secure and governed framework.
Within Amazon SageMaker Unified Studio, developers can effortlessly switch between different tools based on their needs, combining analytics, machine learning, and generative AI capabilities in a single workspace. This consolidated approach reduces development complexity and accelerates time-to-value for generative AI projects. By bringing Amazon Bedrock into Amazon SageMaker Unified Studio, AWS lowers the barriers to entry for generative AI development while maintaining enterprise-grade security and governance, ultimately enabling organizations to innovate faster and more effectively with generative AI.
When should I use Amazon Bedrock's capabilities in Amazon SageMaker Unified Studio?
Amazon Bedrock's capabilities in Amazon SageMaker Unified Studio are ideal for enterprise teams who need a governed environment for collaboratively building and deploying generative AI applications. Through Amazon SageMaker Unified Studio, teams can access:
The Generative AI Playground in the Discover section enables teams to experiment with foundation models (FMs), test different models and configurations, compare model outputs, and collaborate on prompts and applications. This environment provides a seamless way for teams to evaluate and understand the capabilities of different models before implementing them in their applications.
The Generative AI App Development in the Build section provides teams with the tools needed to create production-ready generative AI applications. Teams can create and manage Knowledge Bases, implement Guardrails for responsible AI, develop Agents and Flows, and collaborate securely while maintaining governance and compliance controls. This environment is particularly valuable for organizations that require secure collaboration and seamless access to Amazon Bedrock's full range of capabilities while maintaining enterprise security and compliance standards.
How does Amazon Bedrock integrate with other AWS services within Amazon SageMaker Unified Studio to create generative AI applications?
Within Amazon SageMaker Unified Studio, Amazon Bedrock seamlessly integrates with Amazon SageMaker's analytics, machine learning (ML), and generative AI capabilities. Organizations can move from concept to production faster by prototyping and experimenting with foundation models in Amazon Bedrock, then easily transitioning to JupyterLab notebooks or code editors to integrate these resources into broader applications and workflows. This consolidated workspace streamlines complexity, enabling faster prototyping, iteration, and deployment of production-ready, responsible generative AI applications that align with specific business requirements.
Are there any limits or quotas on the usage of Amazon Bedrock in SageMaker Unified Studio?
What are the pricing and billing models for using Amazon Bedrock in SageMaker Unified Studio?
What are the Service Level Agreements (SLAs) for Amazon Bedrock in SageMaker Unified Studio?
What documentation and support resources are available for Amazon Bedrock in SageMaker Unified Studio?