Writer's Palmyra in Amazon Bedrock

Build fast with enterprise-grade, reliable Palmyra LLMs from Writer

Introducing Writer's Palmyra foundation models

Balance advanced reasoning capabilities with enterprise-grade reliability using Palmyra X hybrid models. Achieve superior performance on complex agentic workflows.

Palmyra X5

Palmyra X5 is Writer's most advanced model, purpose-built for creating and scaling AI agents across the enterprise. It delivers industry-leading speed and efficiency on context windows of up to 1 million tokens, powered by a novel transformer architecture and hybrid attention mechanisms. This enables faster inference and expanded memory for processing large volumes of enterprise data and tool calls, which is critical for scaling AI agents. With adaptive reasoning that dynamically adjusts strategy based on context, Palmyra X5 also supports code generation, structured outputs, and over 30 languages.
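
As a rough illustration of how Palmyra X5 might be invoked from Amazon Bedrock, the sketch below uses the boto3 Converse API. The model ID, Region, and prompt are assumptions for illustration; confirm the exact inference profile ID and Regional availability in the Amazon Bedrock console.

```python
# Minimal sketch: calling Palmyra X5 through the Amazon Bedrock Converse API
# with boto3. The model ID and Region below are assumptions; verify them in
# the Bedrock console before use.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

MODEL_ID = "us.writer.palmyra-x5-v1:0"  # assumed inference profile ID

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the key risks in this 10-K filing."}],
        }
    ],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
)

# The assistant's reply is returned as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```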

Palmyra X4

Top-ranked on Stanford HELM, Palmyra X4 achieves superior performance on complex tasks and agentic workflows. It combines a 128K-token context window with a suite of capabilities, including advanced reasoning, tool calling, LLM delegation, built-in RAG, code generation, structured outputs, multimodality, and multilingual support. Using enterprise-specific tools that extend the model's ability to take action, Palmyra X4 lets developers build apps and agents that update systems, perform transactions, send emails, trigger workflows, and more.
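
Tool calling with Palmyra X4 can be sketched through the same Converse API using a toolConfig definition. The model ID and the send_email tool below are assumptions made for illustration, not definitions from Writer or Amazon Bedrock documentation.

```python
# Minimal sketch of tool calling with Palmyra X4 via the Bedrock Converse API.
# The model ID and the "send_email" tool are hypothetical placeholders.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

MODEL_ID = "us.writer.palmyra-x4-v1:0"  # assumed inference profile ID

tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "send_email",  # hypothetical enterprise tool
                "description": "Send an email to a recipient.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "to": {"type": "string"},
                            "subject": {"type": "string"},
                            "body": {"type": "string"},
                        },
                        "required": ["to", "subject", "body"],
                    }
                },
            }
        }
    ]
}

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=[
        {
            "role": "user",
            "content": [{"text": "Email the Q3 summary to finance@example.com."}],
        }
    ],
    toolConfig=tool_config,
)

# When the model chooses to call a tool, stopReason is "tool_use" and the
# requested tool name and arguments appear in the response content blocks.
if response["stopReason"] == "tool_use":
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            print(block["toolUse"]["name"], block["toolUse"]["input"])
```

In a full agent loop, the application would execute the requested tool and return a toolResult message so the model can continue the workflow.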


Model versions

Palmyra X5

Palmyra X5 is Writer's most advanced model featuring a 1M token context window, enabling complex workflows and deep analysis with industry-leading speed and efficiency.

Max tokens: 1M

Languages: English, Spanish, French, German, Chinese, and more

Fine-tuning supported: No

Supported use cases: Complex app and agent development, code generation and deployment, deep research, and long-context analysis


Palmyra X4

Palmyra X4 is a high-performance model optimized for enterprise workflows, featuring advanced reasoning with a 128K context window.

Max tokens: 128K

Languages: English, Spanish, French, German, Chinese, and more

Fine-tuning supported: No

Supported use cases: Multi-step tasks and workflows, chat, text generation, and extracting knowledge from unstructured data
