Getting started with Amazon SageMaker JumpStart

Overview

Amazon SageMaker JumpStart is a machine learning (ML) hub that can help you accelerate your ML journey. Explore built-in algorithms, pretrained models from model hubs, pretrained foundation models, and prebuilt solutions for common use cases. To get started, see the documentation or example notebooks that you can run quickly.
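As a rough sketch of what "getting started" looks like in code, the SageMaker Python SDK can deploy a JumpStart model to a real-time endpoint. The model ID below is an assumption for illustration; look up the exact ID for a model in the JumpStart UI or with the SDK's notebook utilities.

```python
# Sketch: deploying a JumpStart foundation model with the SageMaker Python SDK.
# The model ID is an assumed example; confirm it in the JumpStart catalog.

def deploy_and_query(model_id="huggingface-llm-falcon-7b-instruct-bf16"):
    # Imported inside the function so this sketch can be read and loaded
    # without an AWS session or the sagemaker package installed.
    from sagemaker.jumpstart.model import JumpStartModel

    model = JumpStartModel(model_id=model_id)
    predictor = model.deploy()  # provisions a real-time endpoint (incurs cost)

    response = predictor.predict(
        {"inputs": "What is Amazon SageMaker JumpStart?"}
    )
    predictor.delete_endpoint()  # clean up to stop billing
    return response
```

Running this requires an AWS account with SageMaker permissions; the endpoint bills until deleted.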

Product Type
Text Tasks
Vision Tasks
Tabular Tasks
Audio Tasks
Multimodal
Reinforcement Learning
Total results: 630
  • foundation model

    Text Generation

    Falcon 7B Instruct BF16

    Hugging Face
    Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by TII based on Falcon-7B and fine-tuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.
    Fine-tunable
  • foundation model

    Text Generation

    Falcon 40B Instruct BF16

    Hugging Face
    Falcon-40B-Instruct is a 40B-parameter causal decoder-only model built by TII based on Falcon-40B and fine-tuned on a mixture of Baize data. It is made available under the Apache 2.0 license.
    Fine-tunable
  • foundation model

    Text Generation

    BloomZ 7B1 FP16

    Hugging Face
    BloomZ 7b1 is an instruction-tuned model based on Bloom 7b1 and is thus capable of performing various zero-shot natural language processing tasks, as well as few-shot in-context learning tasks. With an appropriate prompt, it can perform zero-shot NLP tasks such as text summarization, common sense reasoning, natural language inference, question answering, sentence/sentiment classification, translation, and pronoun resolution.
    Fine-tunable
  • foundation model

    Text Generation

    Falcon 7B BF16

    Hugging Face
    Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
    Fine-tunable
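Models tagged "Fine-tunable" can be trained on your own data before deployment. A minimal sketch with the SDK's JumpStartEstimator follows; the model ID, S3 path, and hyperparameter override are assumptions for illustration.

```python
# Sketch: fine-tuning a "Fine-tunable" JumpStart model with JumpStartEstimator.
# The model ID, S3 URI, and hyperparameter name are assumed examples.

def fine_tune(model_id="huggingface-llm-falcon-7b-bf16",
              training_data_s3="s3://my-bucket/train/"):
    # Deferred import keeps the sketch loadable without sagemaker installed.
    from sagemaker.jumpstart.estimator import JumpStartEstimator

    estimator = JumpStartEstimator(
        model_id=model_id,
        hyperparameters={"epoch": "1"},  # hypothetical override
    )
    estimator.fit({"training": training_data_s3})

    # The fine-tuned model deploys the same way as the pretrained one.
    return estimator.deploy()
```

Each fine-tunable model documents its expected training data format and supported hyperparameters; check the model's detail page before running a job.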
  • foundation model

    Text Generation

    GPT NeoXT Chat Base 20B FP16

    Hugging Face
    As part of OpenChatKit, GPT-NeoXT-Chat-Base-20B-v0.16 is a 20B parameter language model, fine-tuned from EleutherAI's GPT-NeoX with over 40 million instructions on 100% carbon negative compute.
    Deploy only
  • foundation model

    Featured
    Text Classification

    Meta Llama Prompt Guard 86M

    Meta
    Prompt Guard is a classifier model trained on a large corpus of attacks, capable of detecting both explicitly malicious prompts as well as data that contains injected inputs.
    Deploy only
  • foundation model

    Text Generation

    Falcon 40B BF16

    Hugging Face
    Falcon-40B is a 40B-parameter causal decoder-only model built by TII and trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
    Fine-tunable
  • foundation model

    Text Generation

    Falcon2-11B

    Hugging Face
    Falcon2-11B is an 11B-parameter causal decoder-only model built by TII and trained on over 5,000B tokens of RefinedWeb enhanced with curated corpora.
    Deploy only
  • foundation model

    Featured
    Image to Text

    EXAONE Atelier - Image to Text

    LG AI Research

    EXAONE Atelier Image to Text is a zero-shot image captioning model trained on 3.5 million images and accompanying text data, built upon LG AI Research's commercially licensed datasets.

  • foundation model

    Featured
    Text Generation

    Llama 3

    Meta

    Llama 3 from Meta. The Llama 3 family comes in 8B and 70B sizes, with pretrained and instruct versions. Available in us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-northeast-1 (Tokyo).

    Deploy only
  • foundation model

    Featured
    Text Generation

    Llama 2 70B

    Meta
    70B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 2 7B

    Meta
    7B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
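The full catalog can also be browsed programmatically rather than through the UI, using the SDK's notebook utilities. The filter string below is an assumption; see the SDK documentation for the supported filter keys and operators.

```python
# Sketch: listing JumpStart models programmatically.
# The filter expression is an assumed example of the supported syntax.

def list_text_generation_models():
    # Deferred import keeps the sketch loadable without sagemaker installed.
    from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

    # Returns model IDs matching the filter, e.g. text generation models.
    return list_jumpstart_models(filter="task == textgeneration")
```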