Overview

Amazon SageMaker JumpStart is a machine learning (ML) hub that can help you accelerate your ML journey. Explore how to get started with built-in algorithms (with pretrained models from model hubs), pretrained foundation models, and prebuilt solutions for common use cases. To get started, see the documentation or the example notebooks that you can run quickly.
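As a sketch of how a model from the catalog below might be deployed programmatically with the SageMaker Python SDK: the model-ID strings here are assumptions derived from the JumpStart naming convention (framework, task, name, precision) and should be confirmed in the JumpStart console before use.

```python
# Hypothetical lookup from catalog display names to JumpStart model IDs.
# The ID strings are assumptions; verify them in the JumpStart UI.
MODEL_IDS = {
    "Falcon 7B Instruct BF16": "huggingface-llm-falcon-7b-instruct-bf16",
    "Falcon 40B Instruct BF16": "huggingface-llm-falcon-40b-instruct-bf16",
}

def jumpstart_model_id(catalog_name: str) -> str:
    """Map a catalog display name to its (assumed) JumpStart model ID."""
    return MODEL_IDS[catalog_name]

# Deployment sketch (not run here: it provisions a billed endpoint and
# requires AWS credentials plus the `sagemaker` package):
#   from sagemaker.jumpstart.model import JumpStartModel
#   model = JumpStartModel(model_id=jumpstart_model_id("Falcon 7B Instruct BF16"))
#   predictor = model.deploy()
#   predictor.predict({"inputs": "What is Amazon SageMaker JumpStart?"})

print(jumpstart_model_id("Falcon 7B Instruct BF16"))
```

Models tagged "Fine-tunable" in the list below additionally support `JumpStartEstimator`-style training before deployment; "Deploy only" models do not.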

Product type
Text tasks
Vision tasks
Tabular tasks
Audio tasks
Multimodal
Reinforcement learning
Total results: 630
  • foundation model

    Text Generation

    Falcon 7B Instruct BF16

    Hugging Face
Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by TII, based on Falcon-7B and fine-tuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.
    Fine-tunable
  • foundation model

    Text Generation

    Falcon 40B Instruct BF16

    Hugging Face
Falcon-40B-Instruct is a 40B-parameter causal decoder-only model built by TII, based on Falcon-40B and fine-tuned on a mixture of Baize data. It is made available under the Apache 2.0 license.
    Fine-tunable
  • foundation model

    Text Generation

    BloomZ 7B1 FP16

    Hugging Face
BloomZ 7b1 is an instruction-tuned model based on Bloom 7b1, capable of performing a variety of zero-shot natural language processing tasks as well as few-shot in-context learning tasks. With an appropriate prompt, it can perform zero-shot NLP tasks such as text summarization, common-sense reasoning, natural language inference, question answering, sentence/sentiment classification, translation, and pronoun resolution.
    Fine-tunable
  • foundation model

    Text Generation

    Falcon 7B BF16

    Hugging Face
Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
    Fine-tunable
  • foundation model

    Text Generation

    GPT NeoXT Chat Base 20B FP16

    Hugging Face
As part of OpenChatKit, GPT-NeoXT-Chat-Base-20B-v0.16 is a 20B-parameter language model, fine-tuned from EleutherAI's GPT-NeoX on over 40 million instructions using 100% carbon-negative compute.
    Deploy only
  • foundation model

    Featured
    Text Classification

    Meta Llama Prompt Guard 86M

    Meta
    Prompt Guard is a classifier model trained on a large corpus of attacks, capable of detecting both explicitly malicious prompts as well as data that contains injected inputs.
    Deploy only
  • foundation model

    Text Generation

    Falcon 40B BF16

    Hugging Face
Falcon-40B is a 40B-parameter causal decoder-only model built by TII and trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
    Fine-tunable
  • foundation model

    Text Generation

    Falcon2-11B

    Hugging Face
Falcon2-11B is an 11B-parameter causal decoder-only model built by TII and trained on over 5,000B tokens of RefinedWeb enhanced with curated corpora.
    Deploy only
  • foundation model

    Featured
    Image to Text

    EXAONE Atelier - Image to Text

    LG AI Research

The EXAONE Atelier Image to Text model is a zero-shot image captioning model trained on 3.5 million images with accompanying text data, built upon LG AI Research's commercially licensed datasets.

  • foundation model

    Featured
    Text Generation

    Llama 3

    Meta

Llama 3 from Meta. The Llama 3 family comes in 8B and 70B sizes, each with pretrained and instruct versions. Available in us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-northeast-1 (Tokyo).

Deploy only
  • foundation model

    Featured
    Text Generation

    Llama 2 70B

    Meta
    70B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 2 7B

    Meta
    7B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable