Overview

Amazon SageMaker JumpStart is a machine learning (ML) hub that helps you accelerate your ML journey. Explore how to get started with built-in algorithms using pretrained models from the model hub, pretrained foundation models, and prebuilt solutions that address common use cases. To get started, see the documentation or the example notebooks that you can run quickly.
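For orientation, the following is a minimal, untested sketch of deploying one of the catalog models below with the SageMaker Python SDK's JumpStart classes. The model ID and instance type shown are assumptions; the exact values are listed on each model's JumpStart detail page.

```python
# Minimal sketch: deploy a JumpStart model with the SageMaker Python SDK.
# Assumptions: the model_id string and the instance type below; check the model's
# JumpStart detail page for the exact values. An IAM execution role with SageMaker
# permissions is required (pass role=... if you are not running inside Studio).
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")

# deploy() creates a real-time endpoint; it bills until the endpoint is deleted.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",  # assumed GPU instance for a 7B model
)

# The payload schema varies per model; this follows the common Hugging Face
# text-generation format.
response = predictor.predict({
    "inputs": "Write a haiku about machine learning.",
    "parameters": {"max_new_tokens": 64},
})
print(response)

# Clean up to stop billing.
predictor.delete_model()
predictor.delete_endpoint()
```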

Product types
Text tasks
Vision tasks
Tabular tasks
Audio tasks
Multimodal
Reinforcement learning
  • Falcon 7B Instruct BF16 (Hugging Face)
    Foundation model, Text Generation, Fine-tunable
    Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by TII, based on Falcon-7B and fine-tuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.
  • Falcon 40B Instruct BF16 (Hugging Face)
    Foundation model, Text Generation, Fine-tunable
    Falcon-40B-Instruct is a 40B-parameter causal decoder-only model built by TII, based on Falcon-40B and fine-tuned on a mixture of Baize data. It is made available under the Apache 2.0 license.
  • BloomZ 7B1 FP16 (Hugging Face)
    Foundation model, Text Generation, Fine-tunable
    BloomZ 7B1 is an instruction-tuned model based on BLOOM 7B1 and is therefore capable of performing various zero-shot natural language processing (NLP) tasks, as well as few-shot in-context learning tasks. With an appropriate prompt, it can perform zero-shot NLP tasks such as text summarization, common-sense reasoning, natural language inference, question answering, sentence/sentiment classification, translation, and pronoun resolution.
  • Falcon 7B BF16 (Hugging Face)
    Foundation model, Text Generation, Fine-tunable
    Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
  • GPT NeoXT Chat Base 20B FP16 (Hugging Face)
    Foundation model, Text Generation, Deploy only
    As part of OpenChatKit, GPT-NeoXT-Chat-Base-20B-v0.16 is a 20B-parameter language model fine-tuned from EleutherAI's GPT-NeoX on over 40 million instructions, using 100% carbon-negative compute.
  • Meta Llama Prompt Guard 86M (Meta)
    Foundation model, Featured, Text Classification, Deploy only
    Prompt Guard is a classifier model trained on a large corpus of attacks, capable of detecting both explicitly malicious prompts and data that contains injected inputs.
  • Falcon 40B BF16 (Hugging Face)
    Foundation model, Text Generation, Fine-tunable
    Falcon-40B is a 40B-parameter causal decoder-only model built by TII and trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
  • Falcon2-11B (Hugging Face)
    Foundation model, Text Generation, Deploy only
    Falcon2-11B is an 11B-parameter causal decoder-only model built by TII and trained on over 5,000B tokens of RefinedWeb enhanced with curated corpora.
  • EXAONE Atelier - Image to Text (LG AI Research)
    Foundation model, Featured, Image to Text
    The EXAONE Atelier Image to Text model is a zero-shot image captioning model trained on 3.5 million images and text data, built upon LG AI Research's commercially licensed datasets.
  • Llama 3 (Meta)
    Foundation model, Featured, Text Generation, Deploy only
    Llama 3 from Meta. The Llama 3 family comes in 8B and 70B sizes, each with pretrained and instruct versions. Available in us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-northeast-1 (Tokyo).
  • Llama 2 70B (Meta)
    Foundation model, Featured, Text Generation, Fine-tunable
    70B variant of the Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture and is intended for commercial and research use in English. It comes in a range of parameter sizes (7 billion, 13 billion, and 70 billion) as well as pre-trained and fine-tuned variations.
  • Llama 2 7B (Meta)
    Foundation model, Featured, Text Generation, Fine-tunable (see the fine-tuning sketch after this list)
    7B variant of the Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture and is intended for commercial and research use in English. It comes in a range of parameter sizes (7 billion, 13 billion, and 70 billion) as well as pre-trained and fine-tuned variations.
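Several of the entries above are marked Fine-tunable. As a rough sketch rather than a verified recipe, that workflow can be driven from the SageMaker Python SDK's JumpStartEstimator; the model ID, S3 URI, and instance type below are placeholders, and each model's detail page documents the expected training data format and hyperparameters.

```python
# Minimal sketch: fine-tune a "Fine-tunable" JumpStart model, then deploy it.
# Assumptions: the model_id, the S3 URI, and the instance type are placeholders;
# Llama models additionally require acknowledging Meta's license terms.
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",  # assumed JumpStart ID for Llama 2 7B
    environment={"accept_eula": "true"},        # acknowledge the model's license terms
    instance_type="ml.g5.12xlarge",             # assumed training instance
)

# Start a training job on your own dataset (placeholder S3 URI); the expected
# file format is described on the model's detail page.
estimator.fit({"training": "s3://your-bucket/llama-2-finetune-data/"})

# Deploy the fine-tuned artifacts to a real-time endpoint and run one request.
predictor = estimator.deploy()
print(predictor.predict({"inputs": "Summarize what SageMaker JumpStart is."}))
```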