Overview

Amazon SageMaker JumpStart is a machine learning (ML) hub that can help accelerate your path to ML. Learn how to get started with built-in algorithms using pretrained models from model hubs, pretrained foundation models, and prebuilt solutions for common use cases. To get started, see the documentation or the example notebooks that you can run quickly.
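The quickest programmatic path to the models listed below is the SageMaker Python SDK's JumpStart interface. A minimal sketch, assuming valid AWS credentials and a region where the model is available; the `model_id` shown is an assumption based on JumpStart naming conventions, so look up the exact identifier in the JumpStart console before running:

```python
# Hedged sketch: deploy a JumpStart foundation model and send one prompt.
# Requires the `sagemaker` package and configured AWS credentials.
from sagemaker.jumpstart.model import JumpStartModel

# Hypothetical model id for Falcon 7B Instruct BF16 -- verify in the console.
MODEL_ID = "huggingface-llm-falcon-7b-instruct-bf16"

def deploy_and_query(prompt: str):
    """Deploy the model to a real-time endpoint, run one inference, clean up."""
    model = JumpStartModel(model_id=MODEL_ID)
    predictor = model.deploy()                      # provisions a SageMaker endpoint
    try:
        return predictor.predict({"inputs": prompt})
    finally:
        predictor.delete_endpoint()                 # avoid ongoing endpoint charges

if __name__ == "__main__":
    print(deploy_and_query("What is Amazon SageMaker JumpStart?"))
```

Models tagged "Fine-tunable" below can also be fine-tuned through the corresponding `JumpStartEstimator` class before deployment.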

Product type
Text tasks
Vision tasks
Tabular tasks
Audio tasks
Multimodal
Reinforcement learning
Showing results 1-12 of 630
  • foundation model

    Text Generation

    Falcon 7B Instruct BF16

    Hugging Face
Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by TII, based on Falcon-7B and fine-tuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.
    Fine-tunable
  • foundation model

    Text Generation

    Falcon 40B Instruct BF16

    Hugging Face
Falcon-40B-Instruct is a 40B-parameter causal decoder-only model built by TII, based on Falcon-40B and fine-tuned on a mixture of Baize. It is made available under the Apache 2.0 license.
    Fine-tunable
  • foundation model

    Text Generation

    BloomZ 7B1 FP16

    Hugging Face
BloomZ 7b1 is an instruction-tuned model based on Bloom 7b1, and is thus capable of performing various zero-shot natural language processing tasks as well as few-shot in-context learning tasks. With an appropriate prompt, it can perform zero-shot NLP tasks such as text summarization, common sense reasoning, natural language inference, question answering, sentence/sentiment classification, translation, and pronoun resolution.
    Fine-tunable
  • foundation model

    Text Generation

    Falcon 7B BF16

    Hugging Face
Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
    Fine-tunable
  • foundation model

    Text Generation

    GPT NeoXT Chat Base 20B FP16

    Hugging Face
    As part of OpenChatKit, GPT-NeoXT-Chat-Base-20B-v0.16 is a 20B parameter language model, fine-tuned from EleutherAI's GPT-NeoX with over 40 million instructions on 100% carbon negative compute.
    Deploy only
  • foundation model

    Featured
    Text Classification

    Meta Llama Prompt Guard 86M

    Meta
    Prompt Guard is a classifier model trained on a large corpus of attacks, capable of detecting both explicitly malicious prompts as well as data that contains injected inputs.
    Deploy only
  • foundation model

    Text Generation

    Falcon 40B BF16

    Hugging Face
Falcon-40B is a 40B-parameter causal decoder-only model built by TII and trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
    Fine-tunable
  • foundation model

    Text Generation

    Falcon2-11B

    Hugging Face
Falcon2-11B is an 11B-parameter causal decoder-only model built by TII and trained on over 5,000B tokens of RefinedWeb enhanced with curated corpora.
    Deploy only
  • foundation model

    Featured
    Image to Text

    EXAONE Atelier - Image to Text

    LG AI Research

    The EXAONE Atelier Image to Text model is a zero-shot image captioning model trained on 3.5 million images and text data, built upon LG AI Research's commercially licensed datasets.

  • foundation model

    Featured
    Text Generation

    Llama 3

    Meta

    Llama 3 from Meta. The Llama 3 family comes in 8B and 70B sizes, with pretrained and instruct versions. Available in us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-northeast-1 (Tokyo).

    Deploy only
  • foundation model

    Featured
    Text Generation

    Llama 2 70B

    Meta
    70B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable
  • foundation model

    Featured
    Text Generation

    Llama 2 7B

    Meta
    7B variant of Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes—7 billion, 13 billion, and 70 billion—as well as pre-trained and fine-tuned variations.
    Fine-tunable