Get started with Amazon SageMaker JumpStart
Overview
Amazon SageMaker JumpStart is a machine learning (ML) hub that can help accelerate your ML journey. Learn how to get started with built-in algorithms using pretrained models from model hubs, pretrained foundation models, and prebuilt solutions for common use cases. To get started, consult the documentation or the example notebooks that you can run quickly.
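Each model in the hub can be deployed to a real-time SageMaker endpoint with a few lines of code. The sketch below uses the JumpStart classes of the SageMaker Python SDK and assumes it runs in an environment with a SageMaker execution role (for example, a SageMaker Studio notebook); the model ID shown is assumed to correspond to the Falcon 7B Instruct BF16 entry listed below and should be confirmed against the JumpStart catalog.

```python
# Minimal deployment sketch; the model ID and prompt are illustrative assumptions.
from sagemaker.jumpstart.model import JumpStartModel

# Assumed JumpStart model ID for "Falcon 7B Instruct BF16"; verify it in the catalog.
model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")

# deploy() falls back to the model's default instance type when none is given.
predictor = model.deploy()

# Text-generation endpoints accept a JSON payload with an "inputs" field.
response = predictor.predict({"inputs": "What is Amazon SageMaker JumpStart?"})
print(response)

# Delete the endpoint when finished to stop incurring charges.
predictor.delete_endpoint()
```

Models marked Deploy only support this flow as-is; models marked Fine-tunable can also be trained on your own data first, as sketched after the model list.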
The hub lists 630 models in total; a selection from the catalog is shown below.
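The catalog can also be browsed programmatically. The following is a small sketch, assuming the notebook_utils helper of the SageMaker Python SDK and AWS credentials with SageMaker permissions; the substring filter is only an example.

```python
# List JumpStart model IDs for the current region (sketch; returns plain ID strings).
from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

all_models = list_jumpstart_models()
print(len(all_models), "models available")

# Example client-side filter: keep only Falcon-family model IDs.
falcon_models = [model_id for model_id in all_models if "falcon" in model_id]
print(falcon_models)
```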
- Falcon 7B Instruct BF16 | Hugging Face | foundation model | Text Generation | Fine-tunable
  Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by TII based on Falcon-7B and fine-tuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.
- Falcon 40B Instruct BF16 | Hugging Face | foundation model | Text Generation | Fine-tunable
  Falcon-40B-Instruct is a 40B-parameter causal decoder-only model built by TII based on Falcon-40B and fine-tuned on a mixture of Baize. It is made available under the Apache 2.0 license.
- BloomZ 7B1 FP16 | Hugging Face | foundation model | Text Generation | Fine-tunable
  BloomZ 7B1 is an instruction-tuned model based on Bloom 7B1 and is therefore capable of performing a variety of zero-shot natural language processing tasks as well as few-shot in-context learning tasks. With an appropriate prompt, it can perform zero-shot NLP tasks such as text summarization, common-sense reasoning, natural language inference, question answering, sentence/sentiment classification, translation, and pronoun resolution.
- Falcon 7B BF16 | Hugging Face | foundation model | Text Generation | Fine-tunable
  Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
- GPT NeoXT Chat Base 20B FP16 | Hugging Face | foundation model | Text Generation | Deploy only
  As part of OpenChatKit, GPT-NeoXT-Chat-Base-20B-v0.16 is a 20B-parameter language model, fine-tuned from EleutherAI's GPT-NeoX with over 40 million instructions on 100% carbon-negative compute.
- Meta Llama Prompt Guard 86M | Meta | foundation model | Featured | Text Classification | Deploy only
  Prompt Guard is a classifier model trained on a large corpus of attacks, capable of detecting both explicitly malicious prompts and data that contains injected inputs.
- Falcon 40B BF16 | Hugging Face | foundation model | Text Generation | Fine-tunable
  Falcon-40B is a 40B-parameter causal decoder-only model built by TII and trained on 1,000B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.
- Falcon2-11B | Hugging Face | foundation model | Text Generation | Deploy only
  Falcon2-11B is an 11B-parameter causal decoder-only model built by TII and trained on over 5,000B tokens of RefinedWeb enhanced with curated corpora.
- EXAONE Atelier - Image to Text | LG AI Research | foundation model | Featured | Image to Text
  EXAONE Atelier Image to Text is a zero-shot image captioning model trained on 3.5 million images and text data, built upon LG AI Research's commercially licensed datasets.
- Llama 3 | Meta | foundation model | Featured | Text Generation | Deploy only
  Llama 3 from Meta. The Llama 3 family comes in 8B and 70B sizes, with pretrained and instruct versions. Available in us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-northeast-1 (Tokyo).
- Llama 2 70B | Meta | foundation model | Featured | Text Generation | Fine-tunable
  70B variant of the Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes (7 billion, 13 billion, and 70 billion) as well as pre-trained and fine-tuned variations.
- Llama 2 7B | Meta | foundation model | Featured | Text Generation | Fine-tunable
  7B variant of the Llama 2 models. Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. Llama 2 is intended for commercial and research use in English. It comes in a range of parameter sizes (7 billion, 13 billion, and 70 billion) as well as pre-trained and fine-tuned variations.
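Models marked Fine-tunable in the list above can be adapted to your own data before deployment. The following is a minimal sketch with the JumpStartEstimator class of the SageMaker Python SDK; the model ID is assumed to correspond to the Llama 2 7B entry, the S3 prefix is hypothetical, and the expected training data format depends on the specific model's training script.

```python
# Minimal fine-tuning sketch; model ID, S3 path, and prompt are illustrative assumptions.
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",  # assumed ID for "Llama 2 7B"
    environment={"accept_eula": "true"},        # Llama models require accepting Meta's EULA
)

# Hypothetical S3 prefix holding the training dataset in the model's expected format.
estimator.fit({"training": "s3://your-bucket/llama-2-finetune/"})

# Deploy the fine-tuned model and send a test prompt.
predictor = estimator.deploy()
print(predictor.predict({"inputs": "Summarize Amazon SageMaker JumpStart in one sentence."}))

# Delete the endpoint when finished to stop incurring charges.
predictor.delete_endpoint()
```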