models (Organization Account)
Repositories: 3 total

This is a pre-trained Score Entropy Discrete Diffusion (SEDD) model. It has a vocabulary size of 50256, uses the GPT-2 tokenizer, and was trained on OpenWebText. (A short tokenization sketch follows this entry.)

Size: 1.7 GB · Downloads: 1 · Updated: 6 months ago
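Since the card only names the tokenizer, here is a minimal sketch of tokenizing text with the GPT-2 tokenizer that the SEDD model is described as using. The `transformers` package and the upstream `gpt2` tokenizer id are assumptions; the SEDD weights themselves are not loaded here.

```python
# Minimal sketch: GPT-2 tokenization as described on the SEDD card.
# Assumption: the Hugging Face "gpt2" tokenizer matches the vocabulary
# this checkpoint expects.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

text = "Score Entropy Discrete Diffusion models text as a sequence of tokens."
ids = tokenizer(text)["input_ids"]            # token ids in the GPT-2 vocabulary
print(f"{len(ids)} tokens:", ids)
print("tokenizer vocab size:", tokenizer.vocab_size)
```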

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models, in the 8B size here. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks. Further, in developing these models, Meta took great care to optimize helpfulness and safety. (A dialogue-generation sketch follows this entry.)

Size: 16.1 GB · Downloads: 410 · Updated: 6 months ago
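As a worked example of the dialogue use case, here is a minimal sketch of chat-style generation with an instruction-tuned Llama 3 8B checkpoint through Hugging Face `transformers`. The repo id `meta-llama/Meta-Llama-3-8B-Instruct` refers to Meta's upstream release and is an assumption about what this repository mirrors; the sketch also assumes gated access has been granted and that a GPU with enough memory is available.

```python
# Minimal sketch: dialogue generation with an instruction-tuned Llama 3 8B
# checkpoint. The repo id is an assumption (Meta's upstream release).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed upstream id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain what an instruction-tuned model is."},
]
# apply_chat_template formats the conversation with Llama 3's chat markup
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```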

This repository contains the weights and code of the Grok-1 open-weights model. (A download sketch follows this entry.)

Size: 318.2 GB · Downloads: 77,012 · Updated: 7 months ago
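Because the checkpoint is roughly 318 GB, here is a minimal sketch of fetching it with `huggingface_hub`. The repo id `xai-org/grok-1` is the upstream open-weights release and is an assumption about where this listing points; `./grok-1` is a hypothetical local directory.

```python
# Minimal sketch: download the Grok-1 weights with huggingface_hub.
# Assumptions: the upstream repo id is "xai-org/grok-1" and roughly 320 GB of
# free disk space is available at the destination path.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="xai-org/grok-1",
    local_dir="./grok-1",   # hypothetical destination directory
)
print("Grok-1 checkpoint downloaded to", local_dir)
```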