New features and improvements in Llama 3 ✨:
📊 Trained on 15T tokens & fine-tuned on 10M human-annotated samples
🧮 8B & 70B versions, each as Instruct and Base
🏆 Llama 3 70B is the best open LLM on MMLU (>80 🤯)
🧑🏻‍💻 Instruct models are strong at coding: 62.2 (8B) and 81.7 (70B) on HumanEval
✍🏻 Tiktoken-based tokenizer with a 128k vocabulary
🪟 8192-token default context window (can be increased)
🧠 Used SFT, PPO & DPO for alignment
💰 Commercial use allowed ✅
🤗 Available on Hugging Face
🤗 1-click deployments on Hugging Face, Amazon SageMaker, Google Cloud
🔜 More model sizes & enhanced performance on the way
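For anyone wiring the Instruct models into their own stack, the chat prompt format the models expect can be sketched like this. This is a minimal illustration of the special tokens involved; in practice you would let `tokenizer.apply_chat_template` from transformers build the string for you, and the exact template is defined by the model's tokenizer config.

```python
# Minimal sketch of the Llama 3 Instruct chat prompt format.
# Prefer tokenizer.apply_chat_template from transformers in real code;
# this only illustrates the special tokens the format is built from.

def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3 Instruct prompt string."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Ending with the assistant header cues the model to generate.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.", "What is Llama 3?")
```

Each turn is delimited by `<|eot_id|>`, which is why deployments also register it as a stop token alongside the regular end-of-sequence token.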
Blog: https://lnkd.in/ehXXavJ8
Models: https://lnkd.in/ek2pJviv
Chat-Demo: https://lnkd.in/eyRHH2X4
Massive kudos to Meta for continuing its commitment to open AI. Honored to partner with Joseph Spisak and team! 🤝 The gap is melting. 🧊