𝗡𝗲𝘄 𝗳𝗲𝗮𝘁𝘂𝗿𝗲𝘀 𝗮𝗻𝗱 𝗶𝗺𝗽𝗿𝗼𝘃𝗲𝗺𝗲𝗻𝘁𝘀 𝗼𝘃𝗲𝗿 𝘃𝟮✨:
🔠 Trained on 15T tokens & fine-tuned on 10M human-annotated samples
🧮 8B & 70B versions as Instruct and Base
🚀 Llama 3 70B is the best open LLM on MMLU (> 80 🤯)
🧑🏻‍💻 Instruct models strong at coding: 62.2 (8B) and 81.7 (70B) on HumanEval
✍🏻 Tiktoken-based tokenizer with a 128k vocabulary
🪟 8192 default context window (can be increased)
🧠 Aligned using SFT, PPO & DPO
💰 Commercial use allowed ✅
🤗 Available on Hugging Face
🤝 1-click deployments on Hugging Face, Amazon SageMaker, Google Cloud
🔜 more model sizes & enhanced performance
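For the curious: the new tokenizer also comes with a new chat prompt format. Here is a minimal sketch (not the official API) of how a Llama 3 chat prompt is assembled from the special tokens documented in the model card; in practice you would let `tokenizer.apply_chat_template` from 🤗 transformers do this for you.

```python
# Hand-rolled sketch of the Llama 3 chat prompt format.
# Special tokens (<|begin_of_text|>, <|start_header_id|>, <|end_header_id|>,
# <|eot_id|>) come from the Llama 3 model card; the helper name is ours.

def build_llama3_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts, in order."""
    prompt = "<|begin_of_text|>"
    for m in messages:
        # Each turn is tagged with its role header, then content, then end-of-turn.
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open an assistant header so the model generates the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

demo = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(demo)
```

With the chat demo or the Hugging Face pipeline, this formatting happens behind the scenes; the sketch just makes the structure visible.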
Blog: https://lnkd.in/ehXXavJ8
Models: https://lnkd.in/ek2pJviv
Chat-Demo: https://lnkd.in/eyRHH2X4
Massive kudos to Meta for continuing its commitment to open AI. Honored to partner with Joseph Spisak and team! 🤗 The gap is melting. 🧊