Today, we’re making Gemini 2.0 available to everyone - and expanding the model family. ✨
Say hello to:
🔵 An updated 2.0 Flash, which is optimized for high-volume, high-frequency tasks at scale - enabling developers to start building production applications.
You can now use it via the Gemini API in Google AI Studio, Google Cloud’s #VertexAI platform and the Gemini app.
🔵 2.0 Pro Experimental: our best model yet for coding tasks and complex prompts.
With a 2 million token context window, it’s able to analyze and understand large amounts of information.
🔵 2.0 Flash-Lite, our most cost-efficient AI model yet - now available across Google products.
It has better quality than 1.5 Flash, at similar cost and speed, and comes with a 1 million token context window, multimodal input and text output.
🔵 And 2.0 Flash Thinking Experimental is now available in the Gemini app.
Find out more → https://goo.gle/3CMXg5p
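For developers, the updated 2.0 Flash is reachable through the Gemini API's REST `generateContent` endpoint. The sketch below is a minimal illustration, assuming the public endpoint shape and the `gemini-2.0-flash` model id; it only constructs the request, since actually sending it requires an API key from Google AI Studio.

```python
import json

# Assumed model id for the updated 2.0 Flash; check AI Studio for the
# current list of available model ids.
MODEL = "gemini-2.0-flash"

# Public Gemini API REST endpoint shape for text generation.
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent"
)

# Minimal request body: a list of contents, each with text parts.
payload = {
    "contents": [
        {"parts": [{"text": "Summarize the Gemini 2.0 model family in one sentence."}]}
    ]
}

print(MODEL)
print(json.dumps(payload))

# To send the request, supply your API key, e.g. with curl:
#   curl -H "x-goog-api-key: $GEMINI_API_KEY" \
#        -H "Content-Type: application/json" \
#        -d '<payload JSON>' "$ENDPOINT"
```

The same request works against the other announced models by swapping the model id in the URL; Vertex AI and the Python SDK offer equivalent higher-level interfaces.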