Google announces next-generation AI model - Gemini 1.5


Devdiscourse News Desk | California | Updated: 15-02-2024 21:22 IST | Created: 15-02-2024 21:22 IST
Image Credit: Google Workspace Updates
Country: United States

Google has announced its next-generation AI model, Gemini 1.5, which it says delivers dramatically enhanced performance. The tech giant is releasing the Gemini 1.5 Pro model to select developers and enterprise customers for early testing.

"Gemini 1.5 delivers dramatically enhanced performance. It represents a step change in our approach, building upon research and engineering innovations across nearly every part of our foundation model development and infrastructure. This includes making Gemini 1.5 more efficient to train and serve, with a new Mixture-of-Experts (MoE) architecture," Google said on Thursday.

Optimized for scaling across a wide range of tasks, Gemini 1.5 Pro performs at a similar level to Gemini 1.0 Ultra, Google's largest model to date. The model can process vast amounts of information in one go, including 1 hour of video, 11 hours of audio, and codebases with over 30,000 lines of code or over 700,000 words.

Gemini 1.5 Pro ships with a 128,000-token context window as standard, but a limited group of developers and enterprise customers can try it with a context window of up to 1 million tokens via AI Studio and Vertex AI in private preview, starting today.
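
For developers who receive preview access, requests would presumably go through Google's existing SDKs. The sketch below uses the google-generativeai Python package; the "gemini-1.5-pro-latest" model ID and the input file are placeholder assumptions, since the 1-million-token window is gated behind the private preview.

    # Sketch of a long-context request through Google's google-generativeai
    # Python SDK. The model ID and input file are placeholder assumptions:
    # at launch the 1-million-token window is a private preview limited to
    # approved testers.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")   # key issued via AI Studio

    model = genai.GenerativeModel("gemini-1.5-pro-latest")

    with open("large_codebase.txt") as f:     # hypothetical long input
        source = f.read()

    # Check how much of the context window the input will consume.
    print(model.count_tokens(source).total_tokens)

    response = model.generate_content(
        ["Summarize what this codebase does.", source])
    print(response.text)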

Google also plans to introduce pricing tiers, starting at the standard 128,000-token context window and scaling up to 1 million tokens. During the testing period, developers and enterprise customers can try the 1-million-token context window at no cost. Google also noted that early testers should expect higher latency with this experimental feature.

"Starting today, any developer can start building with Gemini Pro in production. 1.0 Pro offers the best balance of quality, performance, and cost for most AI tasks, like content generation, editing, summarization, and classification," Google wrote in a blog post. 
