This Week in AI: Google, Meta, and DeepMind Push the Boundaries of Artificial Intelligence

It’s been a busy week in the world of artificial intelligence, and while OpenAI captured headlines with controversial moves and product launches, tech giants like Google, Meta, and DeepMind have been advancing their own AI agendas. Here’s a roundup of the most notable AI developments from the past week: 

Google Gemini 1.5 Models: Faster, Cheaper, and Smarter 

Google has rolled out updates to its Gemini AI lineup, introducing the Gemini-1.5-Pro-002 and Gemini-1.5-Flash-002 models, which offer major improvements in math, long-context processing, and vision-related tasks. These updates promise a 7% increase in performance on the MMLU-Pro benchmark and a 20% boost on math tasks, making the models more efficient across a wider range of applications. 
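For developers who want to try the new checkpoints, the -002 models can be addressed by name through the Gemini API. The snippet below is a minimal sketch using the google-generativeai Python SDK; the exact model availability for your account tier and region is an assumption to verify against Google's documentation.

```python
# Minimal sketch: calling the updated Gemini 1.5 models by name.
# Assumes `pip install google-generativeai` and that GEMINI_API_KEY is set in the
# environment; the model IDs come from Google's announcement and may vary by tier.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Pick the Pro model for quality, or "gemini-1.5-flash-002" for speed and cost.
model = genai.GenerativeModel("gemini-1.5-pro-002")

response = model.generate_content(
    "Summarize the trade-offs between long-context prompting and retrieval."
)
print(response.text)
```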

In addition to the technical upgrades, Google slashed prices for the Gemini models, cutting input token costs by 64% and output token costs by 52% for prompts under 128,000 tokens. This makes Gemini 1.5 Pro not only one of the most powerful AI models on the market but also one of the most cost-effective. Simon Willison, an AI researcher, highlighted that Gemini is now significantly cheaper than competitors like GPT-4 and Claude 3.5, which could attract more developers and enterprises to the platform. 
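To put the cuts in concrete terms, here is a small back-of-the-envelope sketch that applies the announced percentage reductions to a sample monthly workload. The baseline per-million-token prices below are illustrative placeholders, not Google's actual rates.

```python
# Back-of-the-envelope estimate of the announced Gemini 1.5 Pro price cuts
# (64% off input tokens, 52% off output tokens for prompts under 128K tokens).
# The baseline prices are HYPOTHETICAL placeholders for illustration only.
OLD_INPUT_PER_M = 3.50    # hypothetical old $ per 1M input tokens
OLD_OUTPUT_PER_M = 10.50  # hypothetical old $ per 1M output tokens

new_input_per_m = OLD_INPUT_PER_M * (1 - 0.64)
new_output_per_m = OLD_OUTPUT_PER_M * (1 - 0.52)

# Example workload: 20M input tokens and 2M output tokens per month.
input_m, output_m = 20, 2
old_cost = input_m * OLD_INPUT_PER_M + output_m * OLD_OUTPUT_PER_M
new_cost = input_m * new_input_per_m + output_m * new_output_per_m

print(f"Old monthly cost: ${old_cost:.2f}")
print(f"New monthly cost: ${new_cost:.2f} ({1 - new_cost / old_cost:.0%} savings)")
```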

Google also raised the rate limits for these models, allowing Gemini 1.5 Flash to handle 2,000 requests per minute and Gemini 1.5 Pro to manage 1,000 requests per minute, while doubling output speeds and reducing latency. 
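Even at those limits, batch workloads can still hit the ceiling, so a simple client-side throttle helps. Below is a minimal pacing sketch that keeps calls under a given requests-per-minute cap; the RPM constants mirror the figures quoted above, and send_request is a hypothetical stand-in for your actual API call.

```python
# Minimal client-side pacer: spaces calls so they stay under a requests-per-minute cap.
# The RPM values mirror the limits quoted in the announcement; send_request() is a
# hypothetical stand-in for a real Gemini API call.
import time

GEMINI_FLASH_RPM = 2000
GEMINI_PRO_RPM = 1000

def paced_calls(prompts, rpm, send_request):
    """Call send_request(prompt) for each prompt, never exceeding `rpm` per minute."""
    min_interval = 60.0 / rpm
    results = []
    for prompt in prompts:
        start = time.monotonic()
        results.append(send_request(prompt))
        elapsed = time.monotonic() - start
        if elapsed < min_interval:
            time.sleep(min_interval - elapsed)
    return results

# Usage sketch:
# paced_calls(my_prompts, GEMINI_PRO_RPM, lambda p: model.generate_content(p).text)
```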

Meta’s Llama 3.2: AI on the Edge 

Meta has also made strides in AI with the launch of Llama 3.2, its latest update to the Llama AI family. The release includes vision-capable language models with up to 90 billion parameters and lightweight models with 1 to 3 billion parameters designed for mobile devices. These models bring Meta into direct competition on image recognition and visual understanding tasks, an area dominated by closed-source models until now. 

AI researcher Ethan Mollick even managed to run Llama 3.2 on an iPhone using an app called PocketPal, showcasing the practicality of the smaller models. Meta also unveiled its first official “Llama Stack” distributions, simplifying the development and deployment of its models across various platforms. 
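You don't need a phone app to experiment with the lightweight variants. The sketch below loads a small Llama 3.2 instruct model through Hugging Face transformers; it assumes you have accepted Meta's license for the gated repository and that the model ID shown matches the one published on the Hub.

```python
# Minimal sketch: running a lightweight Llama 3.2 model locally via transformers.
# Assumes `pip install transformers torch`, access granted to Meta's gated repo,
# and that the model ID below matches the one published on the Hugging Face Hub.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-3B-Instruct",  # assumed Hub ID; a 1B variant also exists
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "In one sentence, what is edge AI?"}]
output = chat(messages, max_new_tokens=64)
print(output[0]["generated_text"][-1]["content"])
```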

Llama 3.2’s support for long context windows of up to 128,000 tokens gives it an edge in handling complex tasks that require more memory and context, making it a valuable tool for developers working on large-scale projects. 

Google DeepMind’s AlphaChip: AI Revolutionizing Chip Design 

Another groundbreaking development came from Google DeepMind, which announced a significant advancement in AI-driven chip design called AlphaChip. Initially launched as a research project in 2020, AlphaChip has since evolved into a reinforcement learning system that designs chip layouts faster and more accurately than human engineers. 

Google has used AlphaChip to design its Tensor Processing Units (TPUs), critical components for accelerating AI workloads. The tool can generate high-quality chip layouts in hours, compared to the weeks or even months that traditional methods require. 

This week, Google took the bold step of releasing AlphaChip’s pre-trained checkpoint on GitHub, allowing companies and researchers to build on the technology. Companies like MediaTek are already leveraging AlphaChip’s capabilities to optimize their chip designs, signaling a shift in how AI could transform the entire semiconductor industry.
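For readers unfamiliar with the approach, the core idea in this family of methods is an agent that places circuit blocks (macros) onto a grid one at a time and is rewarded on proxy metrics such as wirelength. The sketch below is a deliberately simplified, self-contained illustration of that sequential-placement idea with a toy netlist and a greedy stand-in for the learned policy; it is not AlphaChip's actual code or the open-sourced checkpoint.

```python
# Toy illustration of sequential macro placement with a proxy-wirelength objective.
# NOT AlphaChip's code: a greedy rule replaces the learned RL policy, and the
# "netlist" is a handful of hypothetical connected blocks.
import itertools

GRID = 8  # 8x8 placement grid
MACROS = ["cpu", "cache", "mem_ctrl", "io"]
# Hypothetical netlist: pairs of macros that are wired together.
NETS = [("cpu", "cache"), ("cache", "mem_ctrl"), ("cpu", "io"), ("mem_ctrl", "io")]

def proxy_wirelength(placement):
    """Proxy cost: Manhattan distance summed over connected, already-placed pairs."""
    return sum(
        abs(placement[a][0] - placement[b][0]) + abs(placement[a][1] - placement[b][1])
        for a, b in NETS
        if a in placement and b in placement
    )

def place_sequentially():
    """Place macros one at a time, greedily minimizing the proxy wirelength.
    An RL agent would instead sample positions from a learned policy and be
    trained on a reward built from wirelength, congestion, and density proxies."""
    placement, used = {}, set()
    for macro in MACROS:
        best = None
        for cell in itertools.product(range(GRID), repeat=2):
            if cell in used:
                continue
            cost = proxy_wirelength({**placement, macro: cell})
            if best is None or cost < best[0]:
                best = (cost, cell)
        placement[macro] = best[1]
        used.add(best[1])
    return placement, proxy_wirelength(placement)

layout, cost = place_sequentially()
print("layout:", layout, "proxy wirelength:", cost)
```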

What’s Next? 

While OpenAI’s moves may dominate mainstream discussions, the developments from Google, Meta, and DeepMind highlight that the AI race is far from one-dimensional. From cost-effective, high-performance models to AI reshaping industries like chip design, the landscape of artificial intelligence continues to evolve at a rapid pace. 

With breakthroughs happening weekly, it’s clear that AI will continue to redefine industries and technologies, leaving us to wonder what the next wave of innovations will bring. 
