Google Open Sources Gemini (sort of)

PLUS: Groq LPU Outruns Competition

Together with

Hello readers,

Welcome to another edition of This Week in the Future! Google introduced new state-of-the-art open models based on Gemini called Gemma (7B and 2B) and added Gemini in Google Workspace. Plus, the Groq LPU outran the competition with blazing fast chat responses, wowing the internet in the process.

As always, thanks for being a subscriber! We hope you enjoy this week’s content — for a video breakdown, check out the episode on YouTube.

Let’s get into it!

Two Steps Forward, One Step Back

Image generated by DALL·E 3

It was a rollercoaster of a week for Google. First, they released Gemma, a family of open models (7B and 2B) based on Gemini, though naturally less powerful. Now, even Google is more open than OpenAI. Google says Gemma is lighter and more performant than Llama 2, its main competitor for the hearts of the open-source community. Why would Google bother with this? There's value in developers building on your tech, and it adds extra credibility to Google's AI push.

Gemini in Google Workspace

Google’s second major announcement was the integration of Gemini into Google Workspace for Gemini Business and Gemini Enterprise users. This means you’ll be able to use Gemini across your Google Workspace apps while benefiting from enterprise-grade data protection. This is Google’s answer to ChatGPT Teams, so if your business is a heavy user of Google Workspace, Gemini might be the AI for you.

Gemini Image Generation No More

Google had to temporarily pause Gemini’s image generation capabilities after the model produced historically inaccurate images. Google said it is working on immediate fixes and intends to improve the model in this regard. Google had real momentum, and this easily mockable fiasco may have distracted everyone from the fact that Google is moving fast on AI, faster than one might have expected.

Groq LPU Outruns Competition

Image generated by DALL·E 3

A company called Groq (not to be confused with Elon Musk’s Grok) has developed a language processing unit (LPU), an alternative to GPUs that is purpose-built for LLMs and token generation. Groq can reach speeds of 500 tokens per second. For comparison, GPT-3.5 can do 30-50 tokens per second. You can try it out yourself using either Llama 2 70B-4K or Mixtral 8x7B-32K.
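To put those throughput figures in perspective, here is a quick back-of-the-envelope sketch. The token rates come from the article; the response length and the helper function are our own illustrative assumptions.

```python
# Rough latency comparison at the decode throughputs quoted above.
# These are illustrative figures, not benchmarks.

def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds needed to generate num_tokens at a given throughput."""
    return num_tokens / tokens_per_second

response_length = 500  # tokens, roughly a few paragraphs of text

for name, tps in [
    ("Groq LPU", 500),            # ~500 tokens/sec per the article
    ("GPT-3.5 (low end)", 30),    # 30-50 tokens/sec per the article
    ("GPT-3.5 (high end)", 50),
]:
    secs = generation_time(response_length, tps)
    print(f"{name}: {secs:.1f}s for {response_length} tokens")
```

At these rates, a response that streams for ten seconds or more on GPT-3.5 arrives in about a second on Groq, which is why the demos felt instantaneous.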

Why This Matters

By offering an alternative to the high-demand NVIDIA GPUs, Groq's energy-efficient LPU could make advanced AI technologies more accessible and affordable. Furthermore, this opens the door to a wide range of applications that were previously limited by latency. We can now expect to see new and innovative uses of AI, not just in chatbots but across various sectors including content generation and machine translation.

🔥 Rapid Fire

🎙️ The AI For All Podcast

This week’s episode featured Brendan Kane, Founder of Hook Point, who discussed AI in marketing and what it takes to go viral. The conversation covers using qualitative analysis to optimize content, resonating with an audience, unlearning traditional marketing methods, and adapting to new methods enhanced by AI.

📖 What We’re Reading

AI in Traditional Industries: Legal and Insurance (link)

“AI has been revolutionizing various industries. However, its impact on more traditional sectors like insurance and law is often overlooked. These slower industries will be among the fastest to be fully transformed by AI.”

Source: AI For All

Adopting AI at speed and scale (link)

“Consider the example of a rapid changeover at a production site. This requires flexible robotics to handle different products, automated guided vehicles to move materials and parts, 3D printing to customize line fixtures, and wearable technology to keep managers and technicians informed with real-time data. What orchestrates this complex interplay of elements? The answer: AI.”

Source: McKinsey

💻️ AI Tools and Platforms

  • Groq → The fastest AI chat experience powered by LPUs

  • Covalent → Serverless infrastructure to simplify AI development

  • Run:ai → Easily train and deploy your AI models

  • Squad → AI-powered product strategy tool

  • Predibase → Deploy any open-source LLM to your cloud

Stay up-to-date with AI.

AI won’t replace you, but a person using AI might. That’s why 500,000+ professionals read The Rundown, the free newsletter that keeps you updated on the latest AI news, tools, and tutorials in 5 minutes a day.