Mistral Releases The Open Source GPT-4

PLUS: Google Lets the Genie Out of the Bottle

Hello readers,

Welcome to another edition of This Week in the Future! Every company seems to be building AI models now. Mistral released Mistral Large, an open model on par with GPT-4, and 6 other models were released! One of them was Genie from Google DeepMind, which can generate video games from text and images.

As always, thanks for being a subscriber! We hope you enjoy this week’s content — for a video breakdown, check out the episode on YouTube.

Let’s get into it!

So Many Models

Image generated by DALL·E 3

The AI industry continues to deliver a Matryoshka doll of AI models. OK, I admit I’m stretching for creative metaphors at this point, but Mistral’s latest model, Mistral Large, also comes with a Mistral Small. If Google’s naming of its Gemini products wasn’t confusing enough, 5 other models were released by other players in the AI game. The recurring theme? The competition is catching up to GPT-4.

Mistral Large

Mistral’s new and open flagship model is said to have top-tier reasoning capabilities. You can access it through Mistral’s API, via Le Chat, or on Azure, thanks to a new partnership between Mistral AI and Microsoft. On the MMLU benchmark it outperforms other leading models, trailing only GPT-4.
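
If you want to experiment with it, here’s a minimal Python sketch of a request to Mistral’s chat completions API. The endpoint and model name follow Mistral’s public docs, but treat the details (including the MISTRAL_API_KEY environment variable used here) as assumptions to verify against the current documentation.

    # Minimal sketch: querying Mistral Large over the public REST API.
    # Assumes a MISTRAL_API_KEY environment variable is set; check
    # Mistral's docs for the current endpoint and model names.
    import os
    import requests

    response = requests.post(
        "https://api.mistral.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
        json={
            "model": "mistral-large-latest",
            "messages": [
                {"role": "user", "content": "Summarize this week's AI model releases in one sentence."}
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])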

Other Models

This week saw the release of 5 other significant models, including:

  • Phind-70B — Matches GPT-4 Turbo in coding ability while being 4x faster

  • StarCoder 2 — Another powerful coding model available on Hugging Face

  • Samba-1 — A one trillion parameter model from SambaNova

  • Evo — A foundation model for biology from Together AI

  • Palmyra-Vision — Multimodal LLM that beats GPT-4V and Gemini Ultra

Our Take

With Llama 3 on the horizon and Google’s relentless Gemini push, it’s inevitable that there will be legit alternatives to GPT-4. However, OpenAI has GPT-5 in the works, which could send everyone else back to square one. OpenAI also controls the narrative at the moment, and whoever controls the narrative tends to win. The future will definitely be a smorgasbord of AIs, and companies may have to look to architectural and algorithmic innovations to leapfrog the competition instead of just building a larger model.

Google Lets the Genie Out of the Bottle

Image generated with DALL·E 3

Google DeepMind unveiled a rather innovative AI model called Genie. Genie enables the generation of interactive 2D platformer games from simple image prompts or text descriptions, paving the way for the next frontier of generative AI, text-to-video-game.

What makes Genie innovative is that it was trained on gameplay video footage alone. The footage wasn’t labeled with input actions or button presses, so Genie had to learn game mechanics from scratch. It likely builds on DeepMind’s famous work on Atari games and deep reinforcement learning.
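
To make that concrete, here’s a toy sketch (our illustration, not DeepMind’s code) of the core idea behind learning from unlabeled footage: infer a small set of latent “actions” between consecutive frames, and train the model to predict the next frame from the current frame plus that inferred action. Genie’s real system is far larger and tokenizes video with transformers, so treat every name and dimension below as an assumption.

    # Toy sketch of an unsupervised latent-action model (illustrative only).
    # No button presses or action labels are used; the model must invent a
    # small set of latent actions that explain how one frame becomes the next.
    import torch
    import torch.nn as nn

    class LatentActionModel(nn.Module):
        def __init__(self, frame_dim=1024, n_actions=8):
            super().__init__()
            # Infer which latent action occurred between frame t and frame t+1.
            self.action_encoder = nn.Sequential(
                nn.Linear(2 * frame_dim, 256), nn.ReLU(), nn.Linear(256, n_actions)
            )
            # Predict frame t+1 from frame t plus the inferred action.
            self.dynamics = nn.Sequential(
                nn.Linear(frame_dim + n_actions, 256), nn.ReLU(), nn.Linear(256, frame_dim)
            )

        def forward(self, frame_t, frame_t1):
            logits = self.action_encoder(torch.cat([frame_t, frame_t1], dim=-1))
            action = torch.softmax(logits, dim=-1)  # soft stand-in for a discrete action code
            pred_t1 = self.dynamics(torch.cat([frame_t, action], dim=-1))
            return ((pred_t1 - frame_t1) ** 2).mean()  # next-frame prediction loss drives learning

    model = LatentActionModel()
    frame_t, frame_t1 = torch.randn(4, 1024), torch.randn(4, 1024)  # stand-ins for encoded frames
    loss = model(frame_t, frame_t1)
    loss.backward()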

Why This Matters

AI has mastered images at this point, and video is on the way (it still needs an audio component; Emote Portrait Alive and Pika Labs are working on that). The next medium is video games, which presents a new set of technical challenges. Genie only runs at 1 frame per second, so it’ll be a while before it’s useful, but it’s a step towards a future where individuals and businesses will be able to summon their own metaverses and interactive experiences. Overall, generative AI will raise the bar for quality, and those who continue to provide value will reap the most rewards.

🔥 Rapid Fire

🎙️ The AI For All Podcast

This week’s episode featured Tom Andriola, Chief Digital Officer at UC Irvine, who discussed AI in universities and the workplace. We also covered leveraging AI in business operations, AI and the metaverse, and making AI accessible to all.

📖 What We’re Reading

Business Marketing: A Story of Humans and AI (link)

“AI has emerged as a powerful marketing tool. However, it's crucial to understand that AI is not a magic wand that can solve all marketing challenges. Instead, it should be used to maximize the impact of your storytelling.”

Source: AI For All

Customizing and fine-tuning LLMs: What you need to know (link)

“Customizing an LLM is not the same as training it. Training an LLM means building the scaffolding and neural networks to enable deep learning. Customizing an LLM means adapting a pre-trained LLM to specific tasks, such as generating information about a specific repository or updating your organization’s legacy code into a different language.”

Source: GitHub
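
The distinction the GitHub piece draws is easy to see in code. Here’s a rough sketch (ours, not from the article) of customizing a pre-trained model with a parameter-efficient method like LoRA, using the Hugging Face transformers and peft libraries; the base model, target modules, and hyperparameters are illustrative assumptions.

    # Rough sketch: customizing (not training from scratch) a pre-trained LLM
    # by attaching small trainable LoRA adapters with Hugging Face peft.
    # Base model, target modules, and hyperparameters are illustrative.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base_model = "mistralai/Mistral-7B-v0.1"  # hypothetical choice of base model
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model)

    lora_config = LoraConfig(
        r=8,                                  # rank of the adapter matrices
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)  # base weights stay frozen
    model.print_trainable_parameters()          # only a tiny fraction of weights will train
    # From here, fine-tune on task-specific data with a standard Trainer loop.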

💻️ AI Tools and Platforms

  • Sanctum → Run open source LLMs locally on your device

  • Nebius → AI-centric cloud platform for intensive workloads

  • Sendbird → Build in-app AI chatbots in minutes

  • Gradial → Generative AI for sales ops, CMS, and marketing

  • Senso → AI-powered knowledge base for all your teams

My Favorite Newsletter: Stay ahead on the business of AI 

Have you heard of the Prompts Daily newsletter? I recently came across it and absolutely love it.

It covers AI news, insights, tools, and workflows. If you want to keep up with the business of AI, you need to be subscribed to the newsletter (it’s free).

Read by executives from industry-leading companies like Google, HubSpot, Meta, and more.

Want to receive daily intel on the latest in business/AI?