GPT-5 Starts Training

PLUS: Why Sam Altman Was Fired

Hello readers,

Welcome to another edition of This Week in the Future! OpenAI has formed a safety committee in anticipation of GPT-5. The committee will ensure that Sam Altman does what’s best for humanity and is led by Sam Altman. Plus, why Sam Altman was fired.

Let’s get into it!

Who Watches the Watchmen?

Is the doomsday clock ticking? OpenAI has formed a Safety and Security Committee while announcing, in the same blog post, that it has “begun training its next frontier model.” The company anticipates the resulting systems will “bring us to the next level of capabilities on our path to AGI,” adding that “we welcome a robust debate at this important moment.”

The committee is led by Bret Taylor and Adam D’Angelo (OpenAI board members), Nicole Seligman, and … Sam Altman (CEO), which is kind of like putting the Cookie Monster on the Chips Ahoy! board. The committee may well have been rushed out for PR reasons, as key safety researchers leave OpenAI and “GPT-5” looms.

In not-so-great timing for OpenAI, Helen Toner and Tasha McCauley (members of the old board that fired Sam Altman) published a piece in The Economist proclaiming that AI firms cannot govern themselves, to which Bret Taylor and Larry Summers responded with their own piece, to which AI veteran Gary Marcus responded with his own piece.

Additionally, Helen Toner explained why the board fired Sam Altman (5:00) on The TED AI Show, using words like “lying,” “manipulative,” and “psychologically abusive” to describe the CEO. It turns out the board only learned about ChatGPT’s launch from Twitter.

Meanwhile, OpenAI has forged more strategic partnerships:

Our Take

In light of all of this, we leave it to you, the reader, to draw your own conclusions from the available information. OpenAI is certainly trying to appear as though it cares about safety while aggressively expanding its business. As for GPT-5, we expect “next level of capabilities” to mean agents that carry out multi-step tasks in the real world.

🔥 Rapid Fire

📖 What We’re Reading

“Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.”

Source: McKinsey

💻️ AI Tools and Platforms

  • Patterns → Self-serve analytics with AI agents

  • Langfuse → Open source LLM engineering platform

  • Blaize → AI edge computing hardware and software

  • Outerbase → AI-first interface for your database

  • Prem → Platform for developing sovereign gen AI

Sponsored
Turing Post is here to help you stay on top of AI – connecting the dots between research, key concepts, and the technologies shaping the industry. It’s about knowing what’s out there to build on, u...