GPT-5 Starts Training
PLUS: Why Sam Altman Was Fired
Hello readers,
Welcome to another edition of This Week in the Future! OpenAI has formed a safety committee in anticipation of GPT-5. The committee will ensure that Sam Altman does what’s best for humanity and is led by Sam Altman. Plus, why Sam Altman was fired.
Let’s get into it!
Who Watches the Watchmen?
Is the doomsday clock ticking? OpenAI has formed a Safety and Security Committee while announcing, in the same blog post, that it has “begun training its next frontier model” and anticipates that the resulting systems will “bring us to the next level of capabilities on our path to AGI,” adding “we welcome a robust debate at this important moment.”
The committee is led by Bret Taylor and Adam D’Angelo (OpenAI board members), Nicole Seligman, and … Sam Altman (CEO), which is kind of like putting the Cookie Monster on the Chips Ahoy board. The committee may have been rushed out for PR reasons, as key safety researchers leave OpenAI and “GPT-5” looms.
In not-so-great timing for OpenAI, Helen Toner and Tasha McCauley (members of the old board that fired Sam Altman) published a piece in The Economist proclaiming that AI firms cannot govern themselves, to which Bret Taylor and Larry Summers responded with their own piece, to which AI veteran Gary Marcus responded with his own piece.
Additionally, Helen Toner explained why the board fired Sam Altman (5:00) on The TED AI Show, using words like “lying,” “manipulative,” and “psychologically abusive” to describe the CEO. Turns out the board only learned about ChatGPT on Twitter.
Meanwhile, OpenAI has forged more strategic partnerships:
OpenAI inked deals with The Atlantic and Vox Media
PwC became OpenAI’s first reseller of ChatGPT Enterprise
OpenAI and WAN-IFRA are collaborating to increase AI adoption in newsrooms
Apple has reportedly signed a deal with OpenAI for iOS integration
Our Take
In light of all of this, we leave it to you, the reader, to draw your own conclusions from the available information. OpenAI is certainly trying to appear safety-conscious while aggressively expanding its business. As for GPT-5, we expect “next level of capabilities” to mean agents that carry out multi-step tasks in the real world.
🔥 Rapid Fire
OpenAI launches ChatGPT Edu and OpenAI for Nonprofits
OpenAI shuts down five accounts using AI for deceptive purposes
Elon Musk’s xAI raises $6 billion in Series B funding round
Mistral AI releases Codestral, a coding LLM that beats Llama 3
Scale reinvents LLM benchmarking with SEAL Leaderboards
Perplexity introduces Pages to turn research into shareable content
Arm partners with Samsung and TSMC for AI chip designs
Tech giants form industry group to standardize AI chip components
SAP integrates models from Amazon Bedrock into SAP AI Core
Palantir lands $480M Army contract for Maven AI tech
China launches $47B chip fund to counter US restrictions
European Commission unveils AI Office to strengthen AI safety
Opera browser enhances Aria AI assistant with Gemini
Google infuses Gemini into Chromebook Plus laptops
Google improves AI Overviews after mistakes go viral
Samsung’s Galaxy AI is coming to new Galaxy smart watch
Microsoft creates an official Copilot bot on Telegram
AI brain-reading device decodes English and Spanish words
GPT-4 surpasses humans on financial statement analysis
Stanford researchers question reliability of AI legal tools
📖 What We’re Reading
“Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.”