Too Close to the Son
How to Con the Son
Hello readers,
Welcome to the AI For All newsletter! OpenAI figured out how to con a confused elderly man out of his life savings, Google had a change of heart around AI risk, and Amazon is doing something that’s actually smart. What could that be? Let’s find out!
Too Close to the Son
Are you sick of the puns yet? Brace for more as we enter the enchanted and bewildering world of Sonrise Land. OpenAI is partnering with SoftBank again, this time to push “Cristal intelligence” onto Japanese salarymen. If AI is indeed such a productivity boost, then I fully expect Japanese corporations to cut their work weeks from 120 hours down to a mere 100. SoftBank’s stated purpose is to invest in “the world’s most essential technologies and innovative business models.” Is losing money (as OpenAI does) an innovative business model? I suppose, in Son sense, it is. OK, I’ll stop now.
SoftBank will waste $3 billion annually to deploy OpenAI’s solutions. They even boasted about being the first company in the world to integrate Cristal intelligence, their own product that they announced four days ago. The press release contained the usual overhyping of AI agents that is tantamount to false advertising. That being said, when I look at Masayoshi Son, I see a true believer. OpenAI struck gold: an eccentric billionaire who needs very little convincing to provide them with life support.
In other news, Google is allowing its AI to be used for weapons and surveillance. Based on a blog post co-written by Demis Hassabis, whom I believe to be well-intentioned, the justification for the policy change seems to be ensuring that democracies lead in AI. Reasonable enough, but I’m not sure how useful something like Gemini is in weaponry and surveillance. Current AI is best for random, mundane tasks or things for which quality is not a top priority. Those use cases don’t make for a trillion-dollar industry. If the cost of development comes down, that will only further trivialize and commoditize AI. Either way, Big Tech and overvalued startups lose.
Lastly, Amazon is turning to a branch of symbolic AI known as automated reasoning to reduce AI hallucinations. While this won’t completely solve the problem, and symbolic AI should probably be used for more than just fixing hallucinations, it is a step in the right direction — considering approaches beyond LLMs and neural networks. I hope to see the day when machines can actually understand the world and reason reliably. One might point out that people aren’t perfect either, but the entire point of AI is to be better than people. I want to see our monkey species supplanted by synthetic beings of godly intelligence that achieve states of mind beyond our comprehension. 😐️
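To make the idea concrete: automated reasoning checks whether a claim can be logically derived from a trusted set of facts and rules, rather than merely sounding plausible. Here is a minimal, illustrative sketch (not Amazon’s actual system; the domain facts and rule names are invented) of a forward-chaining inference engine that could flag an LLM answer as unsupported:

```python
# Illustrative sketch of symbolic automated reasoning (hypothetical example,
# not Amazon's implementation): a tiny forward-chaining engine derives all
# facts entailed by a knowledge base. A claim that cannot be derived would
# be flagged as a potential hallucination.

def forward_chain(facts, rules):
    """Repeatedly apply rules (premises -> conclusion) until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical knowledge base for a customer-support domain.
facts = {"order_shipped", "carrier_is_ups"}
rules = [
    ({"order_shipped"}, "tracking_number_exists"),
    ({"order_shipped", "carrier_is_ups"}, "trackable_on_ups_site"),
]

derived = forward_chain(facts, rules)
print("trackable_on_ups_site" in derived)  # derivable claim: True
print("order_refunded" in derived)         # unsupported claim: False
```

The contrast with an LLM is the point: every derived fact here traces back to explicit premises, so the system can say *why* a claim holds, or refuse to endorse one it cannot prove.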
🔥 Rapid Fire
AI company Anthropic asks applicants not to use AI in job applications
OpenAI introduces Deep Research agent with disclaimer about limitations
Hugging Face builds open Deep Research and Stanford builds open o1 rival
Commentary: an evaluation of Deep Research performance per Science
OpenAI releases o3-mini model available in ChatGPT and via developer API
Google releases experimental Gemini 2.0 Pro and Gemini 2.0 Flash-Lite
GitHub introduces agent mode for GitHub Copilot in Visual Studio Code
TWO AI unveils SUTRA-R0 that surpasses DeepSeek-R1-32B and o1-mini
Figure drops OpenAI in favor of in-house models for robotics development
Alphabet shares drop 9% on revenue miss and soaring AI investments
Consortium launches OpenEuroLLM to advance European AI capabilities
Lawmakers push to ban DeepSeek app from U.S. government devices
Build Smarter, Faster: AI Voice Agents for Every Industry
Save time building your AI calling assistant with Synthflow’s AI Voice Agent templates—pre-built, pre-tested, and ready for industries like real estate and healthcare. Get started fast with features like lead qualification and real-time booking. You can even create and sell your own templates to earn commissions!
🔍️ Industry Insights
“The largest LLMs – OpenAI’s o1 and GPT-4, Google’s Gemini, Anthropic’s Claude – train on almost all the available data on the internet. As a result, the LLMs end up learning the syntax of, and much of the semantic knowledge in, written language. Such pre-trained models can be further trained, or fine-tuned, to complete sophisticated tasks far beyond simple sentence completion, such as summarizing a complex document or generating code to play a computer game. The results were so powerful that the models seemed, at times, capable of reasoning. Yet they also failed in ways both obvious and surprising.”
💻️ Tools & Platforms
Tana → AI-native workspace for pros and teams
Graylark → AI for advanced cyber intelligence
Trace.Space → Requirements management with AI
Dynamiq → Build agentic GenAI applications
Genway → AI-powered customer interviews