Who Do You Trust?
Meta Gets Caught
Hello readers,
Welcome to the AI For All newsletter! Meta, paradoxically, is the greedy company that keeps on giving, OpenAI is fighting for its life in court while continuing to ship half-baked products, Google and Microsoft get desperate, and the U.S. military is trigger-happy about generative AI. Can things get any worse? Let’s find out!
Who Do You Trust?
In a copyright lawsuit filed against Meta, newly unredacted documents revealed that Mark Zuckerberg approved the torrenting of a dataset called LibGen to train Llama. Employees knew LibGen consisted of pirated data, and torrenting it meant Meta was now participating in the piracy itself. Meta, a company stuffed full of management-class goons, did not heed the concerns of its engineers. Meta (and other AI companies) cut corners to pointlessly acquire massive troves of training data, only for LLMs to plateau. Of course, Meta was obsessed with surpassing GPT-4.
During his awful appearance on Joe Rogan’s podcast, Zuck said that AI could soon replace Meta’s mid-level software engineers (who must feel very appreciated). If it were any other company, I would point out how misguided, ignorant, risky, and false such an idea is. However, since it’s Meta, and their platforms are already full of bugs with terrible user experience, I don’t think having AI do the coding would change anything. Honestly, being fired from Meta is not even a blessing in disguise; it’s just a blessing.
Rogan, in his apish stupor, failed to challenge Zuck on anything. I’ve written before about how the podcast circuit has devolved into a hotbed for reputation laundering and sanewashing. Zuck has exploited it masterfully to complete his transformation into Elon Musk Lite. Now, instead of being a PR-rehearsed ghoul that no one believes anyway, he’s a “cool dude” sticking it to the establishment, so all of his drastic faults are forgiven by confused, envious young men who were never taught healthy masculinity.
Also embroiled in a copyright lawsuit are OpenAI and Microsoft. A hearing was held on Tuesday to decide if the high-stakes case should proceed. No decision was made, leaving all parties in suspense. If the case moves to trial, one person who will not be in attendance (much to OpenAI’s relief) is Suchir Balaji, the OpenAI whistleblower who pledged to testify in the lawsuit on behalf of The New York Times.
I want to clarify something I wrote in last week’s newsletter (which I’ve since revised). In my haste to provide you with intellectually scintillating commentary, I did not go far enough down the rabbit hole. The developments in the Suchir Balaji case that might suggest foul play to the average armchair detective were based merely on the claims of a self-described “independent journalist,” whose relationship with the bereaved has since soured because he was “making statements that are not accurate.”
The Suchir Balaji case, which the SFPD is calling an “active and open investigation,” has become the pet issue of Musk-adjacent conspiracy theorists, citizen journalists (amateurs with no reputations to uphold), and crypto bros. This doesn’t mean nothing will come of the case. At the risk of sounding obvious, one should approach information around this story with caution and defer to reputable sources. Next week, I’ll explain why you should drink water instead of bleach. With that said, imagine killing a guy for the sake of a chatbot that loses you money with every prompt. The frivolousness of this whole generative AI thing remains astonishing.
The favorite pastime of OpenAI employees (who hold vested stock in the company; no wonder they wanted Sam Altman back) seems to be publishing cryptic posts on social media that imply the imminence of AGI. Groveling on the floor to savor the breadcrumbs, influencers then do OpenAI’s marketing for them with fan fiction about what must be going on behind the scenes. It’s a simple but effective strategy for keeping OpenAI’s perceived value inflated. Except OpenAI’s bread is half-baked.
Scheduled tasks is a new beta feature in ChatGPT that is a lot worse than it should be. The company that supposedly knows how to build AGI seems to be underestimating the difficulty of such a deceptively simple feature. Amazon, which is still working on its AI-enhanced Alexa, had a huge team dedicated just to alarms in the original Alexa. We should expect this trend of unreliable software to continue in the dark age of LLMs.
In other news, both Google and Microsoft are making their AI features freely available in Workspace and Microsoft 365. Sounds great, right? Except this means that the price of both suites will increase, even if you have no intention of using these AI features. Adoption of AI has been tepid for both Google and Microsoft, so this is effectively a way to force matters and make adoption look better on paper. If you’re having trouble selling your software, just partner with Pearson to create a soul-crushing training program that forces your software into the hands of apathetic workers who only cooperate because their managers (who are all named Bill Lumbergh) made it mandatory.
Lastly, the U.S. military is trigger-happy (shocker) about generative AI. With President Biden still under the false impression that data centers are the difference maker in the illusory AI race, it’s no surprise that the military shares his enthusiasm. The Army is evaluating generative AI tools for business operations, and the Marines have never been more ecstatic about a piece of productivity software.
Lt. Gen. Melvin “Jerry” Carter (a name that inspires confidence — this is a man who is not to be trifled with) said, “These systems have the potential to revolutionize mission processes by enhancing operational speed and efficiency, improving decisionmaking accuracy, and reducing human involvement in redundant, tedious, and dangerous tasks.” All I can say is I really hope he’s aware of the grave limitations of these models. Inspiring significantly less confidence is Pete Hegseth, the network TV presenter who is Trump’s pick for Secretary of Defense and who wants to prioritize AI because of China. sigh
🔥 Rapid Fire
Apple suspends AI news summaries feature after false headlines
ChatGPT use for schoolwork rises amid hallucinations and inaccuracies
Only 25% of enterprises deploy AI and few benefit per new report
AI agents pave the way for sophisticated cyberattacks per Gartner analysis
Research: lessons from red teaming 100 GenAI products – it’s not pretty
White House proposes controversial restrictions on exporting AI chips
UK government weighs undermining novelists in futile bid for AI leadership
François Chollet launches Ndea to build AGI with hybrid architecture
Mistral releases Codestral 25.01 open source coding model for developers
Sakana introduces Transformer² self-adaptive LLM that adjusts its weights
Microsoft unveils MatterGen generative AI model for materials discovery
NVIDIA partners with industry leaders to advance drug discovery with AI
Mayo Clinic partners with Microsoft and Cerebras on personalized medicine
IBM partners with CoreWeave to build supercomputer for Granite models
Cisco unveils AI Defense to secure enterprise AI apps and development
Save 1 hour every day with Fyxer AI
Fyxer AI automates daily email and meeting tasks through:
Email Organization: Fyxer puts your email into folders so you read the important ones first.
Automated Email Drafting: Drafts replies as if they were written by you; convincing, concise and with perfect spelling in every language.
Meeting Notes: Stay focused in meetings while Fyxer takes notes, writes summaries and drafts follow-up emails.
Fyxer AI is even adaptable to teams!
Setting up Fyxer AI takes just 30 seconds with Gmail or Outlook.
🔍️ Industry Insights
“The initial excitement around AI, especially generative AI, is evolving into a deeper focus on execution and results. AI’s importance in the C-suite is steadfast; three-quarters of executives name it as a top-three strategic priority for 2025. Companies also plan to invest more in GenAI in 2025 than last year – even as they realize that the intuitive feel of GenAI masks the discipline, commitment, and hard work required to introduce these technologies into the workplace.”
💻️ Tools & Platforms
TestSprite → AI agent for software testing
Folk → AI-powered sales assistant and CRM
Noteworthy AI → AI-powered grid management
Trellis → Automate PDF workflows at scale
Belva AI → AI-powered development environment