The End of the World

And Hope for the Future


Hello readers,

Welcome to the AI For All newsletter! AI companies are selling their models to U.S. government agencies, Big Tech’s nuclear power ambitions are hitting a snag, and there’s this great nature documentary that you need to see. Let’s dive in!

The End of the World

AI’s #1 customer

I’ve been watching this decent nature documentary for some time now. It’s about these bipeds that refer to themselves as “humans.” There are other organisms on the show, but the humans are the strangest ones. In the newest episode, they turned on these massive “nuclear power plants” just to run a chatbot, and then they assigned a hallucinating misinformation machine of questionable reasoning capacity to the critical functions of their “government” and “military.” Fun show. 6/10.

Since I wouldn’t want you to think that I’ve alluded to anything other than LLMs, I’ll spoil the latest episode and tell you exactly what happened. Meta partnered with every lumbering government contractor that exists to bring Llama, its open source language model, to U.S. government agencies. On top of this, Scale AI announced Defense Llama, a fine-tuned Llama 3 exclusively for American national security missions.

Meta says, “These kinds of responsible and ethical uses of open source AI models like Llama will not only support the prosperity and security of the United States, they will also help establish U.S. open source standards in the global race for AI leadership.” Notice how said prosperity and security are conditioned upon responsible and ethical use, which is unenforceable in an open source context. Also, China has already developed an AI model for military use on the back of Llama, so what advantage is Meta conferring to the U.S. exactly? It’s not double dealing if you don’t make any money!

They go on to say, “As an American company, and one that owes its success in no small part to the entrepreneurial spirit and democratic values the United States upholds, Meta wants to play its part to support the safety, security, and economic prosperity of America.” Did Meta, a company that is rotten to the core, suddenly sprout a conscience? Of course not. Few companies have done more to harm democracy than Meta, and generative AI is the cherry on top, the Arab Spring (a very different time) notwithstanding.

Democracies rely on trust in institutions. Turns out, people are not going to successfully navigate the world on a steady diet of podcasts and newsletters (except this one). Generative AI is the most diabolical weapon against trust imaginable for obvious reasons. In Meta’s case, it’s not just the clear and present danger that its AI obsession poses to trust, it’s the fact that Meta is incentivized to maximize engagement at all costs. Scams, spam, fake content, misinformation, extremism, it’s all free ad real estate and an opportunity for engagement, so the algorithm boosts it. It doesn’t matter that it makes Facebook a minefield for users. All that matters is growth.

So, with Meta being quite the obstacle to democratic values, in part due to AI, offering a middling language model to the U.S. government is hardly penitence. It’s like if someone gave you the gun they shot you with as a Christmas gift. The goal of autocrats, by the way, is to sow mistrust, apathy, and nihilism. This is not done to delusionally vindicate their own societies, but to make people believe that the alternatives are just as bad, so why do anything different? What is truth if not the other side’s form of controlling you?

OpenAI is also selling generative AI to U.S. government agencies, including NASA, and Anthropic and Palantir are partnering with AWS to bring Claude to U.S. intelligence and defense agencies. I’m left to wonder: given the numerous failure modes of LLMs, is it really wise to risk an already inefficient bureaucracy with a scarcity of discerning workers becoming overreliant on a language model that can and will mislead humans, producing errors that are “hard for humans to detect, especially when the task is complex”?

In other news, Andreessen Horowitz and Microsoft put aside their overstated differences and wrote a blog post together advocating for open source AI and economic policies that inconvenience them as little as possible. We can all agree that a thriving startup ecosystem is generally good, but not when redundant generative AI companies are drying up all the capital while open source ruins their chances of a moat.

This particular screed is predicated on the usual overhype. “Artificial intelligence is the most consequential innovation we have seen in a generation, with the transformative power to address society’s most complex problems and create a whole new economy.” They say this, yet Marc Andreessen recently admitted that AI capabilities are hitting a ceiling. Of course, this isn’t going to stop Andreessen Horowitz from continuing to invest in generative AI. These guys are still investing in Web3, after all.

Both Amazon and Meta’s attempts to use nuclear power to run their AI services were thwarted. Meta was no match for a small colony of bees, and Amazon was denied approval on the basis that its energy needs would make service unreliable for everyone else. As data centers for AI strain the power grid, who is paying for this exactly? Why, you are, dear reader, according to a Washington Post report.

You know those billion-dollar vanity projects that Middle Eastern autocrats commission Western engineering firms to create pitch decks and 3D renders for, the ones filled with impractical sci-fi nonsense and every buzzword under the sun because these firms know that the autocrats have very short attention spans and just need to see shiny things to approve a project that may or may not be completed? Well, Saudi Arabia wants to build a $100 billion AI hub with massive, water-thirsty data centers in the desert.

The more I watch this nature documentary, this perfect storm of excess and fragile egos, the more I am struck by the sheer unseriousness of it all. The insatiable demand for growth is like a corrosive acid melting through the fabric of society, taking good software and common sense with it. I think of the culprits. A clown car of rich toddlers playing with people’s lives. Am I indignant? Not at all. Like I said, it’s a nature documentary. Rule #1 of documentaries: never interfere. However, I am holding out hope for a happy ending. Maybe humans will one day build superintelligent AI that enlightens or transcends them. But that’s a topic for next week’s newsletter.

🔥 Rapid Fire


📖 What We’re Reading

“For many companies, a cloud transformation is a critical step in their digital and AI journey. Public cloud can yield compelling benefits, including greater agility for faster time to market with global reach and scale, more efficient delivery of digital products and services at lower cost, and improved resilience and security.”

Source: Boston Consulting Group

💻️ AI Tools and Platforms

  • Elicit → Analyze research papers with AI

  • BuilderKit → Build and ship your AI SaaS

  • Redactive → Enterprise data governance

  • Threado → AI agents for customer support

  • Proofs → AI agents for your tech stack