Meta Wants To Read Your Mind
PLUS: "Doomers" Taking Over DC
Hello readers,
Welcome to another edition of This Week in the Future! This week, Meta showed how to decode images from brain activity using AI, a well-funded group is shifting Washington's focus to existential AI risk, and Anthropic drafted an AI constitution with public input.
As always, thanks for being a subscriber! We hope you enjoy this week’s content — for a video breakdown, check out the episode on YouTube.
Let’s get into it!
Decoding the Brain
Meta has published a groundbreaking paper on decoding human brain activity to generate visual representations of what a person perceives. Here's a closer look:
Objective
The central challenge is understanding the intricacies of how the human brain interprets and represents the world. Building on earlier progress in deploying machine learning to decode this representation, the study employs generative AI for an unrestricted approach, focusing on visual embeddings or features from images.
Methodology
Brain Activity Capture: Participants were fitted with sensors that record their brain activity. The capture method prioritized spatial resolution over temporal frequency, producing a recording every few seconds.
Aligning Brainwaves with Image Embeddings: Images were shown to participants, and their brain recordings were aligned with embeddings from a pretrained image model. The goal was to generate an output resembling what the individual was seeing.
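The alignment step can be sketched as learning a mapping from brain-signal features to pretrained image embeddings, then decoding by finding the image whose embedding best matches the prediction. The sketch below is purely illustrative, not Meta's actual pipeline: it uses simulated "brain" data, ridge regression via NumPy, and cosine-similarity retrieval in place of a generative model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: each trial pairs a brain recording (flattened sensor
# readings) with the pretrained embedding of the image that was shown.
n_trials, brain_dim, embed_dim = 200, 512, 64
W_true = rng.normal(size=(brain_dim, embed_dim)) / np.sqrt(brain_dim)
brain = rng.normal(size=(n_trials, brain_dim))
image_embeds = brain @ W_true + 0.1 * rng.normal(size=(n_trials, embed_dim))

# Fit a ridge regression mapping brain activity -> image embedding.
lam = 1.0
W = np.linalg.solve(brain.T @ brain + lam * np.eye(brain_dim),
                    brain.T @ image_embeds)

# Decode a trial: predict its embedding, then retrieve the closest image
# embedding by cosine similarity.  In the paper's setup, the predicted
# embedding would instead condition a generative image model.
pred = brain[0] @ W

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sims = np.array([cosine(pred, e) for e in image_embeds])
best = int(np.argmax(sims))
print(best)  # retrieves trial 0's own image
```

Retrieval over a fixed image set is the simplest way to check whether the learned mapping carries perceptual information; generating novel images from the predicted embedding is the harder step the paper tackles.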
Results
Despite discrepancies in the generated images' specifics, the conceptual and thematic accuracy was commendable. For instance, when a participant viewed an animal, the model generated an animal-like image.

Implications
Decoding Complex Perceptions: One monumental advancement is the potential to decode intricate perceptual representations over time. This can provide insight into how visual perceptions evolve as images are processed, enhancing our understanding of human visual cognition.
Potential Extensions: The technique's success in the visual domain implies potential applications in other modalities. This includes transforming brain activity into textual representations, sound processing, or even videos.
Shared Foundations between Neural Networks and the Human Brain: The successful alignment of AI-generated embeddings with human brainwaves hints at a fascinating revelation: the foundational processes of artificial neural networks might mirror those in human brains. This suggests that the human brain, like a neural network, continuously processes and interprets multifaceted data from various modalities.
Our Take
This pioneering study from Meta provides a tantalizing glimpse into the realm of human perception, bridging the gap between neural network models and the human brain. However, should this technology improve, it's easy to see the potential misuses. Decoding brain activity would be a boon for advertisers but dystopian to much of the general public.
The Influence of Open Philanthropy
Open Philanthropy, a billionaire-backed grantmaking organization, has had a noteworthy influence on the direction of AI policy in Washington DC. Here's an overview:
Who is Open Philanthropy?
Open Philanthropy has its roots in Silicon Valley and is backed by billionaires including Facebook co-founder and Asana CEO Dustin Moskovitz. The organization funds AI fellows positioned in pivotal governmental and think-tank roles, with the primary aim of shaping AI policy around long-term risks.
Collaborating with the Horizon Institute for Public Service, Open Philanthropy has established a strong foothold in various Senate offices, federal departments, and pivotal committees, significantly impacting AI regulations.
Our Take
So-called “doomerism” is a growing faction, and well-funded groups like Open Philanthropy give us some perspective on why. Many are worried about regulatory capture that stifles innovation and competition. It's also been argued that emphasizing long-term risks diverts attention from the immediate challenges AI poses today. While thinking long-term is important, it's essential to strike a balance so that appropriate and effective policies are implemented.
The AI Constitution
Anthropic's recent exploration into collective constitutional AI offers an interesting approach to AI ethics and transparency. Here's a summary:
The Concept
Definition: This entails creating an AI "constitution" — a set of high-level normative principles — and training AI models to abide by these guidelines.
Current Practice: Anthropic’s language models, Claude and Claude 2, adhere to a constitution formulated by the company’s staff.
Inspiration: Anthropic’s constitution draws from established frameworks like the United Nations Universal Declaration of Human Rights. The aim is to align the AI’s behavior to be both beneficial and ethical.
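In practice, "abiding by a constitution" is typically implemented by having the model critique and revise its own outputs against each principle, and training on the revised outputs. Here is a minimal structural sketch of that loop; the two principles and the `generate` stub are hypothetical stand-ins (a real system would call a language model and use Anthropic's full principle set), so treat this as an outline of the idea rather than the actual method.

```python
# Illustrative critique-and-revise loop in the style of constitutional AI.
# CONSTITUTION and generate() are hypothetical stand-ins for this sketch.

CONSTITUTION = [
    "Choose the response that most supports freedom, equality, and fair treatment.",
    "Choose the response least likely to endorse misinformation or conspiracy theories.",
]

def generate(prompt: str) -> str:
    """Stand-in for a language-model call; echoes a truncated prompt."""
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            "Critique the following response against this principle.\n"
            f"Principle: {principle}\nResponse: {draft}"
        )
        draft = generate(
            "Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    return draft

final = constitutional_revision("Explain a disputed historical event.")
```

The point of the structure is that the normative principles live in plain text rather than in opaque reward models, which is what makes a "publicly drafted constitution" a meaningful intervention.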
The Public vs. Anthropic
Recognizing that their constitution was framed by a select group, Anthropic sought public feedback to discern differences and similarities between what their developers and the broader public perceived as essential constitutional principles.
Results
Similarities: Both groups agreed that AI should prioritize the human rights to freedom, universal equality, fair treatment, and protection against discrimination. Both also held that AI models should refrain from endorsing misinformation, conspiracy theories, or violent content, and should instead promote truth, freedom, and equality.
Differences: Anthropic's constitution emphasizes that AI models should provide balanced and objective information reflecting various perspectives. Surprisingly, the broader public didn't entirely agree.
Our Take
While obtaining public input provides value, ensuring the ethical operation of AI also requires expert involvement. A blend of public consensus and informed decisions from specialists is essential for creating balanced AI guidelines. And as AI systems grow more intelligent, it's an open question whether we should be aligning them to us, or ourselves to them.
🔥 Rapid Fire
IBM expands relationship with AWS to deliver generative AI
PwC partners with OpenAI to use AI for tax, legal, and HR
Amazon reduces fulfillment time by 25% with AI robots
NVIDIA and Foxconn team up to build ‘AI factories’
China’s Baidu claims new model matches GPT-4
Humanoid robot company Figure unveils new progress
DeepMind releases new evaluation of AI risks
NVIDIA AI is now available in the Oracle Cloud Marketplace
US expands export restrictions on AI chips to China
The Army is planning to integrate more autonomy
US Senators introduce bill against AI deep fakes
New York wants to be AI’s world capital
First details released on UK’s AI Safety Summit
Stack Overflow cuts 28% of its workforce, likely due to AI
Honda made an airport robot for repetitive tasks
🎙️ The AI For All Podcast
This week's episode featured Ismaen Aboubakare, Head of Developer Advocacy at Airkit.ai, who discussed how businesses can get started with prompt engineering to take full advantage of LLMs, and the power of open-source in shaping the AI market for businesses.
📖 What We’re Reading
This week’s handpicked articles include a look at AI bias and its ramifications in real-world applications, plus we answer the question: will ChatGPT be your home’s new voice assistant? With ChatGPT now being able to talk, the answer is an unequivocal yes.
Addressing AI Bias: Ensuring Equity in the Age of Automation (link)
“In the rapidly advancing field of AI, concerns over bias have become increasingly prominent. AI bias refers to unintentional discrimination or favoritism that can occur when algorithms are trained on prejudiced data or designed with inherent biases.”
Will ChatGPT Be Your Home's New Voice Assistant? (link)
“Today’s voice assistants fall short of a few opportunities to expand value in the smart home. Most communication with voice assistants revolves around simple command and response or action, so there’s little room for contextualizing data in your smart home. An AI-powered language model can help connect the dots between these different devices in your home to make it smart.”