Silicon Valley leaders including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon caused a stir online this week with their comments about groups promoting AI safety. In separate instances, they alleged that certain advocates of AI safety are not as virtuous as they appear, and are acting either in their own interest or on behalf of billionaire puppet masters behind the scenes.
AI safety groups that spoke with TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley's latest attempt to intimidate its critics, but certainly not the first. In 2024, some venture capital firms spread rumors that a California AI safety bill, SB 1047, would send startup founders to jail. The Brookings Institution labeled the rumor as one of many "misrepresentations" about the bill, but Governor Gavin Newsom ultimately vetoed it anyway.
Whether or not Sacks and OpenAI intended to intimidate critics, their actions have scared several AI safety advocates. Many nonprofit leaders TechCrunch reached out to in the last week asked to speak on the condition of anonymity to spare their groups from retaliation.
The controversy underscores Silicon Valley's growing tension between building AI responsibly and building it to be a massive consumer product, a theme my colleagues Kirsten Korosec, Anthony Ha, and I unpack on this week's Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots, and OpenAI's approach to erotica in ChatGPT.
On Tuesday, Sacks wrote a post on X alleging that Anthropic, which has raised concerns over AI's ability to contribute to unemployment, cyberattacks, and catastrophic harms to society, is simply fearmongering to get laws passed that will benefit itself and drown out smaller startups in paperwork. Anthropic was the only major AI lab to endorse California's Senate Bill 53 (SB 53), a bill that sets safety reporting requirements for large AI companies, which was signed into law last month.
Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears regarding AI. Clark delivered the essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. Sitting in the audience, I found it a genuine account of a technologist's reservations about his own products, but Sacks didn't see it that way.
Sacks said Anthropic is running a "sophisticated regulatory capture strategy," though it's worth noting that a truly sophisticated strategy probably wouldn't involve making an enemy out of the federal government. In a follow-up post on X, Sacks noted that Anthropic has positioned "itself consistently as a foe of the Trump administration."
Also this week, OpenAI's chief strategy officer, Jason Kwon, wrote a post on X explaining why the company was sending subpoenas to AI safety nonprofits, such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI over concerns that the ChatGPT-maker has veered away from its nonprofit mission, OpenAI found it suspicious that several organizations also raised opposition to its restructuring. Encode filed an amicus brief in support of Musk's lawsuit, and other nonprofits spoke out publicly against OpenAI's restructuring.
"This raised transparency questions about who was funding them and whether there was any coordination," said Kwon.
NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that criticized the company, asking for their communications related to two of OpenAI's biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.
One prominent AI safety leader told TechCrunch that there's a growing split between OpenAI's government affairs team and its research organization. While OpenAI's safety researchers frequently publish reports disclosing the risks of AI systems, OpenAI's policy unit lobbied against SB 53, saying it would rather have uniform rules at the federal level.
OpenAI's head of mission alignment, Joshua Achiam, spoke out about his company sending subpoenas to nonprofits in a post on X this week.
"At what is possibly a risk to my whole career I will say: this doesn't seem great," said Achiam.
Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. However, he argues this is not the case, and that much of the AI safety community is quite critical of xAI's safety practices, or lack thereof.
"On OpenAI's part, this is meant to silence critics, to intimidate them, and to dissuade other nonprofits from doing the same," said Steinhauser. "For Sacks, I think he's concerned that [the AI safety] movement is growing and people want to hold these companies accountable."
Sriram Krishnan, the White House's senior policy advisor for AI and a former a16z general partner, chimed in on the conversation this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to "people in the real world using, selling, adopting AI in their homes and organizations."
A recent Pew study found that roughly half of Americans are more concerned than excited about AI, but it's unclear what worries them exactly. Another recent study went into more detail and found that American voters care more about job losses and deepfakes than the catastrophic risks from AI that the AI safety movement is largely focused on.
Addressing these safety concerns could come at the expense of the AI industry's rapid growth, a trade-off that worries many in Silicon Valley. With AI investment propping up much of America's economy, the fear of over-regulation is understandable.
But after years of unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley's attempts to fight back against safety-focused groups may be a sign that they're working.