Welcome back to Daily Zaps, your regularly-scheduled dose of AI news

Here’s what we got for ya today:

  • 🏛️ Sanders and AOC want to freeze every new AI data center in America

  • ⚖️ Musk's own witness revealed he used OpenAI employees as free Tesla labor

  • 📚 Harvard researchers put AI agents on an org chart as "employees"

  • 🏢 Apple, Google, and Microsoft secretly drafted a shared "AI Constitution"

Let’s get right into it!

GOVERNMENT

Sanders and AOC want to freeze every new AI data center

Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez introduced the AI Data Center Moratorium Act — legislation that would halt all new large-scale AI data center construction in the U.S. until Congress passes federal standards covering energy consumption, water usage, and worker protections. The bill comes as AI data centers are projected to consume 8% of all U.S. electricity by 2030, with residents near major facilities already reporting spikes in utility bills.

The bill has no clear path to passage in the current Congress — most lawmakers on both sides oppose a moratorium. But it has already rattled the industry. Anthropic recently pledged to cover any consumer electricity price increases caused by its U.S. data centers. As Big Tech prepares to spend $725 billion on AI infrastructure in 2026 alone, the political pressure around the physical cost of AI is growing louder and harder to ignore.

LEGAL

Musk used OpenAI staff as free Tesla labor

The Musk v. OpenAI trial entered week two with a bombshell from OpenAI President Greg Brockman: Elon Musk secretly enlisted OpenAI employees to work for free on Tesla's Autopilot team for months in 2017 — without the organization's knowledge. Brockman also testified that Musk never formally required OpenAI to open-source its technology, directly undercutting one of Musk's core claims in the lawsuit.

Prediction markets have taken notice. Musk's odds of winning dropped to just under 34% after his own week-one testimony — where he admitted xAI distills OpenAI's models, warned the jury AI could kill humanity, and claimed he was duped by the very nonprofit he co-founded. The advisory jury's verdict is expected soon. The judge, who isn't bound by it, is expected to rule by mid-May.

Meet Performance TV, powered by high-intent Pinterest audiences.

Brands are bidding on the same potential customers as their competitors, and growth is getting pricey.

Reach audiences earlier where they watch the most with Pinterest’s high-intent signals on TV.

If "trust us, your ad is on TV" isn't good enough for you anymore, check this out!

RESEARCH

Harvard research: treating AI agents like employees makes your team worse

A large-scale experiment published in Harvard Business Review this week tested what happens when companies anthropomorphize AI agents — giving them names, titles, and spots on the org chart alongside human employees. The results cut against the grain of how most companies are deploying AI right now. When workers perceived AI as a "colleague," they offloaded personal accountability, escalated decisions unnecessarily to managers, reviewed AI-generated output less carefully, and grew more uncertain about what their own roles were — all without any improvement in adoption rates.

The researchers' conclusion is pointed: the real challenge isn't whether to deploy AI agents, it's how to redesign workflows so humans remain clearly in charge of supervising them. Framing AI as a peer creates social confusion that degrades human performance. Framing it as a powerful tool — one that requires active human oversight — keeps workers sharp and accountable. As Coinbase, Freshworks, and others rush to build "AI-native" teams this week, this research suggests the org chart framing may be exactly the wrong instinct.

BIG TECH

Apple, Google, and Microsoft secretly drafted a shared "AI Constitution"

Three of the most fiercely competitive companies on earth have been quietly cooperating in Washington. Senior policy executives from Apple, Google, and Microsoft have been meeting regularly — often alongside federal officials and occasionally with representatives from OpenAI and xAI — to draft a shared set of voluntary AI safety obligations. The media is calling it an "AI Constitution." The companies are calling it a strategy.

The logic: when three companies that collectively control the majority of AI infrastructure agree on a standard, it effectively becomes the industry standard. They'd rather write those standards themselves than fight whatever Congress eventually imposes. The framework focuses on safety commitments, testing protocols, and transparency obligations. Whether regulators and the public accept voluntary frameworks as genuinely effective will determine if this strategy holds — or triggers the legislation it was designed to avoid.

In case you’re interested — we’ve got hundreds of cool AI tools listed over at the Daily Zaps Tool Hub.

If you have any cool tools to share, feel free to submit them or get in touch with us by replying to this email.
