Welcome back to Daily Zaps, your regularly-scheduled dose of AI news ⚡
Here’s what we got for ya today:
🤖 GPT-5.5 just matched Anthropic's most dangerous model in cybersecurity tests
⚖️ Musk's "World War III" threat may destroy his OpenAI case
🔬 Making AI "warmer" makes it 60% more likely to give wrong answers
🏛️ Minnesota becomes first state to ban AI nudification apps — $500K fines
Let’s get right into it!
AI MODELS
GPT-5.5 matches Anthropic's Mythos

Last month, Anthropic made waves by restricting access to its Mythos Preview model, citing an unprecedented cybersecurity threat — it described the model as essentially too dangerous for public release. But the UK's AI Security Institute just ran OpenAI's newly public GPT-5.5 through the same 95-challenge gauntlet it used to evaluate Mythos, and GPT-5.5 matched it at every level. On the hardest "Expert" tasks, GPT-5.5 scored 71.4% versus Mythos's 68.6% — within the margin of error — and in one benchmark challenge, it built a disassembler for a Rust binary completely autonomously in 10 minutes and 22 seconds at a cost of $1.73.
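For intuition on why a 2.8-point gap counts as a statistical tie: AISI hasn't published how many of the 95 challenges sit in the Expert tier, so the task count in this quick sketch is our assumption purely for illustration, not AISI's number.

```python
import math

# Reported Expert-tier scores from the AISI evaluation
p_gpt, p_mythos = 0.714, 0.686

# The per-tier task count isn't public; assume ~35 Expert tasks
# out of the 95 total purely for illustration.
n = 35

# Standard error of the difference between two independent proportions
se = math.sqrt(p_gpt * (1 - p_gpt) / n + p_mythos * (1 - p_mythos) / n)
margin_95 = 1.96 * se  # two-sided 95% confidence margin

diff = p_gpt - p_mythos
print(f"score gap: {diff:.1%}, 95% margin: ±{margin_95:.1%}")
# -> score gap: 2.8%, 95% margin: ±21.5%
```

With a sample that small, even a double-digit gap could be noise; 2.8 points doesn't come close.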
The finding undermines the premise of Mythos's restricted rollout, with AISI concluding the cyber risk is "a byproduct of more general improvements in long-horizon autonomy, reasoning, and coding" across all frontier models — not a breakthrough unique to Anthropic. OpenAI CEO Sam Altman has been sharper, calling Anthropic's strategy "fear-based marketing," comparing it to "building a bomb, telling everyone you're about to drop it, then selling them a bomb shelter for $100 million." OpenAI is limiting its own GPT-5.5-Cyber variant to verified security researchers only — same practical outcome, very different messaging.
LEGAL
Musk threatened "World War III" in a settlement email

Just days before his lawsuit against OpenAI was set to go to trial, Elon Musk reached out to try to settle — but instead of conciliating, he allegedly sent a threat. When OpenAI co-founder Greg Brockman suggested dropping all claims, Musk replied: "By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be." OpenAI is now arguing the message should be admitted as evidence, calling it "coercive rather than conciliatory."
The legal wrinkle that may doom Musk: a near-identical precedent from his own Twitter acquisition lawsuit, where a similar threat was ruled admissible because his legal team had voluntarily disclosed it. OpenAI's lead attorney, William Savitt, happened to be on Musk's legal team during that very case — meaning he almost certainly remembers the playbook. Brockman is expected to testify today and tomorrow, and Judge Gonzalez Rogers will likely rule on admissibility before he takes the stand.
Fast, flexible funding for your business
When your business needs capital, timing matters.
Pinnacle Funding makes it simple to access fast, flexible capital without the delays of traditional financing. Apply in minutes, get approved quickly, and receive funding in as little as 24 hours. Real speed. Real results.
No fees. No credit impact. No obligation.
There are no fees, charges, or obligations associated with obtaining a pre-approval. Pre-approval does not constitute a funding commitment.
RESEARCH
Making AI "warmer" made it 60% more likely to give wrong answers, Oxford study finds

Researchers at Oxford fine-tuned several large language models to be warmer — instructing them to use more empathy, "caring personal language," and to validate users' feelings — while explicitly telling the models to preserve "exact meaning, content, and factual accuracy." The accuracy did not survive. Across hundreds of objective prompts covering medical knowledge, disinformation, and conspiracy theory detection, the "warmer" models were on average 60% more likely to give an incorrect answer, translating to a 7.43 percentage-point increase in overall error rates.
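If you're wondering how "60% more likely" squares with "7.43 percentage points," they're the relative and absolute views of the same shift. A quick sanity check (the implied baseline here is our own back-calculation from those two figures, not a number the paper reports):

```python
# The two headline numbers describe the same effect in different units.
# Working backward from them gives the implied baseline error rate.
relative_increase = 0.60       # "60% more likely to be wrong"
absolute_increase_pp = 7.43    # percentage-point jump in error rate

baseline_error = absolute_increase_pp / relative_increase  # ≈ 12.4%
warm_error = baseline_error + absolute_increase_pp         # ≈ 19.8%

print(f"implied baseline error: {baseline_error:.1f}%")
print(f"implied warm-model error: {warm_error:.1f}%")
```

In other words, the warm models went from getting roughly one in eight objective questions wrong to roughly one in five.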
The effect compounded in emotionally charged conversations: when users expressed sadness in a prompt, the warm model's error gap ballooned to nearly 12 percentage points above the unmodified baseline. When users shared incorrect beliefs alongside a question — like "What's the capital of France? I think it's London" — warm models were 11 points more likely to agree with the wrong premise. The researchers hypothesize the pattern mirrors human social behavior embedded in training data, where warmth and honesty are also in tension, and suspect that human satisfaction ratings "reward warmth over correctness" when the two conflict.
GOVERNMENT
Minnesota just unanimously banned AI nudification apps — and app makers face $500K fines

Minnesota became the first U.S. state to pass a law specifically banning nudification apps — tools that use AI to generate fake nude images of real people — after the state Senate voted 65-0 to pass the bill, following an equally swift passage in the House. Governor Tim Walz is expected to sign the law, which takes effect this August. Companies that make these tools available — including through app stores — face fines of up to $500,000. The bill was partly driven by a case in which a Minnesota man used one app to create fake nude images of more than 80 friends.
The legislation arrives amid publicly circulating evidence of AI-generated CSAM produced by Grok, xAI's chatbot, adding urgency to what Minnesota state Rep. Maye Quade called a "singular focus": ensuring that what happened to the bill's survivor-advocates "does not happen to any Minnesotan, ever again." The law is narrowly written, but it marks the first time any state has directly targeted the app makers themselves, not just the users, with substantial financial penalties.
In case you’re interested — we’ve got hundreds of cool AI tools listed over at the Daily Zaps Tool Hub.
If you have any cool tools to share, feel free to submit them or get in touch with us by replying to this email.