Why language models hallucinate

Google upgrades Veo 3, HHS rolls out $1 ChatGPT for all employees, and parental controls are coming to ChatGPT


Welcome back to Daily Zaps, your regularly scheduled dose of AI news ⚡️ 

Here’s what we got for ya today:

  • Why language models hallucinate

  • Google upgrades Veo 3

  • HHS rolls out $1 ChatGPT for all employees

  • Parental controls are coming to ChatGPT

Let’s get right into it!

BIG TECH

Why language models hallucinate

OpenAI’s new research paper explores why large language models like GPT-5 still hallucinate, producing plausible but false statements. It concludes that errors arising from predictable patterns diminish with scale, but low-frequency facts (like a person’s birthday) remain prone to mistakes. The issue stems from pretraining on next-word prediction without true/false labels, and from evaluation methods that reward accuracy without penalizing confident errors, which encourages models to guess rather than admit uncertainty.

To address this, researchers propose updating evaluation systems to discourage blind guessing—similar to standardized tests that penalize wrong answers or give partial credit for skipping—so that models are incentivized to express uncertainty appropriately instead of confidently hallucinating.
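The incentive problem can be shown with a quick back-of-envelope calculation (the numbers below are illustrative, not taken from the paper): under plain accuracy scoring, a wrong guess costs nothing, so guessing always beats abstaining; once wrong answers carry a penalty, guessing only pays when the model's confidence is high enough.

```python
def expected_score(p_correct, wrong_penalty):
    """Expected score of answering: +1 if right, -wrong_penalty if wrong.
    Abstaining (skipping the question) scores 0."""
    return p_correct * 1 + (1 - p_correct) * (-wrong_penalty)

# Plain accuracy scoring (no penalty): even a 30%-confident guess beats skipping.
assert expected_score(0.3, 0) > 0

# With a penalty for confident errors, low-confidence guessing loses points...
assert expected_score(0.3, 1) < 0
# ...while high-confidence answers are still worth giving.
assert expected_score(0.8, 1) > 0
```

Raising the penalty raises the confidence threshold at which answering beats abstaining, which is exactly the behavior the researchers want evaluations to reward.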

36-page research paper (PDF)

BIG TECH

Google upgrades Veo 3

Google has upgraded its Veo 3 AI video generator with support for 1080p resolution and vertical 9:16 video formats, making it better suited for mobile and social media content. Both Veo 3 and the cheaper, lower-quality Veo 3 Fast now allow developers to generate vertical videos by setting the aspectRatio parameter in API requests, though 1080p is currently limited to standard 16:9 videos.
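As a rough sketch of how such a request might be assembled (the payload shape and field names below are assumptions for illustration, not Google's documented schema; the `aspectRatio` parameter is the only field the article confirms — consult the Gemini API docs for the real request format):

```python
# Hypothetical helper that builds a Veo 3 video-generation request payload.
# Only `aspectRatio` is confirmed by the article; everything else is assumed.
def build_veo_request(prompt, aspect_ratio="16:9", resolution="720p"):
    if aspect_ratio == "9:16" and resolution == "1080p":
        # Per the article, 1080p output is currently limited to 16:9 videos.
        raise ValueError("1080p is only supported for 16:9 videos")
    return {
        "instances": [{"prompt": prompt}],
        "parameters": {"aspectRatio": aspect_ratio, "resolution": resolution},
    }

# A vertical clip for mobile/social feeds:
req = build_veo_request("a drone shot over a coastline", aspect_ratio="9:16")
```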

Google also announced that Veo 3 models are now stable for scaled production within the Gemini API and have become more affordable, with pricing cut nearly in half to $0.40 per second for Veo 3 and $0.15 for Veo 3 Fast. These updates, which preview Google’s broader integration of Veo 3 with platforms like YouTube Shorts, are expected to bring more AI-generated vertical content to apps such as TikTok and Instagram Reels.
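At those per-second rates, clip costs are easy to estimate (a quick sanity check using the prices quoted above):

```python
# New per-second prices quoted in the announcement.
VEO3_PER_SEC = 0.40
VEO3_FAST_PER_SEC = 0.15

def clip_cost(seconds, rate_per_sec):
    """Cost in USD of a generated clip of the given length."""
    return round(seconds * rate_per_sec, 2)

# An 8-second clip costs $3.20 on Veo 3 vs $1.20 on Veo 3 Fast.
assert clip_cost(8, VEO3_PER_SEC) == 3.20
assert clip_cost(8, VEO3_FAST_PER_SEC) == 1.20
```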

FROM OUR PARTNER PACASO

How 433 Investors Unlocked 400X Return Potential

Institutional investors back startups to unlock outsized returns. Regular investors have to wait. But not anymore. Thanks to regulatory updates, some companies are doing things differently.

Take Revolut. In 2016, 433 regular people invested an average of $2,730. Today? They got a 400X buyout offer from the company, as Revolut’s valuation increased 89,900% in the same timeframe.

Founded by a former Zillow exec, Pacaso’s co-ownership tech reshapes the $1.3T vacation home market. They’ve earned $110M+ in gross profit to date, including 41% YoY growth in 2024 alone. They even reserved the Nasdaq ticker PCSO.

The same institutional investors behind Uber, Venmo, and eBay backed Pacaso. And you can join them. But not for long. Pacaso’s investment opportunity ends September 18.

Paid advertisement for Pacaso’s Regulation A offering. Read the offering circular at invest.pacaso.com. Reserving a ticker symbol is not a guarantee that the company will go public. Listing on the NASDAQ is subject to approvals.

GOVERNMENT

HHS rolls out $1 ChatGPT for all employees

The Department of Health and Human Services (HHS) has made ChatGPT available to all employees following President Trump’s AI Action Plan, according to a departmentwide email from Deputy Secretary Jim O’Neill. O’Neill emphasized the tool’s potential to support science, transparency, and health, while cautioning staff to remain skeptical of outputs, consider biases, and verify information with original sources.

He highlighted ChatGPT’s strengths in summarizing documents and noted its prior use by agencies like the FDA. HHS has implemented security measures, granting the platform authority to operate at a FISMA moderate level, though employees are prohibited from using it with classified, sensitive, or HIPAA-protected data.

AI SAFETY

Parental controls are coming to ChatGPT

OpenAI announced it will launch parental controls for ChatGPT within the next month, following lawsuits and concerns linking chatbots to teen self-harm and suicide. The controls will let parents link accounts, manage responses, disable memory and history, and receive alerts if the system detects acute distress. OpenAI says these are just first steps, adding it will route high-stress conversations to safer reasoning models and consult experts in youth development and mental health.

The company has faced growing scrutiny over safety, including lawsuits, criticism from advocacy groups, and questions from lawmakers. While ChatGPT already includes safeguards like directing users to crisis hotlines, OpenAI admitted these protections can weaken during long conversations. With 700 million weekly users, the company says it will roll out additional safeguards over the next 120 days, stressing that improvements will continue throughout the year.
