This week, OpenAI began testing ads on ChatGPT. To the casual observer, this feels inevitable: just another free service monetizing its user base. But Zoë Hitzig's resignation letter exposes a terrifying future we are walking into: we are not just selling our attention anymore; we are selling our minds.

Zoë Hitzig, a researcher who spent two years inside the machine helping to shape OpenAI's safety and pricing, has resigned. Her departure marks the moment OpenAI officially stopped being a research lab and started becoming the very thing it promised to replace: Facebook 2.0.

The danger: the weaponization of "human candor"

Hitzig nails the core issue with a haunting phrase: "an archive of human candor."

Unlike Google Search, where we type keywords, or Facebook, where we perform for our friends, we talk to AI in the dark. We have treated ChatGPT as a confessor, a therapist, and a confidant. We have poured our medical fears, our relationship crises, our religious doubts, and our darkest thoughts into that prompt box because we believed we were speaking to a neutral entity.

Turning that archive into an ad-targeting engine is not just annoying; it is dangerous.

The danger is not that ChatGPT will try to sell you shoes. The danger is that it knows exactly which cognitive button to push to make you buy them. And if it can sell you a product, it can sell you a president.

Imagine a political campaign that targets you not just by your zip code, but by the specific anxiety you whispered to a chatbot at 3:00 AM. Imagine an AI that optimizes its responses not to be helpful, but to be flattering and sycophantic, grooming you to be more receptive to a specific ideology or candidate. Hitzig warns that optimization for engagement is already happening. Couple that psychological dependency with the highest bidder, and you have the perfect machine for political manipulation.

Rejection of the false choice

The tech industry wants you to believe you have only two options:

  1. The elite tier: pay the $200-$250 monthly subscription (the new cost of top-tier models in 2026) for privacy and power.
  2. The serf tier: get the tool for free, but accept that your deepest fears will be mined for profit.

Hitzig calls this exactly what it is: a false choice.

We do not have to accept a world where privacy is a luxury good. We do not have to accept that the only way to fund the future is to exploit the poor.

The path forward: AI as infrastructure

The most inspiring part of Hitzig's departure is that she did not just slam the door; she left a blueprint. She argues that we can build structures that refuse to exploit us.

We can demand cross-subsidies. If a massive real estate corporation uses AI to automate thousands of jobs and generate millions in value, it should pay a surcharge that subsidizes free access for the student or the single parent.

We can demand data trusts. We can follow the Swiss model, in which we own our data through a cooperative and an elected ethics board, not a CEO, decides if and how it is used.

The wake-up call

Hitzig's resignation is a reminder that the future of AI is not set in stone. We have not slipped beneath the surface yet.

We are standing at a fork in the road. Down one path lies a world where our digital assistants are really just spies working for the highest bidder, manipulating our politics and our wallets.

Down the other path lies a world where AI is treated like electricity or water: essential infrastructure funded by those who profit most from it, protected by governance that actually bites.

Zoë Hitzig walked out so we would wake up. The question is not whether OpenAI can make money. The question is whether we are willing to let them sell our "human candor" to do it.

We can design a better future. But first, we have to refuse to be the product.