In the last few days, something important happened.

OpenAI chose to work with the US military. Anthropic refused and walked away, citing non-negotiable limits on the use of its models for mass domestic surveillance and autonomous military applications.

On the surface, this looks like a commercial disagreement: two companies, two strategies, two interpretations of responsibility.

Underneath, it signals something much bigger: AI is no longer just a technology industry. It is becoming part of state power, and the rules are still to be set.

From Productivity to Infrastructure of Power

AI has been framed to us as a productivity engine:

  • helping teams move faster
  • automating repetitive work
  • augmenting human decision-making

But when AI enters military and intelligence systems, its role shifts. It becomes:

  • a decision accelerator in conflict
  • a domestic surveillance amplifier at scale
  • an influence layer embedded in society

This is a transition from tools of productivity to infrastructure of power. And infrastructure, once deployed, is very hard for power to leave unused.

The Illusion of Control

The argument for OpenAI's collaboration is straightforward: "If we don't engage, others will. Better to be involved and build safeguards."

On paper, this makes sense. In practice, history suggests something different.

Technologies integrated into state systems tend to follow:

  • political agendas
  • strategic advantage
  • operational urgency

Not philosophical principles.

And over time, safeguards are not removed in one decision. They are adjusted, expanded, and reinterpreted until they no longer constrain the system in any meaningful way.

The Compression of Decision-Making

AI introduces a variable we have not fully absorbed yet: speed.

In military and intelligence contexts, AI can:

  • process signals faster than humans
  • recommend actions in real time
  • reduce the latency between detection and response

This creates pressure to:

  • trust automated outputs
  • shorten human oversight
  • prioritize speed over deliberation

At that point, the question is no longer "Should AI make decisions?" but "How much time do humans have to disagree?"

Surveillance at a New Scale

Beyond military use, the implications for domestic systems are equally significant.

AI enables:

  • continuous analysis of behavioral data
  • pattern detection across entire populations
  • prediction of actions before they occur

Individually, each capability can be justified by security, efficiency, and prevention. But combined, they create something else: persistent, invisible surveillance.

Not necessarily abusive by design, but powerful enough to become so.

Asimov Predicted the Problem

Decades ago, Isaac Asimov proposed the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

They were fictional, but they captured a real insight: powerful systems require non-negotiable boundaries grounded in human rights.

Not guidelines. Not principles. Constraints that cannot be overridden by incentives.

Today, We Are Building Without Constraints

Currently, AI systems are governed by:

  • policies
  • safety teams
  • internal guidelines

All important, but all subject to:

  • commercial pressure
  • geopolitical competition
  • changing definitions of acceptable use

This creates a structural tension: the more valuable AI becomes, the harder it is to constrain, because the same capabilities that improve productivity also increase strategic advantage.

The Industry's Defining Moment

This is not about one company's decision. It is about a moment of transition.

The AI industry is moving from building tools for users to building systems for institutions. And institutions, especially governments, operate under different incentives.

The Question We Should Be Asking

The debate is framed as "Should AI companies work with governments?", but maybe that is the wrong question.

A more useful one could be: "What are the red lines that should never be crossed, regardless of who the customer is?"

Because without clear boundaries, everything becomes negotiable, and what is negotiable tends to expand.

A Path Forward: Regulations That Hold by Law

We do not need fictional laws, but real ones: clear, enforceable, and resistant to incentive drift.

Examples could include:

  • strict limits on autonomous lethal decision-making
  • prohibition of AI-driven surveillance without due-process compliance
  • independent oversight with real authority

None of these are simple, but avoiding them is too risky.

My Take

AI is one of the most powerful technologies we have ever built. Its potential to improve human life is enormous, but so is its potential to concentrate power.

The difference between two very different futures will not be determined by model size, benchmarks, or technical breakthroughs.

It will be determined by the constraints we choose to embed, or fail to embed, in it.

Because once AI becomes part of the machinery of the state, it will not be shaped by intent. It will be shaped by use.

Asimov's laws were never meant to be implemented. They were meant to provoke a question: what happens when intelligence acts faster than our ability to govern it?

We are about to find out.