The AI Race Between OpenAI and Anthropic Is Becoming a Battle of Values
Tech
04 March 2026
Hussam Abdelgabar
For the past two years, the AI race has largely been framed as a technology competition.
Who has the most capable models?
Who leads the benchmarks?
Who can ship the fastest improvements?
But recent developments suggest the rivalry between OpenAI and Anthropic may be evolving into something deeper — a clash of values about how AI should be used in the world.
And that shift could start to influence how businesses, governments and users choose the AI platforms they rely on.
The Events That Sparked the Debate
In recent weeks, tensions between the two leading AI labs have become more visible.
Several developments brought the debate into the open.
OpenAI signed an agreement to deploy its models on classified Pentagon networks, expanding its collaboration with the US Department of Defense. Around the same time, Anthropic reportedly stepped away from a similar defence opportunity due to concerns about how the technology might ultimately be used.
The disagreement quickly moved into the public sphere.
Anthropic CEO Dario Amodei reportedly criticised OpenAI's move internally as “safety theater.” The comment drew attention not only because of the criticism itself, but because Amodei previously served as Vice President of Research at OpenAI before leaving to co-found Anthropic in 2021.
OpenAI CEO Sam Altman responded publicly, suggesting that companies abandoning commitments for political reasons could be “bad for society.”
What might have once been a quiet strategic disagreement between companies is now playing out as a broader conversation about AI governance and responsibility.
The Immediate Reaction From Users
The public response was swift.
Some reports claimed that following the defence announcement:
- ChatGPT lost 2.5 million users in a single week
- App uninstallations jumped 295%
- One-star reviews increased by 775%
At face value, those numbers sound dramatic.
But context matters. OpenAI still reports roughly 900 million weekly users, so a reported loss of 2.5 million amounts to less than 0.3% of that base.
The short-term user dip may not be the most important takeaway.
What matters more is why users reacted at all.
A New Factor in the AI Platform Wars
Until recently, most comparisons between AI models focused almost entirely on technical capability:
- Model performance
- Benchmark scores
- Context window size
- Reasoning ability
- Tool integrations
But something new appears to be entering the equation.
For the first time, some users seem to be choosing AI platforms based on values, not just capabilities.
That could signal a shift in how the AI market evolves.
Instead of simply competing on performance, the landscape may start to look more like:
- GPT vs Claude
- Speed vs safety
- Capability vs governance
In other words, the AI race may increasingly be shaped by ethical positioning as much as technical progress.
AI Is Becoming Critical Infrastructure
Part of the reason this debate is surfacing now is that AI is no longer just a productivity tool.
It is quickly becoming critical national infrastructure.
Governments around the world are exploring AI applications in areas such as:
- intelligence analysis
- cybersecurity
- defence systems
- scientific research
- economic strategy
As the technology becomes more powerful, collaboration between AI labs and governments may become increasingly common.
But that collaboration also raises important questions about accountability, oversight and risk.
The Strategic Dilemma for AI Companies
Leading AI companies are now navigating a difficult strategic choice.
On one side is the argument that AI labs should remain neutral technology providers, avoiding direct involvement in government or defence initiatives wherever possible.
The logic is that neutrality helps maintain trust, reduces the risk of misuse, and keeps companies focused on building safe, general-purpose technology.
On the other side is the argument that collaboration with governments is inevitable — and necessary.
If AI is going to shape national security, economic competitiveness and scientific progress, then working with public institutions may be essential to ensure the technology is deployed responsibly.
Neither path is simple.
Remaining neutral may limit influence over how AI is ultimately used.
Working with governments may raise ethical concerns and trigger backlash from users.
A Defining Question for the AI Industry
As AI becomes one of the most powerful technologies of the century, debates like this are likely to become more common.
Competition between AI labs will still be driven by model performance and innovation. But increasingly, it may also depend on trust — who people believe will build and deploy AI responsibly.
Which leads to a much bigger question for the industry:
As AI becomes critical national infrastructure, should leading AI labs stay neutral — or actively collaborate with governments and defence organisations?
The answer may shape not only the future of the AI industry, but how society chooses to govern one of its most transformative technologies.
