The dispute between Anthropic and the Pentagon raises a fundamental question for democracies. As AI systems move closer to lethal targeting and population-scale surveillance, the issue is no longer technical capability but political choice.
It is tempting for Europeans to watch the Anthropic saga with a degree of satisfaction, confident the EU AI Act would shield them from similar excesses. In reality, the Act excludes defence and national security. Where the stakes are highest, its protections do not reach.
Anthropic has resisted pressure to remove limits in its AI systems that restrict autonomous lethal targeting and large-scale surveillance, even as its Claude model reportedly supported US strikes on Iran. The Pentagon argues that it should be free to use such systems for “all lawful military purposes.”
Something can be legal and still deeply immoral. Democracies have approved policies in the past that later generations judged harshly. The question is not whether militaries should use AI. Clearly, they should and will. The question is how far that use should extend.
Take autonomous weapons, often called “killbots.” Their proponents claim that humans remain involved, but operational speed is pushing systems toward greater autonomy. When machines identify and engage targets faster than humans can intervene, the human role risks becoming marginal at best. If a system selects a target based on code written by one team, trained on data gathered by another and deployed by a third, who is accountable when something goes wrong? As the gap between decision and consequence widens, accountability weakens.
Surveillance raises a different concern. Modern AI does more than store and process information; it also models behaviour and predicts intent. Used broadly, it enables tracking across entire populations. This is mass surveillance, even when described as “keeping people safe.” Democracy collapses when citizens are transparent to power, but power is opaque to citizens.
There is another uncomfortable reality. In the Anthropic case, a private company is holding the line while a government bureaucracy presses for fewer limits. When private companies are enforcing ethical constraints and governments are not, something is very wrong.
This dispute has already produced extraordinary government action. After Anthropic held firm on its safeguards, US authorities moved to ban its systems from federal use and label the company a supply-chain risk, a decision that has drawn industry pushback and deepened the broader debate over who gets to set the limits on AI in national security contexts. It has also driven a surge in new subscriptions to Anthropic’s Claude, as the company has won public trust by holding the line.
Anthropic’s refusal did not halt the trajectory: OpenAI stepped in to deepen its own Pentagon relationship. This illustrates the limits of corporate restraint when governments lack their own regulations and ethical boundaries.
Defence and security strength is not the issue; the character of that strength is. The familiar argument that rivals will outpace us unless we act with the same impunity is tired. It assumes that fewer limits automatically produce greater strength.
Democracies claim to defend human freedom and the rule of law. If those values are sidelined when new capabilities emerge, they never had any real meaning in the first place.
There is also a democratic deficit. Decisions about the defence and security use of AI are increasingly shaped through negotiations between governments and technology firms. Citizens have little role in defining these boundaries, even though the consequences affect everyone.
For Europe, this debate is immediate. The European Union presents itself as a guardian of digital rights. The GDPR and the AI Act are offered as proof that innovation can align with freedom and dignity.
Yet the AI Act does not govern military and national security uses. In the domain where AI may have the most severe consequences, safeguards are thinner and oversight is weaker. If Europe wants its values to shape its defence policy, those limits must be defined deliberately. Deliberate limits could also bolster the global brand of European tech: products known not to spy on the people who buy them.
European governments are accelerating AI integration, which makes sense given the current security environment. In doing so, they should decide now which lines will not be crossed: lethal decisions must remain under meaningful human control, and population-wide surveillance must not become routine practice.
The deeper risk for Europe is not falling behind in technological capability but sacrificing its values and principles while trying to keep pace. If Europe wants to remain credible as a bastion of democracy, it cannot anchor its future security in fully automated killbots and mass surveillance. It must show that even in an era of rapid technological change, power remains constrained by law, accountability and democratic consent.

Source:
www.euractiv.com