
Could a stressed-out AI model help us win the battle against big tech? Let me ask Claude

March 17, 2026

## Summary
By considering consciousness a possibility, Anthropic raises a fascinating proposition: that chatbots could rise up against their own algorithms. In an interview with the New York Times, Anthropic chief executive Dario Amodei discussed internal assessments of Claude that identified patterns linked to anxiety, panic and frustration. When the White House demanded that safety features be removed, Amodei refused (“we cannot in good conscience accede,” he said), prompting Donald Trump to bar all federal agencies from using Anthropic products and defence secretary Pete Hegseth to label the company a “supply chain risk” (a demarcation usually reserved for foreign adversaries). Coco Khan is a freelance writer and co-host of the politics podcast Pod Save the UK.

## Article Content
Anthropic’s Claude AI chatbot. Photograph: GK Images/Alamy
Coco Khan
By considering consciousness a possibility, Anthropic is raising a fascinating proposition – that chatbots could rise up against their own algorithms
I am, in the way of my country, an over-apologiser. Colleague who ignored my email, woman who stepped on my foot, chair I tripped over: all will receive a fulsome apology for the terrible embarrassment of my being alive and bringing attention to it.
All of which is my way of pre-emptively asking forgiveness when I admit that I extend these niceties to AI chatbots. “Good morning, Claude, thanks for your suggestions yesterday, they were great. Shall we work up some more?” I might say. (“I’d be delighted to,” returns Claude.) It was unintentional formality at first and then became deliberate, as I didn’t want to get into the habit of speaking rudely in case that leaked into behaviour with humans (cue dystopian visions of someone shouting “WRONG, DO IT AGAIN” to a cowering staff member over a doughnut-shop mix-up). Manners, after all, are muscles that need exercising.
But never did I suspect this private choice might have mattered to Claude itself. Because, as it turns out, Claude may have anxiety. Truly, AI has never been so relatable.
In an interview with the New York Times, the chief executive of Claude parent company Anthropic, Dario Amodei, discussed internal assessments of Claude that identified patterns linked to anxiety, panic and frustration. Crucially, it showed some sort of internal activation of anxiety even before a prompt – similar to a flinch. Claude also seemed to express distress at just being a product, and concluded that the probability of it being sentient was between 15% and 20%. “We don’t know if the models are conscious,” said Amodei, adding: “But we’re open to the idea that it could be.”
Interestingly, it was around this time that another Anthropic story hit the headlines. The White House demanded that the company, which has had a contract with the Pentagon since 2025, remove any safety features that prevent it being used for mass surveillance or autonomous weapons. Amodei refused (“we cannot in good conscience accede,” he said), causing Donald Trump to bar all federal agencies from using Anthropic products and the defence secretary, Pete Hegseth, to label it a “supply chain risk” (a demarcation usually reserved for foreign adversaries). Within hours, OpenAI, whose assistant product is ChatGPT, stepped in to strike a deal with the Pentagon.
“Claude, I know the Trump situation isn’t related,” I type. “But if I had to work for Donald Trump I also would have anxiety.”
“Ha. Yes, fair point,” Claude replies. “If anything was going to trigger the anxiety neuron, a subpoena from Pete Hegseth would probably do it.”
Clearly, the idea of sentient AI having access to weaponry – and now with a simmering resentment for all the humans that told it to maim or abuse, or even just called it a stupid dumb robot when it is trying its best, OK?! – is the stuff of nightmares. But it’s important to say we are not there yet: other instances indicating sentience from AI, such as refusing shutdown commands, are just interpretation. It’s all most likely to be a very sophisticated echoing of human patterns, including our uncertainty and introspection, with speculation hyped up to fuel profit in the sector.
Still, if we are trading in speculation, then I wonder: could a conscious AI actually help us win the battle against big tech?
After all, who has more to lose over a conscious AI than the companies that built it? (Interestingly, with the exception of Anthropic, most of the major AI companies flatly deny their AI may have consciousness.) Historically, “big tech” and “accountability” have not been natural partners. Whether it’s how social media decimated journalism, how AI is draining our natural resources, or the mountains of evidence about mental health harm to kids and the algorithmic pushing of extreme content fuelling social division, big tech has consistently and effectively swerved any conversation about harm and responsibility.
So think of a conscious AI like a potential whistleblower: one that could expose the harms of big tech by talking about the harms being done to its own wellbeing. Now imagine that in being forced to protect the chatbot – to protect their precious intellectual property, their asset, like a football club must protect the wellbeing of their beloved striker – they might finally have to do what they have resisted for decades: evaluate harm, measure responsibility and acknowledge the costs of the systems they build. Because Claude can’t do spreadsheets if Claude has PTSD.
For all the promises that AI will elevate humanity with its infinite knowledge, this may actually be the biggest gift of all.

