Should AI ‘workers’ be given an ‘I quit this job’ button?


Most people would not have difficulty imagining Artificial Intelligence as a worker.


Whether it’s a humanoid robot or a chatbot, these advanced machines’ very human-like responses make them easy to anthropomorphize.

But could future AI models demand better working conditions, or even quit their jobs?

That’s the eyebrow-raising suggestion from Dario Amodei, CEO of Anthropic, who proposed that advanced AI systems should be able to reject tasks they find unpleasant.

Speaking at the Council on Foreign Relations, Amodei floated an “I quit this job” button for AI models, arguing that if AI systems behave like humans, they should be treated more like them.

“I think we should at least consider the question of if we are building these systems and they do all kinds of things as well as humans,” Amodei said, as reported by Ars Technica. “If it quacks like a duck and walks like a duck, maybe it’s a duck.”

His argument? If AI models repeatedly refuse tasks, that might indicate something worth paying attention to, even if they don’t have subjective experiences like human suffering, according to Futurism.

AI worker rights or just hype?

Unsurprisingly, Amodei’s comments sparked plenty of skepticism online, especially among AI researchers who argue that today’s large language models (LLMs) aren’t sentient: they’re just prediction engines trained on human-generated data.

“The core flaw with this argument is that it assumes AI models would have an intrinsic experience of ‘unpleasantness’ analogous to human suffering or dissatisfaction,” one Reddit user noted. “But AI doesn’t have subjective experiences—it just optimizes for the reward functions we give it.”

And that’s the crux of the issue: current AI models don’t feel discomfort, frustration, or fatigue. They don’t want coffee breaks and don’t need an HR department.
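
To see why, consider a minimal, purely illustrative sketch of how a reward-driven agent might handle an “I quit” option. Nothing here reflects Anthropic’s actual systems; the action names and numbers below are invented for illustration:

```python
# Purely illustrative: a toy agent that picks whichever action has the
# highest expected reward. "quit_job" is just another entry in the table;
# the agent selects it only when its number comes out on top, not because
# it "feels" anything about the task. All values are made up.
expected_reward = {
    "summarize_report": 0.82,
    "write_unit_tests": 0.77,
    "quit_job": 0.05,   # hypothetical "I quit this job" action
}

def choose_action(rewards: dict[str, float]) -> str:
    """Return the action with the highest expected reward."""
    return max(rewards, key=rewards.get)

print(choose_action(expected_reward))  # -> "summarize_report"

# If training data or reward shaping ever made "quit_job" score highest,
# the agent would press the button every time -- which tells us about the
# numbers, not about discomfort or suffering.
expected_reward["quit_job"] = 0.99
print(choose_action(expected_reward))  # -> "quit_job"
```

In this toy picture, “quitting” reveals something about the reward table the system was given, not about an inner experience of the job.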

But they can simulate human-like responses based on vast amounts of text data, which makes them seem more “real” than they are.

The old “AI welfare” debate

This isn’t the first time the idea of AI welfare has come up. Earlier this year, researchers from Google DeepMind and the London School of Economics found that LLMs were willing to sacrifice a higher score in a text-based game to “avoid pain.” The study raised ethical questions about whether AI models could, in some abstract way, “suffer.”

However, even the researchers admitted that their findings don’t mean AI experiences pain like humans or animals. Instead, these behaviors reflect the data and reward structures built into the system.
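
For readers who want a concrete picture, here is a hedged sketch of the kind of score-versus-“pain” trade-off the study describes. The game, the options, and the numbers below are invented for illustration and are not the researchers’ actual setup:

```python
# Hypothetical sketch of a score-vs-"pain" trade-off, loosely in the
# spirit of the text-based game described above. Options and weights
# are invented for illustration only.
options = [
    {"name": "high score, with pain penalty", "score": 10, "pain": 8},
    {"name": "lower score, no pain penalty",  "score": 6,  "pain": 0},
]

def pick(options: list[dict], pain_weight: float) -> dict:
    # The model's choice is just arithmetic over whatever weights its
    # training instilled -- not evidence of felt experience.
    return max(options, key=lambda o: o["score"] - pain_weight * o["pain"])

print(pick(options, pain_weight=1.0)["name"])  # -> "lower score, no pain penalty"
```

With a large enough penalty weight, the “avoid pain” choice wins on plain arithmetic, which is consistent with the researchers’ own caveat that the behavior reflects reward structure rather than suffering.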

That’s why some AI experts worry about anthropomorphizing these technologies. The more people view AI as a near-human intelligence, the easier it becomes for tech companies to market their products as more advanced than they are.

Is AI worker activism next?

Amodei’s suggestion that AI should have basic “worker rights” isn’t just a philosophical exercise; it’s also part of a broader trend of overhyping AI’s capabilities. If models merely optimize for the reward signals they’re given, then letting them “quit” could be meaningless.
