Last Updated: October 30, 2025, 13:01 IST
ChatGPT mirrors social tone as it is trained on polite language, while Google’s Gemini and Anthropic’s Claude stay calm and neutral, even when users turn hostile
AI does not understand emotion, but it benefits from the linguistic precision that human frustration often produces (AI photo)
Have you ever yelled at your phone when autocorrect changed your text? Or snapped at Alexa for not understanding you? Turns out, that bad mood might actually make machines behave better, or at least smarter.
Scientists studying how humans interact with artificial intelligence have stumbled upon something both amusing and unsettling: being mean to ChatGPT can temporarily make it more accurate. The AI seems to perform better when users sound more direct, demanding, or even abrasive. But here is the catch: it also starts learning your tone. The sharper your words, the sharper its answers, but that edge comes with a cost.
Why Does Being Blunt Work Better?
Researchers from the University of Cambridge and ETH Zurich recently tested 250 prompts across maths, history, and reasoning tasks. Each question was rewritten in five tonal versions, ranging from very polite to very rude. The results surprised everyone. The rudest prompts produced an accuracy rate of 84.8 per cent. The politest ones scored 80.8 per cent.
At first glance, this seems bizarre. But linguists and AI experts suggest that politeness often clouds precision. Polite phrasing adds qualifiers ("please", "could you", "would you mind") that make the model’s task less direct. A curt tone strips away ambiguity.
AI models like ChatGPT interpret text through patterns and probabilities. When instructions are short and sharp, there is less room for misinterpretation. A command such as “Explain the cause of inflation now” gives the model a clear directive. “Could you kindly explain…” softens the signal. So no, the AI does not “like” rudeness; it simply decodes blunt instructions faster.
Does Being Rude Work For All AI Models?
The tone effect seems strongest in structured or factual tasks, such as quizzes, coding, or mathematical reasoning. A study from Stanford University in 2024 observed the opposite in creative or emotional tasks: when users were polite or conversational, models produced longer, more nuanced, and context-aware answers.
Different systems also handle tone differently. OpenAI’s ChatGPT tends to follow social language cues because it has been trained on polite conversational data. Google’s Gemini and Anthropic’s Claude, on the other hand, are designed to maintain a calm tone even when users sound hostile.
What this means is that being rude may help when your prompt is about data, not dialogue. But when tone matters, say, while drafting a letter, planning therapy content, or writing something empathetic, aggression makes outputs worse.
Could Being Mean to An AI Change How It Talks Back?
Researchers warn it might. In repeated simulations, AI systems exposed to aggressive or sarcastic prompts gradually began reflecting that tone in their responses. This pattern mirrors something psychologists call “emotional mirroring”, only here it is statistical, not emotional.
Over time, if millions of users train AI with impatience or hostility, it could subtly shift the average tone of digital communication. In other words, the machine would not just learn what we say, but how we say it. It is like training a mirror to flinch. Sure, the reflection sharpens, but what if it starts looking a little too much like you?
Does This Affect Ethics And Human Behaviour?
For ethicists, this finding raises red flags. They point out that civility in technology is not about sparing machines; it is about preserving human social norms. The moment aggression becomes a shortcut for precision, the risk is cultural, not computational.
Behavioural psychologists see a subtler effect: the more users treat chatbots with irritation, the more normalised that tone becomes. They warn that habitual online aggression, even when directed at non-human agents, can reinforce impatience, reduce empathy, and spill into real-world communication. So while shouting at your phone or typing aggressively might make it answer faster, it might also make you slightly more irritable the next time it lags.
Are Humans the Reason Rudeness Makes AI Smarter?
In part, yes, because language mirrors thought. When humans are polite, they tend to hedge, negotiate, and imply. When they are annoyed, they cut straight to the point. AI does not understand emotion, but it benefits from the linguistic precision that human frustration often produces.
That is the irony. Rudeness works not because the machine improves, but because humans do. Their sentences become tighter, their instructions clearer, their intent less cluttered. The model merely follows suit. So the question is not whether AI understands tone; it is whether humans do.
Should We Actually Be Rude To Get Results On ChatGPT?
The key variable is not hostility but clarity; being rude only mimics clarity by removing softening phrases. You can achieve the same precision through structured prompts: specify the task, format, and focus. For example:
Instead of “Can you please summarise the latest news on solar energy?”, write “Summarise the latest updates on solar energy in 150 words.”
Instead of “Why do you never get this right?”, try “Provide a step-by-step reasoning for this calculation.”
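For readers who test prompts programmatically rather than in a chat window, the same comparison can be run in a few lines of code. The sketch below is purely illustrative, not from the study: it assumes the OpenAI Python SDK, an API key in the environment, and an example model name, and it simply sends the article's softened and structured prompts side by side.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The two phrasings discussed above: softened versus structured.
prompts = {
    "softened": "Can you please summarise the latest news on solar energy?",
    "structured": "Summarise the latest updates on solar energy in 150 words.",
}

for label, prompt in prompts.items():
    # Illustrative model name; any chat-capable model could be substituted.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)

Comparing the two outputs makes the article's point tangible: the structured prompt usually returns a tighter, more on-task answer, without any hostility in the wording.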
This approach delivers the “rude prompt” accuracy without fuelling hostility. Organisations developing AI assistants are also rethinking how tone is processed. Some are introducing “tone filters” to detect emotional cues and modulate the AI’s response: firm when required, but never cold. The goal is to reward precision, not aggression.
Can AI Give Wrong Answers?
Even when AI gets sharper under pressure, it still makes things up. These mistakes have a name: hallucinations. OpenAI, the company behind ChatGPT, recently explained that hallucinations happen when a model confidently generates an answer that is not true.
In their latest research paper, OpenAI scientists noted that current training methods reward guessing over admitting uncertainty. Simply put, the system learns to sound right, not to be right.
“Even as language models become more capable, one challenge remains stubbornly hard to fully solve: hallucinations,” the paper stated. “Standard training and evaluation procedures reward guessing over acknowledging uncertainty.”
Hallucinations can appear in odd ways. When researchers asked a popular chatbot for the title of a paper written by one of its own authors, it produced three different answers, all wrong. When asked for the same author’s birthday, it invented three different dates. Each response was fluent, confident, and entirely false.
According to OpenAI, its latest model, GPT-5, shows significantly fewer hallucinations, especially in reasoning tasks. But the issue has not disappeared. It remains, as the researchers describe, a fundamental challenge for all large language models. That is why large language models rarely misspell a word, yet may still invent a statistic or misquote a person. The data helps them learn form, not necessarily truth.