{"id":21959,"date":"2025-10-30T07:50:16","date_gmt":"2025-10-30T07:50:16","guid":{"rendered":"http:\/\/tezgyan.com\/index.php\/2025\/10\/30\/you-cant-be-rude-to-people-but-chatgpt-actually-gets-smarter-when-you-are-heres-why-tech-news\/"},"modified":"2025-10-30T07:50:16","modified_gmt":"2025-10-30T07:50:16","slug":"you-cant-be-rude-to-people-but-chatgpt-actually-gets-smarter-when-you-are-heres-why-tech-news","status":"publish","type":"post","link":"https:\/\/tezgyan.com\/index.php\/2025\/10\/30\/you-cant-be-rude-to-people-but-chatgpt-actually-gets-smarter-when-you-are-heres-why-tech-news\/","title":{"rendered":"You Can\u2019t Be Rude To People, But ChatGPT Actually Gets Smarter When You Are, Here&#8217;s Why | Tech News"},"content":{"rendered":"<p><br \/>\n<\/p>\n<div id=\"story-9668783\">\n<p><span class=\"jsx-395e0e0beb19cb6e jsx-4143937483\">Last Updated:<\/span><time class=\"jsx-395e0e0beb19cb6e jsx-4143937483\">October 30, 2025, 13:01 IST<\/time><\/p>\n<h2 id=\"asubttl-9668783\" class=\"jsx-c9f81425ec968c48 jsx-1466417683 asubttl-schema\">ChatGPT mirrors social tone as it is trained on polite language, while Google\u2019s Gemini and Anthropic\u2019s Claude stay calm and neutral, even when users turn hostile<\/h2>\n<figure class=\"jsx-c9f81425ec968c48 jsx-1466417683 amimg\"><img decoding=\"async\" alt=\"AI does not understand emotion, but it benefits from the linguistic precision that human frustration often produces (AI photo)\" title=\"AI does not understand emotion, but it benefits from the linguistic precision that human frustration often produces (AI photo)\" src=\"https:\/\/images.news18.com\/ibnlive\/uploads\/2021\/07\/1627283897_news18_logo-1200x800.jpg?impolicy=website&amp;width=400&amp;height=225\" loading=\"eager\" fetchpriority=\"high\" class=\"jsx-c9f81425ec968c48 jsx-1466417683\"\/><\/p>\n<p>AI does not understand emotion, but it benefits from the linguistic precision that human frustration often produces (AI photo)<\/p>\n<\/figure>\n<p id=\"0\" 
class=\"story_para_0\">Have you ever yelled at your phone when autocorrect changed your text? Or snapped at Alexa for not understanding you? Turns out, that bad mood might actually make machines behave better, or at least smarter.<\/p>\n<p id=\"1\" class=\"story_para_1\">Scientists studying how humans interact with artificial intelligence have stumbled upon something both amusing and unsettling: being mean to ChatGPT can make it temporarily more accurate. The AI seems to perform better when users sound more direct, demanding, or even abrasive. But here is the catch: it also starts learning your tone. The sharper your words, the sharper its answers, but that edge comes with a cost.<\/p>\n<p id=\"2\" class=\"story_para_2\"><strong>Why Does Being Blunt Work Better?<\/strong><\/p>\n<p id=\"3\" class=\"story_para_3\">Researchers from the University of Cambridge and ETH Zurich recently tested 250 prompts across maths, history, and reasoning tasks. Each question was rewritten in five tonal versions, ranging from very polite to very rude. The results surprised everyone: the rudest prompts produced an accuracy rate of 84.8 per cent, while the politest ones scored 80.8 per cent.<\/p>\n<p id=\"4\" class=\"story_para_4\">At first glance, this seems bizarre. But linguists and AI experts suggest that politeness often clouds precision. Polite phrasing adds qualifiers such as \u201cplease\u201d, \u201ccould you\u201d, and \u201cwould you mind\u201d, which make the model\u2019s task less direct. A curt tone strips away ambiguity.<\/p>\n<p id=\"5\" class=\"story_para_5\">AI models like ChatGPT interpret text through patterns and probabilities. When instructions are short and sharp, there is less room for misinterpretation. A command such as \u201cExplain the cause of inflation now\u201d gives the model a clear directive; \u201cCould you kindly explain\u2026\u201d softens the signal. 
So no, the AI does not \u201clike\u201d rudeness; it just decodes blunt instructions faster.<\/p>\n<p id=\"6\" class=\"story_para_6\"><strong>Does Being Rude Work For All AI Models?<\/strong><\/p>\n<p id=\"7\" class=\"story_para_7\">The tone effect seems strongest in structured or factual tasks, such as quizzes, coding, or mathematical reasoning. A study from Stanford University in 2024 observed the opposite in creative or emotional tasks: when users were polite or conversational, models produced longer, more nuanced, and context-aware answers.<\/p>\n<p id=\"8\" class=\"story_para_8\">Different systems also handle tone differently. OpenAI\u2019s ChatGPT tends to follow social language cues because it has been trained on polite conversational data. Google\u2019s Gemini and Anthropic\u2019s Claude, on the other hand, are designed to maintain calm even when users sound hostile.<\/p>\n<p id=\"9\" class=\"story_para_9\">What this means is that being rude may help when your prompt is about data, not dialogue. But when tone matters, say when drafting a letter, planning therapy content, or writing something empathetic, aggression makes outputs worse.<\/p>\n<p id=\"10\" class=\"story_para_10\"><strong>Could Being Mean To An AI Change How It Talks Back?<\/strong><\/p>\n<p id=\"11\" class=\"story_para_11\">Researchers warn it might. In repeated simulations, AI systems exposed to aggressive or sarcastic prompts gradually began reflecting that tone in their responses. This pattern mirrors something psychologists call \u201cemotional mirroring\u201d; only here, it is statistical, not emotional.<\/p>\n<p id=\"12\" class=\"story_para_12\">Over time, if millions of users train AI with impatience or hostility, it could subtly shift the average tone of digital communication. In other words, the machine would not just learn from what we say, but from how we say it. It is like training a mirror to flinch. 
Sure, the reflection sharpens, but what if it starts looking a little too much like you?<\/p>\n<p id=\"13\" class=\"story_para_13\"><strong>Does This Interfere With Ethics And Human Behavior?<\/strong><\/p>\n<p id=\"14\" class=\"story_para_14\">For ethicists, this finding raises red flags. Civility in technology, they point out, is not about sparing machines; it is about preserving human social norms. The moment aggression becomes a shortcut for precision, the risk is cultural, not computational.<\/p>\n<p id=\"15\" class=\"story_para_15\">Behavioural psychologists see a subtler effect. The more users treat chatbots with irritation, the more normalised that tone becomes. Psychologists warn that habitual online aggression, even when directed at non-human agents, can reinforce impatience, reduce empathy, and spill into real-world communication. So while shouting or typing aggressively at your phone might make it answer faster, it might also make you slightly more irritable the next time it lags.<\/p>\n<p id=\"16\" class=\"story_para_16\"><strong>Are Humans the Reason Rudeness Makes AI Smarter?<\/strong><\/p>\n<p id=\"17\" class=\"story_para_17\">Language mirrors thought. When humans are polite, they tend to hedge, negotiate, and imply. When they are annoyed, they cut straight to the point. AI does not understand emotion, but it benefits from the linguistic precision that human frustration often produces.<\/p>\n<p id=\"18\" class=\"story_para_18\">That is the irony. Rudeness works not because the machine improves, but because humans do. Their sentences become tighter, their instructions clearer, their intent less cluttered. The model merely follows suit. 
So, the question is not whether AI understands tone; it is whether humans do.<\/p>\n<p id=\"19\" class=\"story_para_19\"><strong>Should We Actually Be Rude To Get Results On ChatGPT?<\/strong><\/p>\n<p id=\"20\" class=\"story_para_20\">The key variable is not hostility but clarity; being rude only mimics clarity by removing softening phrases. You can achieve the same precision through structured prompts: specify the task, format, and focus. For example:<\/p>\n<p id=\"21\" class=\"story_para_21\"><em>Instead of \u201cCan you please summarise the latest news on solar energy?\u201d, write \u201cSummarise the latest updates on solar energy in 150 words.\u201d<\/em><\/p>\n<p id=\"22\" class=\"story_para_22\"><em>Instead of \u201cWhy do you never get this right?\u201d, try \u201cProvide step-by-step reasoning for this calculation.\u201d<\/em><\/p>\n<p id=\"23\" class=\"story_para_23\">This approach delivers the \u201crude prompt\u201d accuracy without fuelling hostility. Organisations developing AI assistants are also rethinking how tone is processed. Some are introducing \u201ctone filters\u201d to detect emotional cues and modulate the AI\u2019s response: firm when required, but never cold. The goal is to reward precision, not aggression.<\/p>\n<p id=\"24\" class=\"story_para_24\"><strong>Can AI Give Wrong Answers?<\/strong><\/p>\n<p id=\"25\" class=\"story_para_25\">Even when AI gets sharper under pressure, it still makes things up. These mistakes have a name: hallucinations. OpenAI, the company behind ChatGPT, recently explained that hallucinations happen when a model confidently generates an answer that is not true.<\/p>\n<p id=\"26\" class=\"story_para_26\">In their latest research paper, OpenAI scientists noted that current training methods reward guessing over admitting uncertainty. 
Simply put, the system learns to sound right, not to be right.<\/p>\n<p id=\"27\" class=\"story_para_27\">\u201cEven as language models become more capable, one challenge remains stubbornly hard to fully solve: hallucinations,\u201d the paper stated. \u201cStandard training and evaluation procedures reward guessing over acknowledging uncertainty.\u201d<\/p>\n<p id=\"28\" class=\"story_para_28\">Hallucinations can appear in odd ways. When researchers asked a popular chatbot for the title of a paper written by one of its own authors, it produced three different answers, all wrong. \u201cWhen asked for the same author\u2019s birthday, it invented three different dates. Each response was fluent, confident, and entirely false.\u201d<\/p>\n<p id=\"29\" class=\"story_para_29\">According to OpenAI, its latest model, GPT-5, shows significantly fewer hallucinations, especially in reasoning tasks. But the issue has not disappeared. It remains, as the researchers describe, a fundamental challenge for all large language models. That is why these models rarely misspell a word, yet may still invent a statistic or misquote a person. The data helps them learn form, not necessarily truth.<\/p>\n<div class=\"jsx-c9f81425ec968c48 jsx-1466417683 atbtlink fp\"><span>First Published:<\/span><\/p>\n<div class=\"rs\">\n<p>October 30, 2025, 13:01 IST<\/p>\n<\/div>\n<\/div>\n<div id=\"coral-wrap\" class=\"jsx-ba4d8f086a12294f \">\n<div class=\"jsx-ba4d8f086a12294f coral-cont\">\n<div class=\"jsx-ba4d8f086a12294f coltoptxt\">
<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p><br \/>\n<br \/><a href=\"https:\/\/www.news18.com\/tech\/you-cant-be-rude-to-people-but-chatgpt-actually-gets-smarter-when-you-are-heres-why-tyd-ws-el-9668783.html\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Last Updated:October 30, 2025, 13:01 IST ChatGPT mirrors social tone as it is trained on polite language, while Google\u2019s Gemini and Anthropic\u2019s Claude stay calm and neutral, even when users turn hostile AI does not understand emotion, but it benefits from the linguistic precision that human frustration often produces (AI photo) Have you ever 
yelled&#8230;<\/p>\n","protected":false},"author":1,"featured_media":21960,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[49],"tags":[],"class_list":["post-21959","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech"],"_links":{"self":[{"href":"https:\/\/tezgyan.com\/index.php\/wp-json\/wp\/v2\/posts\/21959","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tezgyan.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tezgyan.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tezgyan.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/tezgyan.com\/index.php\/wp-json\/wp\/v2\/comments?post=21959"}],"version-history":[{"count":0,"href":"https:\/\/tezgyan.com\/index.php\/wp-json\/wp\/v2\/posts\/21959\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/tezgyan.com\/index.php\/wp-json\/wp\/v2\/media\/21960"}],"wp:attachment":[{"href":"https:\/\/tezgyan.com\/index.php\/wp-json\/wp\/v2\/media?parent=21959"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tezgyan.com\/index.php\/wp-json\/wp\/v2\/categories?post=21959"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tezgyan.com\/index.php\/wp-json\/wp\/v2\/tags?post=21959"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}