A Prompt Too Far: How AI Advice On HIV Prevention Turned Life-Threatening For Delhi Man



The man allegedly followed a regimen suggested by the chatbot, which included an incorrect and aggressive dosage of antiretroviral drugs

While PrEP is a highly effective tool in HIV prevention when administered under clinical supervision, it requires rigorous preliminary testing, including renal function assessments and confirmed HIV-negative status. Representational image


The hazards of self-diagnosis via artificial intelligence have come into sharp focus following a medical emergency in the national capital, where a 45-year-old man remains in critical condition. The patient, a resident of South Delhi, reportedly suffered severe organ distress after consuming high-dosage HIV preventive medication—specifically Pre-Exposure Prophylaxis (PrEP)—based on the unauthorised medical advice provided by a popular AI chat platform.

According to senior consultants at the private facility where the man is currently undergoing treatment, the patient sought guidance from the AI tool after fearing a potential exposure. Rather than consulting a qualified medical professional, he allegedly followed a regimen suggested by the chatbot, which included an incorrect and aggressive dosage of antiretroviral drugs. Within forty-eight hours of commencing the self-prescribed treatment, the man was rushed to the emergency ward suffering from acute kidney injury and severe jaundice, symptoms indicative of drug-induced liver toxicity.

Medical experts have expressed grave concern over the incident, highlighting the “black box” nature of AI algorithms that can hallucinate medical facts or fail to account for a patient’s pre-existing conditions and contraindications. While PrEP is a highly effective tool in HIV prevention when administered under clinical supervision, it requires rigorous preliminary testing, including renal function assessments and confirmed HIV-negative status. In this instance, the AI platform reportedly failed to provide necessary caveats regarding these vital screenings, leading to a catastrophic physiological reaction.

The incident has sparked a wider debate regarding the regulation of generative AI in healthcare. While many users turn to these platforms for anonymity and convenience, doctors warn that the lack of accountability and the absence of a “human-in-the-loop” can turn a simple inquiry into a life-threatening mistake. Public health officials in Delhi have reiterated that while AI can assist in information gathering, it should never be a substitute for professional clinical diagnosis or the prescription of Schedule H drugs.

As the patient continues to battle multi-organ failure, his family has urged the public to exercise extreme caution. Hospital authorities have confirmed that they are in the process of reporting the adverse event to the relevant health regulators to assess the liability of the tech firm involved.
