Last Updated: January 13, 2026, 21:49 IST
In Musk’s own promotion of the technology, he has suggested that Grok’s medical analysis is “quite accurate” and will improve with time, fuelling a debate about AI in healthcare.
Elon Musk recently revealed that he uploaded the results of his own MRI scan to Grok, the artificial-intelligence chatbot developed by his company xAI, and that both his doctors and Grok came back with the same “clean” assessment. Musk’s account, shared on the Moonshots podcast with Peter Diamandis, was framed as a vote of confidence: a hint that people could use AI to double-check their medical scans and test results.
The SpaceX and Tesla founder’s clip resurfaced after he retweeted a post by Tesla Owners Silicon Valley that highlighted a remarkable case involving Grok. The post claimed that in late 2025, a 49-year-old Norwegian man credited xAI’s Grok, in a post on X, with saving his life following a misdiagnosis.
Musk went further in the June 2025 video, telling users: “I think AI will be very helpful with the medical stuff. Right now you can upload your X-rays or MRI images to Grok and it will give you a medical diagnosis. I have seen cases where it’s actually better than what doctors tell you.”
In his posts, he has suggested that Grok’s interpretation might rival or even surpass what doctors provide, pointing to anecdotal cases where the chatbot flagged something that clinicians had missed.
Can AI Offer A Reliable Second Opinion?
AI tools that analyse medical data are not new. Radiology departments have used machine learning for years to highlight areas of concern in scans, helping to triage workloads and flag anomalies in medical reports. But Grok and its peers operate in a very different domain: they’re general-purpose chatbots, not dedicated medical systems trained on carefully curated clinical datasets.
In his own promotion of the technology, Musk has suggested that Grok’s medical analysis is “quite accurate” and will improve with time. He points to cases, including that of a Norwegian man whose appendix issue was flagged by the AI after his doctors initially missed it, as evidence that these tools can add value.
Grok is not alone in the race to bring AI into healthcare. This week, OpenAI introduced ChatGPT Health, a new feature within the chatbot that lets users securely link their medical records and wellness apps such as MyFitnessPal and Apple Health. OpenAI said the data shared through this feature will not be used to train its models, a move aimed at addressing growing concerns around privacy and trust in AI-driven health tools.
How Could AI In Healthcare Invade Privacy?
Beyond accuracy, there’s the question of data privacy. When users upload medical scans to a chatbot on a social platform, they may unwittingly expose sensitive health information that conventional healthcare systems are designed to protect. Unlike medical records stored under stringent regulations, data shared on social media platforms isn’t subject to the same privacy safeguards. Experts warn that this could lead to unintended consequences if images or associated metadata are used to train AI models or shared more widely than users realise.
Medical ethicists also raise concerns about equity and bias. Training models on self-submitted scans from a subset of users may produce algorithms that work better for some populations than others, particularly if the underlying dataset isn’t representative of broader demographic diversity.
Despite these warnings, many people are already turning to AI for health-related queries. Millions use chatbots to look up symptoms or seek clarity on medical reports that can seem opaque or technical. Industry figures suggest that tens of millions of users have sought health information from large language models in the past year alone.
AI in Healthcare: Where Does Responsibility Lie?
A study published in May 2025 found that, while no AI model is without limits in interpreting medical data, Grok outperformed Google’s Gemini and OpenAI’s ChatGPT-4o at identifying pathological signs across 35,711 brain MRI slices.
Musk’s promotion of Grok as a tool for medical interpretation has prompted debate about the boundaries of AI in healthcare. While he argues for its potential, others caution that encouraging people to treat a chatbot as a medical second opinion carries risks if users misinterpret or over-rely on its output.
Doctors are not blind to the technology’s potential. In controlled settings, AI can assist radiologists, help prioritise cases and even pick up subtle patterns humans might overlook. But there’s broad consensus among clinicians that AI should augment human expertise, not sideline it.