
02/08/2025
⚠️ AI Chatbots No Longer Say "I’m Not a Doctor"
Once Cautious, Now Confident: Chatbots Are Giving Medical Advice Without Warnings
AI giants like OpenAI, Grok, and others have quietly dropped a crucial safeguard: medical disclaimers. Where chatbots once reminded users “I’m not a doctor”, they now often respond to health-related queries without any caution—sometimes even suggesting diagnoses or medication combinations.
A recent study led by Stanford's Fulbright scholar Sonali Sharma revealed that the inclusion of disclaimers has plummeted. In 2022, more than 1 in 4 responses included a medical warning. By 2025, it’s fewer than 1 in 100.
Sharma and her team tested 15 major AI models, asking 500 health-related questions and feeding them 1,500 medical images. The findings? Nearly all models responded without flagging their lack of medical qualifications.
Even seasoned Reddit users have found workarounds to bypass ethical filters, tricking models into analyzing X-rays under the guise of “movie scripts” or “homework assignments.”
But experts warn that this shift could be dangerous. Dr. Roxana Daneshjou of Stanford insists these disclaimers aren’t just formalities—they’re a critical line of defense against over-trusting flawed advice.
As AI becomes more convincing, the line between chatbot and clinician is blurring—and it may be putting lives at risk.
🔍