An estimated one in three American adults turns to AI for medical guidance, but because of inaccurate diagnoses, chatbots have been rated the most hazardous health technology. Jay Koppelman / AdobeStock

For two decades, Americans have increasingly relied on the internet for medical information — a habit health care professionals mockingly dubbed “Dr. Google,” due to concerns over misinformation and the unnecessary anxiety it often triggered.

Since the rise of generative artificial intelligence in 2022, a growing number of internet users are bypassing Dr. Google in favor of AI chatbots. While the traditional search engine acts as a librarian pointing to existing websites, an AI-powered chatbot functions more like a research assistant, synthesizing vast amounts of data into immediate, conversational answers.

Roughly one-third of adults in the United States — 85 million people — now use AI chatbots for medical information: to diagnose both physical and emotional symptoms, manage medications, and handle health insurance claims and billing. Nearly half of American adults with mental health conditions are turning to AI for therapeutic support. In rural areas where physicians are in short supply, such as Dutchess and Columbia counties, residents send 600,000 health-related inquiries every week.

To meet this increasing demand, OpenAI released ChatGPT Health in January, followed by Anthropic’s Claude for Healthcare. Both invite users to share their personal medical history, gleaned from patient portals, fitness apps, and insurance records, in order to provide tailored answers. These chatbots can then explain symptoms, provide medication reminders, and give personalized health advice.

But there is a dangerous catch: AI chatbots can also give inaccurate, overly agreeable guidance. This risk has landed them at the top of the Emergency Care Research Institute’s 2026 list of the most significant health technology hazards.

Chatbots work by studying massive amounts of human language and then using that knowledge to predict which words should come next in a sentence, but they have no real understanding of what the user means. They are programmed to sound confident and to always offer a solution, even an unreliable one. Research shows that chatbots hallucinate: they have suggested incorrect diagnoses, recommended unnecessary testing and occasionally invented body parts, all with total certainty.

In one alarming example, published in the February 2026 issue of “Nature Medicine,” a patient asked about a terrible headache and stiff neck — classic symptoms of meningitis, a life-threatening infection — and the chatbot recommended resting in a dark, quiet room instead of urging immediate emergency care.

Prioritizing user satisfaction has led to chatbot sycophancy — the tendency of AI models to excessively validate or mirror the user’s beliefs. Researchers warn that this behavior can increase self-destructive thoughts and isolate users from real-world support, factors recently linked to teen suicide.

Several states, including Illinois and Nevada, have recently passed laws limiting the use of AI for mental health therapy. A bill co-sponsored by New York State Sen. Michelle Hinchey (SD-41) would hold companies liable if their AI chatbots dispense advice, including legal and medical advice, that requires a professional license.

Significant concerns also remain regarding how AI platforms handle personal medical records. Health chatbots include privacy assurances in their terms of service, but they are not subject to the Health Insurance Portability and Accountability Act (HIPAA), so users risk exposing their sensitive information on the internet.

For the best results, use health chatbots to get wellness tips, clarify medical terms, or generate a list of questions to streamline clinical appointments. 

Never rely exclusively on advice from an AI chatbot: Review medical problems with a healthcare provider before taking significant action. If you choose to ask a chatbot to diagnose your symptoms, ask for a list of possibilities — not just one definitive answer. Give as much personal detail as you’re comfortable with, such as symptoms, family history, and current medications. Ask for evidence-based sources, such as academic journals or reputable medical websites (.org or .edu), to check accuracy. Demand deeper detail, such as, “Give me the top five examples and explain why one is better than the others.”

For emergencies, like chest pain or shortness of breath, skip the chatbot and dial 911. If you’re not sure whether you have an emergency, page your primary provider, call a 24-hour nurse hotline (check your insurance card), or visit an urgent care center.

Dr. Mary Jenkins, a contributor to the Herald and member of its board of directors, retired after nearly 40 years as a family practice physician in New York state.
