OpenAI’s launch of ChatGPT Health is reigniting a familiar debate over how much patient-facing AI will empower people, and how much it could amplify misinformation, anxiety and privacy risk.
ChatGPT Health is a new feature that allows users to securely combine their personal health information with ChatGPT’s AI to learn about and manage health concerns. However, physicians say its impact on care will depend on guardrails around accuracy, transparency and how patients use it between visits.
Eight physicians joined Becker’s to share their reactions and what they think ChatGPT Health could change for patient care in 2026.
Editor’s note: Responses have been edited lightly for clarity and length.
Question: What’s your reaction to OpenAI launching ChatGPT Health — and how do you think it could affect patient care?
Manish Bhuva, MD. President of GI Partners of Illinois (Chicago): ChatGPT is here to stay. Over 100 million patients globally use it on a weekly basis for health and wellness questions. The reality is, for nearly two decades, physicians have been using the internet to consolidate their already-existing knowledge base, and our patients have been doing the same with regard to their health.
ChatGPT Health is designed to securely bring one’s individual health information and ChatGPT’s intelligence together by connecting a patient’s wellness and medical records. This information is often scattered or disorganized across multiple portals, apps, medical notes, etc. This consolidation of records and data will help with intelligence gathering and may also help the physician, who often has to search for pertinent records. Those patients who already use the internet will merely have a more powerful tool – one focused on improving daily habits, offering wellness advice and explaining often difficult-to-interpret test results. With regard to provider-facing patient care, I don’t feel it will make much of a difference.
The platform is meant to support medical care and not replace it — patients will need physicians to diagnose and treat definitively. Having an AI tool can potentially put patients at ease, especially when they often have to wait months to discuss new symptoms or thoughts at an appointment. A group of my patients already love to use AI and routinely bring their information to their office visits. Those patients usually understand that what we do is far more than pattern matching. In most cases, we already know ChatGPT’s response and assessment. The tool can be useful in helping focus office visits, but our clinical acumen and experience will provide comfort and reassurance for a patient that an AI platform will never be able to provide. This human element of care is what separates us from a computer screen. Used well, the tool should lead to informed, efficient and cost-effective patient care. However, there is no doubt that the AI platform can augment our decision-making. I look forward to seeing how physicians and new AI-based tech can transform the care landscape.
Jonathan Block, MD. Principal Physician of Associated Medical Professionals of NY (Syracuse, N.Y.): My reaction is a mix of elation and fear. AI has incredible capabilities, but only if used responsibly. Allowing patients unfettered access this way could exponentially worsen what I call the “Dr. Google effect,” with patients potentially misunderstanding or misusing their own health information. What is more concerning is how to ensure the AI’s responses are correct, without the risk of hallucination or of drawing on incorrect online information in creating its answers. Another concern is privacy. Although the system is designed to be self-contained, how do we guarantee it conforms to HIPAA and HITECH requirements given the cross-referencing and input from all of these other information sources? There are too many “what ifs,” and the regulatory environment hasn’t caught up to this rapidly advancing technology. AI is an amazing tool, but I believe it must remain in the proper hands to make sure it is not abused or misused.
Peter Bravos, MD. Chief Medical Officer of Sutter Surgery Center Division (Sacramento, Calif.): ChatGPT Health could empower patients with personalized, secure insights and improve health literacy, but risks do remain. Over-reliance on AI may delay care, privacy concerns persist and uneven access could widen disparities. Success will ultimately rely on transparency, clinician trust and clear boundaries around diagnosis.
Kenneth Elconin, MD. Orthopedic Surgeon at Orthopedic Surgery Medical Group (Los Angeles): Launching ChatGPT Health will be good for the patient population. Currently, people are using Google, Wikipedia and other information sources. Hopefully, AI will have accurate information and be a good source for the interested patient to obtain accurate, in-depth information on their medical problems. The main caveat is accuracy and appropriateness of the information. I always appreciated a patient who took the time to look up information on their medical problem so we could have an intelligent conversation about the diagnosis and treatment options.
Xi Luo, MD. Anesthesiologist at UT Southwestern Medical Center (Dallas): My reaction to OpenAI’s launch of ChatGPT Health is that it has the potential to create meaningful downstream effects across multiple areas of healthcare, particularly in patient access, utilization and clinician training.
From a patient care perspective, much of the foundational medical knowledge that powers AI tools is already publicly available through established textbooks and peer-reviewed literature. What changes with ChatGPT Health is the ease of accessibility and synthesis. As an educational and early-triage tool, it could help patients better understand symptoms, recognize potentially concerning signs earlier and feel more empowered to seek medical care when appropriate. In the early phases, this may increase healthcare utilization — particularly in primary care-facing specialties such as pediatrics, family medicine, OB-GYN and emergency medicine. On an individual level, patients may experience either reassurance or increased anxiety depending on how information is interpreted without clinical context.
From the provider standpoint, ChatGPT Health may function as a highly efficient, condensed knowledge search tool. Its impact will likely vary based on a clinician’s training level and experience. For learners such as medical students and early trainees, rapid access to synthesized information could accelerate learning but may also be overwhelming without appropriate guidance. Over time, this will further elevate the value of clinical experience, judgment and pattern recognition — skills that cannot be fully replicated by AI.
Importantly, clinical mentorship will remain a cornerstone of training the next generation of physicians. Patient care involves nuance that extends beyond foundational knowledge, including evolving drug interactions, physiologic variability and real-time clinical decision-making. As medical information continues to expand, the role of experienced clinicians in interpreting data and ensuring safe, high-quality care will become even more critical.
Jennifer Lynch, MD. Otolaryngologist in Green Bay, Wis.: ChatGPT Health is a response to, No. 1, high usage of the AI system for health concerns (it was recently estimated that 1 in 4 Americans have asked ChatGPT a health question) and, No. 2, difficulty accessing and navigating healthcare. It increasingly seems like you need a degree to navigate healthcare.
ChatGPT has two things working in its favor: accessibility and price. The programmers integrated flattery into the algorithm, creating a positive experience. The saying is true: people will always remember how you made them feel. The information may not be accurate, but the experience is good.
I can appreciate the risks, but I don’t foresee it going away. There’s just too much upside. I am interested to hear continued discussion on ways to incorporate safeguards.
In some ways, ChatGPT Health is like self-driving cars. While there will be errors, it’s not as if there aren’t ample mistakes, some even fatal, in traditional healthcare. It’s not realistic for either to be perfect.
Matt Mazurek, MD. Assistant Professor of Anesthesiology at the Yale School of Medicine (New Haven, Conn.): The primary concern I have with using ChatGPT for health-related purposes is accuracy. ChatGPT can generate confident, well-phrased responses, but it can also produce incorrect, incomplete or outdated medical information. ChatGPT does not perform clinical reasoning, cannot assess urgency and lacks access to physical exams, labs, imaging or a full medical history. This creates a real risk that users may misinterpret symptoms, delay appropriate care or act on advice that is inappropriate for their specific condition. In short, it can explain information, but it cannot diagnose, triage or treat safely.
I have concerns about privacy, bias and over-reliance; additionally, the ethical and legal issues have not been ironed out. Who is responsible if harm occurs? I think it should be used with caution and with warnings about security and possible data breaches as well. Used appropriately, ChatGPT can help users understand medical terminology or prepare questions for clinicians — it should not and cannot replace professional medical judgment, especially for diagnosis and treatment.
There is a race to monetize and capitalize on this technology. That does not mean we should ignore the red flags and the real concerns around the use of not just ChatGPT Health, but any AI platform in healthcare.
Opinions expressed by Dr. Mazurek are his own.
Janice Pride-Boone, MD. Pediatrician at St. Mary’s Healthcare Amsterdam (Johnstown, N.Y.): Patients will want to know if their health information is secure. Unfortunately, with the pending loss of health insurance for many, AI may become a viable source of health information before people head to an emergency room.
