OpenAI recently launched ChatGPT Health, a new feature that allows users to securely upload personal health information—the most recent development in a continuous stream of AI-powered tools shaping the future of healthcare.
But physicians remain divided in their views of AI's growing ubiquity in healthcare.
“My reaction is a mix of elation and fear. AI has incredible capabilities, but only if used responsibly,” Jonathan Block, MD, principal physician of Syracuse-based Associated Medical Professional of NY, told Becker’s.
Dr. Block’s views reflect the broader predicament physicians face with AI and patient care: how to enhance patient education without fueling medical misinformation or eroding trust in physicians.
“Allowing patients unfettered access this way possibly and exponentially worsens what I call the ‘Dr. Google effect,’ with patients potentially misunderstanding or misusing their own health information,” Dr. Block said.
A 2025 survey by The Physicians Foundation found that 61% of physicians said their patients had been influenced by medical misinformation at least a moderate amount over the past year. Another 40% said they were not at all confident that their patients know how to access reliable, evidence-based health information online.
According to OpenAI, more than 230 million people globally ask ChatGPT health and wellness-related questions on a weekly basis. The new health-focused experience allows users to connect medical records and wellness apps so responses from the platform are grounded in their own data.
According to a Jan. 7 news release, ChatGPT Health operates separately from the broader platform and includes additional privacy and security protections tailored to sensitive health information. The company also said conversations within ChatGPT Health are encrypted, isolated from other chats and not used to train its foundation models. The platform was developed in collaboration with 260 physicians from across the world, many of whom are hospital and health system-affiliated.
Even with privacy protections in place, some still have concerns about the safety of highly sensitive health information.
“Although the system is designed to be self-contained, how do we guarantee it conforms to HIPAA and [Health Information Technology for Economic and Clinical Health Act] requirements with the cross-referencing and input for all of these other information sources?” Dr. Block said. “There are too many ‘what ifs,’ and the regulatory environment hasn’t caught up to this rapidly advancing technology.”
The balance between professional medical training and fully democratized medical information is something physicians will continue debating as new developments in healthcare AI emerge every day.
In December, OpenEvidence, which has been called “ChatGPT for doctors,” announced that it is raising $250 million in equity financing at a $12 billion valuation. The startup also landed a $75 million series A funding round in early 2025 and partnered with the American College of Cardiology in November to accelerate the translation of research into clinical practice.
Yet many physicians told researchers at Johns Hopkins University that they viewed colleagues who used AI more heavily as less competent, according to a study published in October. Physicians who used AI faced a “competence penalty” and were viewed with more skepticism by their peers than physicians who did not rely on AI.
Outside of hospitals and health systems, independent practices have also increasingly turned to AI to alleviate financially burdensome administrative tasks and cut out other costly inefficiencies.
“AI-driven service line redesign has been major,” Geogy Vennikandam, MD, COO of Chicago-based GI Partners of Illinois, told Becker’s. “It reduces administrative [full-time equivalent unit] growth, shortens cycles and improves yield without increasing overhead.”
As medicine’s relationship with AI-driven technologies like ChatGPT Health continues to evolve, physicians continue to interrogate its accuracy while exploring its potential usefulness in patient education and clinical support.
“There is a race to monetize and capitalize this technology,” Matt Mazurek, MD, assistant professor of anesthesiology at the New Haven, Conn.-based Yale School of Medicine, told Becker’s. “This does not mean we should ignore the red flags and these real concerns around the use of not just ChatGPT Health, but any AI platform in healthcare.”
