OpenAI rolls out ChatGPT Health: What it means for physicians


OpenAI has launched ChatGPT Health, a new feature that allows users to securely share personal health information with the chatbot to learn about and manage health concerns.

More than 230 million people globally ask ChatGPT health- and wellness-related questions each week, according to OpenAI's de-identified analysis of user conversations, making it one of the most common uses of ChatGPT. The new health-focused experience allows users to connect medical records and wellness apps so the platform's responses are grounded in their own data.

According to a Jan. 7 news release, ChatGPT Health operates separately from the broader platform and includes additional privacy and security protections tailored to sensitive health information. The company also said conversations within ChatGPT Health are encrypted, isolated from other chats and not used to train its foundation models.

However, in commentary published Jan. 7 in Nature Medicine, Stanford (Calif.) University researchers said the field has not yet reached its “MedGPT moment,” despite industry chatter about EHR tools that can forecast a patient’s mortality or disease progression.

Current generative AI models trained on EHR data are roughly where ChatGPT was between GPT-2 and GPT-3, according to the experts. ChatGPT itself currently runs on GPT-5.2.

The applications learn patterns from historical data to “generate plausible patient timelines — sequences of diagnoses, procedures, medication codes, lab values, and their timing,” said the authors, including Nigam Shah, MD, PhD, chief data scientist of Palo Alto, Calif.-based Stanford Health Care.

“If 60 out of 100 simulated timelines show a readmission, the model reports a 60% risk,” they wrote. “However, these frequencies are derived from simulated patterns, not necessarily real-world probabilities. Treating a simulation as an ‘oracle’ prediction can lead to unsafe clinical decisions, such as overtreating low-risk patients or missing high-risk ones.”

To reach their potential, these tools need five evaluation criteria: performance by frequency, calibration, timeline completion, shortcut audits, and out-of-distribution validation, the researchers wrote. Better calibration, for example, would ensure a “30% predicted risk actually corresponds to 30% of patients experiencing that outcome.”
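The calibration criterion the researchers describe can be illustrated with a minimal sketch: group patients by predicted risk and check whether each group's observed outcome rate matches its average prediction. The function name and data below are hypothetical, purely for illustration.

```python
# Minimal calibration check: bin patients by predicted risk and compare
# each bin's mean predicted risk with the observed outcome rate.
def calibration_bins(predicted, observed, n_bins=10):
    """Return (mean predicted risk, observed outcome rate, count) per non-empty bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(predicted, observed):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the top bin
        bins[idx].append((p, y))
    results = []
    for members in bins:
        if not members:
            continue
        mean_pred = sum(p for p, _ in members) / len(members)
        obs_rate = sum(y for _, y in members) / len(members)
        results.append((mean_pred, obs_rate, len(members)))
    return results

# Hypothetical example: a well-calibrated model predicting ~30% risk should
# see roughly 30% of those patients experience the outcome.
predicted = [0.3] * 10
observed = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # 3 events out of 10 patients
for mean_pred, obs_rate, n in calibration_bins(predicted, observed):
    print(f"predicted {mean_pred:.0%}, observed {obs_rate:.0%} (n={n})")
```

In practice this comparison is done on held-out patients across many risk levels; a large gap between predicted and observed rates in any bin signals the miscalibration the authors warn about.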

The launch also comes as medical misinformation, especially that disseminated online and through social media, has become a significant concern for physicians, who say it has eroded trust in their practice and damaged their relationships with patients. In a survey published by The Physicians Foundation in October, 61% of physicians said their patients were influenced by medical misinformation.
