OpenAI’s ChatGPT Health Push Raises Questions About Data Security

Summary

OpenAI has introduced ChatGPT Health, a feature that connects users’ medical records and wellness data to ChatGPT to help them better understand their health. Developed with physicians, the tool is intended to support care, not to diagnose or treat illnesses: it provides only general health information, avoids personalized or unsafe advice, and, for higher-risk topics, flags risks and encourages users to consult healthcare professionals.

The rollout comes amid concerns from experts and advocacy groups over privacy, data security, and the absence of legal protections for health information held by tech companies. Unlike data handled by medical providers or insurers, information shared with ChatGPT is not protected under HIPAA. OpenAI says health data is encrypted, stored separately from other chats, and excluded from model training, but experts argue such safeguards are inadequate without comprehensive privacy regulation. ChatGPT Health is initially available to select users outside the EU and UK, with wider access planned on web and iOS in the coming weeks.