Feature Removed Shortly After Launch Amid Privacy Concerns
OpenAI introduced an opt-in ChatGPT feature that allowed users to share conversations publicly, which search engines such as Google could then index. Within hours, privacy experts flagged that thousands of shared chats, some containing personal content, were appearing in search results. In response, OpenAI removed the feature, calling it a “short-lived experiment” and saying it wanted to prevent users from unintentionally exposing sensitive data.
(Source: Business Insider, Omni)
Indexed Data Sparks Warning From Privacy Advocates
Reports from Fast Company and Ars Technica revealed that over 4,500 ChatGPT conversations were publicly visible online, including some touching on mental health, relationships, and other sensitive personal matters. Although users had to check a box making their chats discoverable, critics argued the warning was unclear and that many users were unaware of the risks.
(See Ars Technica and Tom’s Guide follow-up coverage)
OpenAI’s Response and Broader Policy Issues
Dane Stuckey, OpenAI’s Chief Information Security Officer, confirmed that the removal was intended to preserve user trust while the company investigated the feature’s potential liabilities. OpenAI is working with Google and other search engines to de-index the shared chats from search results. The episode also coincides with a legal dispute over a court order requiring retention of deleted ChatGPT conversations, raising broader questions about data retention policies and user control.
(Source: OpenAI blog response)
Why It Matters
- Reinforces the importance of clear opt-in policies when AI features expose any user-generated content to the public web.
- Highlights risks associated with emerging features that might unintentionally reveal sensitive user data.
- Illustrates how AI innovation and privacy protections must go hand in hand—especially amid intensifying regulatory scrutiny and public concern.