OpenAI published a privacy guide on May 6, 2026, explaining what information may be used in model training, how it reduces personal information before training, and which privacy controls are available to ChatGPT users.
The post reiterates that model training may use publicly available information, information accessed through partners, and information provided or generated by users, contractors, and researchers. OpenAI says it applies safeguards, including an internal version of the OpenAI Privacy Filter, to reduce personal information in training data; that data includes user conversations only when the user has enabled model improvement.
For ChatGPT controls, OpenAI points users to the “Improve the model for everyone” setting, Temporary Chat, memory review and deletion, data export, account deletion, and the privacy request portal. The post also states that Temporary Chats do not appear in history, do not create memories, and are not used to improve models, though they may be retained for up to 30 days for safety purposes.
Why this matters
Privacy language is now part of the product experience. ChatGPT is handling more personal and workplace context through memory, file uploads, connectors, and agents, so users need clearer controls than a generic “do not train” statement.
The explainer also gives admins and reviewers a public reference for what OpenAI says is and is not used in consumer ChatGPT training.
Buyer take
For personal ChatGPT, check data controls before using the product for sensitive tasks. Temporary Chat and memory controls reduce risk but do not turn consumer ChatGPT into a regulated enterprise environment.
For organizations, the commitments attached to Business, Enterprise, Edu, Healthcare, and API offerings still need to be reviewed separately from consumer ChatGPT settings. Procurement teams should ask for the exact plan-level data policy, retention setting, and admin controls in the contract.
What is still unclear
The post explains controls at a high level; it does not cover every region, plan, retention path, or connector-specific edge case. Enterprise legal review still needs the plan documents and data-processing terms.