In a bid to strengthen AI security and transparency, OpenAI has introduced two new safeguards for ChatGPT users: a Lockdown mode and an “elevated risk” flag system.
Together, they aim to give users tighter control over data exposure and clearer warnings about potential risks when using advanced features.
OpenAI Lockdown mode: how it works
Lockdown mode is not meant for everyone. It’s a specialised security layer built for users who face a higher risk of cyber intrusion, such as executives, cybersecurity experts, or large-scale enterprise teams. OpenAI calls it an optional but advanced safeguard designed to stop attackers from manipulating ChatGPT through hidden prompts or malicious commands.
Once activated, Lockdown mode places ChatGPT in a tightly controlled environment.
Certain tools and integrations are automatically disabled to block unauthorised access or data extraction. For instance, web browsing becomes limited to cached pages only, meaning no live network requests can leave OpenAI’s secure servers. If a feature cannot guarantee robust protection, it is turned off.
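The gating logic described above can be pictured as a simple policy check. This is a minimal, purely hypothetical sketch; OpenAI has not published how Lockdown mode is implemented, and every name here (`Feature`, `allowed_in_lockdown`, the field names) is invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical model of a ChatGPT feature under Lockdown mode.
# None of these names come from OpenAI's actual implementation.
@dataclass
class Feature:
    name: str
    supports_cached_only: bool   # can it run without live network requests?
    guarantees_isolation: bool   # can it guarantee robust protection?

def allowed_in_lockdown(feature: Feature) -> str:
    """Return how a feature behaves once Lockdown mode is active."""
    if feature.name == "web_browsing":
        # Browsing drops to cached pages only, so no live network
        # requests leave the secure servers.
        return "cached_only" if feature.supports_cached_only else "disabled"
    # Any feature that cannot guarantee robust protection is turned off.
    return "enabled" if feature.guarantees_isolation else "disabled"

print(allowed_in_lockdown(Feature("web_browsing", True, False)))       # cached_only
print(allowed_in_lockdown(Feature("code_interpreter", False, False)))  # disabled
```

The key design point the article implies is a default-deny posture: a feature stays on only if it can positively guarantee protection, rather than being switched off only when a risk is known.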
Enterprise workspaces already have strong defences, but Lockdown mode builds on them with stricter, more granular controls.
Workspace administrators can enable the feature by assigning a dedicated “Lockdown” role in the settings. They can also tailor permissions, deciding which apps remain functional and what actions are allowed while the mode is active.
For companies under strict compliance requirements, additional logging tools offer deep visibility into usage patterns, shared data, and third-party connections. In essence, Lockdown mode transforms ChatGPT into a walled-off, enterprise-grade AI designed for sensitive environments.
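The admin-side controls (a dedicated role, a tailored app allowlist, and compliance logging) could be modelled roughly as below. This is an illustrative sketch only; the field names and `audit_entry` helper are assumptions, not OpenAI's actual configuration schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical workspace policy: a dedicated "Lockdown" role plus
# admin-tailored permissions. Field names are invented for illustration.
lockdown_policy = {
    "role": "lockdown",            # the dedicated role assigned in settings
    "allowed_apps": ["calendar"],  # which apps remain functional
    "allowed_actions": ["read"],   # which actions are permitted while active
}

def audit_entry(user: str, action: str, connector: str) -> str:
    """Log usage patterns, shared data, and third-party connections
    as a structured record for compliance review."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "third_party_connector": connector,
        "policy_role": lockdown_policy["role"],
    })

print(audit_entry("analyst@example.com", "read", "none"))
```

Structured records like this are what make "deep visibility" auditable: each line names the user, the action, and any third-party connection touched while the mode was active.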
‘Elevated risk’ flags bring clarity to complex features
The second update is all about communication. Some AI tools, especially those that access the web, code environments, or external systems, inherently come with a higher risk.
To make this clearer, OpenAI will now label such features with an “elevated risk” warning across ChatGPT, ChatGPT Atlas, and Codex.
These warnings outline exactly what the feature does, which data it may touch, and where vulnerabilities could arise. For example, when coding utilities access a network, users will be informed about the associated exposure. The approach is designed to give users an informed choice about whether to proceed or hold back when handling sensitive projects.
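The labelling rule amounts to checking whether a feature touches one of the risk-bearing surfaces the article lists (the web, code environments, external systems) and, if so, attaching a warning. The sketch below is hypothetical; the capability names and `risk_label` function are invented for illustration, not OpenAI's published scheme.

```python
# Hypothetical "elevated risk" labelling rule. The risky-capability
# categories mirror the article; the data structures are invented.
RISKY_CAPABILITIES = {"web_access", "code_environment", "external_system"}

def risk_label(feature_name, capabilities):
    """Return a warning describing what the feature touches, or None
    when none of its capabilities carry elevated risk."""
    exposed = capabilities & RISKY_CAPABILITIES
    if not exposed:
        return None
    return ("Elevated risk: {} uses {}; data sent through these channels "
            "may be exposed.".format(feature_name, ", ".join(sorted(exposed))))

print(risk_label("coding_utility", {"code_environment", "web_access"}))
print(risk_label("summariser", {"local_text"}))  # None
```

Because the label is derived from a capability set rather than hard-coded per feature, it naturally supports the evolution OpenAI describes: tightening a feature's capabilities can remove its tag, and newly risky capabilities can add one.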
OpenAI also notes that this system will evolve: as security improves, the “elevated risk” tag may be removed from some features, while new ones could be added if risks emerge.
Together, Lockdown mode and elevated-risk labelling signal that OpenAI is prioritising user trust and data integrity as artificial intelligence becomes increasingly embedded in daily workflows. With cyber threats growing more sophisticated, these tools give enterprises and professionals a stronger, clearer line of defence.