OpenAI Announces Plans for "Client-Side Encryption" for ChatGPT
Hot on the heels of its lawsuit with The New York Times, OpenAI has announced plans to improve the privacy of its users:
Our long-term roadmap includes advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT. We believe these features will help keep your private conversations private and inaccessible to anyone else, even OpenAI. We will build fully automated systems to detect safety issues in our products. Only serious misuse and critical risks—such as threats to someone’s life, plans to harm others, or cybersecurity threats—may ever be escalated to a small, highly vetted team of human reviewers. These security features are in active development and we will share more details about them, and other short-term mitigations, in the very near future.
It's unclear whether they mean messages will be encrypted end-to-end, so they're never processed on OpenAI's servers in plaintext, or whether they just mean users' message history will be encrypted at rest after processing.
Google recently launched its Private AI Compute, which uses its Titanium Intelligence Enclaves to keep user data isolated and inaccessible while it's being processed, even from Google itself.
Apple launched a similar feature with their Private Cloud Compute back in 2024.
Proton's Lumo doesn't use secure enclaves to protect data while it's being processed, but it does use zero-access encryption to protect your conversation history.
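The zero-access idea is simple: the encryption key lives only on the user's device, so the server stores ciphertext it has no way to read. Here's a minimal toy sketch of that shape in Python, using a one-time pad purely as a stand-in for a real cipher like AES-GCM. None of this reflects Proton's or OpenAI's actual implementation:

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR one-time pad: a toy cipher, secure only if the key is random,
    # at least as long as the message, and never reused. A real system
    # would use an authenticated cipher such as AES-GCM instead.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

# Client side: the key is generated on-device and never uploaded.
message = b"private conversation"
key = secrets.token_bytes(len(message))
ciphertext = encrypt(message, key)

# Server side: only the ciphertext is stored. Without the key,
# it's opaque bytes -- "zero access" to the plaintext.
assert ciphertext != message

# Back on the client, the stored history can be decrypted locally.
assert decrypt(ciphertext, key) == message
```

The key question from the announcement is which stage this applies to: encrypting stored history (as above) is much easier than keeping messages encrypted through processing, which is what enclave-based approaches like Google's and Apple's attempt.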
It'll be interesting to see what OpenAI is cooking up; hopefully it can prevent another situation like the one we saw with The New York Times lawsuit.