
The New York Times has sued OpenAI for the unauthorized use of its news articles to train large language models. As part of the ongoing lawsuit, the NYT recently asked the court to require OpenAI to retain all ChatGPT user content indefinitely, arguing that the data may contain evidence that supports its case.
Brad Lightcap, COO of OpenAI, wrote the following regarding the NYT's sweeping demand:
"This fundamentally conflicts with the privacy commitments we have made to our users. It abandons long-standing privacy norms and weakens privacy protections. We strongly believe this is an overreach by the New York Times. We’re continuing to appeal this order so we can keep putting your trust and privacy first."
OpenAI has already filed a motion asking the Magistrate Judge to reconsider the preservation order, arguing that indefinite retention of user data breaches industry norms and its own privacy policies. OpenAI has also appealed the order to the District Court Judge.
Until the appeal is resolved, OpenAI will comply with the court order. The content covered by the order will be stored separately in a secure system and accessed only to meet legal obligations. According to OpenAI, only a small, audited legal and security team will be able to access this data.
As of early 2025, ChatGPT has over 400 million weekly active users, and the data retention order will affect a significant number of them. OpenAI confirmed that ChatGPT Free, Plus, Pro, and Team subscribers, as well as developers who use the OpenAI API without a Zero Data Retention agreement, are covered by the order. ChatGPT Enterprise, ChatGPT Edu, and API customers using Zero Data Retention endpoints are not affected.