OpenAI vs NYT Lawsuit: The Only Way to Escape the Court Order Making OpenAI’s User Chat Storage Permanent

By SWIFT AI Team
Sep 22, 2025

Tech giant OpenAI has been embroiled in controversy, legal fallout, and heavy data-related penalties, and has now watched its privacy policies effectively overridden by a new court ruling.

Its legal battle with The New York Times, one of the world’s largest news organizations, has created a privacy crisis affecting millions of OpenAI users around the globe.

Further damaging its already tenuous privacy reputation, the most recent court order requires OpenAI to preserve all ChatGPT output data indefinitely, in direct conflict with its promise to protect user privacy. The retention remains in force until the copyright litigation concludes, a process that could stretch on for years given the novel legal questions involved.

TL;DR: Core Issue

The preservation order requires OpenAI to retain all user conversations indefinitely, overriding its 30-day deletion policy and user privacy expectations.

Best practices for safe AI usage

The safest approach is to use a provider such as SWIFT AI, or to run open-source models locally or within an enterprise virtual private cloud (VPC), with Zero Data Retention (ZDR) protocols for sensitive information, ensuring no data leaves your environment.
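To illustrate the local option, a self-hosted model can be queried over HTTP without any prompt leaving your network. The sketch below assumes an OpenAI-compatible local server (as exposed by tools like llama.cpp or Ollama); the endpoint URL and model name are placeholders, not a specific product’s API.

```python
import json
import urllib.request

# Assumed address of a locally hosted, OpenAI-compatible inference server.
LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"

def build_request(prompt: str, model: str = "llama-3-8b-instruct") -> urllib.request.Request:
    """Build a chat request aimed at a model running on your own hardware,
    so the prompt never leaves your environment."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        LOCAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage (requires the local server to be running):
# with urllib.request.urlopen(build_request("Summarize this clause")) as resp:
#     print(json.load(resp))
```

Because the endpoint resolves to localhost, nothing in this flow touches a third-party retention pipeline.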

Overview of Legal Case

In December 2023, The New York Times sued OpenAI and Microsoft, alleging they used millions of its articles without permission to train AI systems like ChatGPT and Bing Chat. This is the first major U.S. copyright lawsuit by a media company against AI firms.

NYT’s Claims

• OpenAI and Microsoft copied Times’ content, sometimes nearly word-for-word, bypassing its paywall.

• The lawsuit seeks billions in damages and demands destruction of all AI models and datasets trained on its works.

• The Times argues the AI systems threaten its business model and intellectual property.

OpenAI’s Defense

• Claims its use of materials falls under fair use for education, research, and commentary.

• Says its AI doesn’t aim to reproduce full articles.

• Accuses the Times of hiring someone to “hack” its products to produce incriminating outputs.

• CEO Sam Altman argues the Times is “on the wrong side of history.”

Privacy Concerns

• A preservation order requires OpenAI to keep even deleted user chats.

• OpenAI warns this threatens user privacy and imposes major technical and financial burdens.

Legal Status

• Judge Sidney Stein allowed the case to move forward, rejecting parts of OpenAI’s dismissal request.

• The outcome could set a major precedent for copyright law in the age of AI.

Here’s why the court’s data retention order in the OpenAI lawsuit is controversial:

Why It Matters

Privacy breach: Deleted chats and user-controlled privacy settings are now meaningless.

Equal risk: Free, Plus, and API users all face data retention, including businesses handling sensitive health, legal, or financial data.

Compliance conflicts: The order clashes with international privacy laws (e.g., GDPR, HIPAA).

Massive burden: OpenAI must store hundreds of millions of conversations, demanding costly engineering changes.

Sensitive data at stake: Stored chats may include finances, household details, or personal relationships.

Real-World Impact

• Users and companies lose trust in OpenAI’s privacy guarantees.

• Businesses question whether OpenAI can keep compliance promises.

• Experts suggest using Zero Data Retention protocols to reduce exposure until the lawsuit is resolved.

The court’s preservation order in the New York Times vs. OpenAI lawsuit forces OpenAI to indefinitely retain all user conversations, overriding its 30-day deletion policy and undermining user privacy expectations. This creates serious risks for individuals and businesses alike, as sensitive personal, financial, and even healthcare data may now be stored without consent. The order also places OpenAI in conflict with global privacy regulations like GDPR and HIPAA, while imposing massive technical and financial burdens to store “hundreds of millions” of conversations. With user trust shaken and compliance questions mounting, experts recommend adopting Zero Data Retention protocols to safeguard sensitive information until the case is resolved.

Temporary Chat and Zero Data Retention APIs

ChatGPT’s Temporary Chat feature offered users some protection by keeping conversations out of their history and automatically deleting them after a 30-day safety window. However, this can no longer be offered: under the court order, all chats, including deleted ones, must be persisted and stored.
Enterprise API customers affected by the OpenAI copyright lawsuit can request Zero Data Retention (ZDR) agreements that offer better protection. Even so, your prompts should never include details such as names, account numbers, passwords, financial information, or other personal identifiers.
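As a concrete example of this kind of prompt hygiene, a lightweight sanitizer can replace common identifier patterns with placeholders before a prompt is sent anywhere. This is an illustrative sketch, not a complete PII-detection solution; the regexes cover only a few obvious formats.

```python
import re

# Illustrative patterns only; a production system would use a proper
# PII-detection library rather than a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),   # 13-16 digit card numbers
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def sanitize_prompt(prompt: str) -> str:
    """Replace likely personal identifiers with placeholder tokens
    before the prompt leaves your environment."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(sanitize_prompt("Email jane.doe@example.com about card 4111 1111 1111 1111"))
# → Email [EMAIL] about card [CARD]
```

Running prompts through a filter like this costs little and ensures that whatever a court later orders retained contains placeholders rather than identifiers.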

Reddit Verdict

High Engagement on Privacy Forums: Multiple Reddit communities (notably r/privacy, r/technology, and r/ChatGPT) saw rapid surges in posts discussing data-retention fears. Users across these subreddits voiced alarm that “deleted” ChatGPT conversations (personal or corporate) might now be retained permanently. Many suggested switching off chat history or using open source/local LLMs to avoid indefinite logging.

Developer and Security Engineer Reactions: In threads on r/technology and r/ChatGPT, practitioners shared anecdotes about disabling the OpenAI API in internal workflows and deploying prompt-sanitization proxies. They cited concerns around potential exposure of PII or trade secrets if logs must be preserved forever. Although we cannot confirm precise percentages, commentary indicates a strong majority of technically oriented users supported OpenAI’s appeal on privacy and cost grounds.

As an API customer rather than a consumer service aggregator, SWIFT AI operates through channels explicitly exempted from the preservation order.

When you interact with AI models through SWIFT AI:

• Your conversations follow enterprise API endpoints

• Zero Data Retention is active by default

• No trace remains in OpenAI’s systems after processing

The technical implementation involves several layers of protection. First, SWIFT AI’s servers establish authenticated connections to OpenAI’s enterprise endpoints using certificate-based mutual TLS authentication. This ensures that data cannot be intercepted or redirected to consumer infrastructure. Second, all API requests include headers specifying ZDR processing, which OpenAI’s systems verify before processing. Finally, response packets include cryptographic attestation that no data was retained.

Conclusion

OpenAI’s legal battle with NYT marks a turning point for AI ethics, copyright law, and user privacy. Millions of ChatGPT users face major privacy risks because of the court’s preservation order. On top of that, it forces businesses using OpenAI’s technology to think about their compliance with industry regulations and the exposure of sensitive information.

Users need practical ways to protect their data as the case moves forward. SWIFT AI’s sovereign data features sidestep the issue entirely. ChatGPT’s Temporary Chat feature gives casual users some protection, but it is far from complete data security. Enterprise customers should request Zero Data Retention agreements to lower their risk, though even then residual risks remain.

This case shows how fragile digital privacy promises are. Standard privacy controls in force only months ago can be overruled through legal proceedings. Users must stay vigilant about the information they share with AI systems, whatever company policies or stated protections say.

This lawsuit will shape how media organizations, AI companies, and end-users work together in the future. Right now, the best approach is to use the protective measures mentioned above and keep track of this landmark case. Your data privacy is your responsibility, especially now when deleted conversations might stick around forever.

FAQs

Q1. Is ChatGPT safe to use?

No. Recent high-profile breaches and fines show that using ChatGPT without additional security layers can expose sensitive data. Public AI platforms have leaked millions of credentials, faced GDPR-related fines exceeding €15 million, and suffered dark-web credential sales. Without end-to-end encryption, real-time sanitization, and zero data retention, your private or corporate information is at significant risk.

Q2. Are there alternatives to ChatGPT that offer better privacy protection?

Yes, alternatives like SWIFT AI, or open-source models run locally, can offer distinct privacy advantages, as they may have different data retention policies not affected by the current court order.

Q3. What is the main issue in the OpenAI vs New York Times lawsuit?

The lawsuit centers on copyright infringement claims by The New York Times against OpenAI and Microsoft, alleging unauthorized use of millions of articles to train AI models like ChatGPT.

Q4. How does the court’s data retention order affect ChatGPT users?

The order requires OpenAI to indefinitely retain all ChatGPT output data, including deleted conversations, overriding user privacy settings and potentially exposing sensitive information.

Q5. What are the privacy risks for businesses using ChatGPT?

Although OpenAI has claimed that enterprise users will remain unaffected, businesses face potential exposure of confidential information, trade secrets, and sensitive data shared with ChatGPT, as well as compliance challenges with industry regulations like HIPAA or GDPR. The list of ChatGPT incidents has been piling up since its inception and shows no sign of slowing down.

Q6. How can users protect their data while using AI chatbots during this legal uncertainty?

Users can utilize platforms with stronger privacy features like SWIFT AI, request Zero Data Retention agreements for API use, and practice data sanitization by removing identifying information from prompts.


Q7. What are OpenAI’s main arguments in defending against the ChatGPT copyright lawsuit?

Cherry-Picked, Atypical Examples: OpenAI argues that the New York Times paid a third party to “hack” ChatGPT by running tens of thousands of nonstandard prompts to produce specific copyrighted passages. In normal use, these outputs would not appear.

Violation of Terms via Deceptive Prompts: The company maintains that The Times employed “jailbreak”-style queries that breached OpenAI’s user policies, generating anomalous results unrepresentative of routine ChatGPT behavior.

Transformative Use, No Systemic Infringement: OpenAI contends that training on large-scale, publicly available data complies with fair use. They emphasize that model outputs are user-driven and filtered, meaning there is no uncontrolled verbatim reproduction of copyrighted text.

Q8. What could happen if OpenAI loses the appeal?

High Financial Damages: A loss could trigger statutory damages for each infringed work, potentially amounting to millions. Even a single verbatim article can carry six-figure penalties under U.S. copyright law.

Restrictive Data Injunctions: Courts might bar OpenAI from using certain publishers’ archives, forcing costly licensing agreements or removal of that content from future model training.

Industry-Wide Precedent: A ruling against OpenAI could compel all generative AI developers to negotiate paid licenses or opt-in agreements with content owners before including their material in training sets. This would increase operational costs and slow innovation across the sector.
