Ars Technica reports on OpenAI's legal challenge to a court preservation order requiring storage of all ChatGPT logs, including deleted conversations. OpenAI argues the order violates the privacy rights of hundreds of millions of users globally by preventing them from controlling when their data is retained. The article balances OpenAI's privacy concerns against news organizations' investigative interest in accessing logs for copyright litigation.
> The chat is immediately removed from your chat history view.
> It is scheduled for permanent deletion from OpenAI's systems within 30 days, unless:
> It has already been de-identified and disassociated from your account, or
> OpenAI must retain it for security or legal obligations.
That final clause now voids the entire section. All chats are preserved for "legal obligations".
I regret all the personal conversations I've had with AI now. It's very enticing when you need some help / validation on something challenging, but everyone who warned how much of a privacy risk that is has been proven right.
Not only does this mean OpenAI will have to retain this data on their servers, they could also be ordered to share it with the legal teams of companies they have been sued by during discovery (which is the entire point of a legal hold). Some law firm representing NYT could soon be reading out your private conversations with ChatGPT in a courtroom to prove their case.
So if you're a business that sends sensitive data through ChatGPT via the API and were relying on the representation that API inputs and outputs were not retained, OpenAI will just flip a switch to start retaining your data? Were notifications sent out, or did other companies just have to learn about this from the press?
I'm usually against LLMs' massive breach of copyright, but this argument is just weird.
>At a conference in January, Wang raised a hypothetical in line with her thinking on the subsequent order. She asked OpenAI's legal team to consider a ChatGPT user who "found some way to get around the pay wall" and "was getting The New York Times content somehow as the output." If that user "then hears about this case and says, 'Oh, whoa, you know I’m going to ask them to delete all of my searches and not retain any of my searches going forward,'" the judge asked, wouldn't that be "directly the problem" that the order would address?
If the user hears about this case, and now this order, wouldn't they just avoid doing that for the duration of the court order?
There have been a lot of opinion pieces popping up on HN recently describing the benefits their authors see in LLMs and rebutting the drawbacks critics usually raise. While they do bring up interesting points, NONE of them have even mentioned the privacy aspect.
This is the main reason I can’t use any LLM agents or post any portion of my code into a prompt window at work. We have NDAs and government regulations (like ITAR) we’d be breaking if any code left our servers.
This just proves the point. Until these tools are local, privacy will be an Achilles' heel for LLMs.
I think the court overstepped by ordering OpenAI to save all user chats. Private conversations with AI should be protected - people have a reasonable expectation that deleted chats stay deleted, and knowing everything is preserved will chill free expression. Congress needs to write clear rules about what companies can and can't do with our data when we use AI. But honestly, I don't have much faith that Congress can get their act together to pass anything useful, even when it's obvious and most people would support it.
There is absolutely no reason for these logs to exist.
Run LLM in an enclave that generates ephemeral encryption keys. Have users encrypt text directly to those enclave ephemeral keys, so prompts are confidential and only ever visible in an environment not capable of logging.
All plaintext data will always end up in the hands of governments if it exists, so make sure it does not exist.
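The flow described above can be sketched end to end. This is a toy stand-in only: classic Diffie-Hellman over a demo modulus and a SHA-256 XOR keystream substitute for what would really be X25519 plus an AEAD running inside an attested enclave, so nothing here is production crypto and all names are illustrative.

```python
import hashlib
import secrets

# Demo parameters: real systems use vetted groups (e.g. X25519), not these.
P = 2**255 - 19   # a prime, used here only as a demo modulus
G = 2             # demo generator

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (toy stream cipher)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

class Enclave:
    """Models an enclave whose ephemeral key never leaves its memory."""
    def __init__(self):
        self._secret = secrets.randbelow(P - 2) + 2   # never logged or persisted
        self.public = pow(G, self._secret, P)         # the only value published

    def decrypt(self, user_public: int, ciphertext: bytes) -> bytes:
        shared = pow(user_public, self._secret, P)
        key = hashlib.sha256(shared.to_bytes(32, "big")).digest()
        return keystream_xor(key, ciphertext)

def user_encrypt(enclave_public: int, prompt: bytes):
    """User encrypts directly to the enclave's ephemeral public key."""
    eph = secrets.randbelow(P - 2) + 2
    shared = pow(enclave_public, eph, P)
    key = hashlib.sha256(shared.to_bytes(32, "big")).digest()
    return pow(G, eph, P), keystream_xor(key, prompt)

enclave = Enclave()
user_pub, ct = user_encrypt(enclave.public, b"private prompt")
assert enclave.decrypt(user_pub, ct) == b"private prompt"
assert ct != b"private prompt"   # ciphertext is all an outside logger could see
```

The point of the design is that anything a logging layer could capture (the ciphertext and the public values) is useless without the enclave's ephemeral secret, which is discarded when the session ends.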
Maybe a lawyer can correct me if I'm wrong, but I don't understand why some people in the article appear to think that this is causing OpenAI to breach their privacy agreement.
The privacy agreement is a contract, not a law. A judge is well within their rights to issue such an order, and the privacy agreement doesn't matter at all if OpenAI has to do something to comply with a lawful order from a court of competent jurisdiction.
OpenAI are like the new Facebook when it comes to spin.
This ruling is unbelievably dystopian for anyone that values a right to privacy. I understand that the logs will be useful in the occasional conviction, but storing a log of people’s most personal communications is absolutely not a just trade.
To protect their users from this massive overreach, OpenAI should defy this order and eat the fines IMO.
Not that it makes it any better but I wouldn’t be surprised if the NSA had a beam splitter siphoning off every byte going to OpenAI already. Don’t send sensitive data.
Why would a court favor the interest of the New York Times in a vague accusation over the interests and rights of hundreds of millions of people?
Billions of people use the internet daily. If some organization suspected that some people were using the internet for illicit purposes against its interests, would a court order ISPs to log all activity of all users? Would Google be ordered to save the searches of all its customers because some might use them for bad things? And once we start, where do we stop? Crimes could have happened in the past or could happen in the future; will courts order ISPs and Google to retain logs for 10 years, 20 years? Why not 100? Who should bear the cost of such outrageous demands?
The consequences of such orders are of an enormity the judge cannot even begin to comprehend. The right to privacy is an integral part of the freedom of speech, a core human right. If you have no private thoughts and no private information, anybody can be incriminated using these past records. We will cease to exist as individuals, and I would argue we will cease to exist as humans as well.
Copyright in its current form is incompatible with private communication of any kind through computers, because computers by their nature make copies of the communication, so it makes any private communication through a computer into a potential crime, depending on its content. The logic of copyright enforcement, therefore, demands access to all such communications in order to investigate their legality, much like the Stasi.
Inevitably such a far-reaching state power will be abused for prurient purposes, for the sexual titillation of the investigators, and to suppress political dissent.
Would Microsoft have to comply with this as well? Most enterprise users acquire LLM services through Microsoft's instance of the models in Azure (i.e., data goes to Microsoft rather than OpenAI, but the enterprise gets to use OpenAI models).
Feels like all the words of privacy and open source advocates for the last 20 years have never been more true. The worst nightmare scenarios for privacy abuse have all been realized.
If you were working with proprietary code, you probably shouldn't have been using cloud-hosted LLMs anyway, but this would seem to seal the deal.
Even "your" computer is not your own. It's effectively controlled by Intel, Microsoft, Apple etc. They just choose not to use that power (as far as we know). Ownership and control are not the same thing.
My guess is they will store them on tape e.g. on something like Spectra TFinity ExaScale library. I assume AWS glacier et al use this sort of thing for their deep archives.
Storing them on something that has hours to days retrieval window satisfies the court order, is cheaper, and makes me as a customer that little bit more content with it (mass data breach would take months of plundering and easily detectable).
It would probably surprise no one if we find out, some time from now, tacit agreements to do so were already made (are being made) behind closed doors. "We'll give you what you want, just please don't call us out publicly."
The order also dates back to May 13. What the fuck?! That’s weeks ago! The only reason I can think of for why OpenAI did not warn its users about this via an email notification is because it’s bad for their business. But wow is it ever a breach of trust not to.
I don't think the order creates any new violations of privacy law. OpenAI's ability to retain the data and give it to third parties would have been the violation in the first place.
Well, it is gonna be all _AI companies_ very soon. Unless everyone switches to local models, which don't really have the same profitability as SaaS, weaker user privacy probably won't kill a company; honestly, people are used to not having privacy on the internet these days.
It certainly will kill off the few companies and people trusting them with closed-source code or security-related work, but you really should not outsource that anywhere.
Yep. Laws supersede contracts. Contracts can’t legally bind any entity to break the law.
Court orders are like temporary, extremely finely scoped laws, as I understand them. A court order can’t compel an entity to break the law, but it can compel an entity to behave as if the court just set a law (for the specified entity, for the specified period of time, or the end of the case, whichever is sooner).
"It's a very exciting time in tech right now. If you're a first-rate programmer, there are a huge number of other places you can go work rather than at the company building the infrastructure of the police state."
---
So, courts order the preservation of AI logs, and the government orders the building of a massive database. You do the math. This is such an annoying time to be alive in America, to say the least. PG needs to start blogging again about what's going on nowadays. We might be entering the digital version of the 60s, if we're lucky. Get local, get private, get secure, fight back.
Then a court will order that you don't encrypt. And probably go after you for trying to undermine the intent of previous court order. Or what, you thought you found an obvious loophole in the entire legal system?
A court order can be a lawful excuse for non-performance of a contract, but it's not always the case. The specifics of the contract, the court order, and the jurisdiction matter.
Would it be possible to comply with the order by anonymizing the data?
The court is after evidence that users use ChatGPT to bypass paywalls. Anonymizing the data in a way that makes it impossible to 1) pinpoint the users and 2) reconstruct the generic user conversation history would preserve privacy and allow OpenAI to comply in good faith with the order.
The fact that they are blaring sirens and hiding behind "we can't, think about users' privacy" feels akin to willful negligence, or suggests they know they have something to hide.
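The anonymization idea above can be sketched as a pseudonymization pass: replace user identifiers with keyed hashes and drop metadata, so preserved logs can still show that paywalled text appeared in outputs without identifying who prompted it. The field names ("user_id", "conversation") and the record shape are hypothetical, and this is a sketch of the concept, not a claim about OpenAI's data model.

```python
import hashlib
import hmac
import secrets

# Pepper key: discarding it later makes the pseudonym mapping one-way.
PEPPER = secrets.token_bytes(32)

def anonymize(record: dict) -> dict:
    """Replace the user identifier with a keyed hash and drop metadata.

    Hypothetical record shape; the point is what is kept vs. dropped.
    """
    pseudonym = hmac.new(PEPPER, record["user_id"].encode(),
                         hashlib.sha256).hexdigest()[:16]
    return {
        "user": pseudonym,                      # stable only while PEPPER exists
        "output_text": record["conversation"],  # the evidence of interest survives
        # timestamps, IPs, device info are deliberately not copied over
    }

log = anonymize({"user_id": "alice@example.com",
                 "conversation": "…model output…",
                 "ip": "203.0.113.7"})
assert "ip" not in log and "user_id" not in log
assert log["user"] != "alice@example.com"
```

Note that the second requirement above, making a user's conversation *history* unreconstructable, would additionally need a fresh key per record: a single pepper keeps all of one user's records linkable to the same pseudonym.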
Editorial Channel: what the content says

Article 12: Privacy (High; Advocacy, Framing)
Editorial score: +0.50 | SETL: +0.50
Privacy rights are the article's central focus, extensively covered with sympathetic framing of user privacy as a fundamental concern under threat. Multiple quotes emphasize user control and autonomy.
FW Ratio: 63%
Observable Facts:
- The headline frames the order as a 'privacy nightmare'
- The subtitle states OpenAI is 'defending privacy of hundreds of millions of ChatGPT users'
- The article quotes: 'OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used'
- The article reports: 'the privacy of hundreds of millions of ChatGPT users globally is at risk every day'
- The article emphasizes the order 'continues to prevent OpenAI from respecting its users' privacy decisions'
Inferences:
- The extensive and sympathetic coverage indicates strong positive engagement with privacy rights
- User privacy decisions are framed as sacrosanct, reflecting dignity and autonomy
- The language of threat ('nightmare,' 'at risk') indicates the article shares concern about privacy violation
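Incidentally, every FW Ratio reported in these entries is consistent with the share of Observable Facts among all cited items (facts plus inferences), rounded half-up to a whole percent. The definition below is reverse-engineered from the numbers on this page, not documented anywhere in it, so treat it as an assumption.

```python
# Hypothetical reconstruction of the FW ("fact-weight") Ratio:
# the fraction of cited items that are Observable Facts rather than
# Inferences, expressed as a percent, rounding .5 upward.
def fw_ratio(n_facts: int, n_inferences: int) -> int:
    return int(100 * n_facts / (n_facts + n_inferences) + 0.5)

assert fw_ratio(5, 3) == 63  # Article 12: 5 facts, 3 inferences
assert fw_ratio(2, 2) == 50  # Article 11: 2 facts, 2 inferences
assert fw_ratio(3, 2) == 60  # Preamble: 3 facts, 2 inferences
assert fw_ratio(2, 1) == 67  # Article 1: 2 facts, 1 inference
```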
Article 11: Presumption of Innocence (Medium; Framing)
Editorial score: +0.20 | SETL: +0.20
Emphasizes lack of evidence for accusations; supports principle that claims require evidentiary support before consequences
FW Ratio: 50%
Observable Facts:
- The article quotes: 'there is no evidence beyond speculation yet supporting claims that OpenAI had intentionally deleted data'
- The article notes accusations were 'unfounded' before court intervention
Inferences:
- The emphasis on evidentiary requirements reflects recognition of the presumption of innocence principle
- Burden of proof is implicitly affirmed through the focus on lack of evidence

Preamble (Medium; Framing, Advocacy)
Editorial score: +0.15 | SETL: +0.15
Article frames privacy as a central dignity issue affecting hundreds of millions; invokes user rights and autonomy concepts aligned with Preamble values of dignity and freedom
FW Ratio: 60%
Observable Facts:
- The subtitle states: 'OpenAI defends privacy of hundreds of millions of ChatGPT users'
- The article quotes OpenAI: 'continues to prevent OpenAI from respecting its users' privacy decisions'
- The headline frames the order as a 'privacy nightmare'
Inferences:
- The repeated emphasis on privacy affecting millions suggests alignment with Preamble dignity values
- Privacy is implicitly presented as a right transcending both corporate and governmental interests

Article 1: Freedom, Equality, Brotherhood (Low; Framing)
Editorial score: +0.10 | SETL: +0.10
User control over data use is presented as a right; implicit recognition that human dignity involves autonomy over personal information
FW Ratio: 67%
Observable Facts:
- The article references OpenAI's 'commitment to allow users to control when and how their ChatGPT conversation data is used'
- OpenAI argues the order 'prevents OpenAI from respecting its users' privacy decisions'
Inferences:
- Respect for user choice over data is framed as a principle worth defending, reflecting human dignity and agency

Article 17: Property (Low)
Editorial score: +0.10 | SETL: +0.10
Data preservation and deletion relate implicitly to property interests in information; user control over data is framed as a right
FW Ratio: 67%
Observable Facts:
- The article discusses preservation of 'output log data' and deletion practices
- The article references user control over 'conversation data'
Inferences:
- Data ownership and control are treated implicitly as property-like rights
Article 7: Equality Before Law (Low)
Editorial score: 0.00 | SETL: ND
Neutral reporting of court proceedings; both parties presented with equal standing and ability to petition
FW Ratio: 67%
Observable Facts:
- The article describes OpenAI 'demanding oral arguments in a bid to block the controversial order'
- The article reports the court heard arguments from both plaintiffs and OpenAI
Inferences:
- The reporting indicates both parties have equal access to legal process

Article 8: Right to Remedy (Low)
Editorial score: 0.00 | SETL: ND
Neutral coverage of OpenAI's legal petition for remedy; no advocacy for or against access to courts
FW Ratio: 67%
Observable Facts:
- The article states OpenAI is 'demanding oral arguments' to challenge the order
- The article describes an ongoing legal process where remedies are being sought
Inferences:
- Access to legal remedies is presented as a normal procedural matter

Article 10: Fair Hearing (Low)
Editorial score: 0.00 | SETL: ND
Neutral coverage of judicial proceedings without advocacy regarding fairness
FW Ratio: 67%
Observable Facts:
- The article reports 'the judge, Ona Wang, ultimately agreed' with plaintiffs' position
- The article describes both sides presenting arguments to the court
Inferences:
- Judicial proceedings are reported factually without commentary on fairness or legitimacy

Article 19: Freedom of Expression (Medium)
Editorial score: 0.00 | SETL: ND
News organizations' investigative right is presented as a legitimate counter-interest to privacy; balanced treatment of both freedom of the press and privacy concerns
FW Ratio: 60%
Observable Facts:
- The article reports news organizations sued 'over copyright claims' and sought log preservation
- The article states plaintiffs argued users might 'delete all [their] searches' to cover tracks
- Both OpenAI's and news plaintiffs' positions receive coverage
Inferences:
- The balanced presentation of both privacy rights and press freedom rights suggests neutral engagement
- The tension between investigation and privacy is presented as genuine without clear resolution

Article 21: Political Participation (Low)
Editorial score: 0.00 | SETL: ND
Neutral coverage of judicial participation; OpenAI's ability to petition and be heard is reported without advocacy
FW Ratio: 50%
Observable Facts:
- The article describes OpenAI 'demanding oral arguments' as part of the legal process
Inferences:
- Access to judicial process is presented as a normal procedural matter

Article 28: Social & International Order (Low)
Editorial score: 0.00 | SETL: ND
Neutral coverage of legal authority and judicial order
FW Ratio: 50%
Observable Facts:
- The article describes 'the May 13 order' and the judge's decision
Inferences:
- Judicial authority is reported factually without commentary on legitimacy

Article 30: No Destruction of Rights (Medium)
Editorial score: 0.00 | SETL: ND
Balanced presentation of competing interpretations of authority: OpenAI argues the order is abuse; plaintiffs argue preservation is necessary. No clear judgment favoring either view.
FW Ratio: 67%
Observable Facts:
- The article quotes OpenAI: 'the May 13 order was premature and should be vacated'
- The article states plaintiffs feared OpenAI would 'never stop deleting' evidence absent a court order
Inferences:
- Both sides' interpretation of proper authority limits receives balanced treatment

Article 29: Duties to Community (Low; Framing)
Editorial score: -0.10 | SETL: -0.10
OpenAI's argument frames evidence preservation as an unreasonable burden; the article presents this without sufficient counter-framing of the community duty to preserve evidence for justice
FW Ratio: 67%
Observable Facts:
- The article quotes OpenAI describing the order as 'sweeping, unprecedented'
- The article reports OpenAI's argument that news plaintiffs had not 'establish[ed] a substantial need' for preservation
Inferences:
- The framing accepts OpenAI's characterization that evidence preservation is an unreasonable burden, potentially underweighting community interests
Not meaningfully engaged (ND): Article 2 (Non-Discrimination), Article 3 (Life, Liberty, Security), Article 4 (No Slavery), Article 5 (No Torture), Article 6 (Legal Personhood), Article 9 (No Arbitrary Detention), Article 13 (Freedom of Movement), Article 14 (Asylum), Article 15 (Nationality), Article 16 (Marriage & Family), Article 18 (Freedom of Thought), Article 20 (Assembly & Association), Article 22 (Social Security), Article 23 (Work & Equal Pay), Article 24 (Rest & Leisure), Article 25 (Standard of Living), Article 26 (Education), Article 27 (Cultural Participation)
Structural Channel: what the site does

- Preamble (Medium; Framing, Advocacy): Structural 0.00 | Context Modifier ND | SETL +0.15. Article is accessible and readable; no structural impediments to engagement.
- Article 1: Freedom, Equality, Brotherhood (Low; Framing): Structural 0.00 | Context Modifier ND | SETL +0.10. Neutral structural engagement.
- Article 7: Equality Before Law (Low): Structural 0.00 | Context Modifier ND | SETL ND. Neutral structural engagement.
- Article 8: Right to Remedy (Low): Structural 0.00 | Context Modifier ND | SETL ND. Neutral structural engagement.
- Article 10: Fair Hearing (Low): Structural 0.00 | Context Modifier ND | SETL ND. Neutral structural engagement.
- Article 11: Presumption of Innocence (Medium; Framing): Structural 0.00 | Context Modifier ND | SETL +0.20. Neutral structural engagement.
- Article 12: Privacy (High; Advocacy, Framing): Structural 0.00 | Context Modifier ND | SETL +0.50. Article is fully accessible; no structural impediments.
- Article 17: Property (Low): Structural 0.00 | Context Modifier ND | SETL +0.10. Neutral structural engagement.
- Article 19: Freedom of Expression (Medium): Structural 0.00 | Context Modifier ND | SETL ND. Neutral structural engagement.
- Article 21: Political Participation (Low): Structural 0.00 | Context Modifier ND | SETL ND. Neutral structural engagement.
- Article 28: Social & International Order (Low): Structural 0.00 | Context Modifier ND | SETL ND. Neutral structural engagement.
- Article 29: Duties to Community (Low; Framing): Structural 0.00 | Context Modifier ND | SETL -0.10. Neutral structural engagement.
- Article 30: No Destruction of Rights (Medium): Structural 0.00 | Context Modifier ND | SETL ND. Neutral structural engagement.
Not applicable (ND): Article 2 (Non-Discrimination), Article 3 (Life, Liberty, Security), Article 4 (No Slavery), Article 5 (No Torture), Article 6 (Legal Personhood), Article 9 (No Arbitrary Detention), Article 13 (Freedom of Movement), Article 14 (Asylum), Article 15 (Nationality), Article 16 (Marriage & Family), Article 18 (Freedom of Thought), Article 20 (Assembly & Association), Article 22 (Social Security), Article 23 (Work & Equal Pay), Article 24 (Rest & Leisure), Article 25 (Standard of Living), Article 26 (Education), Article 27 (Cultural Participation)
Supplementary Signals: how this content communicates, beyond directional lean
- The headline uses 'nightmare' to describe the court order; the article employs 'forced to jettison' and 'sweeping, unprecedented', primarily from OpenAI quotes but emphasized in framing
- Appeal to fear: multiple references to privacy being 'at risk' and 'hundreds of millions of users' affected create a sense of widespread threat and danger
build 08564a6+3seh · deployed 2026-02-28 15:25 UTC · evaluated 2026-02-28 15:14:40 UTC