
I find this sort of thing cloying because all it does is show me they keep copies of my chats and access them at will.

I hate playing that card. I worked at Google, and for the first couple of years I was very earnest. Someone smart here pointed out to me: sure, maybe everything is behind three locks and keys, encrypted, and audit-logged, but what about the next guys?

Sort of stuck with me. I can't find a reason I'd ever build anything that did this, if only to make the world marginally easier to live in.



Anthropic’s privacy policy is extremely strict — for example, conversations are retained for only 30 days and there’s no training on user data by default. https://privacy.anthropic.com/en/articles/10023548-how-long-...


I thought this was true, honestly, up until I read it just now. User data is explicitly one of the three training sources[^1], and forced opt-ins like submitting "feedback"[^2] let them store & train on it for 10 years[^3], while tripping the safety classifier[^2] lets them store & train on it for 7 years[^3].

[^1] https://www.anthropic.com/legal/privacy:

"Specifically, we train our models using data from three sources:...[3.] Data that our users or crowd workers provide"..."

[^2] "For all products, we retain inputs and outputs for up to 2 years and trust and safety classification scores for up to 7 years if you submit a prompt that is flagged by our trust and safety classifiers as violating our UP.

Where you have opted in or provided some affirmative consent (e.g., submitting feedback or bug reports), we retain data associated with that submission for 10 years."

[^3] "We will not use your Inputs or Outputs to train our models, unless: (1) your conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our Usage Policy, including training models for use by our Trust and Safety team, consistent with Anthropic’s safety mission), or (2) you’ve explicitly reported the materials to us (for example via our feedback mechanisms), or (3) by otherwise explicitly opting in to training."


All of the major AI providers are trying to pretend they care about your privacy while being weasels about their retention and anonymization terms.

Partly why I'm building a zero-trust product that keeps all your AI artifacts encrypted at rest.
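Roughly, "encrypted at rest" here means ciphertext is all the server ever holds. A minimal sketch of that idea in Python, assuming a client-held key (names and flow are illustrative, not my actual product code):

    # Client-side encryption so the server only ever stores ciphertext.
    # Requires the `cryptography` package (pip install cryptography).
    from cryptography.fernet import Fernet

    # The key stays with the user (e.g. in a local keystore or derived
    # from a passphrase); it is never sent to the server.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt a chat transcript before it leaves the device...
    ciphertext = fernet.encrypt(b"user: summarize my medical records")

    # ...so the at-rest copy is opaque to the provider. Only the key
    # holder can recover the plaintext.
    plaintext = fernet.decrypt(ciphertext)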


Your work is vital for opposing the nonchalant march of privacy-eroding norms we keep parading towards.


This is a non-starter for every company I work with as a B2B SaaS dealing with sensitive documents. This policy doesn't make any sense. OpenAI is guilty of the same. Just freaking turn this off for business customers. They're leaving money on the table by effectively removing themselves from a huge chunk of the market that can't agree to this single clause.


I haven't personally verified this, but I'm fairly positive all the enterprise versions of these tools (ChatGPT, Gemini, Claude) are not only oblivious to document contents but also respect things like RBAC on documents for any integration.
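For illustration only (I don't know any vendor's internals, and these names are hypothetical), the integration-side check amounts to a permission gate before document text ever reaches the model:

    # Hypothetical sketch of an integration enforcing document RBAC.
    # Not any vendor's real API; names are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class Document:
        doc_id: str
        text: str
        allowed_roles: set[str]  # roles permitted by the source system's ACL

    def fetch_for_model(doc: Document, user_roles: set[str]) -> str:
        # Gate: the model only receives text the requesting user could
        # already read under the document's access controls.
        if doc.allowed_roles & user_roles:
            return doc.text
        raise PermissionError(f"user lacks access to {doc.doc_id}")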


Given the apparent technical difficulty of getting insight into a model's underlying data, how would anyone ever hold them to account if they violated this policy? Real question, not a gotcha: it just seems that if corporate-backed IP holders are unable to prosecute claims against AI companies, individual paying customers are even less likely to succeed.


That's the point, though. What's to stop it from changing later?


Even if this were true (and not hollowed out by various exceptions in Anthropic’s T&C), I would not call it “extremely strict”. How about zero retention?


Who guards the guards? Plan ahead and begin with them.


They say something about retention after analysis by Clio but it's not very specific.


They have to; the major AI companies are ad companies. Their profits demand that we accept their attempts to normalize the spyware that networked AI represents.


Yep. More generally, I find it distasteful that big tech are the ones driving the privacy conversation. Why would you put the guys with such blatant ulterior motives behind the wheel? But this seems to be the US way: customer choice via market share above everything, always, even if that choice gradually erodes the customer's autonomy.

Not that anywhere else is brave enough to try otherwise, for fear of falling too far behind US markets.

Disclaimer: I could be much more informed on the relevant policies which enable this, but I can see the direction we're heading in... and I don't like it.





