Hacker News

So frustrating to see people complain that it provides wrong information. With this logic it can never know anything. It is not made for querying; it's made to complete the text you provide it, with some minor modifications to help it answer questions. It does know the most common patterns of characters on the internet, which implicitly contain knowledge.
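As a minimal sketch of the "completing text from patterns" point: a toy bigram model (nothing like a real transformer, and the tiny corpus here is invented for illustration) can extend a prompt purely from co-occurrence counts, with no representation of what the words mean.

```python
import random
from collections import defaultdict

# Toy corpus; the model will only ever "know" word adjacencies from it.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which words follow which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def complete(prompt_word, length=5, seed=0):
    """Extend a prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:  # no continuation ever observed
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(complete("the"))
```

The output is fluent-looking locally ("the cat sat on the..."), yet the model has no notion of cats or mats; scaled up by many orders of magnitude with a much richer architecture, that is the sense in which pattern completion can implicitly carry knowledge.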


> With this logic it can never know anything.

That's correct: ChatGPT does not know anything. That's not what it's built to do.

> It does know the most common patterns of characters on the internet, which implicitly contain knowledge.

No, it doesn't. It knows patterns of text in its training data, but it knows nothing about the semantics of that text--its relationship to other things in the world--which is what any text that conveys knowledge depends on. That is why ChatGPT is not reliable as a source of information.


I share your skepticism of LLMs’ output, but I don’t think it’s fair to say they know nothing about semantics. To what degree LLMs encode a coherent world model is still an open question. Also, you can just ask ChatGPT about objects and their relationships, and it gets the answer right far more often than you’d expect by chance, so it has some understanding of the world. Not good enough for me to trust it, though.


The answer is clearly none. It literally knows nothing about the world.


Not that I see much evidence of this in ourselves, but if ChatGPT actually knows things and isn't just a fancy parrot, you should be able to correct its knowledge.



