
No, not at all. ChatGPT is trained on the same source information, but when you ask a question there's no guarantee its answer comes directly from an actual source; it's always a newly generated "thought".

Google is a photocopier. It gives you an exact copy of what it finds. Google doesn't create; it just references and links to original sources.

Google is a library, but not an author.

ChatGPT is an author, but not a library.

However, ChatGPT has read every book in the library, so when you ask a question it writes you a story from its memory based on what it thinks* you want. ChatGPT can write stories about books in the library, and it will probably be right (but maybe not).

*Remember the game Plinko from The Price Is Right? Basically ChatGPT takes your question, drops all the words through its super-complicated plinko machine (a neural network), and gives you the result.
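A drastically simplified sketch of one "peg layer" of that plinko machine. The weights below are made up purely for illustration; real networks stack many such layers plus nonlinearities:

```python
# A bare-bones "plinko layer": made-up weights turning input numbers into
# output numbers. Real networks stack many such layers with nonlinearities.
def layer(inputs, weights):
    # each output is a weighted sum of every input, like pegs bouncing the ball
    return [sum(i * w for i, w in zip(inputs, row)) for row in weights]

tokens = [3, 1, 4]                # a question, already turned into numbers
weights = [[0.5, 0.1, 0.0],       # arbitrary illustrative weights
           [0.2, 0.3, 0.7]]
result = layer(tokens, weights)   # two output numbers fall out the bottom
```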

If you ask it for the names of US presidents, it should give you the same answer as Google - even though it came up with it via the plinko method.

If you ask it for a story about a singing rock, the process is the same as the presidents list. It drops your request into the network and gives you the result. It's not smart, just wildly complicated. It's also never going to be a photocopier (but it might act like one for certain inputs).

----

The brain breaking part is that when you ask ChatGPT for...

"Write me a song about a singing rock"

It changes each word into a number-token, then those number-tokens go through the plinko machine. The result is a different set of number-tokens, which it converts back into readable words. Inside ChatGPT it doesn't "know" anything. Rock is a number. Singing is a number. Write is a number.
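As a cartoon version of that word-to-number step (real tokenizers like GPT's work on learned subword pieces with vocabularies of tens of thousands of entries, so treat this tiny hand-made vocabulary as an illustration only):

```python
# Cartoon word-level tokenizer. Assumption: a tiny hand-made vocabulary;
# real models use learned subword vocabularies, not whole words.
vocab = {"write": 0, "me": 1, "a": 2, "song": 3, "about": 4, "singing": 5, "rock": 6}
words = {number: word for word, number in vocab.items()}

def encode(text):
    # readable words -> number-tokens
    return [vocab[w] for w in text.lower().split()]

def decode(tokens):
    # number-tokens -> readable words
    return " ".join(words[t] for t in tokens)

tokens = encode("Write me a song about a singing rock")
```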

But it knows the relationships between those numbers, and which other numbers are near the area of the network devoted to songs, so it pulls in words and related concepts like a human would.
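One way to picture "nearby numbers" is word embeddings: each word-number maps to a point in space, and related words end up close together. The coordinates below are invented purely for illustration:

```python
import math

# Invented 2-D "embeddings" -- real models use hundreds of learned
# dimensions; these coordinates are made up for illustration.
embedding = {
    "rock":    (0.90, 0.10),
    "stone":   (0.85, 0.15),
    "singing": (0.10, 0.95),
    "song":    (0.15, 0.90),
}

def cosine(a, b):
    # standard cosine similarity: 1.0 means same direction, 0.0 unrelated
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "rock" sits much closer to "stone" than to "singing" in this toy space
rock_stone = cosine(embedding["rock"], embedding["stone"])
rock_singing = cosine(embedding["rock"], embedding["singing"])
```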

But it's just numbers with no understanding.

Because it's numbers and not understanding, it can be wrong, either completely or subtly.

Edit: Asking it for the list of US presidents gives "David D. Eisenhower (1849-1850)" as number 12 - no such president ever existed. The rest look right, but ChatGPT is subtly wrong in this case.



Do you see the future being ChatGPT results but with citations? Or is that basically impossible given how it's a "trained model"?


No, ChatGPT doesn't know its own sources. It's just a trained model. Once the model has been created it's fixed - it can be recreated unlimited times, but it will never tell you the sources for its output.

Maybe if the network nodes have a source attached to them...

But thinking out loud...

That's not how the number-tokens work. It's at a word level... so "a list of US presidents" is broken down into individual number-tokens for each word, and you can't attach a source to each word.

---

I'm not sure how you combine Google and ChatGPT.

Chat is creative/combinatorial and Google is "just the facts".

ChatGPT and Google are going to have problems going forward. How do both of them determine whether the information they find on the internet is from a meat-brain and not a metal-brain?

Happy to be proven wrong.


Maybe by fact-checking its answer?

Question -> "creative" output -> Google -> Summary of links -> Comparison -> confidence level (or re-write) + links that were used for checking

Not so different from how we work at a high level. I believe OpenAI has published a paper called WebGPT with a workflow like this (although I'm not sure it's exactly the same).
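That workflow could be sketched roughly like this; the "model" and "search engine" below are hard-coded stand-ins, not real APIs:

```python
# Toy sketch of the fact-checking loop above. generate_answer and search_web
# are hypothetical stand-ins -- no real ChatGPT or Google API is called here.
def generate_answer(question):
    # stand-in for the "creative" model output
    return "George Washington was the first US president"

def search_web(query):
    # stand-in search results: (url, snippet) pairs
    return [
        ("example.org/presidents", "George Washington was the first US president"),
        ("example.org/history", "Washington took office in 1789"),
    ]

def support(answer, snippet):
    # crude word-overlap score standing in for the comparison step
    a, s = set(answer.lower().split()), set(snippet.lower().split())
    return len(a & s) / len(a)

def answer_with_confidence(question):
    answer = generate_answer(question)
    hits = search_web(answer)
    sources = [url for url, snippet in hits if support(answer, snippet) > 0.5]
    confidence = len(sources) / len(hits)
    return answer, confidence, sources

answer, confidence, sources = answer_with_confidence("Who was the first US president?")
```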



