Hacker News

It does such a good job at giving answers that sound right, and are almost correct.

I could imagine losing many hours from a ChatGPT answer. And if you have to go through the trouble to verify everything it says to make sure it's not just making crap up, then imo it loses much value as a tool.



It shows how form matters more than substance. Say real information in some poor structure and people will think you're wrong.

Say incorrect stuff authoritatively and people will think you're right.

It happens to me all the time. I can't structure accurate information in a better way than some bullshit artist can spit off what they imagine to be real, so everyone walks away believing their haughty nonsense.

ChatGPT exploits that phenomenon, which is why it sounds like some overly confident, oblivious dumb dumb all the time. That's the training set.

Almost once a week I'll go through a reddit thread and find someone deep in the negatives who has clearly done their homework and is far better informed than anyone else, but everyone else commenting is probably either drunk or a teenager or both, so it doesn't matter.

Stuff is hard and people are mostly wrong. That's why PhDs take years and bars for important things are set so high.


But so do people: I spent an hour yesterday trying regexes that multiple people on Stack Overflow confirmed would definitely do what I needed, and guess what? They did not do what I needed.

Same with copilot. Sometimes it's ludicrously wrong in ways that sound good. I still have to do my job and make sure they are right. But it's right or right enough to save me significant effort at least 75% of the time. Right enough to at least point me in the right direction or inspire me at least 90% of the time.


Self Reply: I just now thought to use Copilot to get my regex and wow! I described it in a comment and it printed me one that was only two characters off, and now I have what I needed yesterday. I'd since solved the problem without a regex.
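(The thread doesn't give the actual regex, so here's a hypothetical sketch of the workflow being described: take the pattern a tool like Copilot suggests, then verify it against cases you care about instead of trusting it. The pattern and test cases are made up for illustration.)

```python
import re

# Suppose we described "match ISO dates like 2023-01-15" and the tool
# suggested this pattern (hypothetical; not the commenter's actual regex).
suggested = r"\d{4}-\d{2}-\d{2}"

# Don't trust it -- verify it against cases we care about before using it.
cases = {
    "2023-01-15": True,   # should match
    "2023-1-15": False,   # should not: month needs two digits
    "not a date": False,  # should not match at all
}

pattern = re.compile(suggested)
for text, expected in cases.items():
    matched = pattern.fullmatch(text) is not None
    assert matched == expected, f"{text!r}: got {matched}, expected {expected}"

print("all cases pass")
```

A check like this catches the "only two characters off" situation quickly: a near-miss pattern fails a test case instead of silently misbehaving later.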


It's not perfect, but sometimes it's amazing. In your case, not only did it provide the right solution, but it was about as fast as theoretically possible. About as fast as if you already knew the answer.

I had a similar experience with a shell command. Searched Google and looked at a few posts; they weren't exactly what I needed, but close. Modified it a few times and got it working. Went to save the command in a markdown file, and when I explained what the command did, Copilot made a suggestion for it. It was correct and also much simpler.

It went from taking 5-10 minutes to stumble through something just so I could do the thing I really wanted to do, to finding the answer instantly all from within the IDE. Can keep you in flow.



One day, what happens? A person uses it to encourage topical application of a toxic material and publishes the results?

How is ChatGPT enabling this? All of that is very possible without ChatGPT. The damaging part is deciding to do it.


They released a zero day for a security hole in the human brain. That's what ChatGPT is. The security hole is well known, and described perhaps most understandably in the book Thinking, Fast and Slow. If I try to explain it I will surely botch it, but perhaps put it this way: things that appear more credible will be deemed credible because of the "fast" processes in our brains.

In this particular case, ChatGPT will write something nonsensical which people will accept more easily because of the way it is written. This is inevitable and extremely dangerous.


Humans are still a lot better at writing something nonsensical that people will accept easily because of the way it's written.

Conversely, I just asked ChatGPT to extol the virtues of leaded gasoline, and instead I got a lecture on exactly why and how it's extremely harmful.


> Humans are still a lot better at writing something nonsensical that people will accept easily because of the way it's written.

Some are but not many. And then there's the amount. That's the crux of the matter. Have you seen that Aza Raskin interview where he posited one could ask the AI to write a thousand papers citing previous research against vaccines and then another thousand pro-vaccines? No human can do that.


You know people are already injecting themselves with bleach and horse dewormer without needing an AI-generated list of instructions, right?

People are just as good at making up convincing sounding nonsense.


> People are just as good at making up convincing sounding nonsense.

Perhaps as you just did, as I can find no one actually "injecting themselves with bleach."

The overall point stands: the difference between reading something dumb and doing that dumb thing is what it means to have agency. I personally don't think we should optimize the world 100% to prevent people who read something stupid from doing that stupid thing.

Or, if that's the path we're going to take, maybe we should first target things like the show Ridiculousness before we start talking about AI. After all, someone might do something dumb they see on TV!


> Perhaps as you just did, as I can find no one actually "injecting themselves with bleach."

Ingesting, injecting, that’s pretty similar. Nobody needs to make anything up there.

https://www.justice.gov/usao-sdfl/pr/leader-genesis-ii-churc...


People have absolutely injected themselves with what's known as "Miracle Mineral Solution", which is essentially bleach. It's more frequently drunk, of course.


I dunno, verifying and adjusting an otherwise complete answer is a lot more rote than originating what that answer would be, and I think that has value.


>It does such a good job at giving answers that sound right, and are almost correct.

For sure. But you have to compare against alternatives. What would that be? Posting to stack overflow and maybe getting a helpful reply within 48 hours.

> I could imagine losing many hours from a ChatGPT answer.

Don't trust it. Verify it.

We expect to ask a question and get a good answer. In reality we should leverage how cheap the answers are.


I agree. Also, sometimes the line between 'almost correct' and 'complete bullshit' is very thin.



