
It will be very annoying for e.g. forum moderators to determine whether first user posts are just a bit incoherent, or generated spam garbage.


That used to be a pretty annoying thing back in the IRC days, as a kind of DoS: run a bunch of bots that just replay a conversation from another channel. Engaging them fails, but is that because they're bots, or because they're just ignoring you?


The new kind can be more targeted for specific purposes. They could be excellent tools for trolling a forum, inciting flame wars and such.


That would require some more advanced tech, though. I don't think GPT-3 can target divisiveness yet, especially since it would depend heavily on the community you're writing for: driving a wedge into the general population is very different from driving a wedge into niches. The Linux vs. Windows debate might get you engagement in a tech forum, but it'll fall flat with social housing activists, and whatever issues they split on will probably not get you anywhere with the tech crowd.


I don't think it needs to understand what a divisive issue is to have an effect. If you've got a human operator who can pick a divisive enough prompt, this can dramatically increase their inflammations-per-hour, because they don't need to compose the body text.


It's true that distinguishing these articles from the ordinary disjointed ramblings of poor writers would be hard. But I'm not sure what benefit filling forums with babble has for those running these models.

Bots offering idiocy, and idiocy generally, have done lots of damage. But by idiocy here I would mean quite carefully calculated, cleverly polarized positions, and I don't think plain bot-rot (to maybe coin a phrase) would be enough.





