Sure feels like we've made a wrong turn when we find ourselves in the middle of creating a problem and already have to reach for regulation as the cure.
I fully expect GPT and similar models to ruin the internet in many of the ways mentioned in this thread. What's really confusing to me, though, is why we are going down this path at all when it is so clearly a bad idea. Where does the net positive come from that outweighs the enormous risk of AI systems at this scale?
Even the leaders at MS know this is a bad idea, but they fell into the trap of "we're the good guys; if we don't do this, a bad actor will."
Of course there are drawbacks to that central registration: anonymity as we know it gets left behind. Yes, the central registry could also offer "anonymous" handles for everyday social media use, but to the police and any capable hacker that anonymity would mean nothing.
Would I stop visiting hacker news once most of the articles and comments come from ChatGPT-driven bots? Yes.
And how are we to avoid that happening?