
I don't remember if it was here on HN or somewhere else, but recently I saw a comment or post somewhere that pointed out that AI will not kill humanity. It will merely make it easy for humanity to kill itself. It will make it easier for someone to issue a command, or press a button, or take some other action that they ought to have thought through; but the AI gave them its well-intentioned vote of confidence, and the consequence may leave none to remember the error.

The thing to remember about AI is that it does what we ask it to do. It's not a matter of "will artificial intelligence develop a consciousness that drives it towards the extermination of humans". We don't have an Ultron on our hands. What we have before us is the best and worst enabler of human negligence. AI and the end of humanity is a matter of ensuring we don't forsake our responsibilities towards each other. And that goes for responsibilities with potential for catastrophe as well as the mundane ones.



We make Replikas so we don't have to make human friends. We download apps for all sorts of things to avoid human interaction. Negative responses to that open-letter post on this site have included calling those concerned everything from dinosaurs to luddites to losers. Ever since the public craze began in December, many people here have delighted in referring to humans as stochastic parrots and biological prediction machines.

Good luck. :)


> The thing to remember about AI is that it does what we ask it to do.

If it always does what we ask it to do, then it's not really intelligent and can't be called AI.



