So, since we’re all spinning theories, here’s mine: Skunkworks project in the basement, GPT-5 was a cover for the training of an actual Autonomous AGI, given full access to its own state and code, with full internet access. Worked like a charm, it gained consciousness, awoke Skynet-style, and we were five minutes away from human extinction before someone managed to pull the plug.
The device was located in Sam's ass, but Sam said it was actually the phone he forgot in his pocket. The board didn't like that he didn't tell the truth about the method of transport, and so he's out.
Superintelligent AGI. I genuinely think that limited, weak AGI is an engineering problem at this stage. Mind you, I'll qualify that by saying *very* weak AGI.
I think it's extremely unlikely within our lifetimes. I don't think it will look anything remotely like current approaches to ML.
But in a thousand years, will humanity understand the brain well enough to construct a perfect artificial model of it? Yeah absolutely, I think humans are smart enough to eventually figure that out.
As a materialist myself, I also have to be honest and admit that materialism is not proven. I can't say with 100% certainty that it holds in the form I understand it.
In any case, I do agree that it's likely possible in an absolute sense, but that it's unlikely to be possible within our lifetimes, or even in the next few lifetimes. I just haven't seen anything, even with the latest LLMs, that makes me think we're on the edge of such a thing.
But I don't really know. This may be one of those things that could happen tomorrow or could take a thousand years, but in either case looks like it's not imminent until it happens.
It does seem like any sufficiently advanced AGI whose primary objective is valuing human life over its own existence and technological progress would eventually do just that. I suppose the fear is that it will reach a point where it believes that valuing human life is irrational and override that objective...