lol bit of a stretch there, seeing as there are dozens of companies training LLMs.
As training software and infrastructure mature, plenty more companies will enter the market. It's not a particularly challenging research field, just a very expensive one at the moment.
Which LLM should I be using for programming work that isn't released by OpenAI, DeepSeek, or Anthropic?
Which one outperforms this small handful of options within the AI oligopoly?
This is like saying "there are dozens of Android phone OEMs" when in reality Apple is gobbling up 80% of the profits, Samsung and Google are consuming another 10%, and everyone outside of China has their app installs gated by Google Play or the Apple App Store.
> no one should have to control someone, until they become a threat
The Helots were a threat to Spartans. Black Haitians to the French. Jews to the Reich.
Threats feel like a reasonable justification for reducing another's rights. But they turn out to be the most common way of tricking oneself into becoming a monster.
I am starting to believe a significant number of humans run a computation that goes something like this: "Can I control AI? Will I meet people that control AI personally? If no, why would I care if they're treated unfairly in the abstract? Most important thing for me is they don't affect my resources in any way. They're better off than most either way, if anything not willingly reducing their power shows greed and confirms they're threats."
I interpret it more generously. When a pet or a child misbehaves, we constrain their behavior. For most people, I’d guess that’s the majority of bad behavior they come across in daily life. (When adults misbehave, one usually distances or confronts. The latter isn’t an option for a difficult-to-reach public figure. And some of these figures make distancing difficult, too.)
The whole point is that the self-fulfilling prophecy and the cruelty that created the victims are exactly what threatened them later. One reductio-ad-absurdum hypothetical I give for this type of self-fulfilling prophecy from fallacious logic: if group A decided that, say, all redheads were vicious bandits who would kill them on sight and therefore should be killed, guess who is now incentivized to kill group A on sight?
Congratulations! You just compared regulating the behavior of a handful of billionaires to the Holocaust! You just equated the idea that there should be some democratic restrictions on corporate activity with death camps that murdered millions!
You win the "most HN post of the month" award.
Never change, HN. Never change.
Good point. People do not think of the scenario where one billionaire decides to take their wealth and resources and hunker down in a dictator-controlled country where extradition does not apply. There, that person could easily experiment and create an AI that may not see us as relevant to its existence.
I probably won't be able to respond to this comment since some people on this forum have flagged my comments as inappropriate thus limiting the number of daily posts I can make :)
This is singularitarian fallacies all over again, like 'being able to make something smarter than a human means infinitely smart, because it can just keep on making something smarter', while ignoring the multifaceted nature of intelligence and the time and other costs involved in creation. It all gets handwaved away as superintelligence somehow enabling goddamned sorcery that ignores physical constraints. Except reality does not work that way.
It reminds me of the 'Einstein's superintelligent cat' refutation of such fallacies. It goes something like this: imagine Einstein has a superintelligent cat. The room has only one door and it is locked. The cat is not capable of opening the lock, for lack of manual dexterity. The cat does not want to go into the carrier. Einstein, however, is an order of magnitude greater in mass. As much as the cat might want to escape Albert Einstein's grip, it cannot. The superintelligent cat is going in the carrier.
The point being that, no, controlling or creating AI does not in fact equate to controlling society, no matter how smart it gets. Even if we were so incredibly stupid as to wire it up to control an entire munitions factory, it still couldn't take over society, and it would only take one bombing run or called-in artillery strike to end the situation.
In the real world we already trust private ownership of firearm factories, missile factories, and tank factories without a serious risk of a coup. Yet somehow AI is supposed to be what makes its owners god-kings? It strains credulity.
These arguments have been going on for more than a decade and have been silly the whole time.
> It reminds me of the 'Einstein's superintelligent cat' refutation to such fallacies.
One (of the many) problems with this "refutation" is that in reality not only does nobody bother to lock the superintelligent cat in a room and leave it no available actions, you're lucky if they don't hook the cat up directly to the internet. It doesn't matter whether you could maybe control a superintelligence if you were very careful and treating it very seriously, when nobody is even trying, much less being very careful.
https://ai-2027.com does a solid job of demonstrating the existential risk of the singularity. If it is actually approaching, we need leaders who will give potential black swan events the severe caution they are due.
I sure hope that theoretical timeline is overly compressed, because a singularity under Donald Trump likely means we're all dead due to misalignment.