lol bit of a stretch there, seeing as there are dozens of companies training LLMs.
As training software and infrastructure mature, plenty more entrants will join the market. It’s not like this is a particularly challenging research field, just a very expensive one at the moment.
dangus 6 hours ago [-]
Which LLM should I be using for programming work that isn’t released by OpenAI, DeepSeek, or Claude?
Which one outperforms this small handful of options within the AI oligopoly?
This statement you’re making is like saying “there are dozens of Android phone OEMs” when in reality Apple is gobbling up 80% of the profits, Samsung/Google are consuming another 10%, and everyone outside of China has their app installs gated by Google Play or Apple App Store.
xnx 24 minutes ago [-]
[dead]
billfor 22 hours ago [-]
Clearly the Economist and their panel of experts.
camillomiller 22 hours ago [-]
So basically the same 5 men, considering that the Economist is the mouthpiece of the capitalist global oligarchy
xnx 20 hours ago [-]
Demis Hassabis is on their list? He reports to Sundar (who reports to Sergey Brin?)
moralestapia 1 hour ago [-]
Also no mention of Huang? He's behind all of them in the supply chain.
comrade1234 22 hours ago [-]
No Chinese? Guess they're no good at ai.
stratos123 1 hour ago [-]
Chinese companies are lagging behind and always have. They are fast-followers but don't decide the frontier.
saltyoldman 16 hours ago [-]
That wouldn't fit the narrative.
drewfax 9 hours ago [-]
One man controls an entire country. Yet we call that freedom and democracy.
gigatree 7 hours ago [-]
Are you talking about the president? Because if so lol
stvltvs 6 hours ago [-]
Having one guy in charge of the entire executive branch was possibly the biggest blunder made by the framers of the US Constitution.
nh23423fefe 2 hours ago [-]
name checks out
dj_rock 1 hour ago [-]
"Insider is supported by Anthropic"
saltyoldman 1 day ago [-]
The countries that they're in already do via the law. No one else should "control" someone.
rolph 1 day ago [-]
no one should have to control someone, until they become a threat.
when someone presents a threat, at large, they have limited entitlement to walk among society, or act without review.
JumpCrisscross 22 hours ago [-]
> no one should have to control some one, until they become a threat
The Helots were a threat to Spartans. Black Haitians to the French. Jews to the Reich.
Threats feel like a reasonable reason to reduce another’s rights. But they turn out to be the most usual way of tricking oneself into becoming a monster.
gobdovan 22 hours ago [-]
I am starting to believe a significant number of humans run a computation that goes something like this: "Can I control AI? Will I meet people that control AI personally? If no, why would I care if they're treated unfairly in the abstract? Most important thing for me is they don't affect my resources in any way. They're better off than most either way, if anything not willingly reducing their power shows greed and confirms they're threats."
JumpCrisscross 22 hours ago [-]
I interpret it more generously. When a pet or a child misbehaves, we constrain their behavior. For most people, I’d guess that’s the majority of bad behavior they come across in daily life. (When adults misbehave, one usually distances or confronts. The latter isn’t an option for a difficult-to-reach public figure. And some of these figures make distancing difficult, too.)
monknomo 22 hours ago [-]
Are you comparing the ai ceos to helots? I am confused
rolph 20 hours ago [-]
i fixed that for you:
"The Spartans were a threat to Helots. the French to Black Haitians. the reich to the Jews."
justification, doesnt transform a victim into a threat.
Nasrudith 12 hours ago [-]
The whole point is that the self-fulfilling prophecy, and the cruelty that created the victims, is exactly what threatened the perpetrators later. One reductio-ad-absurdum hypothetical I give of this type of self-fulfilling prophecy from fallacious logic: if group A decided that, say, all redheads were vicious bandits who would kill them on sight and therefore should be killed, guess who is now incentivized to kill group A on sight?
metalman 10 hours ago [-]
I'll do a bit more "fixing"
"justification, doesnt transform a victim into a threat"
unless the victim is Palestinian, and the monsters are Judaic Zionist terrorists, for more than 100 years now
rolph 5 hours ago [-]
oops you broke it again stahlmann, a victim is a victim period.
it doesnt matter who is right or wrong at the start, there is the attacker, and the attacked.
victim, and attacker swap places as they go around the wheel.
now what breaks every thing is when a militant in combat is spun as a victim, defending from mother and child.
generalizing based on nationality or eye color or anything else is the actual problem you seem to be concerned about.
let that be your last battlefield.
https://en.wikipedia.org/wiki/Let_That_Be_Your_Last_Battlefi...
Congratulations! You just compared regulating the behavior of a handful of billionaires to the Holocaust! You just equated the idea that there should be some democratic restrictions on corporate activity with death camps that murdered millions!
You win the "most HN post of the month" award.
Never change, HN. Never change.
npfo-hn 22 hours ago [-]
"Jews to the Reich."
Yes they did.
JumpCrisscross 22 hours ago [-]
> You just compared regulating the behavior of a handful of billionaires to the holocaust!
On the most surface level, sure. Regulating something and controlling someone are, to me, different motivations.
operatingthetan 22 hours ago [-]
>You just compared regulating the behavior of a handful of billionaires to the holocaust!
They literally did not.
rgbrgb 22 hours ago [-]
to be fair, that's exactly what's at issue. controlling AI implies controlling society as intelligence scales.
Nasrudith 12 hours ago [-]
This is singulitarian fallacies all over again, like 'being able to make something smarter than a human means infinitely smart, because it can just keep on making something smarter', while ignoring the multifaceted nature of intelligence and the time and other costs involved in creation. It just gets handwaved away as superintelligence somehow enabling goddamned sorcery that ignores physical constraints. Except reality does not work that way.
It reminds me of the 'Einstein's superintelligent cat' refutation of such fallacies. It went something like this: imagine Einstein has a superintelligent cat. The room has only one door and it is locked. The cat is not capable of opening the lock due to lack of manual dexterity. The cat does not want to go into the carrier. Einstein, however, is an order of magnitude greater in mass. As much as the cat might want to escape Albert Einstein's grip, it cannot. The superintelligent cat is going in the carrier.
The point being that, no, controlling or creating AI does not in fact equate to controlling society, no matter how smart it gets. Even if we were so incredibly stupid as to wire it up to control an entire munitions factory, it still couldn't take over society; it only takes one bombing run or called-in artillery strike to end the situation.
In the real world we already trust private ownership of firearm factories, missile factories, and tank factories without a serious risk of a coup. Yet somehow AI is supposed to be what makes its owners god-kings? It strains credulity.
stratos123 1 hour ago [-]
These arguments have been going on for more than a decade and have been silly the whole time.
> It reminds me of the 'Einstein's superintelligent cat' refutation to such fallacies.
One of the many problems with this "refutation" is that in reality not only does nobody bother to lock the superintelligent cat in a room and leave it no available actions, you're lucky if they don't hook the cat up directly to the internet. It doesn't matter whether you could maybe control a superintelligence if you were very careful and treated it very seriously, when nobody is even trying, much less being very careful.
pixl97 8 hours ago [-]
>Yet in the real world we can trust private ownership of firearm factories, missile factories, and tank factories without a serious risk of a coup
Because they are highly fucking regulated....
Start selling missiles to kids and watch yourself get put in a cage.
bigyabai 1 days ago [-]
The law is only relevant insofar as it's enforced. In America, that's a tossup.
SilentM68 21 hours ago [-]
Good point. People don't consider the scenario where one billionaire decides to take their wealth and resources and hunker down in a dictator-controlled country where extradition doesn't apply. That person could easily experiment and create an AI that may not see us as relevant to its existence.
I probably won't be able to respond to this comment since some people on this forum have flagged my comments as inappropriate thus limiting the number of daily posts I can make :)
judahmeek 20 hours ago [-]
https://ai-2027.com does a solid job of demonstrating the existential risk of the singularity. If it is actually approaching, we need leaders who will give potential black swan events the severe caution they are due.
I sure hope the theoretical timeline isn't compressed, because a singularity under Donald Trump likely means that we're all dead due to misalignment.
lostmsu 8 hours ago [-]
The singularity is something we might never see. The question is whether we've already crossed the event horizon.
gizmodo59 22 hours ago [-]
"Insider is supported by ANTHROPIC". They take their money and act like they're independent? What a joke.