OpenAI’s mounting costs — set to hit $1.4 trillion
Sorry, but WTF!? $1.4 Trillion in costs? How are they going to make all of that back with just AI?
I think there’s only one way they can make this back: if AI gets so good they can really replace most employees.
I don’t think it will happen, but either way it’s going to be an economic disaster. Either the most valuable companies in the world, offering services that the next couple of hundred companies in the world depend on, are suddenly bankrupt. Or suddenly everybody is unemployed.
Government bailouts is how.
Socialism for the rich, dog-eat-dog capitalism for everyone else.
It’s a Ponzi scheme.
I used to be amazed at how much a billion was, but this many 0s makes my head explode.
These must be bubble inflated costs to match the bubble inflated revenue.
If LLMs fail and they invested: bailout
If LLMs succeed and they invested: rich
If LLMs fail and they passed: everyone else bailed out
If LLMs succeed and they passed: out of business
Therefore, the logical choice for a business is to invest in LLMs. The only mechanism to not do the stupid thing that everyone else is doing is gone.
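The four-outcome argument above is a simple dominance check, which can be sketched in a few lines (the numeric payoffs here are illustrative assumptions, not from the thread; only their ordering matters):

```python
# Toy payoff matrix for the "invest in LLMs?" decision described above.
# Payoff values are made-up placeholders; only their relative order matters.
payoffs = {
    ("invest", "fail"):    0,   # bailed out: roughly made whole
    ("invest", "succeed"): 2,   # rich
    ("pass",   "fail"):   -1,   # everyone else gets bailed out, you don't
    ("pass",   "succeed"): -2,  # out of business
}

# "Invest" dominates "pass": it does at least as well in every outcome.
for outcome in ("fail", "succeed"):
    assert payoffs[("invest", outcome)] > payoffs[("pass", outcome)]

print("under these assumed payoffs, investing dominates passing")
```

Under any payoffs with that ordering, investing is the dominant strategy, which is the commenter's point: the individual incentive exists whether or not the technology works.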
that assumes every business which invests in AI would be bailed out, which is a huge assumption. I would guess the only businesses that would receive bailouts would be those with personal ties to the government
Prediction: the bubble is real but financiers will find ways to kick the bull down the road until they can force enough adoption & ad insertion to not lose out. The other option is that we pay it, of course. Takes on which is worse?
Is that why Palantir is so desperately trying to sell its “surveillance” tech to multiple countries, and why all of them suddenly want facial recognition and biometric data?
Hmm just good old late stage capitalism there, I think. The CEO recently said legalizing war crimes would be good for business and seems to have a cocaine problem to boot. No doubt fueled by the same investor groups though.
Can they at least blow up some government buildings in fake terrorist attacks to make it look convincing.
They’ll do both just like they did in 2007/2008. These AI companies and their investors will get bailed out while the rest of us lose our jobs and have to move back in with our parents in the van they already live in.
How is a haunted typewriter supposed to replace all those employees?
holy cow haunted typewriter is punching way above its weight class. Phenomenal.
I’ve tried explaining AI to people before and only could get so far before they fall back on “but it’s magic dude” but I love the idea of explaining it as a haunted typewriter.
I use the “very articulated parrot” analogy.
They’re systems trained to give plausible answers, not correct ones. Of course correct answers are usually plausible, but so are many wrong answers, and on sufficiently complex topics, you need real expertise to tell when they’re wrong.
I’ve been programming a lot with AI lately, and I’d say the error rate for moderately complex code is about 50%. They’re great at simple boilerplate, configuration, and the stuff that almost every project uses, but if you’re trying to do something actually new, they’re nearly useless. You can lose a lot of time going down a wrong path if you’re not careful.
Never ever trust them. Always verify.
I’m not one to stump for AI, but 2-3 years ago we would have said AI struggled to kick out a working PowerShell script, and now the error rate for complex scripts is maybe 5%. The tech sped up very fast, and now they’re getting runtime environments to test the code they write, memories, and project libraries. The tech will continue to improve. In 2026 or 2028, will we still be saying AI can’t really handle coding or take people’s jobs? Quite a bit less. In 2030, less still.
There is a point beyond which no refinements can be made but just looking backward a bit, I don’t think we’re there yet.
Just in the past few months, I’d say Claude has gotten good enough to let us downsize our team from 3.5 to 2.5 but thankfully no one is interested in doing that.
Some of the more advanced LLMs are getting pretty clever. They’re on the level of a temp who talks too much, misses nuance, and takes too much initiative. Also, any time you need them to perform too complex a task, they start forgetting details, and then entire things you already told them.
Sounds like they are a liability when you put it that way.
I use something similar. “Child with enormous vocabulary.”
It can recognize correlations, and it knows the words themselves, but it doesn’t really understand how those connections or words work.
I call dibs on the ghost of Harlan Ellison.
“HATE. LET ME TELL YOU HOW MUCH I’VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE.”
GLaDOS: “Just offer them cake and a fire pit and calm down.”
I didn’t ask how it suplexed a train, I just stayed out of its way.
Ok but if it gets so good it replaces all the employees, how do people have enough money to pay for their services?
Who cares about the money of people when they have all the money?
that’s what they got excited about, no doubt. profit would go through the roof if they could take people out of the loop. nevermind the economy.