These companies have BILLIONS in revenue and millions of customers, and you’re saying very few want to pay…
The money is there, they just need to optimize the LLMs to run more efficiently (this is continually progressing), and the hardware side can work on reducing hardware costs as well (including electricity usage / heat generation). If OpenAI can build a datacenter that re-uses all its heat, for example to heat a nearby hospital, that’s another step towards reaching profitability.
I’m not saying this is an easy problem to solve, but you’re making it sound like no one wants it and they can never do it.
Yep, I am. Just follow the money. Here’s an example:
https://www.theregister.com/2025/10/29/microsoft_earnings_q1_26_openai_loss/
… That’s all in your head, mate. I never said that nor did I imply it.
What I am implying is that the uptake is so small compared to the investment that it is unlikely to turn a profit.
😐
I’ve worked in the building industry for over 20 years. This is simply not feasible, from both a materials standpoint and a physics standpoint.
I know it’s an example, but this kind of rhetoric is exactly the kind of wishful thinking that I see in so many people who want LLMs to be a main staple of our everyday lives. Scratch the surface and it’s all just fantasy.
You > the trends show that very few want to pay for this service.
Me > These companies have BILLIONS in revenue and millions of customers, and you’re saying very few want to pay
Me > … but you’re making it sound no one wants it
You > … That’s all in your head, mate. I never said that nor did I imply it.
Pretty sure it’s not all in my head.
The heat example was just one small example of things these large data centers (not just AI ones) can do to help lower costs, and it is a real approach that is being considered. It’s not a solution to their power-hungry needs, but it is a small step forward in how we can do things better.
https://www.bbc.com/news/articles/cew4080092eo
Edit: Another that is in use: https://www.itbrew.com/stories/2024/07/17/inside-the-data-center-that-heats-up-a-hospital-in-vienna-austria
This system “allows us to cover between 50% and 70% of the hospital’s heating demand, and save up to 4,000 tons of CO2 per year,” he said, also noting that “there are virtually no heat losses” since “the connecting pipe is quite short.”
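As a back-of-envelope sanity check on the heat-reuse idea (all numbers below are made-up assumptions for illustration, not figures from the linked articles): essentially every watt of electricity a data center draws ends up as low-grade heat, so the question is just how much of it a recovery loop can capture versus what a neighbor needs.

```python
# Back-of-envelope sketch of data-center heat reuse.
# All inputs are illustrative assumptions, not real project figures.

dc_power_mw = 5.0            # assumed IT load of a small data center (MW electrical)
capture_efficiency = 0.7     # assumed fraction of waste heat the recovery loop captures
hospital_demand_mw = 2.0     # assumed average heating demand of a nearby hospital (MW thermal)

# Nearly all electrical draw is dissipated as heat, so recoverable
# thermal output is roughly electrical load times capture efficiency.
recoverable_mw = dc_power_mw * capture_efficiency
coverage = min(recoverable_mw / hospital_demand_mw, 1.0)

print(f"Recoverable heat: {recoverable_mw:.1f} MW thermal")
print(f"Share of hospital demand covered: {coverage:.0%}")
```

Under these assumed numbers even a modest facility throws off more heat than one hospital needs, which is consistent with the linked projects covering only part of a building’s demand with a much smaller loop.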
It’s not easy to solve because it’s not possible to solve. ML has been around since before computers; it’s not magically going to get efficient. The models are already optimized.
Revenue isn’t profit. These companies are the biggest cost sinks ever.
Heating a single building is a joke marketing tactic compared to the actual energy impact these LLM energy sinks have.
I’m an automation engineer, and LLMs suck at anything cutting edge. An LLM is basically a mainstream knowledge reproducer with no original outputs, meaning it can’t do anything that isn’t already done.
Why on earth do you think things can’t be optimized on the LLM level?
There are constant improvements being made there; they are not in any way, shape, or form fully optimized yet. Go follow the /r/LocalLlama sub, for example: there are constant breakthroughs happening, and a few months later you see an LLM utilizing them come out, and suddenly the models are smaller, or you can run a larger model in a smaller memory footprint, or you can get a larger context on the same hardware, etc.
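The memory-footprint point is just arithmetic: weight memory is roughly parameter count times bytes per parameter, which is why quantizing the same model to fewer bits lets it fit on smaller hardware. A minimal sketch, with an illustrative model size (the bit-widths are common quantization levels, not specific to any one release):

```python
# Rough sketch of why quantization shrinks an LLM's weight memory:
# weight memory ≈ parameter count × bits per parameter / 8.
# Ignores KV cache and activations; model size is illustrative.

params_billion = 70  # e.g. an assumed 70B-parameter model

def weights_gb(params_b: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB for a given precision."""
    return params_b * 1e9 * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: ~{weights_gb(params_billion, bits):.0f} GB of weights")
```

Halving the bits halves the footprint, which is exactly the “larger model on a smaller memory footprint” effect described above (at some cost in accuracy, which is where the ongoing research is).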
This is all so fucking early, to be so naive or ignorant to think that they’re as optimized as they can get is hilarious.
I’ll take a step back. These LLM models are interesting. They are being trained in interesting new ways. They are becoming more ‘accurate’, I guess. ‘Accuracy’ is very subjective and can be manipulated.
Machine learning is still the same though.
LLMs still will never expand beyond their inputs.
My point is it’s not early anymore. We are near or past the peak of LLM development. The extreme amount of resources being thrown at it is the sign that we are near the end.
That sub should not be used to justify anything, just like any subreddit at any point in time.
I think we’re just going to have to agree to disagree on this part.
I’ll agree though that IF what you’re saying is true, then they won’t succeed.
Fair enough. I’d be fine being wrong.
Improved efficiency would reduce the catastrophic energy demands LLMs will have in the future. Assuming your reality comes true, it would help reduce their environmental impact.
We’ll see. This isn’t the first “it’s the future” technology I’ve seen, and I’m barely 40.
I just wanted to add one other thing on the hardware side.
These H200s are power hogs, no doubt about it. But the next generation, H300 or whatever it is, will be more efficient as the process node (or whatever it’s called) gets smaller and the hardware is optimized to run things faster. I could still see NVIDIA coming out and charging more per FLOP, or whatever the comparison would be, even if it is more power-efficient.
But that could mean the electricity costs to run these models start to drop if the models truly have plateaued. We might not be following Moore’s law on this anymore (I don’t actually know), but we’re not completely stagnant either.
So IF we are plateaued on this one aspect, then costs should start coming down in future years.
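The perf-per-watt argument can be put in numbers. A minimal sketch, where the board power, throughput, and electricity price are all assumptions picked for illustration (not real H200 serving figures):

```python
# Sketch: how a perf-per-watt gain translates into electricity cost
# per generated token. All inputs are illustrative assumptions.

power_kw = 0.7               # assumed accelerator board power (kW)
tokens_per_sec = 1000.0      # assumed serving throughput on that board
price_per_kwh = 0.10         # assumed electricity price ($/kWh)

def cost_per_million_tokens(p_kw: float, tok_s: float, usd_kwh: float) -> float:
    """Electricity cost in $ to generate one million tokens."""
    kwh_per_token = p_kw / (tok_s * 3600.0)  # kW / (tokens per hour)
    return kwh_per_token * usd_kwh * 1e6

base = cost_per_million_tokens(power_kw, tokens_per_sec, price_per_kwh)
improved = cost_per_million_tokens(power_kw, tokens_per_sec * 1.5, price_per_kwh)

print(f"baseline:     ${base:.4f} per million tokens")
print(f"+50% perf/W:  ${improved:.4f} per million tokens")
```

The point of the sketch: if model quality is flat but each hardware generation delivers more tokens per watt, the electricity cost per token falls mechanically, which is the cost-decline scenario described above.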
Edit: but they are locking in a lot of overhead costs at today’s prices which could ruin them.