It’s about as productive as trying to turn a lion vegetarian.
Yeah, Russia should’ve just let the ethnic cleansing in Donbas continue. At least you fascists have consistent values from Gaza to Donbas.


Frankly, I’ve never really understood the logic of bailouts. If a company is not solvent but is deemed strategically important, then the government should simply be taking a stake in it. That’s what would happen on the private markets with another company buying it out. The whole notion that the government should just throw money at failing companies with no strings attached is beyond absurd.


ah makes sense


Russia actually operates 8 nuclear powered ice breakers right now, and they’re making more. https://www.thebarentsobserver.com/news/here-comes-yakutia-russias-newest-nuclear-icebreaker/422559


kind of yeah


I mean if you have a verifiable set of steps that build from the answer back to first principles, that does seem to enable trustworthiness. Specifically because it makes it possible for a human to follow the chain and verify it as well. This is basically what underpins the scientific method and how we compensate for the biases and hallucinations that humans have. You have a reproducible set of steps that can be explained and followed. And what they’re building is very useful because it lets you apply this method to many problems where it would’ve been simply too much effort to do manually.
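The "verifiable chain of steps" idea can be sketched in a few lines of Python. This is just a toy illustration of the principle, not anybody's actual system: each link in the chain is an operation together with the value it claims to produce, so a human or a program can recheck every step independently rather than trusting the final answer.

```python
def verify_chain(start, steps):
    """Audit a derivation link by link.

    `steps` is a list of (claimed_value, operation) pairs. Each operation
    is applied to the running value, and the result must match the claimed
    intermediate value. Returns (True, final_value) if every link checks
    out, or (False, first_bad_claim) at the first mismatch.
    """
    value = start
    for claimed, op in steps:
        value = op(value)
        if value != claimed:
            return False, claimed
    return True, value


# Example derivation: start at 3, claim 3*2 = 6, then 6+2 = 8.
ok, result = verify_chain(3, [(6, lambda x: x * 2), (8, lambda x: x + 2)])
```

The point isn't the arithmetic, it's that every intermediate claim is independently checkable, which is exactly what makes the overall result trustworthy even if you don't trust whoever produced it.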


It’s like watching a grand master play chess with a toddler.


Cory Doctorow had a good take on this incidentally https://pluralistic.net/2025/10/16/post-ai-ai/


Which aspect of their approach do you doubt?


@[email protected] kind of related to your recent post about hallucinations https://lemmy.ml/post/38324318
it’s just such a great meme, you don’t even have to edit it, it’s perfect the way it is
Think of it this way, the investors are basically like people going to a casino. They start with a bunch of money, and they start losing that money over time. That’s what’s happening here. Right now, they still haven’t lost enough money to quit playing, they still think they’ll make their investment back. At some point they either run out of money entirely, or they sober up and decide to cut their losses. That’s what’s going to change between now and when the bubble starts to pop. We simply haven’t hit the inflection point when the investors start to panic.
It does actually
The economic nightmare scenario is that the unprecedented spending on AI doesn’t yield a profit anytime soon, if ever, and data centers sit at the center of those fears. Such a collapse has come for infrastructure booms past: Rapid construction of canals, railroads, and the fiber-optic cables laid during the dot-com bubble all created frenzies of hype, investment, and financial speculation that crashed markets. Of course, all of these build-outs did transform the world; generative AI, bubble or not, may do the same.
The scale of the spending is absolutely mind-blowing. We’re talking about $400 billion in AI infrastructure spending this year alone, which is like funding a new Apollo program every 10 months. But the revenue is basically pocket change compared to the spending.
As the article notes, the reality check is already happening.
Much is in flux. Chatbots and AI chips are getting more efficient almost by the day, while the business case for deploying generative-AI tools remains shaky. A recent report from McKinsey found that nearly 80 percent of companies using AI discovered that the technology had no significant impact on their bottom line. Meanwhile, nobody can say, beyond a few years, just how many more data centers Silicon Valley will need. There are researchers who believe there may already be enough electricity and computing power to meet generative AI’s requirements for years to come.
The whole house of cards is propped up by this idea that AI will at some point pay for itself, but the math just doesn’t add up. These companies need to generate something like $2 trillion in AI revenue by 2030 to even break even on all this capex, and right now, they’re nowhere close. OpenAI alone is burning through cash like it’s going out of style, raising billions every few months while losing money hand over fist.
I expect that once it’s finally acknowledged that the US is in a recession, that’s going to sober people up and make investors more cautious. The VCs who were happily writing checks based on vibes and potential will start demanding to see actual earnings, and the easy money environment that’s been fuelling this whole boom is going to vanish overnight.
When a few big institutional investors get spooked and start quietly exiting their positions, it could trigger a full blown market panic. At that point, we’ll see a classic death spiral. The companies that have been living on investor faith, with no real path to profitability, are going to run out of cash and hit the wall, leading to an extinction-level event in the AI ecosystem.
If tech stocks fall because of AI companies failing to deliver on their promises, the highly leveraged hedge funds that are invested in these companies could be forced into fire sales. This could create a vicious cycle, causing the financial damage to spread to pension funds, mutual funds, insurance companies, and everyday investors. As capital flees the market, non-tech stocks will also plummet: bad news for anyone who thought to play it safe and invest in, for instance, real estate. If the damage were to knock down private-equity firms (which are invested in these data centers) themselves—which manage trillions and trillions of dollars in assets and constitute what is basically a global shadow-banking system—that could produce another major crash.
When that all actually starts happening ultimately depends on how long big investors are willing to keep pouring billions into these companies without seeing any return. I can see at least another year before reality starts setting in, and people realize that they’re never getting their money back.

Nah, you just have the political theory of an edgy 12-year-old. You probably also think that your bedtime is authoritarian.
Again, this is a very US-centred perspective. I’d strongly urge you to watch this interview with the Alibaba cloud founder on how this tech is being approached in China https://www.youtube.com/watch?v=X0PaVrpFD14
You’re such an angry little ignoramus. The GPT-NeoX repo on GitHub is the actual codebase they used to train these models. They also open-sourced the training data, checkpoints, and all the tools.
However, even if you were right that the weights were worthless, which they’re obviously not, and there were no open projects, which there are, the solution would be to develop models from scratch in the open instead of screeching at people and pretending this tech is just going to go away because it offends you personally.
And nobody says LLMs are anything other than Markov chains at a fundamental level. However, just like Markov chains themselves, they have plenty of real world uses. Some very obvious ones include doing translations, generating subtitles, doing text to speech, and describing images for the visually impaired. There are plenty of other uses for these tools.
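For anyone who hasn't seen one, here's a minimal word-level Markov chain in Python. It's purely illustrative toy code, nothing like a production LLM, but it shows the same basic mechanic of predicting the next token from the current state:

```python
import random
from collections import defaultdict


def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain


def generate(chain, start, length=10, seed=0):
    """Walk the chain: sample each next word from the successors of the
    current one, stopping early if a word has no recorded successors."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

LLMs condition on far more context and use learned weights instead of a lookup table, but the "state in, distribution over next tokens out" framing is the same, and that simple framing already powers plenty of useful tools.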
I love how you presumed to know better than the entire world what technology to focus on. The megalomania is absolutely hilarious. Like all these researchers can’t understand that this tech is a dead end, it takes the brilliant mind of some lemmy troll to figure it out. I’m sure your mommy tells you you’re very special every day.
You seem to have a very US-centric perspective on this tech; the situation in China looks to be quite different. Meanwhile, whether you personally think the benefits are outweighed by whatever dangers you envision, the reality is that you can’t put the toothpaste back in the tube at this point. LLMs will continue to be developed. The only question is how that’s going to be done and who will control this tech. I’d much rather see it developed in the open.
usually there’s coupling, with the database being used as shared state
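A minimal sketch of that anti-pattern, with hypothetical services and an in-memory SQLite database for brevity: two nominally independent services both read and write the same table, so each silently depends on the other's internal representation of state.

```python
import sqlite3

# One shared database acting as the implicit integration point.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")


def service_a_place_order(order_id):
    """'Order service': writes its internal state straight into the shared table."""
    db.execute("INSERT INTO orders (id, status) VALUES (?, 'placed')", (order_id,))
    db.commit()


def service_b_ship_pending():
    """'Shipping service': reads and rewrites rows it doesn't own, so it is
    coupled to service A's schema and to the meaning of the 'placed' status."""
    cur = db.execute("SELECT id FROM orders WHERE status = 'placed' ORDER BY id")
    ids = [row[0] for row in cur.fetchall()]
    db.executemany("UPDATE orders SET status = 'shipped' WHERE id = ?",
                   [(i,) for i in ids])
    db.commit()
    return ids
```

If service A ever renames the column or changes what 'placed' means, service B breaks without any API ever changing, which is exactly the coupling being described.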