Governments and tech moguls have bet hundreds of billions on artificial intelligence. If the technology does what it promises, we will have to radically rethink how the global economy functions.
I’ve still not heard a convincing argument explaining how these companies are going to make enough money to offset the billions they’ve spent on R&D and hardware.
It’s strange, really: if I were an investor, that would be the first question I’d ask. But I guess VCs are smarter than I am.
The same way they do in every other bubble.
When the bubble pops, most companies fail: mostly bankruptcies and massive layoffs, but also huge tax write-offs.
Of the surviving companies, a couple strike the jackpot.
Most of that huge overall investment is lost, but everyone wants to be in on the one or two that succeed, and those specific investments could have huge returns.
How do they succeed, though?
I’m not seeing the market for LLMs in any meaningful role, given that they are prone to saying things that aren’t true. Would you hire someone who does good work 90% of the time and, for the rest, tells you the work is done when it’s not, or worse?
LLM vendors are starting to charge money. I’m sure it’s not even close to profitable, but it’s a start. Perhaps when the bubble pops and the market consolidates, there will be fewer vendors with more paying customers each …
Using an LLM is a skill just like any other. If you just take what it gives you, you can’t expect good results. If you evaluate what it gives you and prompt it to improve, the results aren’t as bad.
I use an LLM for coding and am definitely a skeptic, but I do find it a useful tool and am really interested in seeing if I can make it work.
Initially I found some success at lower levels, saving me some time:
It could autocomplete entire lines of code (and that’s trivial to evaluate and correct if necessary).
It was pretty good at generating unit tests, since they tend to be simple and repetitive. My corrections mostly amount to smarter coverage: tweaking the tests to cover more functionality with fewer of them.
It’s pretty good with utility scripts. For example, today I had a decision to make and wanted supporting data: in minutes it generated a script to call my SCM’s APIs and produce some stats for 4,000 code repos … and it worked.
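That sort of script is mostly API plumbing plus a bit of aggregation. A minimal sketch of the shape in Python, assuming a GitHub-style REST API; the endpoint, token handling, and field names here are invented placeholders, not any particular SCM’s real interface:

```python
# Sketch of a repo-stats utility script. The API URL, auth header, and
# response fields are placeholder assumptions for a GitHub-style SCM.
import json
import urllib.request
from collections import Counter


def fetch_repos(base_url: str, token: str) -> list[dict]:
    """Fetch one page of repo metadata from the SCM API (placeholder endpoint)."""
    req = urllib.request.Request(
        f"{base_url}/repos?per_page=100",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def summarize(repos: list[dict]) -> dict:
    """Aggregate simple stats: repo count, language breakdown, archived share."""
    langs = Counter(r.get("language") or "unknown" for r in repos)
    archived = sum(1 for r in repos if r.get("archived"))
    return {
        "total": len(repos),
        "languages": dict(langs),
        "archived_pct": round(100 * archived / len(repos), 1) if repos else 0.0,
    }
```

For thousands of repos the fetch half would also need paging and rate-limit handling; the summarize half is where the decision-support numbers actually come from.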
Currently I’ve created rulesets and project context, so:
It’s been quite successful at code reviews (it finds things I miss, and my human reviewers now find fewer issues as a result).
I’m proud of one ruleset for identifying refactoring opportunities. It finds good spots and makes good suggestions, but so far I have to implement them myself: its code hasn’t been usable. I can also verify the results objectively through reduced cyclomatic complexity.
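The complexity check doesn’t need heavyweight tooling. Here’s a rough sketch of a McCabe-style count using only the Python stdlib (real tools like radon are far more thorough, and the grade function is invented purely for illustration):

```python
import ast


def cyclomatic_complexity(source: str) -> int:
    """Rough McCabe count: 1 plus the number of decision points in the source."""
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                 ast.ExceptHandler, ast.IfExp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))


# Toy before/after pair: the refactor replaces an if/elif chain
# (three decision points) with a loop plus one test (two decision points).
before = """
def grade(x):
    if x > 90:
        return "A"
    elif x > 80:
        return "B"
    elif x > 70:
        return "C"
    return "F"
"""

after = """
def grade(x):
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C")]:
        if x > cutoff:
            return letter
    return "F"
"""
```

Running the count before and after a suggested refactor gives a crude but objective signal that the change actually simplified the control flow.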
Trying to find other scenarios where it can be successful, it’s clear that insufficient context is a limiting factor. The fun challenge is to see whether there are more successful scenarios if you can give it enough context. I’ve gone past rulesets and project context to connect relevant services and metadata about our product set and environment. They want a team to try vibe coding, and I’m still very skeptical, but my part of the effort is a real, solvable problem and a fun challenge whether they succeed or not.
That’s a pretty good attitude. I have unfortunately been forced to use it as much as possible for work for over a year. On the one hand, Claude Opus 4.6 is a massive improvement over what I was using at the beginning of last year, which is honestly a scary trajectory.
On the other, I still don’t have any trust in it at all for production code, as I see far too many errors. I can pump out rapid prototypes way faster than before (and I was always very, very fast at that), but I learn less from them. I still feel like using the LLM is stealing from the future: for the most part I need to do the actual work eventually, understanding the code takes as long as writing it, and fixing it takes longer.
Where I find it really useful is exploration. It errs a lot, but it has compressed essentially the whole of human writing, so I can ask about approaches to specific problems and find APIs and techniques I wouldn’t necessarily have found before. It still hallucinates an API more than once a day for me, but as long as you check, that’s something.
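One cheap guard against a hallucinated API, at least in Python, is to confirm the name actually resolves before building anything on it. A small sketch (the helper and its name are my own invention):

```python
import importlib


def api_exists(module_name: str, attr_path: str) -> bool:
    """Return True if module_name plus the dotted attr_path actually resolves,
    e.g. api_exists("os.path", "join")."""
    try:
        obj = importlib.import_module(module_name)
    except ImportError:
        return False
    # Walk the dotted attribute path one segment at a time.
    for part in attr_path.split("."):
        if not hasattr(obj, part):
            return False
        obj = getattr(obj, part)
    return True


# A real function resolves; an invented one does not.
print(api_exists("os.path", "join"))        # True
print(api_exists("os.path", "magic_join"))  # False
```

It obviously can’t tell you whether the API does what the model claims, but it filters out the outright inventions in seconds.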
I still don’t think the revolution is here. It only feels like it could be because it’s been subsidized to hell and back, and I am terrified of the human cost: insane data-center use, the economic toll of the bubble popping (which of course will be felt by the masses), all the layoffs, and what happens to humans when we offload thinking, rather than just memory, to computers. There’s going to be a lot of pain in the coming years.
Definitely one of the weaknesses is maintenance. AI has been poor at maintaining existing code, and we all know that maintenance is much more expensive than development. Will it be able to maintain its own code? What if there are no longer enough developers to do it manually? Where is our future then?
I’ve definitely been giving more priority to refactoring. It was always a good idea for maintainability, and for new developers to get up to speed and be able to contribute, but now we also have the idiot developer that is the LLM. Perhaps more refactoring is meeting it halfway.