When people ask me what artificial intelligence is going to do to jobs, they’re usually hoping for a clean answer: catastrophe or overhype, mass unemployment or business as usual. What I found after months of reporting is that the truth is harder to pin down—and that our difficulty predicting it may be the most important part of the story.
In 1869, a group of Massachusetts reformers persuaded the state to try a simple idea: counting.
The Second Industrial Revolution was belching its way through New England, teaching mill and factory owners a lesson most M.B.A. students now learn in their first semester: that efficiency gains tend to come from somewhere, and that somewhere is usually somebody else. The new machines weren’t just spinning cotton or shaping steel. They were operating at speeds that the human body—an elegant piece of engineering designed over millions of years for entirely different purposes—simply wasn’t built to match. The owners knew this, just as they knew that there’s a limit to how much misery people are willing to tolerate before they start setting fire to things.
Still, the machines pressed on.
…



LodeMike, I’m curious about something. What’s the latest set of AI models and tools you’ve used personally? Have you used Opus 4.5 or 4.6, for instance?
I am not disagreeing with the points you’ve made, but it’s been my experience that the increase in capabilities over the last six months has been so rapid that it’s hard to realistically evaluate what the current frontier models are capable of unless you’ve used them meaningfully and with some frequency.
I’d welcome your perspective.
Not OP but I use these on the regular.
I’d still agree with the OP that there are hard limits to what these can do. I’ve gotten Claude stuck in loops before: removing unrelated code, adding it back, then removing it again, hoping that will fix something.
And OP is still correct. At the heart of all of this, it’s “given input X, guess the probability of response Y.” Even frontier models don’t think. They can output tokens to call tools to try to get more input X, but it’s still a best guess.
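The “given input X, guess the probability of response Y” framing can be made concrete with a toy sketch. The model, the vocabulary, and the probabilities below are all made up for illustration; real frontier models work over tens of thousands of tokens with learned distributions, but the autoregressive loop is the same shape: repeatedly pick a likely next token given everything so far.

```python
# Toy, hypothetical "model": a lookup table from context to a probability
# distribution over the next token. Real models compute this with a
# neural network, but the decoding loop is structurally similar.
TOY_MODEL = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.9, "ran": 0.1},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def generate(prompt, model, max_tokens=10):
    """Greedy autoregressive decoding: at each step, append the single
    most probable next token until <end> or an unknown context."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = model.get(tuple(tokens))
        if dist is None:
            break  # context the model has no guess for
        next_token = max(dist, key=dist.get)  # best guess, nothing more
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens

print(generate(["the"], TOY_MODEL))  # → ['the', 'cat', 'sat']
```

Tool calls fit the same loop: a tool invocation is just a span of output tokens that, when executed, feeds more input X back into the context before decoding continues.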
You can also give them too much context and get “context rot,” which makes their output absolutely horrible. I think Cursor had a problem with that, where too many Claude skills caused it to hallucinate and go off the rails.
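One naive mitigation for context rot is simply capping what you send: keep the fixed instructions intact and fill the remaining budget with only the most recent messages. The helper below is a hypothetical sketch (the function name, budget units, and trimming policy are my own assumptions, not any tool’s actual behavior); real systems trim by token count and often summarize rather than drop.

```python
def trim_context(system_msgs, history, budget=8):
    """Hypothetical sketch: keep system messages intact, then fill the
    remaining budget (counted in messages here, for simplicity) with
    the newest history, dropping older middle turns."""
    remaining = budget - len(system_msgs)
    if remaining <= 0:
        # Budget too small even for the system messages; truncate them.
        return system_msgs[:budget]
    return system_msgs + history[-remaining:]

msgs = trim_context(["sys"], [f"m{i}" for i in range(20)], budget=5)
print(msgs)  # → ['sys', 'm16', 'm17', 'm18', 'm19']
```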
All valid points.
However, the actual capabilities of the AIs might not matter with respect to job displacement, since the people making the hiring decisions are absorbing the marketing hype but not using the tools.
Even if folks are still hired, they might experience second order effects like increased job stress and burnout: https://fortune.com/2026/02/10/ai-future-of-work-white-collar-employees-technology-productivity-burnout-research-uc-berkeley/
I’m rather glad that I’m reaching the end of my career and not trying to break into the market as a junior software engineer.
Opus like the audio codec?
I use the GPT mini or similar models