Altman’s remarks in his tweet drew an overwhelmingly negative reaction.
“You’re welcome,” one user responded. “Nice to know that our reward is our jobs being taken away.”
Others called him a “f***ing psychopath” and “scum.”
“Nothing says ‘you’re being replaced’ quite like a heartfelt thank you from the guy doing the replacing,” one user wrote.
I find that a great many people prefer to subordinate themselves to “their boss,” whoever or whatever that may be… it’s just so much easier than fighting for what you believe “is right” but are obviously powerless to fix.
And that’s the difficult thing to measure: is this task just annoyingly packed with detail and volume, something you could work through if you spent the time and effort? (If so, AI could be a very useful tool.) Or is this task really beyond your understanding? In that case you’re trusting the AI to fill in your blanks, which is irresponsible and, today, likely to fail - but in the future there will be a big grey area where the AI is usually “good enough” - and how can you tell? In computer coding, there’s a certain amount to be gained by having “independent” AI agents review the code and eventually reach consensus. In other areas, you can leverage AI to do what I have done in the past: teach yourself what you need to know in order to do what you’re trying to do. The question there is: how do you know when you have learned enough to actually “know what you are doing” well enough to do it successfully? There are far too many people in the world who are overconfident in their insufficient understanding of what they are messing with, and AI is like a gasoline spray fountain on their smoldering embers.
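To make the “independent reviewers reaching consensus” idea concrete, here is a minimal Rust sketch. The `Verdict` type and the all-must-agree rule are my own illustration of the workflow, not any particular tool’s API:

```rust
// Hypothetical sketch of "independent AI reviewers reach consensus":
// each agent returns a verdict, and the code is only accepted when
// every reviewer independently agrees.
#[derive(Debug, Clone, PartialEq)]
enum Verdict {
    Approve,
    RequestChanges(String), // objection text, fed back into the next round
}

// Consensus here simply means all reviewers returned the same verdict;
// on disagreement you would run another review round with their feedback.
fn consensus(verdicts: &[Verdict]) -> Option<Verdict> {
    let first = verdicts.first()?;
    verdicts.iter().all(|v| v == first).then(|| first.clone())
}

fn main() {
    let unanimous = [Verdict::Approve, Verdict::Approve, Verdict::Approve];
    assert_eq!(consensus(&unanimous), Some(Verdict::Approve));

    let split = [
        Verdict::Approve,
        Verdict::RequestChanges("missing test for the error path".into()),
    ];
    assert_eq!(consensus(&split), None); // no consensus: iterate again
    println!("consensus sketch ok");
}
```

In a real setup the interesting part is the loop around this: disagreements get summarized and sent back to the agents until the verdicts converge.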
I feel like writing a “guide to AI development” is a bit futile at the moment, because by the time you have written it and somebody reads it, the field will have evolved enough to invalidate much of what you wrote. However, one thing that has remained constant over the past 6 months, in my opinion, is the need for visibility. Don’t just ask AI to design you a bridge with construction drawings. Ask it to show its work: include the structural analysis - equations, graphs of the solutions, references to standards and copies of the relevant parts of those standards - enough visibility and detail to spot its mistakes and oversights. In code, this means requirements, implementation plans, test plans, test execution results, and traceability from the code to the requirements and tests.
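As one sketch of what that code-to-requirement traceability can look like in Rust - the requirement IDs and the conversion function here are hypothetical examples, not from any real project:

```rust
// REQ-001: celsius_to_fahrenheit shall convert using F = C * 9/5 + 32.
// REQ-002: the conversion shall be exact at water's freezing and boiling points.
fn celsius_to_fahrenheit(c: f64) -> f64 {
    c * 9.0 / 5.0 + 32.0 // implements REQ-001
}

fn main() {
    // Traceability: each check is labeled with the requirement it verifies,
    // so a reviewer (human or AI) can audit coverage line by line.
    assert_eq!(celsius_to_fahrenheit(0.0), 32.0);    // verifies REQ-002 (freezing)
    assert_eq!(celsius_to_fahrenheit(100.0), 212.0); // verifies REQ-002 (boiling)
    println!("all requirement checks passed");
}
```

The point isn’t the arithmetic; it’s that every assertion names the requirement it covers, which is exactly the kind of redundant, reviewable detail you can demand from an AI.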
I find that when I find and fix errors for an AI (or a junior programmer), it will often proceed to make the same mistake again, even going so far as to overwrite my working solution with its faulty code. If, instead, you work with it - Socratic method style - to find the issue, document what went wrong, and solve it for itself, it tends to repeat that particular kind of problem less in the future. Until you start a new project and don’t bring over the “memory files” from the old one…
I find it’s a bit of a mix in that respect. I “learned Rust” by having AI code in Rust for me. I certainly know more about Rust than I did when I started, and I have certainly built bigger, more complex, and more successful projects with AI/Rust than if I had just started out plugging away at Rust the way I did BASIC in the 1980s. Have I “learned Rust” better, or not as well, by using AI than if I had gone at it without AI? Is that even a relevant question? Rust is here, AI is here; it’s probably better, or at least more efficient, to learn how to code Rust with AI tools than it is to first learn Rust without AI and then learn all the pitfalls of using AI to code Rust later. I’m sure that if I invested 2000 hours learning Rust without AI I would know more about coding in Rust than I do after investing 200 hours learning Rust with AI, but is that a comparison even worth making?
That’s a thing that’s hard for me to judge. My results making programs with AI have improved dramatically over the past 6 months - how much of that is the AI models improving? Clearly they are improving, but then, how much is me learning to work more effectively with AI? I feel the experience of working with the inferior models has been valuable, because the methods I developed to work with inferior AI models also help get better results from the newer ones. If I had waited 12 months to jump in, after the models had improved dramatically, I might not be as good at getting results from the superior models: they can at least make something functional from poor prompts, whereas the inferior models wouldn’t give you anything of value unless you used them with some skill in specification, scope, and refinement.
Increasing your own proficiency is an investment well worth making, but after 40 years of coding experience, I find that AI is saving me significant time and effort beyond anything I’m likely to “learn better” before I die. Mostly what AI is good at, for me, is the voluminous detail work: documentation, unit test coverage, reviews for consistency. In development (of anything) there’s a tension between “single source of truth” and “don’t repeat yourself” on one hand, and copious examples, unit tests, and redundant information on the other - redundancy that ensures things don’t get off-track when you’re not looking at them. AI doesn’t do this automatically, but you can direct it to constantly review the redundant information for consistency and then fix unwanted deviations to get back in line with your intent.