I came across this article in another Lemmy community that dislikes AI. I’m reposting instead of cross-posting so we can have a conversation about how “work” might be changing with advancements in technology.
The headline is clickbaity: Altman was referring to how farmers who lived decades ago might perceive that the work “you and I do today” (Altman included) doesn’t look like work.
The fact is that most of us work many levels of abstraction away from human survival. Very few of us are farming, building shelters, protecting our families from wildlife, or doing the back-breaking labor that humans were forced to do generations ago.
In my first job, IT support, the irony was not lost on me that all day long I pushed buttons to make computers beep in friendlier ways. There was no physical result to see, no produce to harvest, no pile of wood transitioned from a natural to a chopped state, nothing tangible to step back and enjoy at the end of the day.
Bankers, fashion designers, artists, video game testers, software developers, and countless other professionals experience something quite similar. Yet all of these jobs do, in some way, add value to the human experience.
As humanity’s core needs have been met by technology requiring fewer human inputs, our focus has shifted to creating value in less tangible, but perhaps no less meaningful, ways. This has created a more dynamic and rich life experience than any of those farming generations could have imagined. So while it doesn’t look like the work those farmers were accustomed to, humanity has been able to turn its attention to other types of work, for the benefit of many.
I postulate that AI - as we know it now - is merely another technological tool that will allow new layers of abstraction. At one time bookkeepers had to write in books, now software automatically encodes accounting transactions as they’re made. At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.
These days we have fewer bookkeepers - most companies don’t need armies of clerks anymore. But now we have more data analysts who work to understand the information and make important decisions. In the future we may need fewer software coders, and in turn, there will be many more software projects that seek to solve new problems in new ways.
How do I know this? I think history shows us that innovations in technology always bring new problems to be solved. There is an endless reservoir of challenges to be worked on that previous generations didn’t have time to think about. We are going to free minds from tasks that can be automated, and many of those minds will move on to the next level of abstraction.
At the end of the day, I suspect we humans are biologically wired with a deep desire to produce rewarding and meaningful work, yet much of what our abstracted work produces is hard to see and touch. Perhaps this is why I enjoy mowing my lawn so much, no matter how advanced robotic lawn mowers become.
The problem is that the capitalist investor class, by and large, determines what work will be done, what kinds of jobs there will be, and who will work those jobs. They are becoming increasingly out of touch with reality as their wealth and power grow, and they seem to be trying to mold the world into something along the lines of what Curtis Yarvin advocates, which most people would consider very dystopian.
This discussion also ignores the fact that, currently, some 95% of AI projects fail, and studies suggest that LLM use hurts the productivity of programmers. But yeah, there will almost surely be breakthroughs in the future that produce more useful AI tech; nobody knows the timeline for that, though.
Thou shalt not make a machine in the likeness of a human mind.
– The Orange Catholic Bible
Also, that pompous chucklefuck can go fuck himself. There are people who can barely feed themselves on less than a couple of dollars a day.
Then that software engineer who was replaced by AI becomes Sam’s personal chef to kill him.
So, how do we go about making him our collective lawnmower in chief? He is giving us an exit plan on a platter. To sweeten the deal, I’m happy to let him uptalk and vocal fry all he wants. He’s saying on record that he ‘enjoys’ mowing his lawn. Pretty sure we can have him mowing all of our lawns and ‘enjoy’ it too.
What do we need the mega rich for anyway? They aren’t creative and easily replaced with AI at this point.
What do we need the mega rich for anyway?
Supposedly the creation of and investment in industries, then managing those businesses, which also supposedly provide employment for the thousands who make things for them. Except they’ll find ways to cut costs and maximize profit, like hunting for cheaper labor while dreaming up the next megayacht to flex at Monte Carlo next summer.
Can’t AI replace Sam Altman?
Sam, I say this with all my heart…
Fuck you very kindly. I’m pretty sure what you do is not “a real job” and should be replaced by AI.
To be fair, a lot of jobs in capitalist societies are indeed pointless. Some of them actively do nothing but subtract value from society.
That said, people still need to make a living, and his piece-of-shit artificial insanity is only making that more difficult. How about we stop starving people to death and propose solutions to the problem instead?
There’s a book, Bullshit Jobs, that explores this phenomenon. Freakonomics also did an episode referring to the book, which I found interesting.
Bullshit Jobs: A Theory is a 2018 book by anthropologist David Graeber that postulates the existence of meaningless jobs and analyzes their societal harm. He contends that over half of societal work is pointless and becomes psychologically destructive when paired with a work ethic that associates work with self-worth.
They may seem pointless to those outside the organization, but as long as someone is willing to pay for them, someone considers them valuable.
No one is “starving to death,” but you’d have people just barely scraping by.
This is the tricky nature of “value”, isn’t it?
Something can be both valuable and detrimental to humanity.
Within many bureaucracies there’s plenty of practically valueless work going on.
Because some executive wants to brag about having over a hundred people under them. Because some process requires a sort of document that hasn’t been used in decades, but no one has the time to validate what does or does not matter anymore. Because of a lot of little nonsense reasons where the path of least resistance is to keep plugging away. Because if you are 99% sure something is a waste of time and you optimize it away, there’s a 1% chance you’ll catch hell for a mistake and almost no chance you’ll get real recognition for the efficiency boost if it pans out.
Why capitalist societies specifically?
Sam Altman is a huckster, not a technologist. As such, I don’t really care what he says about technology. His purpose has always been to transfer as much money as possible from investors into his own pocket before the bubble bursts. Anything else is incidental.
I am not entirely writing off LLMs, but very little of the discussion about them has been rational. They do some things fairly well and a lot of things quite poorly. It would be nice if we could just focus on the former.
The guy’s name is too perfect.
Altman. Alternative man.
Just not a good alternative.
After his extremely creepy interview with Tucker Carlson about that whistleblower who died, I know he is not right in the head.
Is this where they get rid of the telephone sanitizers and middle managers?
CEO isn’t an actual job either; it’s just the 21st century’s titre de noblesse.
We could create jobs by opening a guillotine factory
I have been working with computers, networks, and the internet since the 1980s. Over this span of 40-ish years, “how I work” has evolved dramatically through changes in how computers work, and more dramatically through changes in information availability. In 1988, if you wanted to program an RS-232 port to send and receive data, you read books. You physically traveled to libraries or bookstores - maybe you mail-ordered one, but that was even slower. Compared to today, the relative cost of gaining the knowledge to perform the task was enormous in time invested, money spent, and physical resources (paper, gasoline, vehicle operating costs).
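The task itself hasn’t changed much; what changed is how fast you can learn to do it. For contrast, here’s a minimal sketch of that kind of send-and-receive serial programming as it looks today in C# with System.IO.Ports (the port name, baud rate, and “HELLO” command are placeholders for whatever your device expects):

```csharp
using System;
using System.IO.Ports;

class SerialDemo
{
    static void Main()
    {
        // Placeholder settings: adjust the port name and line parameters for your hardware.
        using var port = new SerialPort("COM1", 9600, Parity.None, 8, StopBits.One);
        port.ReadTimeout = 1000; // ms to wait for a reply before throwing
        port.NewLine = "\r\n";

        port.Open();
        port.WriteLine("HELLO");        // send a line out the wire
        string reply = port.ReadLine(); // block until a line comes back (or timeout)
        Console.WriteLine($"Device said: {reply}");
    }
}
```

In 1988 that handful of lines meant a trip to the library; today the reference material, and increasingly the code itself, comes to you.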
Twenty years ago, the internet had already reformulated that equation tremendously: near-instant access to worldwide information, organized well enough to be easier to navigate than a traditional library or bookstore, and you never needed to leave your chair to get it. There was still the investment of reading and understanding the material, and a not-insignificant cost of finding the relevant material through search, but the process was accelerated from days or more to hours or less, depending on the nature of the learning task.
A year ago, AI hallucination rates made these tools curious toys for me - too unreliable to be of net practical value. Today, in the field of computer programming, the hallucination rate has dropped to a very interesting point: almost the same as working with a not-so-great but still useful human colleague. The difference being: where a human colleague might take 40 hours to perform a given task (not that the colleague is slow; it’s just a 40-hour task for an average human worker), the AI can turn around the same programming task in 2 hours or less.
Humans make mistakes; they get off on their own tracks and waste time following dead ends. This is why we have meetings. Not that meetings are the answer to everything, but at least they keep us somewhat aware of what other members of the team are doing. That not-so-great programmer working on a 40-hour task is much more likely to create a valuable product if you check in with them every day or so, ask “how’s it going,” help them clarify points of confusion, and check their understanding and the direction of the work completed so far. That’s 4 checkpoints of 15 minutes to an hour in the middle of the 40-hour process.

My newest AI colleagues are ripping through those 40-hour tasks in 2 hours, which is impressive - but when I don’t put in the additional 2 hours of managing them through the process, they get off the rails, wrapped around the axles, unable to finish a perfectly reasonable task because their limited context windows don’t keep all the important points in focus throughout. A bigger difficulty is that I don’t get 23 hours of “offline wetware processing” between touch points to refine my own understanding of the problems and desired outcomes.
Humans have developed software development processes to help manage human shortcomings: our limited attention spans and memory. We still outperform AI at some of this context-window-span thing, but we have our own non-zero hallucination rates. Asking an AI chatbot to write a program one conversational prompt at a time only gets me so far. Providing an AI with a more mature software development process to follow gets much farther. The AI isn’t following these processes (which it helped translate from human concepts into its own language of workflows, skills, etc.) 100% perfectly - I catch it skipping steps in simple 5-step workflows - but, as with human procedures, there’s a closed-loop procedure-improvement procedure to help it perform better in the future.
Perhaps most importantly, the procedures constantly remind the AI to be “self-aware” of its context window limitations, to use RAG (retrieval-augmented generation) against best practices for context management, and to DRY (don’t repeat yourself: use references to single points of truth) its own procedures and the documentation it generates. Will I succeed in having AI rebuild a 6-month project I did five years back, doing it better this time - expanding its scope to what would have been a year-long development effort had I continued solo? Unclear. I’m two weeks in, and I feel like I’m about where I was after two weeks of development last time, but it also feels like I have a better foundation to complete the bigger scope this time using the AI tools, and there’s that tantalizing possibility that at any point now it might just take off and finish it by itself.
At one time software developers might spend days setting up the framework of a new project, and now an LLM can do the bulk of the work in minutes.
I’d not put an LLM in charge of developing a framework that is meant to be used in any sort of production environment. If we’re talking about them setting up the skeleton of a project, then templates have already been around for decades at this point. You also don’t really set up new projects all that often.
Most of what LLMs present as solutions has been around for decades; that’s how they learned it: from the source material they were trained on.
So far, AI hasn’t surprised me with anything clever or new. Mostly I’m just reminding it to follow directions, and often I’m pointing out better design patterns than what it implements on the first go-around.
Above all: you don’t trust what an LLM spits out any more than you’d trust a $50/hr “consultant” from the local high school computer club to give you business-critical software. You test it, and if you have the ability, you review it at the source level, line by line. But there ARE plenty of businesses out there running “at risk” with sketchier software developers than the local computer club, so OF COURSE they are going to trust AI-generated code further than they should.
Get the popcorn, there will be some entertaining stories about that over the coming year.
This is my take on it too. They seem to be good at creating “high fidelity” mock-ups and a basic framework for something, but try to get them to change even a background color and they just lie to you.
They’re basically a good tool for stubbing stuff out for a web application…which, it’s insane that we had to jump through all of these hoops and spend unknown billions in order to get that. At this point, I would assume that we have a rapid application development equivalent for web apps…but maybe not.
All of the “frameworks” involved in front-end application delivery certainly don’t seem to provide any benefit of speeding up development cycles. Front-end development seems worse today than when I used to be a full-time full stack engineer (and I had fucking IE6 to contend with at the time).
Fuck, I barely let AI make functions in my code because half the time the fuckin idiot can’t even guess the correct method name and parameters when it can pull up the goddamned help page like I can or even Google the basic syntax.
A year ago, AI answers were only successfully compiling for me about 60% of the time. Now they’re up over 80%, and I’m no longer in the loop when they screw up: they get it right on the first try 80% of the time, 96% of the time by the 2nd try, 99.2% by the 3rd try, and 99.84% by the 4th try - and the beauty is, they retry by themselves until they get something that actually compiles.
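Those cumulative numbers are just independent-trials arithmetic. A quick sketch, assuming each retry is an independent 80% shot (real attempts are only approximately independent):

```csharp
using System;

class RetryOdds
{
    static void Main()
    {
        const double perTry = 0.80; // assumed per-attempt compile success rate

        for (int tries = 1; tries <= 4; tries++)
        {
            // Probability that at least one of `tries` attempts compiles:
            // 1 minus the chance that all of them fail.
            double cumulative = 1.0 - Math.Pow(1.0 - perTry, tries);
            Console.WriteLine($"{tries} tries: {cumulative:P2}");
        }
        // Prints 80.00%, 96.00%, 99.20%, 99.84% - the figures above.
    }
}
```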
Now we can talk about successful implementation of larger feature sets…
I tried to demo an agentic AI in JetBrains to a coworker, just as a “hey, look at this neat thing that can make changes on its own.” As the example, I told it to convert a constructor in C# to a primary constructor.
So it “thought” and made the change, “thought” again and reverted the change, “thought” once again and made the change again, then it “thought” for a 4th time and reverted the changes again. I stopped it there and just shook my head.
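For anyone who hasn’t met the feature, the refactor it kept flip-flopping on is about as small as they come. A made-up before and after (the Greeter classes here are hypothetical; the syntax is C# 12):

```csharp
// Before: a conventional constructor with a backing field.
public class GreeterBefore
{
    private readonly string _name;

    public GreeterBefore(string name)
    {
        _name = name;
    }

    public string Greet() => $"Hello, {_name}!";
}

// After: the equivalent class using a C# 12 primary constructor.
// The constructor parameter is captured and usable throughout the class body.
public class GreeterAfter(string name)
{
    public string Greet() => $"Hello, {name}!";
}
```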
I had similar experiences a few months back, like 6-8 months ago. Since Anthropic’s Sonnet 4.0, things have changed significantly; 4.5 is even a bit better. Competing models have been improving similarly.
If we’re talking about them setting up the skeleton of a project, then templates have already been around for decades at this point.
That’s what LLMs are good at - taking old work (without consent) and regurgitating it while pretending it’s new and unique.
Yup. If it takes me more than a day to get started working on business logic, that’s on me. That should take max 4 hours.