Vibe coding is collapsing the distance between idea and deployment. But the real risk is whether your company has the judgment system to govern what AI can now build.
There are a lot of folks claiming that Bluesky's recent outages were due to the vast amounts of vibe coding in their systems. The site was down for days.
As an "I wonder" exercise… say Bluesky hadn't been vibe coded, but had instead been built "the old-fashioned way," with 20x as many people taking 10x as long to produce the same product. Over that 10x-longer timeframe, would they have experienced less or more total downtime with traditionally coded software? Not theoretically perfect software, but the actual stuff that "professionals" building social media sites write?
Also, if they had staffed up with the same number of people as were traditionally required, could those people respond to and correct issues faster or slower than a traditional team?
LLMs are powerful tools, and they have evolved dramatically in the area of software development over the last 12 months. I suspect that as people learn to use them properly, safely, and appropriately, they will prove quite useful. In the meantime, there will be mistakes made…
There was an article a while back explaining that most AI companies are running at a 95% loss: spending 100 to take in 5. All that debt means the price of AI is about 20 times lower than it needs to be just to break even. The software teams that came to rely on AI to save costs will soon enough find themselves on the hook for this mountain of debt. Enshittification is real. Enshittification is coming. AI will not stay cheap, convenient, and free of advertising.
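The arithmetic behind that claim, sketched out (the spend-100-take-in-5 numbers are the article's illustration, not measured figures):

```python
# Illustrative numbers from the 95%-loss claim: spend 100, take in 5.
cost = 100.0     # assumed spend per unit of service delivered
revenue = 5.0    # assumed revenue for that same unit

loss_fraction = (cost - revenue) / cost   # fraction of spend not recovered
breakeven_multiplier = cost / revenue     # how much prices must rise to break even

print(f"loss: {loss_fraction:.0%}")                               # loss: 95%
print(f"break-even price multiple: {breakeven_multiplier:.0f}x")  # 20x
```

Which is where the "about 20 times lower than break-even" figure comes from: 100 / 5 = 20.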
People forget this. Yes, it has real uses in very narrow contexts, and yes, it may get slightly better, but right now the AI companies are pulling a Juul: getting the kids addicted to vapes, and drawing ungodly amounts of electricity to do so.
The meat you eat has more of an impact on the environment than the electricity AI usage draws.
Two things can both be wrong. And removing something that’s been in place for millennia and deeply embedded in the culture is likely to be more challenging than eliminating something that is still more planned than actually materialized.
Whataboutism doesn’t work here.
Plus the basis of generalized use is founded upon willful mass copytheft.
And which vehicle you go to work in dwarfs them both. Two things can be true at the same time.
Three things here:
Right now they're basically discovering which uses are real and which are frivolous, non-value-add uses.
At least as used for software development, the past 12 months didn't bring a slight improvement; it got dramatically, night-to-day better.
Simultaneously, some pretty significant advances have been made in reducing the cost of delivering value. I think this is hitting hardest in basic chatbot areas, getting the simple answers cheaper. In programming it's less clear-cut: yes, it's getting the simple answers cheaper there too, but it's also succeeding at much more complex answers that just weren't possible even a few months ago. Those answers cost more, but they're also worth more… it will be interesting to see where this all shakes out.
Yeah, they are running loss leaders, and yeah, prices will go up when they figure out what it's worth to people, because things aren't priced at what they cost to make or deliver; things are priced at what people are willing to pay. The players with the deep pockets are jockeying for control of future markets, investing their existing wealth in future power. Let's hope the winners are slightly less ghoulish than our oil barons.
What a foolish hope!
$200,000,000,000 of debt.
Who will pay it?
You talk like gravity doesn’t exist!
You're wrong if you think it won't be the heavily reliant AI customers, like software companies that spend five years removing code-writing skills from their workforce and building up technical debt in their codebase, because no one has to understand it during those five years. And there will be a lot of subtle, hard-to-spot bugs that got through code review, because humans simply don't make those kinds of errors and no one ever had to spot one in their life before Claude came along.
Did you think that enshittification wouldn't affect the product? Yesterday's computers and cars were easy to disassemble to replace parts. Now it's much, much harder, and doing so commonly voids your warranty. Today's AI-generated code is easy to tinker with, and you can do what you like with your end product. Why would it stay that way? Why wouldn't they engineer it to make that harder? It's not difficult to make code confusing just by changing variable names. I could fuck up your codebase for humans by simply swapping names like productSKU and customerID, let alone by writing obfuscated code for any purpose whatsoever, with whatever variable names I like.
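As a toy sketch of how cheap that kind of sabotage is (hypothetical names and values, not from any real codebase), the code below runs correctly, but the names tell a human reader the opposite of the truth:

```python
# Hypothetical sketch: each name holds the *other* kind of value.
product_sku = "CUST-42"    # actually a customer ID
customer_id = "SKU-9001"   # actually a product SKU

inventory = {"SKU-9001": 3}  # units in stock, keyed by the real SKU

# Reads as "look up stock by customer", but it really indexes by SKU.
stock = inventory[customer_id]
print(stock)  # 3
```

A reviewer who trusts the names will flag that lookup as a bug (or worse, "fix" it), while the runtime behavior is perfectly correct. That gap between what the code says and what it does is the whole point of the swap.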
Some software companies are outsourcing their talent to AI behemoths with mountains of debt to recoup. Guess who’s going to pay the debt! And what’s the point of such a company in the long run? Why are you speedrunning paying to replace yourself?
There will be an AI crash and a "consolidation," meaning a switch to monopolies or near-monopolies. Some companies are shedding institutional knowledge and programming skill like it was wastewater. Once dependence sets in, value extraction will follow it like disease follows an unvaccinated infection.
There is already $200bn in debt, and it's growing rapidly. The shareholders aren't going to be paying it. The AI customers are.
I'm old enough to qualify for the next buyout offer, if there is one. Speedrunning "the new tools" is what I have done for 35 years; it has always served me well in the past. Maybe this one backfires? Not my personal problem if it does - disposal of the elderly from the workforce is a tale as old as time; that's what retirement accounts are for.
Code obfuscators have existed for decades, but they are rarely used in practice. Ten years back, when a vendor provided me a driver as obfuscated code, I explained to them: "If we don't get real source code, we won't be buying your products." The non-obfuscated code was in my inbox the next morning.
A year ago, the AI engines couldn't successfully code anything too complicated. Work had to be assembled from "human-sized chunks" or it just wouldn't work.
I notice in a code review I’m doing just this morning, the AI is now managing chunk sizes that are annoyingly large, and doing it successfully. At this point, I’m having to apply push-back pressure, not to keep the code working, but to keep it manageable. The same kind of pressure has been necessary for management of most human developers / development teams for decades.
Enshittification wins most successfully in "free tier" products; people who care enough to pay for something do get influence over the products provided - sometimes. Your counterexample of automobiles is a good one, along with appliances, etc. The industrial makers of these products have enshittified our legislatures with rules, regulations, and laws which protect their industries and enable them to keep colluding to push overpriced, under-durable garbage at us with no real alternatives. We need to push back on government for that; that's the level where the impediments to customer influence exist.
It’s much more likely that the banks and their insurers will be left holding the bag, and they’ll then be bailed out by the taxpayers.
There’s already negative ROI at even the current loss-leader prices.
I was going to say, this isn’t a gravity thing, this is a bank thing, and the “laws of banking” are indeed much more flexible than gravity.
I have homework for you: if you ask a professional chef how to keep the cheese on pizza, are they going to tell you to use glue? Once you figure out the answer to that, you should be able to answer your original question.