• Emi@ani.social · ↑21 · 18 hours ago

    At first I thought vibe coding was just coding stuff for fun, using whatever comes to mind. Then I learned that it mostly means letting AI write the code for you and copy-pasting it.

    Now I wonder if there are real cases of vibe coding in the sense of my first assumption.

    • AA5B@lemmy.world · ↑12 · 15 hours ago

      Not copy-paste - let the AI do it for you directly. How else would we get these entertaining stories of idiots letting AI delete their production databases?

    • MangoCats@feddit.it · ↑16 · 16 hours ago

      The danger here is that many people think software is all about having code that seems to work when you try it. Those people have never gotten past “Hello, World” in X for Dummies, so they don’t realize that the practical realities of software distribution are far more nuanced and complicated than just writing the code. They get their hands on some working code and wheeeee!!! Ship it!!!

      A while back I compared LLMs to lightsabers - and pointed out how many amputees are found in the galaxy far, far away that has lightsabers.

        • MangoCats@feddit.it · ↑6 · 10 hours ago

          Produce correct results even when encountering “edge cases.”

          Not crash, even when encountering “edge cases.”

          Work correctly in all deployment environments.

          Work correctly after scope creep multiplies the feature set by 3x, 10x, 30x… yeah, successful projects experience that kind of expansion.

          Work correctly after the operating environments shift under your feet - can the code be updated to work with the next version of Android? iOS? Windows? Linux? After “security updates” take away the infrastructure you were depending on for correct functioning?

          Will it scale to 100 users? 10,000? 10,000,000?

          What happens when “threat actors” actively target the system?

          What happens when your methods / development processes aren’t compliant with new government regulations?

          Are you ready for IP lawsuits, whether deserved or not?

    • apfelwoiSchoppen@lemmy.world · ↑7 · 16 hours ago

      There are a lot of folks saying that Bluesky’s recent outages were due to the vast amount of vibe coding in their systems. The service was down for days.

      • MangoCats@feddit.it · ↑5 ↓12 · 16 hours ago

        As an “I wonder” exercise: say that Bluesky wasn’t vibe coded, but instead was done “the old-fashioned way,” with 20x as many people taking 10x as long to produce the same product. Over that 10x-as-long timeframe, would they have experienced less or more total downtime with traditionally coded software? Not theoretically perfect software - the actual stuff that “professionals” building social media sites write.

        Also, if they had staffed up with the same number of people as were traditionally required, could those people respond to and correct issues more slowly or more quickly than a traditional team?

        LLMs are powerful tools, and they have evolved fairly dramatically in the area of software development over the last 12 months. I suspect that as people learn to use them properly, safely, and appropriately, they are going to prove quite useful. In the meantime, there will be mistakes made…

        • Log in | Sign up@lemmy.world · ↑10 · 15 hours ago

          There was an article a while ago explaining that most AI companies are operating at a 95% loss: spend 100, receive 5. All that debt means the price of AI is about 20 times lower than it needs to be just to break even. The software teams that came to rely on AI to save costs will soon enough find themselves on the hook for this mountain of debt. Enshittification is real. Enshittification is coming. AI will not stay cheap, convenient, and free of advertising.
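          A quick back-of-envelope sketch of that claim (round numbers assumed, not actual company figures): a 95% loss means revenue covers only 5% of costs, which is where the “about 20 times” comes from.

```python
# Hypothetical round numbers: spend 100 to earn 5.
cost = 100.0     # what it costs to serve the queries
revenue = 5.0    # what customers currently pay

loss_fraction = (cost - revenue) / cost  # fraction of cost not covered
breakeven_multiplier = cost / revenue    # how much prices must rise

print(loss_fraction)          # 0.95, i.e. a 95% loss
print(breakeven_multiplier)   # 20.0, i.e. a ~20x price hike to break even
```

          So even before any profit, prices would have to rise twentyfold under these assumed numbers.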

          • apfelwoiSchoppen@lemmy.world · ↑4 ↓1 · edited · 12 hours ago

            People forget this. Yes, it has real uses in very narrow contexts, and yes, it may get slightly better, but right now they are like Juul getting the kids addicted to vapes, and it is drawing ungodly amounts of power to do so.

            • mermella@piefed.social · ↑2 ↓4 · 9 hours ago

              The meat you eat has more of an impact on the environment than the electricity from AI usage.

              • phutatorius@lemmy.zip · ↑1 · 2 hours ago

                Two things can both be wrong. And removing something that’s been in place for millennia and deeply embedded in the culture is likely to be more challenging than eliminating something that is still more planned than actually materialized.

            • MangoCats@feddit.it · ↑3 ↓6 · 10 hours ago

              Three things here:

              • Right now they’re basically discovering which uses are real and which are frivolous, non-value-add uses.

              • At least as used for software development, the past 12 months didn’t get slightly better; it got dramatically, night-to-day better.

              • Simultaneously, some pretty significant advances have been made at reducing the cost of delivering value. I think this is hitting hardest in the basic chatbot areas, getting the simple answers cheaper. In programming it’s less clear-cut: yes, it’s getting the simple answers cheaper there too, but it’s also succeeding at much more complex answers that just weren’t possible even a few months ago. Those answers cost more, but they’re also worth more. It will be interesting to see where this all shakes out.

              Yeah, they are running loss-leader stuff, and yeah, it’s going to go up in price once they figure out what it’s worth to people, because things aren’t priced at what they cost to make or deliver; things are priced at what people are willing to pay. The players with the deep pockets are jockeying for control of future markets, investing their existing wealth in future power. Let’s hope the winners are slightly less ghoulish than our oil barons.

              • Log in | Sign up@lemmy.world · ↑3 · edited · 8 hours ago

                Let’s hope the winners are slightly less ghoulish than our Oil barons.

                What a foolish hope!

                $200,000,000,000 of debt.
                Who will pay it?
                You talk like gravity doesn’t exist!

                You’re wrong if you think it won’t be heavily AI-reliant customers like software companies: companies that spend five years removing code-writing skills from their workforce and building up technical debt in their codebase, because no one has to understand the code during those five years. Meanwhile, a lot of subtle, hard-to-spot bugs get through code review, because humans simply don’t make those kinds of errors and no one ever had to spot one in their life before Claude came along.

                Did you think that enshittification wouldn’t affect the product? Yesterday’s computers and cars were easy to disassemble to replace parts. Now it’s much, much harder, and it’s very common to void your warranty if you do. Today’s AI-generated code is easy to tinker with, and you can do what you like with your end product. Why would it stay that way? Why wouldn’t they engineer it to make that harder? It’s not difficult to make code confusing by changing variable names. I could fuck up your codebase for humans by simply swapping names like productSKU and customerID, let alone by writing obfuscated code for any purpose whatsoever, with whatever variable names I like.
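                To make the variable-name point concrete, a tiny hypothetical sketch (the names and prices are made up, not from any real codebase): the same lookup written twice, once with honest names and once with swapped ones. Both behave identically; only the human reader is misled.

```python
PRICES = {"SKU-1": 9.99, "SKU-2": 19.99}

def price_honest(product_sku: str, customer_id: str) -> float:
    # Honest names: indexing PRICES with customer_id would look
    # obviously wrong, so a reviewer can spot a mix-up at a glance.
    return PRICES[product_sku]

def price_swapped(customerID: str, productSKU: str) -> float:
    # Swapped names: customerID actually holds the SKU, so this line
    # looks like a bug to a human reviewer, yet runs "correctly".
    return PRICES[customerID]

# Identical behavior, very different readability:
assert price_honest("SKU-1", "C-42") == price_swapped("SKU-1", "C-42") == 9.99
```

                That is the whole trick: no logic changes, just names, and the codebase becomes hostile to humans while staying fine for machines.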

                Some software companies are outsourcing their talent to AI behemoths with mountains of debt to recoup. Guess who’s going to pay the debt! And what’s the point of such a company in the long run? Why are you speedrunning paying to replace yourself?

                There will be an AI crash and “consolidation,” meaning a switch to monopolies or near-monopolies. Some companies are shedding institutional knowledge and programming skill like it was waste water. Once dependence sets in, value extraction will follow it like disease follows unvaccinated infection.

                There is already $200bn of debt, and it’s growing rapidly. The shareholders aren’t going to be paying it; the AI customers are.

                • MangoCats@feddit.it · ↑1 · 48 minutes ago

                  Why are you speedrunning paying to replace yourself?

                  I’m old enough to qualify for the next buyout offer, if there is one. Speedrunning “the new tools” is what I have done for 35 years, it has always served me well in the past. Maybe this one backfires? Not my personal problem if it does - disposal of the elderly from the workforce is a tale as old as time, that’s what retirement accounts are for.

                • MangoCats@feddit.it · ↑1 · 52 minutes ago

                  Today’s ai generated code is easy to tinker with and you can do what you like with your end product. Why would it stay that way? Why wouldn’t they engineer it to make that harder? It’s not difficult to make code confusing by changing variable names.

                  Code obfuscators have existed for decades, and they are rarely used in practice. Ten years back, when a vendor provided me a driver in obfuscated code, I explained to them: “If we don’t get real source code, we won’t be buying your products.” The non-obfuscated code was in my inbox the next morning.

                  A year ago, the AI engines couldn’t successfully code anything very complicated. Everything had to be assembled from “human-sized chunks” or it just wouldn’t work.

                  I notice in a code review I’m doing just this morning, the AI is now managing chunk sizes that are annoyingly large, and doing it successfully. At this point, I’m having to apply push-back pressure, not to keep the code working, but to keep it manageable. The same kind of pressure has been necessary for management of most human developers / development teams for decades.

                  Enshittification wins most successfully in “free tier” products; people who care enough to pay for something do, sometimes, get influence over the products provided. Your counterexample of automobiles is a good one, along with appliances, etc. The industrial makers of these products have enshittified our legislatures with rules, regulations, and laws which protect their industries and enable them to keep colluding to push overpriced, under-durable garbage at us with no real alternatives. We need to push back on government for that; that’s the level where the impediments to customer influence exist.

                • phutatorius@lemmy.zip · ↑1 · 2 hours ago

                  The shareholders aren’t going to be paying it. The ai customers are.

                  It’s much more likely that the banks and their insurers will be left holding the bag, and they’ll then be bailed out by the taxpayers.

                  There’s already negative ROI even at the current loss-leader prices.

                  • MangoCats@feddit.it · ↑1 · 59 minutes ago

                    I was going to say, this isn’t a gravity thing, this is a bank thing, and the “laws of banking” are indeed much more flexible than gravity.

        • 14th_cylon@lemmy.zip · ↑4 ↓2 · 14 hours ago

          Over that 10x as long timeframe, would they have experienced less or more total downtime with traditionally coded software?

          I have homework for you: if you ask a professional chef how to keep the cheese on a pizza, are they going to tell you to use glue? Once you figure out the answer to that, you should be able to answer your original question.

    • Imgonnatrythis@sh.itjust.works · ↑1 · 15 hours ago

      What’s it called when I get real high and code something that I can’t even figure out the next day if it was genius or insanity?

    • otp@sh.itjust.works · ↑1 · 16 hours ago

      I don’t think copy/paste is involved. With vibe coding, the AI agent typically has access to your repo/files directly!