• Mika@sopuli.xyz · 1 day ago

    you can’t be critical about the answer

    You actually can, and you should be. And the process isn't destructive, since you can always undo in tools like Cursor or discard the changes in git.

    Besides, you can steer a good coding LLM in the right direction. The better you understand what you are doing, the better the results.
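    For example, the "discard in git" part really is that simple — if the agent's edit turns out to be bad, plain git throws it away (a minimal sketch; `main.py` is just an illustrative file name):

```shell
# work from a committed state before letting the agent touch anything
git status            # confirm the working tree is clean first
# ...the coding agent edits main.py...
git diff main.py      # review what the LLM actually changed
git restore main.py   # discard the edit entirely if it's bad
# or, if the change holds up, stage and commit it as usual
```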

    • HarkMahlberg@kbin.earth · 20 hours ago

      You misunderstood. I wasn't saying you can't Ctrl+Z after using the output, but that the process of training an AI on a corpus yields a black box. That process can't be reverse-engineered to see how it came up with its answers.

      It can't tell you how much it relied on one source over another. It can't tell you what its priorities are when evaluating data… not without the risk of hallucinating when you ask it.

    • MoreZombies@aussie.zone · 1 day ago

      How would you be critical of the answer without also doing a traditional search to compare its answer? If you have to search and verify the answer anyway, didn’t we just add an unnecessary step to the process?

      • Mika@sopuli.xyz · 1 day ago (edited)

        You can have firsthand knowledge of the technology and just need the code generated. I mean, I would need to google different function names and conversion tricks all the time anyway, even if I'm really good at it. If the AI slops it out for me, it just speeds things up by a lot, and I can spot the bad moments.

        Again, the better you know what you are doing, the more it can help.

        • very_well_lost@lemmy.world · 18 hours ago

          That would be all well and good if corpos weren't pushing AI as a technology that everyone should use all the time to reshape their daily lives.

          The people most attracted to AI as a technology (and the ones AI companies are marketing to the hardest) are the ones who want to use it for things where they don't already have domain-specific expertise. Non-artists generating art, non-coders making apps on "vibes", etc. Have you ever heard of Travis Kalanick? He's one of the co-founders of Uber, and he recently made the news after going on a podcast to breathlessly rave about how he's been using LLMs to do "vibe physics". Kalanick, as you can guess, is not a physicist. In fact, he's not a scientist of any kind.

          The vast, vast majority of people using AI aren't using it to augment their existing skills, and they aren't using their own expertise to evaluate the output critically. That was never the point nor the promise of AI, and it's certainly not the direction in which the people behind this technology are trying to push it.

          • Mika@sopuli.xyz · 18 hours ago

            AI marketing is total BS, but that doesn't mean AI is not useful in its current state. People try to argue as if that were the case, but it simply isn't. Agentic AI + an LLM does speed up usual tasks by a whole fucking lot.

            Then one day these people will wonder why they don't have access to the essential tools they need to be effective (means of production), having completely forgotten that they were against those tools entirely on principle. This is as shortsighted as it gets.

            • very_well_lost@lemmy.world · 11 hours ago (edited)

              AI marketing is total BS, but it doesn't mean AI is not useful in its current state.

              But the AI only exists because of the marketing BS! The fact that AI is useful to qualified people in specialized fields doesn't matter when the technology is being mass-marketed to a completely different group of people for completely different use cases.

              LLMs are called "large" for a reason — their existence demands large datasets, large data centers, large resource consumption, and large capital expenditure to secure all of those things. The only entities with the resources to make that happen are large corporations (and rich nation-states, but they seem content to keep any LLM efforts of their own under wraps for now). You can only say "don't blame the technology, blame the technologist" when it's possible to separate the two, but in this case it's not. LLMs don't exist without the corpos, and the corpos are determined to push LLMs into places and use cases where they have no business being.

              • Mika@sopuli.xyz · 16 hours ago

                Open-weight/open-source LLMs do exist, though. And they're not only tiny models.