• Deceptichum@quokk.au
    2 days ago

    I use them frequently; they’re extremely helpful, just don’t have them write everything.

    As for the comic, it’s pretty inaccurate. The only panel that rings true for me is the “too much water” one; sometimes the bots like to take … longer methods.

    • Karjalan@lemmy.world
      11 hours ago

      Everyone has different experiences, but it’s very hit and miss for me. Sometimes it produces very useful boilerplate, saving me quite a bit of time; sometimes it hallucinates insane stuff unrelated to what I asked, or writes functions that don’t return, or that call each other.

      Like defining a function “getTheThing” and then later calling “getSomethingElse”, which doesn’t exist. It’s a simple enough error to fix, but sometimes the output is so close to “correct” that tracking down the bug takes quite a while, because it looks right.
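
      A minimal sketch of that failure mode (the function names are the hypothetical ones from the comment, not from any real codebase):

      ```python
      # Sketch of the hallucination pattern described above: the model
      # defines one helper but then calls a different, nonexistent one.

      def getTheThing():
          """The helper the model actually defined."""
          return "thing"

      def process():
          # Buggy, model-written version would be:
          #     return getSomethingElse()
          # which raises NameError at runtime, since getSomethingElse
          # was never defined -- even though it "looks right" on review.
          # The fix is to call the helper that actually exists:
          return getTheThing()
      ```

      A linter or a quick static check (e.g. running `python -m pyflakes` over the generated file) catches undefined names like this far faster than eyeballing the diff.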

    • itkovian@lemmy.world
      2 days ago

      From what I understand of LLMs, your assessment seems likely to me. LLMs can actually be pretty accurate when asked to do relatively simple, short tasks.

      • Aneb@lemmy.world
        21 hours ago

        Yeah, I asked it to generate SDKs from API documentation and it failed to pull all the routes into methods, so it’s very temperamental. If there’s an easier SDK conversion tool that I’m missing, I’d still prefer hard-coded, deterministic logic over fuzzy LLMs.