• Brainsploosh@lemmy.world
    11 hours ago

    It doesn’t reason, and it doesn’t actually know any information.

    What it excels at is giving plausible-sounding averages of texts, and if you think about how little the average person knows, you should be appalled.

    Also, where people can typically reason well enough to make an answer internally consistent, or at least relevant within a domain, LLMs offer a polished version of a disjointed amalgamation of the platitudes and other commonly repeated phrases in their training data.

    Basically, you can’t trust the information to be right, insightful, or even unpoisoned, and relying on it sabotages the strategies and systems you use to sift information from noise.

    ETA: All for the low, low cost of personal computing, power scarcity, and drought.

    • undeffeined@lemmy.ml
      8 hours ago

      The less you know about how LLMs work, the more impressed you are by them. The clever use of the term “AI” seems like the culprit to me, since it most likely evokes subconscious associations with the AI we have seen portrayed in entertainment.

      LLMs can be useful tools when applied in restricted contexts and in the hands of specialists. This attempt to make them permeate every aspect of our lives is, in my honest opinion, insane.