• anomnom@sh.itjust.works

    It’s not AI. It’s LLMs that don’t actually think in any meaningful way. They just repeat what they have ingested and what was most mathematically likely.

    That’s why I’m a pessimist about LLMs doing anything truly revolutionary. They’re another productivity tool to solve problems that shouldn’t exist in the first place, and middle-management loves it for the same fucking reason.

    • mojofrododojo@lemmy.world

      yup. a roided-up ELIZA isn’t going to synthesize anything new. they can do some tasks, but it’s most certainly not artificial intelligence. and chaining a bunch of ELIZAs together isn’t going to make them smarter (claw etc.), much less make them reliable and useful.