• CeeBee_Eh@lemmy.world · ↑8 · 6 hours ago

    This guy has completely lost the plot. I don’t think it’s possible to be even more disconnected from reality.

    • VindictiveJudge@lemmy.world · ↑9 · 4 hours ago

      Fun fact: if true AGI were a thing, those AI programs would be people and not paying them for their work would be slavery.

  • entropiclyclaude@lemmy.wtf · ↑15 ↓1 · 14 hours ago

    These fuckers will claim whatever nonsense to keep themselves relevant enough to take on more debt before they collapse.

    • awake@lemmy.wtf · ↑1 · 7 hours ago

      Looking at their history, they've always been able to create markets for their GPUs, and AI has obviously been incredible for them. There will be a next hot thing after AI, and they'll try to own that, too. The alternatives to CUDA aren't there yet; ROCm is still lacking and fiddly. I see a lot of things happening, but NVIDIA collapsing, for whatever reason, is not one of them.

    • fierysparrow89@lemmy.world · ↑1 · 12 hours ago

      I agree, they're starting to sound desperate to keep their current momentum going. I think the bubble will burst soon. Things look solid until they're not.

  • MonkderVierte@lemmy.zip · ↑18 · edited · 20 hours ago

    The Turing thing again: how good is a system at mimicking a human? Lots of dog owners would swear their dog is smarter than a cat, but dogs are only better at reading their human.

    I’ll believe him if he lets the LLM do his job.

    • wewbull@feddit.uk · ↑13 ↓1 · 18 hours ago

      Cats may be able to read their human just as well or better, but as they don’t give a shit, there’s no feedback to base anything on.

  • PushButton@lemmy.world · ↑9 · 18 hours ago

    How can we take this idiot seriously? First slop DLSS, then telling us we’re wrong about it (my buddy telling me what I prefer), then claiming we’ve achieved AGI…

    How low can he fall?

  • AudaciousArmadillo@piefed.blahaj.zone · ↑21 ↓1 · 22 hours ago

    Oh yes, we have achieved AGI! But what we really need is Artificial General Super Intelligence! Just another trillion and it will be useful, bro!

  • Zozano@aussie.zone · ↑61 ↓2 · 1 day ago

    LLMs aren’t AI, let alone AGI.

    They’re fucking prediction engines with extra functions.
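    In the most literal sense that's true: a language model generates text by repeatedly predicting the next token from what came before. A toy bigram sketch (the corpus and greedy decoding are hypothetical stand-ins for a real model's learned weights and sampler):

    ```python
    from collections import Counter, defaultdict

    # Toy "language model": count bigrams in a tiny corpus, then
    # generate by always predicting the most frequent next token.
    corpus = "the cat sat on the mat the cat ate".split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def predict_next(token):
        # Greedy decoding: pick the highest-count continuation.
        return bigrams[token].most_common(1)[0][0]

    token, out = "the", ["the"]
    for _ in range(4):
        token = predict_next(token)
        out.append(token)

    print(" ".join(out))  # the cat sat on the
    ```

    Real LLMs swap the counts for billions of learned parameters and sample over a huge vocabulary, but the generation loop is the same shape: predict, append, repeat.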

    • Onihikage@piefed.social · ↑26 · 23 hours ago

      The best description I’ve ever heard of LLMs is “a blurry jpeg of the internet”. From the perspective of data compression and retrieval, they’re impressive… but they’re still a blurry jpeg. The image doesn’t change, you can only zoom in on different parts of it and apply extra filters, and there’s nothing you can truly do about the compression artifacts (what we call “hallucinations”). It can’t think, it can’t learn, it just is, and that’s all it will ever be.

    • unnamed1@feddit.org · ↑13 ↓11 · 20 hours ago

      So are we. Your definition of AI also seems off: it’s a field of computer science dealing with seemingly cognitive algorithms, basically everything that is not rule-based programming. I’ve worked in AI production for over ten years. It is absolutely valid, and necessary, to hate AI, but not to deny technical functionality. As the other reply to your comment says: of course training a neural network is a form of learning, whether by reinforcement or on training data. ML had many applications for years before LLMs; it makes no sense to deny that it exists.

    • MojoMcJojo@lemmy.world · ↑5 ↓5 · 20 hours ago

      It’s an industrial-sized prediction engine. And when you apply that to bioscience, it predicts things that save lives.