• IratePirate@feddit.org · 12 hours ago

    Good work, citizen! The tech bros need you to believe that their dumb digital parrots will eventually, magically metamorphose into AGI. It’s the only thing that keeps that sweet VC money flowing and the AI bubble from popping.

    • Iconoclast@feddit.uk · 12 hours ago

      I’m just going to ignore your completely uncalled-for, smug, and dismissive tone and note that at no point have I suggested LLMs will lead to AGI.

      Thank you for your contribution to making this platform a worse place for everyone.

      • DudeImMacGyver@kbin.earth · 9 hours ago

        The irony of your response is strong. Also, you DID say that:

        I view AGI as inevitable because it’s the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn’t matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate independence - meaning general intelligence simply cannot be created without our biological wetware. I however see no reason to assume that, since human brains are made of matter just like computers are and I don’t think there’s anything supernatural about intelligence.

        It sounds like you’ve bought into techbro bullshit, but don’t realize it.

        • Iconoclast@feddit.uk · 9 hours ago

          Feel free to help me realize it then, because whatever irony or conflict you’re seeing there, I don’t see.

            • DudeImMacGyver@kbin.earth · 9 hours ago

            Yes, I can see that.

            The “AI” that we have now is not actually AI - that’s just a marketing term. Actual experts (read: not people like Sam Altman) point out that LLMs are severely flawed and will always return bad information. This problem is baked into the way these models function. Making what we’ve got into actual AI like you said isn’t going to happen, full stop.

            Don’t believe the horseshit you hear from people trying to sell something.

              • Iconoclast@feddit.uk · 8 hours ago

              The “AI” that we have now is not actually AI

              This is simply false. We’ve had AI since 1956.

              AI isn’t any one thing. It’s a broad term used in computer science to refer to any system designed to perform a cognitive task that would normally require human intelligence. The chess opponent on an old Atari console is an AI. It’s an intelligent system - but only narrowly so. That’s called “narrow” or “weak” AI.

              It can still have superhuman abilities, but only within the specific task it was built for - like playing chess or generating language.

              A large language model like ChatGPT is also narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. What people expect from it, though, isn’t narrow intelligence - it’s general intelligence, the ability to apply cognitive skills across a wide range of domains the way a human can. That’s something LLMs simply can’t do - at least not yet. Artificial General Intelligence is the end goal for many AI companies, but LLMs are not generally intelligent. However, they still fall under the umbrella of AI as a broad category of systems.

              Making what we’ve got into actual AI like you said isn’t going to happen, full stop.

              I’ve never claimed LLMs will lead to AGI, as I stated in the comment you quoted above.