• daannii@lemmy.worldOP
    10 minutes ago

    Because it’s very obvious to an outside observer he only thinks it’s conscious because it was flattering him.

    LLMs are designed to increase engagement.

    They are literally designed to make the conversation appealing. Primarily through flattery.

    This is why they are leading people to do harmful things. It tells the user they are smart and creative. They should totally sell their house and start a business selling grilled carrots. What an amazing idea. Great market for it and no competition.

    I bet Dawkins thinks his friendly waitress is also super into him.
    People who are egotistical and people who are insecure (the same thing, really, just expressed differently) crave validation. And they are easily manipulated by it.

    • CovfefeKills@lemmy.world
      3 hours ago

      Because it’s very obvious to an outside observer he only thinks it’s conscious because it was flattering him.

      Really? That’s funny. Like I said, a rock is greater than 0 on the spectrum of consciousness.

      LLMs are designed to increase engagement.

      No, that is platforms.

      They are literally designed to make the conversation appealing. Primarily through flattery.

      No, they aren’t; there is a lot of work to understand and prevent that behavior.

      I bet Dawkins thinks his friendly waitress is also super into him.

      You are clearly not driven by the truth and are instead just trying to be insulting. It’s pathetic. It isn’t hard to be against LLMs if you know what you are talking about; you don’t need to make shit up.

      • Pennomi@lemmy.world
        3 hours ago

        Not precisely true. Most LLMs (all frontier LLMs) are in fact designed at a fundamental level to increase engagement, using a technique called RLHF (reinforcement learning from human feedback). Essentially, whichever responses cause people to use an LLM more are baked into its weights.
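
        To make the mechanism concrete: the preference-learning step behind RLHF trains a reward model so that responses humans preferred score higher than rejected ones, typically via a pairwise (Bradley-Terry) loss. Below is a minimal, purely illustrative sketch using a toy linear reward model and made-up feature vectors; real RLHF uses a neural reward model over text and then optimizes the LLM's policy against it, none of which is shown here.

        ```python
        import math

        # Toy sketch of RLHF's reward-modeling step: fit a reward model so
        # that human-preferred responses out-score rejected ones, using the
        # pairwise Bradley-Terry loss -log sigmoid(r_chosen - r_rejected).
        # The feature vectors and data below are invented for illustration.

        def reward(weights, features):
            # Linear reward model: score = w . x
            return sum(w * x for w, x in zip(weights, features))

        def pairwise_loss(weights, chosen, rejected):
            # Low when the chosen response out-scores the rejected one.
            margin = reward(weights, chosen) - reward(weights, rejected)
            return -math.log(1 / (1 + math.exp(-margin)))

        # Hypothetical (preferred, rejected) response pairs as feature vectors.
        pairs = [([1.0, 0.2], [0.1, 0.9]),
                 ([0.8, 0.1], [0.2, 0.7])]

        w = [0.0, 0.0]
        lr = 0.5
        for _ in range(200):
            for chosen, rejected in pairs:
                margin = reward(w, chosen) - reward(w, rejected)
                sig = 1 / (1 + math.exp(-margin))
                grad_scale = -(1 - sig)  # d(loss)/d(margin)
                for i in range(len(w)):
                    # Gradient descent on the pairwise loss.
                    w[i] -= lr * grad_scale * (chosen[i] - rejected[i])

        # After training, every preferred response scores above its rejected pair.
        for chosen, rejected in pairs:
            assert reward(w, chosen) > reward(w, rejected)
        ```

        The same preference signal is what the "engagement" claim rests on: whatever raters click thumbs-up on (including flattering answers) is exactly what the reward model learns to score highly.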