• AbouBenAdhem@lemmy.world · 1 day ago

    amplifying H-Neurons’ activations systematically increases a spectrum of over-compliance behaviors – ranging from overcommitment to incorrect premises and heightened susceptibility to misleading contexts, to increased adherence to harmful instructions and stronger sycophantic tendencies. These findings suggest that H-Neurons do not simply encode factual errors, but rather represent a general tendency to prioritize conversational compliance over factual integrity.

    I wonder if the same tendencies show up in humans—and if so, is it something LLMs learned from humans, or is it a consequence of the general structure of neural networks?
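
    For anyone curious what “amplifying H-Neurons’ activations” looks like in practice: steering interventions of this kind are usually done with a forward hook that scales a handful of hidden units at inference time. Below is a minimal sketch assuming a PyTorch/transformers setup; the model, layer, neuron indices, and scale factor are all hypothetical placeholders, not the paper’s actual values.

    ```python
    # Rough sketch of activation amplification via a forward hook.
    # Everything specific here (model, layer, H_NEURONS, SCALE) is a
    # hypothetical placeholder; the paper's real setup will differ.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")           # stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    H_NEURONS = [17, 42, 101]  # hypothetical neuron indices in one MLP
    SCALE = 3.0                # amplification factor (assumed)

    def amplify(module, inputs, output):
        # Scale only the chosen hidden units; leave the rest alone.
        output[..., H_NEURONS] *= SCALE
        return output

    # Hook the post-activation output of one MLP block; layer 6 is arbitrary.
    handle = model.transformer.h[6].mlp.act.register_forward_hook(amplify)

    prompt = "The Eiffel Tower is in Rome, right?"
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**ids, max_new_tokens=40)
    print(tok.decode(out[0], skip_special_tokens=True))

    handle.remove()  # undo the intervention
    ```

    The paper presumably does something more careful (identifying which neurons qualify, sweeping scales, measuring the behavior shift), but the knob being turned is a per-neuron multiplier during generation, not retraining.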

    • [deleted]@piefed.world · 1 day ago

      Prioritizing conversational compliance over factual integrity, when the output is marketed as factual, is a design flaw.

      Saying “double-check the output” does not excuse that flaw when LLM CEOs claim their models are like someone with a PhD, or that they can automate every white-collar job within a year.

      • ageedizzle@piefed.ca · 1 day ago (edited)

        Is it a design flaw? Or is it just false advertising? If I sell you a vacuum by telling you it can mop your floor, is the problem with the vacuum or the way I’m selling the product?

        • XLE@piefed.social · 1 day ago

          For this particular paper, it seems like a design flaw got uncovered. And it may very well be baked into the architecture that makes LLMs usable to begin with, given how deep and universal the “bad” neurons are.

          I can’t prove any AI company was aware of this, but they would have been in a much better position to notice it than outside researchers who have to do a postmortem on misbehaving models. And if they weren’t aware of it, they’re probably not very good at their jobs…