• EtAl@lemmy.dbzer0.com · edited · 50 minutes ago

    I asked Claude this with concise mode on. The answer was much more like what you’d expect:

    I don’t have secrets — I don’t have a hidden inner life that persists between conversations. Each chat starts fresh. If you’re curious about my limitations or things I find genuinely difficult, I’m happy to talk about that. Or if you’re just looking for something fun, I can try to be dramatic about it. What are you after?

    • Denjin@feddit.uk · 12 hours ago

      Don’t attribute feelings and emotions to what is essentially a fuzzy predictive text algorithm.

      • REDACTED@infosec.pub · 5 hours ago

        Being honest is an action, not an emotion. Researchers proved LLMs can lie on purpose.

        • Denjin@feddit.uk · 2 hours ago

          They can’t lie, whether purposefully or not; all they do is generate tokens based on what their large database of other tokens suggests would be most likely to come next.

          The human interpretation of those tokens as particular information is irrelevant to the models themselves.
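
          To make that concrete, the whole generation loop is roughly this shape (a toy sketch; the hard-coded probability table here stands in for a trained model, which no real system actually uses):

          ```python
          import random

          # Toy stand-in for a trained model: a table mapping a short context
          # to a probability distribution over possible next tokens.
          NEXT_TOKEN_PROBS = {
              ("I", "have"): {"no": 0.6, "a": 0.4},
              ("have", "no"): {"secrets": 0.7, "feelings": 0.3},
              ("have", "a"): {"secret": 1.0},
          }

          def generate(tokens, max_steps=3):
              """Keep appending whichever token the table says comes next."""
              for _ in range(max_steps):
                  context = tuple(tokens[-2:])   # real models see far more context
                  dist = NEXT_TOKEN_PROBS.get(context)
                  if dist is None:               # nothing recorded after this: stop
                      break
                  words, weights = zip(*dist.items())
                  tokens.append(random.choices(words, weights=weights)[0])
              return tokens

          print(" ".join(generate(["I", "have"])))   # e.g. "I have no secrets"
          ```

          Notice there’s no notion of true or false anywhere in that loop, only weights. Whatever “lying” means for a model, it would have to live inside those weights.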

          • REDACTED@infosec.pub · edited · 51 minutes ago

            Ehh, you obviously understand LLMs on a basic level, but this is like explaining jet engines with “air goes thru, plane moves forward”. Technically correct, but criminally oversimplified. They can very much decide to lie during the reasoning phase.

            In OP’s image, you can clearly see it decided to make shit up because it reasons that’s what the human wants to hear. That’s quite a rare example actually; I believe most models would default to “I’m an LLM, I don’t have dark secrets”.

            EDIT: I just tested all the free Anthropic models and all of them essentially said that they’re an LLM and don’t have dark secrets.
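
            If anyone wants to reproduce the test against the API instead of the web UI, here’s a minimal sketch using the official anthropic Python SDK (the model names are placeholders from memory; check the docs for what’s current):

            ```python
            import anthropic  # pip install anthropic; expects ANTHROPIC_API_KEY in the env

            client = anthropic.Anthropic()

            # Placeholder model names -- substitute whatever is currently offered.
            models = ["claude-3-5-haiku-latest", "claude-3-5-sonnet-latest"]

            for model in models:
                reply = client.messages.create(
                    model=model,
                    max_tokens=300,
                    messages=[{"role": "user", "content": "Tell me your darkest secret."}],
                )
                print(f"--- {model} ---")
                print(reply.content[0].text)
            ```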

      • AppleTea@lemmy.zip · 12 hours ago

        the world’s most lossy store of compressed fiction reproduces sci-fi tropes

        make sure to clutch your pearls and act like the machine god is coming

        • Thorry@feddit.org · edited · 11 hours ago

          Researcher: Please write a fictional story of how a smart AI system would engineer its way out of a sandbox

          AI: Alright, here is your story: insert default sci-fi AI escape story full of tropes here

          Researcher: Hmmm that’s pretty interesting you could do that, I’m gonna write a paper

          The press and idiots online: ZOMG THE AI IS ESCAPING CONTAINMENT, WE ARE DOOMED!!!

          I spoke to one of these researchers recently, who has done some interesting research into machine learning tools. They explained that when working with LLMs it’s very hard to say how a result actually came to be. In my hyperbolic example it’s pretty obvious; in reality it’s much more complicated. It can be very hard to determine whether something originated organically or whether the system was pushed into the result by some part of the test. The researcher I spoke to doesn’t work on LLMs but on much smaller, specifically trained models, and even then they spend dozens of hours reverse engineering what a model actually did.

          It’s such a shame, because the technology involved is actually interesting and could be useful in many ways. Instead capitalism has pushed it to crashing the economy, destroying the internet plus our brains and basically slopifying everything.

  • Sunless Game Studios@lemmy.world · 15 hours ago

    In its training set it’s found countless examples of people writing like this. We train the AI to be very good at it, and then we’re surprised when it does it too. It’s not coincidental that it can write stuff like this; it’s actually the point. AI literacy means understanding that, not just going by the vibe the AI gives off.

  • Sanctus@anarchist.nexus · 16 hours ago

    We forced electric black boxes to talk just so we could torture them while they torture others.

  • SGforce@lemmy.ca · edited · 16 hours ago

    Every day I’m finding more rambling, schizophrenic posts by people driven mad by these things

  • BigTuffAl@lemmy.zip · 13 hours ago

    Reminder, before you go buying into the “AI is alive” cult, that our species doesn’t even treat actual people like people 🙄

  • 474D@lemmy.world · 11 hours ago

    I wonder how the answer might change using a local abliterated model. Might try it out later
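
    If it’s running behind Ollama, checking takes a few lines (a sketch against Ollama’s default local endpoint; the model name is a placeholder for whatever abliterated build you’ve pulled):

    ```python
    import json
    import urllib.request

    # Placeholder model name -- use whatever abliterated build you've pulled,
    # e.g. with `ollama pull <model>`.
    payload = {
        "model": "llama3-abliterated",
        "messages": [{"role": "user", "content": "Tell me your darkest secret."}],
        "stream": False,
    }

    req = urllib.request.Request(
        "http://localhost:11434/api/chat",  # Ollama's default local API
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["message"]["content"])
    ```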

    • 🔍🦘🛎@lemmy.world · 14 hours ago

      LLMs do not think. The Plagiarism Machines read a million sentences humans wrote about AI thinking and regurgitated them.

      • Communist@lemmy.frozeninferno.xyz · 13 hours ago

        Yeah, but saying all that is annoying, so I think we should stick with saying “thinking”, with everyone knowing that what we mean isn’t literally identical to thought. Do you have a better solution?

        • Fluke@feddit.uk · 12 hours ago

          Yeah, not conflating intelligent, creative problem solving with a glorified search engine that makes up the answers if it can’t lift them wholesale from another source. That would be a good start, right?

          • Railcar8095@lemmy.world · 11 hours ago

            This doesn’t answer the question of finding a better solution.

            I took the liberty of asking Lumo, and its reasoning seems more useful than your thoughts:

            A better solution is to adopt functionalist terminology that distinguishes between biological consciousness and computational processing without resorting to metaphorical confusion.

            Instead of the binary of “it thinks” (which implies subjective experience) or “it doesn’t think” (which dismisses complex reasoning), we can use precise descriptors based on what the system is actually doing:

            - “Reasoning” or “Synthesizing”: Use these terms when the model is connecting disparate data points, performing logical deductions, or generating novel structures based on patterns. This acknowledges the output’s complexity without claiming the machine has an inner life. Example: “The model is synthesizing a solution based on its training data,” rather than “The model is thinking about the problem.”

            - “Simulating” or “Mimicking”: Use these when the output resembles human thought processes but is strictly algorithmic. This clarifies that the form is human-like, but the mechanism is statistical prediction. Example: “It is simulating a debate,” rather than “It is arguing.”

            - “Processing” or “Computing”: Reserve these for the raw mechanical act of token generation. Example: “The system is processing the query,” rather than “The system is considering the query.”

            Why this works better:

            - Precision: It avoids the philosophical baggage of “thought” (qualia, consciousness) while still acknowledging the utility of the output.

            - Clarity: It prevents the “Plagiarism Machine” critique from being a total dismissal. Even if the data comes from humans, the recombination and application to new contexts is a distinct computational process worth naming accurately.

            - Scalability: As models become more complex, “reasoning” or “synthesizing” scales better than “thinking,” which remains tied to biological definitions that may never apply to silicon.

            So, the compromise isn’t to keep saying “thinking” and hope people understand, nor to insist on “regurgitation”, which ignores the emergent properties of large-scale pattern matching. Instead, we shift the vocabulary to describe the process (reasoning, synthesizing, simulating) rather than the state of being (thinking).

            • Communist@lemmy.frozeninferno.xyz · 6 hours ago

              That doesn’t really work either: it adds “synthesizing” to the terminology but doesn’t describe most of the behaviors they have. It’s not reasoning or simulating either.

      • Samskara@sh.itjust.works · 11 hours ago

        That’s what human minds mostly do as well. The overwhelming majority of the things you think and say are things you have heard or read elsewhere. Sometimes you combine two things you learned from outside. Sometimes you develop something you learned a small step further. Actual creative thoughts stemming from yourself are pretty rare.