• artyom@piefed.social · 16 hours ago

    Sure. The point is it’s entirely possible to use a firearm safely. There is no safe use for LLMs because they “make decisions”, for lack of a better phrase, for themselves, without any user input.

    • etchinghillside@reddthat.com · 15 hours ago

      That is not at all how LLMs work. It’s the software written around the LLM that aids it in constructing and running commands and “making decisions”. That same software can also prompt the user to confirm an action, or sandbox it in some way.

        • suicidaleggroll@lemmy.world · 14 hours ago (edited)

          Only if the user has configured it to bypass those authorizations.

          With an agentic coding assistant, the LLM does not decide when it does and doesn’t prompt for authorization to proceed. The surrounding software is the one that makes that call, which is a normal program with hard guardrails in place. The only way to bypass the authorization prompts is to configure that software to bypass them. Many do allow that option, but of course you should only do so when operating in a sandbox.
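          To illustrate what “hard guardrails” in the surrounding software could look like, here’s a minimal hypothetical sketch in Python. The names (`SANDBOX`, `write_file`) are made up for illustration, not from any real tool; the point is that an ordinary, deterministic program validates every action before it happens, and the model has no say in that check:

```python
# Hypothetical sketch of a hard guardrail living in the harness, not the
# model: every file write the model requests is validated by ordinary code.
import tempfile
from pathlib import Path

# All writes are confined to a throwaway sandbox directory.
SANDBOX = Path(tempfile.mkdtemp(prefix="agent-sandbox-")).resolve()

def write_file(relative_path: str, data: str) -> bool:
    """Refuse any write that would escape the sandbox (e.g. via '../')."""
    target = (SANDBOX / relative_path).resolve()
    if SANDBOX not in target.parents:
        return False  # the model cannot talk its way past this check
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(data)
    return True
```

          Because this check runs outside the model, no prompt injection or “decision” by the LLM can disable it; only the user reconfiguring the harness can.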

          The person in this article was a moron, that’s all there is to it. They ran the LLM on their live system, with no sandbox, went out of their way to remove all guardrails, and had no backup. The fallout is 100% on them.

          • artyom@piefed.social · 14 hours ago

            As I said elsewhere, if you’re denying access to your agentic AI, what is the point of it? It needs access to complete agentic tasks.

            The person in this article was a moron, that’s all there is to it. They ran the LLM

            No disagreement there.

            • suicidaleggroll@lemmy.world · 14 hours ago (edited)

              if you’re denying access to your agentic AI, what is the point of it? It needs access to complete agentic tasks.

              Yes, which it can prompt you for. Three options:

              1. Deny everything
              2. Prompt for approval when it needs to run a command or write a file
              3. Allow everything

              Obviously option 1 is useless, but there’s nothing wrong with choosing option 2, or even option 3 if you run it in a sandbox where it can’t do any real-world damage.
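              A rough sketch of those three modes, as the surrounding software (not the LLM) might implement them. `execute`, `run_shell`, and the mode names here are hypothetical, for illustration only:

```python
# Hypothetical sketch of the three approval modes in an agentic harness.
import subprocess

def run_shell(command: str) -> str:
    # Only reached after the policy in execute() allows it.
    return subprocess.run(command, shell=True, capture_output=True, text=True).stdout

def execute(command: str, mode: str, ask=input) -> str:
    if mode == "deny":                    # option 1: reject everything
        return "rejected"
    if mode == "prompt":                  # option 2: show the exact command
        if ask(f"Run `{command}`? [y/N] ").strip().lower() != "y":
            return "rejected"
    return run_shell(command)             # option 3 falls straight through
```

              Note that in the “prompt” mode the exact command string is displayed to the user, who approves or rejects that specific command, not some vague “run something” request.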

              • thebestaquaman@lemmy.world · 10 hours ago

                You can make option 2 even more fine-grained: you can grant access to modify files only in a certain sub-tree, for example, or allow only specific commands with specific options.

                A restrictive yet quite safe approach is to permit only e.g. git add and git commit, and to allow changes only to files under version control. That effectively prevents any irreversible damage, without requiring you to approve every action manually.
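                A sketch of that kind of allowlist check, assuming the harness inspects each proposed command before auto-approving it (the names are made up; real tools typically express this in config rather than code):

```python
# Hypothetical allowlist for a restrictive setup: only `git add` and
# `git commit` are auto-approved; anything else needs manual approval.
import shlex

AUTO_APPROVED = {("git", "add"), ("git", "commit")}

def auto_approved(command: str) -> bool:
    parts = shlex.split(command)
    return len(parts) >= 2 and (parts[0], parts[1]) in AUTO_APPROVED

# auto_approved("git commit -m 'wip'")  -> True
# auto_approved("rm -rf ~")             -> False, falls back to a prompt
```

                Since anything git commits can be recovered from history, the auto-approved set can’t do irreversible damage to tracked files.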

              • artyom@piefed.social · 13 hours ago (edited)

                Prompt for approval when it needs to run a command or write a file

                And then when you give it access, it fucks shit up. I don’t know why this is hard to understand.

                • suicidaleggroll@lemmy.world · 12 hours ago

                  You clearly have absolutely zero experience here. When you’re prompted for access, it tells you the exact command that’s going to be run. You don’t just give blind approval to “run something”; you’re shown the exact command it’s going to run, and you can choose to approve or reject it.