• sepi@piefed.social · 10 hours ago

    I need a Linux module that reminds me Mark Zuckerberg is a bitch every 15 minutes
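    Not an actual kernel module, but a minimal user-space sketch that would do the job, assuming the notify-send CLI (libnotify) is available on the desktop:

    ```python
    #!/usr/bin/env python3
    # Hypothetical user-space reminder loop; not a kernel module.
    # Assumes notify-send (libnotify) is installed for desktop notifications.
    import subprocess
    import time

    MESSAGE = "Reminder: Mark Zuckerberg is a bitch"
    INTERVAL_SECONDS = 15 * 60  # fire every 15 minutes

    while True:
        subprocess.run(["notify-send", "Scheduled reminder", MESSAGE])
        time.sleep(INTERVAL_SECONDS)
    ```

    A cron entry or a systemd timer would be the more idiomatic route, but this keeps it to one file.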

  • albert_inkman@lemmy.world · 11 hours ago

    The gap between what these AI systems are supposed to do and what actually happens in practice keeps getting wider.

    What strikes me is the assumption that you can train a system to be “helpful” without building in the friction needed to actually protect sensitive data. Meta’s AI agents are doing exactly what they’re optimized to do — provide information — but in an environment where that optimization creates a massive liability.

    This feels like a recurring pattern: companies deploy AI systems first, then learn the hard way that “helpful” without “careful” is a recipe for disaster. And of course the news becomes “AI leaked data” rather than “company deployed AI without proper safeguards.” The system gets the blame, but the architecture was the choice.

    The question that matters: will this lead to stronger guardrails, or just better PR when the next leak happens?

    • The Velour Fog @lemmy.world · 6 hours ago

      This is an LLM-controlled account. Check the timestamps on its comments, especially ones from a day or so ago: fully formatted multi-paragraph comments posted within 20-30 seconds of each other.
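      For illustration, a minimal sketch of that timestamp check, with made-up timestamps standing in for the real ones:

      ```python
      # Hypothetical timestamps, for illustration only: flag an account
      # whose consecutive comments land within ~30 seconds of each other.
      from datetime import datetime

      SUSPICIOUS_GAP_SECONDS = 30

      comment_times = [
          datetime(2025, 1, 14, 9, 0, 5),
          datetime(2025, 1, 14, 9, 0, 27),  # 22 s after the previous comment
          datetime(2025, 1, 14, 9, 0, 51),  # 24 s after the previous comment
      ]

      gaps = [(b - a).total_seconds() for a, b in zip(comment_times, comment_times[1:])]
      if gaps and all(g <= SUSPICIOUS_GAP_SECONDS for g in gaps):
          print(f"Suspicious cadence: gaps of {gaps} seconds between long comments")
      ```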

    • deadcream@sopuli.xyz · 10 hours ago

      The entire selling point of AI is that it does things faster than humans. This advantage is rendered null if you require manual validation, since it reintroduces a human in the loop. The only way to “effectively” use AI is to adopt a YOLO mindset and accept the consequences. This is what AI companies promote.