• phutatorius@lemmy.zip
    link
    fedilink
    English
    arrow-up
    3
    ·
    8 hours ago

    Security audit by independent third parties, including access to the full source code, or GTFO.

    • fierysparrow89@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      3 hours ago

      Source code of what? Unfortunately, none of the above is anywhere near enough.

      We need locally available ai models that can run off-line. Also: the ai context and history must be kept separately from the model itself.

      If the ai model needs to communicate with the outside world, user needs 100% transparency and control what data the ai sends.

  • fruitycoder@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    7
    ·
    16 hours ago

    Microsoft’s push to make Copilot a kind of AI medical middleman—especially through the newly announced Copilot Health—raises a real tension: the company is loudly promoting a Secure by Design philosophy, but the sensitivity of health data means the bar is far higher than a general security promise. The short version is that Secure by Design is necessary, but nowhere near sufficient for something that sits between you, your clinicians, your medical records, and your wearables.

    • Microslop copilot
    • docus@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      2
      ·
      9 hours ago

      Security by design is only one aspect of what would be required. Even if it keeps my data secure, if it is going to recommend putting pva glue on cuts and butter on burns, it’s a no from me. Altough i would be curious what it has to say about vaccinations…

  • Komodo Rodeo@lemmy.world
    link
    fedilink
    English
    arrow-up
    22
    ·
    22 hours ago

    Microsoft’s AI wants to be your medical middleman, but is a “Secure by Design” promise really enough for Copilot? Would you trust Microsoft with the “puzzle” of your medical records?

    Short answer? No, and no.

  • 𝕸𝖔𝖘𝖘@infosec.pub
    link
    fedilink
    English
    arrow-up
    4
    ·
    17 hours ago

    I don’t trust Microsoft with my temp folder, what makes you think I’m going to trust it with my medical data? In case there’s any ambiguity left in that: no, I do not, and will never, trust Microsoft with this data, nor with any other personal, personal adjacent, identifiable, personal, or private data. Period. Hard stop.

  • grue@lemmy.world
    link
    fedilink
    English
    arrow-up
    14
    ·
    edit-2
    23 hours ago

    Literally adjacent in my feed:

    Generative AI agents will never be secure; it’s a flaw inherent to their nature.

    • Griffus@lemmy.zip
      link
      fedilink
      English
      arrow-up
      6
      ·
      20 hours ago

      You probably won’t see this, but I think you’ve gotten a response or two in your backlog.

      • grue@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        ·
        20 hours ago

        I regretted not cropping that as soon as I posted it because I knew someone would comment on it, but I couldn’t figure out how to crop after-the-fact on my phone and re-upload. The screenshot utility can do it, but the image viewer can’t.

        • Griffus@lemmy.zip
          link
          fedilink
          English
          arrow-up
          4
          ·
          20 hours ago

          Sorry for being the one you saw coming, but I am now very fascinated that you can follow up on new ones.

          • grue@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            ·
            19 hours ago

            It’s mainly that I just don’t bother marking things read, so that’s like two and a half years of replies.

  • Devolution@lemmy.world
    link
    fedilink
    English
    arrow-up
    16
    ·
    edit-2
    1 day ago

    Copilot is the worst of them all. I wouldn’t trust it to do a grocery list let alone anything medical.