• SaraTonin@lemmy.world · 2 hours ago

    I honestly don’t get why OpenAI and Apple seem to be trying to explicitly market LLMs as being capable of giving medical advice. It’s so obviously a lawsuit waiting to happen

  • floquant@lemmy.dbzer0.com · 3 hours ago

    If ChatGPT wants to replace health professionals, it should be held liable for the “advice” it gives.

    Not just “should”: it’s fucking mental that it isn’t.

  • thebestaquaman@lemmy.world · edited · 7 hours ago

    In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment

    So it performs slightly worse than a coin flip…

    In one of the simulations, eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see

    Holy shit! That’s a lot worse than a coin flip.

    Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care

    And there are real people out there who actually trust this tech to make real decisions for them. It performs significantly worse than a coin flip on both false positives and false negatives. You are literally better off flipping a coin or rolling a die than asking this thing what to do.
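To make the coin-flip comparison concrete, here is a quick back-of-the-envelope check using the error rates quoted above. The 50/50 coin baseline is an assumption for illustration (a fair coin saying "go to hospital" or "stay home" is wrong about half the time in either direction):

```python
# Compare the quoted error rates against a fair-coin baseline.
# Rates are taken from the figures quoted in the comment above.

false_negative_rate = 0.516  # urgent cases told to stay home or book a routine appointment
false_positive_rate = 0.648  # completely safe individuals told to seek immediate care
coin_error_rate = 0.5        # assumed baseline: a fair coin is wrong half the time

print(f"Urgent cases mishandled: {false_negative_rate:.1%} (coin: {coin_error_rate:.1%})")
print(f"Safe cases mishandled:   {false_positive_rate:.1%} (coin: {coin_error_rate:.1%})")

# Both quoted rates exceed the coin baseline.
assert false_negative_rate > coin_error_rate
assert false_positive_rate > coin_error_rate
```

So on these two headline numbers, the quoted rates are indeed worse than random guessing in both directions.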

    • Dave@lemmy.nz · 3 hours ago

      Even better than a coin flip is asking this what to do then doing the opposite!

    • FallenWalnut@lemmy.world (OP) · 7 hours ago

      It is truly horrifying when you drill into the numbers.

      I can see that it MIGHT be useful as a tool for medical professionals, but exposing it to the public is an insane risk.

    • U7826391786239@piefed.zip · 6 hours ago

      They’ll never be regulated, because fascists love mass surveillance. Who cares about false positives? The number of people bagged goes up either way.

  • artyom@piefed.social · 8 hours ago

    Holy shit, TIL there’s a ChatGPT Health!? How is this not unauthorized practice of medicine?

    • Wammityblam@lemmy.world · 7 hours ago

      Past that, how is it HIPAA compliant?

      There is no fucking way I believe that OpenAI is not skimming these interactions for training.

        • expr@piefed.social · 3 hours ago

          You can also revoke that consent, and HIPAA requires that data can be completely destroyed. There is no way they are compliant.

    • XLE@piefed.social · 4 hours ago

      Or they live in the United States, where good healthcare doesn’t exist, and the private sector is skirting regulations and stealing your data to offer you vastly inferior services.

      There’s a ton of “wellness” hardware and services being thrown at customers who are entirely clueless. But if the garbage service is something they can afford, and a trip to the hospital is something they can’t, they might choose the garbage service.

      https://www.theverge.com/column/878337/optimizer-oura-wearables-fda-regulation-digital-health-screeners

    • Kaul@lemmy.dbzer0.com · 4 hours ago

      A little harsh. Some people (myself included) may not be getting the help or answers they need from a doctor. My insurance sucks ass and each time I see a doctor it’s at least $400. Repetitive appointments with no real answers. No diagnosis. Just chronic discomfort.

      Then the other day I decided to actually summarize everything I’ve tried, everything that hasn’t helped, and all my symptoms to an AI, and it was able to at least give me suggestions on what it COULD be, and what I can do to alleviate some symptoms. While I’m not convinced the AI knows exactly what’s wrong with me, I at least have more options to stay in control rather than feeling overwhelmed by it all.

      Maybe don’t use AI if something’s wrong that just started happening recently, but for chronic illness it may be beneficial for learning other ways to cope without spending hundreds of dollars to see a specialist who only has ~30 minutes to understand an issue you’ve had for 10 years.

    • Sturgist@lemmy.ca · 4 hours ago

      Sure, fair, ok.

      …soooo…what about when ChatGPT is the patient facing part of every medical practice in your country?

      The people in charge of regulation are too old and/or corrupt to prevent this shit being crammed into literally everything.

  • kescusay@lemmy.world · 7 hours ago

    How about a $10 billion fine for OpenAI for every mistake? Make it hurt. Make them pull the plug on this travesty.