• stinky@redlemmy.com
    8 hours ago

    “Don’t rely on it for anything important” is something uneducated people say, just so you’re aware

    AI is being used in the field of medicine safely and reliably. It’s actively saving people’s lives, reducing costs and improving outcomes. If you’re not aware of those things it’s because you’re too lazy or stupid to look them up; you’re literally just parroting others’ criticisms of chatbots. This is your failure, not AI’s.

    Aid in medical imaging diagnostics (e.g., detecting anomalies in radiology scans) https://pmc.ncbi.nlm.nih.gov/articles/PMC10487271/

    Administrative work and documentation (automating paperwork to allow more face-to-face time between patient and doctor) https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(23)01668-9/fulltext

    Population health and predictive analytics https://www.frontiersin.org/journals/medicine/articles/10.3389/fmed.2024.1522554/full

    This is an exciting opportunity for you to educate yourself on how AI is changing the landscape. Seems like you’ve already made your mind up about chatbots and character.ai so maybe there’s room in your schedule to learn about something valuable. Good luck! :)

    • alt_xa_23@lemmy.world
      7 hours ago

      I think the sentiment is better expressed as “Don’t rely on it for anything important, unless you know what you’re doing”

      If you’re already an expert in your field, then you know enough to be able to identify errors and notice problems.

      A layperson with no domain knowledge can’t make those distinctions, so they shouldn’t use it for important tasks.

      There’s a big difference between an oncologist using AI classification algorithms to detect breast cancer, and someone asking ChatGPT if a mushroom is safe.

      • stinky@redlemmy.com
        7 hours ago

        You don’t need to be an expert in your field to know that you shouldn’t ask a stranger to decide whether to eat something potentially deadly. Sorry, but that’s a fact of life. It’s not like you’re being forced to eat the thing.

        And for the last time, identifying whether a handheld item is poisonous is not one of the use cases for ChatGPT, and you do not need to be an expert to know that. Just read the documentation.

        Please stop being lazy and do your own research before you hurt yourself or someone else.