• itkovian@lemmy.world
    18 hours ago

    Where is the idea that LLMs will ever cure diseases coming from? What is the possible mechanism? LLMs generate text from probability distributions. There is no reason to trust their output because they have no built-in concept of true or false. If one cannot judge the quality of the output, how can one reliably use it as a tool for any purpose, let alone scientific research?
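
    To make the point concrete, here is a minimal toy sketch (invented tokens and probabilities, not a real model) of what "generating text from a probability distribution" means at a single next-token step. Nothing in this process checks whether the sampled token is true:

    ```python
    import random

    # Hypothetical next-token distribution for "The capital of France is ..."
    # -- made-up numbers purely for illustration.
    next_token_probs = {"Paris": 0.60, "Lyon": 0.25, "Mars": 0.15}

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())

    # Sampling is the whole decision procedure: the wrong answer "Mars"
    # is emitted ~15% of the time, and the model has no notion that it
    # is false -- only that it is less probable.
    choice = random.choices(tokens, weights=weights, k=1)[0]
    print(choice)
    ```

    Probability mass is the only signal; "likely" and "true" are different things, which is the commenter's core objection.
    
    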

    • Doomsider@lemmy.world
      11 hours ago

      There are other kinds of AI that can look over imaging such as CT scans and, in some situations, catch things a doctor can't.

      There are also ones that can simulate drug interactions with the body and can be used to design novel drugs for treatments.

      These are not LLMs though.