• manualoverride@lemmy.world · 3 days ago

    I’ve said this on Lemmy a few times before, but 25+ years ago my AI dissertation was on a mushroom identification algorithm. It concluded that even with all the computing power in the world it would not be possible to create an infallible system, and that creating one would therefore be wholly unethical when the cost of failure is death.

    25 years later, AI is still the same; we’ve just decided to give it all that computing power.

    • Ignotum@lemmy.world · 3 days ago (edited)

      By that logic it would be unethical for an expert to give advice, or to even teach others to identify mushrooms, since they too are fallible and it could lead to death?

      Or saying it was unethical to invent cars because they can (and most certainly do) cause deaths.

      Almost everything would be unethical, really. The world is chaotic, nothing is perfect, deaths happen; all we can do is work to reduce the risks.

      • floquant@lemmy.dbzer0.com · 3 days ago

        What makes an expert is the ability to say “this is unequivocally safe to eat, because I can positively identify it based on this and this feature”, as well as “it is not possible/I am not able to confidently identify this mushroom as safe”

        • Ignotum@lemmy.world · 3 days ago

          So an AI that can identify mushrooms, and can also tell the user when a mushroom is too similar to a dangerous species to be identified with high enough certainty to be safe, would be ethical?

          Then how can anyone claim that no such system can ever be created? That makes no sense
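
          What’s being proposed here is what the machine-learning literature calls classification with a reject option: the model answers only when it is confident, and abstains otherwise. A minimal sketch of such a decision rule (the species names, probabilities, and thresholds below are invented for illustration, not taken from any real identification system):

```python
def identify(probs, dangerous, min_conf=0.95, max_risk=0.05):
    """Report a species only when the classifier is confident AND the
    probability mass on dangerous look-alikes is negligible."""
    best = max(probs, key=probs.get)
    # Abstain when the top prediction is not confident enough.
    if probs[best] < min_conf:
        return "unsure: not enough certainty to be safe"
    # Abstain when the best guess is itself dangerous, or a dangerous
    # look-alike still holds non-negligible probability.
    risk = sum(p for species, p in probs.items()
               if species in dangerous and species != best)
    if best in dangerous or risk > max_risk:
        return "unsure: too similar to a dangerous mushroom"
    return best

# Confident, and the toxic look-alike is nearly ruled out: answers.
print(identify({"chanterelle": 0.98, "jack-o'-lantern": 0.02},
               dangerous={"jack-o'-lantern"}))
# Too close to call: abstains rather than guessing.
print(identify({"chanterelle": 0.60, "jack-o'-lantern": 0.40},
               dangerous={"jack-o'-lantern"}))
```

          The catch is that raw neural-network confidence scores are notoriously overconfident, so thresholds like these only mean something if the probabilities are calibrated first.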

            • Ignotum@lemmy.world · 3 days ago

              So experts cannot identify mushrooms at all just by looking at them?

              They might turn a mushroom around and look at it from different angles, but then just make an AI that takes in multiple images from different angles, and maybe have it ask for additional angles if it cannot see everything it needs to see.

              And if the experts use other senses besides vision, like smell and touch, just make an AI that says “it might be X or Y, only way to tell them apart is through the smell, so i can’t be sure”
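
              The “ask for more angles” idea amounts to a simple interactive protocol: collect the required views, and abstain with a concrete reason whenever one is missing. A toy sketch (the angle list and the messages are purely illustrative):

```python
def identify_with_views(get_view, required=("top", "side", "underside")):
    """Ask for one photo per required angle via get_view(angle);
    refuse with a specific reason when a view is unavailable."""
    views = {}
    for angle in required:
        photo = get_view(angle)
        if photo is None:
            return f"cannot be sure: need a photo of the {angle}"
        views[angle] = photo
    # A real system would run a classifier over the views here;
    # this stand-in only confirms that every view was gathered.
    return f"all {len(views)} views received, running identification"

# A cooperative user supplies every requested photo.
print(identify_with_views(lambda angle: f"photo-of-{angle}"))
# No underside photo available: the system says exactly what it needs.
print(identify_with_views(lambda angle: None if angle == "underside" else "photo"))
```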

              • petrol_sniff_king@lemmy.blahaj.zone · 1 day ago

                just make an AI that says “it might be X or Y, only way to tell them apart is through the smell, so i can’t be sure”

                I love when idea people tell me to just do this or that as if it’s easy.

                • Ignotum@lemmy.world · 24 hours ago

                  Of course you wouldn’t be able to do it, but there is nothing preventing such a system from being created

            • Ignotum@lemmy.world · 3 days ago

              Yeah, over-the-top AI hype is annoying, and there are many valid criticisms of how AI is being trained and used (mainly generative AI), but all this absolutist anti-AI nonsense beats everything.

      • manualoverride@lemmy.world · 3 days ago

        Now, I don’t profess to remember the entire paper, but one section was certainly “Human factors”: the difference with an expert is that a human can place emphasis on the dangers above all else, which an AI is often incapable of conveying. And the car in your analogy will still have a human driver.

        The whole point was that this was a very limited and narrow language model, combined with AI image recognition, working under the assumption that the thing the human was describing and picturing was a mushroom, and it was still fallible. Specifically, a mushroom identification program is a really bad idea and absolutely unethical to create; a system that answers any question you ask it, where you sort out the guardrails as you go… that’s dangerous.

        • Ignotum@lemmy.world · 3 days ago

          So the argument is that you tried to build an AI once and it couldn’t do the job, therefore it is impossible to ever create an AI that can?

          Let’s say we reach the point where we can scan and then simulate the entire brain of a mushroom expert; then you’d have an AI that would give the same responses a human expert would. Is it ethical now? (Ignoring the ethics of simulating a person like that.)

          Simple classification problems are relatively trivial: train an image classifier to take in a picture of a mushroom and have it predict the species, whether the mushroom is similar to a dangerous one, and, for good measure, whether the picture is good enough to give reliable results. Train it on feedback from experts and it should end up about as reliable as the experts it was based on.
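
          As a toy version of that pipeline, here is a nearest-centroid classifier “trained” on invented expert-labelled feature vectors, which also refuses inputs that sit too far from everything it knows, a stand-in for “the picture isn’t good enough”. Every number and name below is illustrative, not a real model:

```python
from math import dist  # Euclidean distance, Python 3.8+

# Invented 2-D "feature vectors" standing in for image embeddings,
# labelled by a hypothetical expert.
training = {
    "chanterelle": [(1.0, 0.1), (0.9, 0.2)],
    "jack-o'-lantern (toxic)": [(0.1, 1.0), (0.2, 0.9)],
}

# "Training": average the labelled examples into one centroid per species.
centroids = {
    species: tuple(sum(axis) / len(axis) for axis in zip(*examples))
    for species, examples in training.items()
}

def classify(features, max_distance=0.5):
    """Predict the nearest species, refusing when the input is far
    from every centroid (i.e. the photo is unlike the training data)."""
    species, d = min(((s, dist(features, c)) for s, c in centroids.items()),
                     key=lambda pair: pair[1])
    return species if d <= max_distance else "unreliable input"

print(classify((0.95, 0.15)))  # lands on the chanterelle centroid
print(classify((5.0, 5.0)))    # far from everything, so it refuses
```

          A production system would replace the centroids with a deep image model and raw distance with calibrated confidence, but the refuse-when-unfamiliar shape is the same.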

          • manualoverride@lemmy.world · 2 days ago

            Well, I did study for 5 years, coded the AI myself, and spent 4 months training it using screensaver-style distributed processing on ~800 computers. It’s not like I downloaded an AI from the Play Store and declared it to be rubbish. 😀

            Even with reinforcement learning from human feedback, this is still a neural network where not every pathway leads to the correct outcome.

            Regardless of all the complexities people are still far more accepting of human error than AI error in extreme situations.

            • Ignotum@lemmy.world · 2 days ago

              Oh, are you walking back the “it would be unethical” claim, and the claim that an AI model cannot give nuanced responses like a human can?

              Sounds like you are now saying that a model can be made that is far better than any human expert, but since it can never be perfect, and because people are far less forgiving when machines make mistakes… therefore what, exactly?

              If we could make something that would reduce the absolute number of yearly mushroom poisonings, I would view that as an ethically good thing. Not doing so would be like refusing to make a medicine because it can have side effects; if the benefits outweigh the risks, I view it as a good thing.

              • manualoverride@lemmy.world · 2 days ago

                Can you see the irony of us having a nuanced debate that is leading to misunderstanding, because we are using a medium where detail and emphasis are difficult to convey? 😀

                My assumption was that my mushroom identification program would become widely available, which would be unethical.

                In the hands of a trained mycologist using it purely as a check on their established results? Possibly useful, but easy to misuse.

                A mycologist using the program to perform the identification first, which they would then check, is also dangerous, as human factors would lead to confirmation bias.

                AI systems inevitably lead to overconfident conclusions from people without the time or knowledge to know the potential risks.

              • petrol_sniff_king@lemmy.blahaj.zone · 1 day ago

                If we could make something that would reduce the absolute amount of yearly mushroom poisonings,

                You are begging the question. This is not known.

    • WorldsDumbestMan@lemmy.today · 2 days ago

      Just simulate an actual brain on a computer, forget AI.

      We are a few years away from that.

      The real challenge is simulating a human brain at 10-million-times real-time speed.

  • Sparky@lemmy.dbzer0.com · 3 days ago (edited)

    I love how this meme went from being hand-drawn to poke fun at AI slop, to someone slopifying it, to whatever this is…

    Obligatory fuck AI slop

  • lIlIlIlIlIlIl@lemmy.world · 3 days ago

    Wish we could go back to web search when every answer was 100% correct and this would never ever happen

    Curse you AI for allowing lies on the internet!

    • Glide@lemmy.ca · 3 days ago

      This sarcasm is completely unwarranted.

      People recognized that random answers on the internet were inclined to come from questionable sources. AI answers lend a sense of authority to what is being said, and then back that authority up with speech patterns and confidence that we are trained to trust.

      Web searches were naturally met with a degree of scrutiny, but LLMs lean into habits and patterns that convince us they are right, often subtly.

      • FosterMolasses@leminal.space · 2 days ago

        AI answers provide a sense of authority to what is being said

        I think that’s the real issue: people have, with zero discernment, swallowed the marketing ploy that AI is infallible, when in actuality its output is more questionable than that of any other mechanism so far.

        If people would simply grasp this, then it wouldn’t be such a big deal. No one had to ban Photoshop for us to eventually catch on that not all photos shared online are authentic lol

      • lIlIlIlIlIlIl@lemmy.world · 3 days ago

        This sarcasm is completely and wholly warranted.

        Did people recognize random answers on the internet as lies when it was new? Of course not. We collectively grew that skill organically as we all came online.

        The next version is completely different - and exactly the same

        • [deleted]@piefed.world · 3 days ago

          With web searches we learned which sources were likely to be reliable and could dismiss the obviously shitty sites.

          How do you learn to identify which answers from the same LLM are likely to be wrong?