• Ignotum@lemmy.world · 3 days ago

    So the argument is that you tried an AI once and it couldn’t do the task, therefore it is impossible to create an AI that is able to do it?

    Let’s say we reach the point where we can scan and then simulate the entire brain of a mushroom expert; then you’d have an AI that would give the same responses as a human expert would. Is it ethical now? (Ignoring the ethics of simulating a person like that.)

    Simple classification problems are relatively trivial: train an image classifier to take in a picture of a mushroom and have it predict the species, as well as whether or not the mushroom resembles a dangerous one, and, for good measure, whether the picture is good enough to give reliable results. Train it on feedback from experts and it should end up as reliable as the experts it was based on.
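    Purely as a sketch of that idea (assuming a PyTorch setup; the backbone choice, the head names, and the NUM_SPECIES constant are illustrative, not taken from any real project), a multi-output classifier along those lines could look something like this:

        # Sketch only: a shared image backbone with three heads, as described above.
        import torch
        import torch.nn as nn
        from torchvision import models

        NUM_SPECIES = 200  # hypothetical number of species in the training set

        class MushroomNet(nn.Module):
            def __init__(self, num_species=NUM_SPECIES):
                super().__init__()
                backbone = models.resnet18(weights=None)  # any image backbone would do
                feat_dim = backbone.fc.in_features
                backbone.fc = nn.Identity()               # strip the original classifier layer
                self.backbone = backbone
                self.species_head = nn.Linear(feat_dim, num_species)  # which species
                self.lookalike_head = nn.Linear(feat_dim, 1)          # resembles a dangerous species?
                self.quality_head = nn.Linear(feat_dim, 1)            # is the photo good enough to trust?

            def forward(self, x):
                feats = self.backbone(x)
                return (self.species_head(feats),
                        self.lookalike_head(feats),
                        self.quality_head(feats))

        # One training step: cross-entropy on the species label, binary losses on the
        # two flags, with all labels coming from expert annotations.
        model = MushroomNet()
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()

        images = torch.randn(8, 3, 224, 224)           # stand-in batch of photos
        species = torch.randint(0, NUM_SPECIES, (8,))  # expert-labelled species
        lookalike = torch.rand(8, 1).round()           # 1 = resembles a dangerous species
        quality = torch.rand(8, 1).round()             # 1 = photo is usable

        sp_logits, la_logits, q_logits = model(images)
        loss = ce(sp_logits, species) + bce(la_logits, lookalike) + bce(q_logits, quality)
        opt.zero_grad()
        loss.backward()
        opt.step()

    The separate quality head is just one way to let such a model abstain when the photo itself is not good enough to give a reliable answer.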

    • manualoverride@lemmy.world · 2 days ago

      Well, I did study for 5 years, coded the AI myself, and spent 4 months training it using screensaver-based processing on ~800 computers. It’s not like I downloaded an AI from the Play Store and declared it to be rubbish. 😀

      Even with reinforcement learning from human feedback, this is still a neural network, and not every pathway through it leads to the correct outcome.

      Regardless of all the complexities, people are still far more accepting of human error than of AI error in extreme situations.

      • Ignotum@lemmy.world · 2 days ago

        Oh, are you walking back the “it would be unethical” claim, and the claim that an AI model cannot give nuanced responses like a human can?

        Sounds like you are now saying that a model can be made that is far better than any human expert, but since it can never be perfect, and since people are far less forgiving when machines make mistakes, therefore what, exactly?

        If we could make something that would reduce the absolute number of yearly mushroom poisonings, then I would view that as an ethically good thing. Not doing so would be like refusing to make a medicine because it can cause side effects; if the benefits outweigh the risks, then I view it as a good thing.

        • manualoverride@lemmy.world · 2 days ago

          Can you see the irony of us having a nuanced debate that is leading to misunderstanding because we are using a medium where detail and emphasis are difficult to achieve? 😀

          My assumption about my mushroom identification program was that it would become widely available, which would be unethical.

          In the hands of a trained Mycologist using it purely as a check on their established results: possibly useful, but easy to misuse.

          A Mycologist using the program to perform the identification first, which they would then check themselves, is also dangerous, as human factors would lead to confirmation bias.

          AI systems inevitably lead to overconfident conclusions from people without the time or knowledge to appreciate the potential risks.

        • petrol_sniff_king@lemmy.blahaj.zone · 1 day ago

          If we could make something that would reduce the absolute number of yearly mushroom poisonings,

          You are begging the question. This is not known.

            • petrol_sniff_king@lemmy.blahaj.zone · 22 hours ago

              You’re in here arguing with a dissertation you haven’t read because there might possibly be a chance we could maybe build an AI that could do this?

              If we can’t, then you have nothing to add to this conversation.