WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’

By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’.

  • Tetsuo@jlai.lu · 1 year ago

    It doesn’t matter. I don’t really care that moderation is impossible to do at scale. Google decided most moderation on YouTube should be done automatically, and there are constant false positives. They are not being held accountable for either the false positives or the false negatives. No human is involved.

    And reading that type of comment, I’m assuming we are heading the same way: businesses not being held accountable for something that is unambiguously generated by their own code. If you choose to deploy a black box whose output you can’t explain, that shouldn’t absolve you of responsibility for the damage it does.

    I don’t think we should naively just accept apologies from AI owners and move on. They knew the risk of dangerous content being generated and decided it was acceptable.

    Also, considering the damage Facebook has done in the past and their careless attitude toward privacy, I cannot understand why you would find it likely that they took the time to add some kind of safeguard against nationality and terrorism being wrongfully associated.

    Even then, the very concept of nationality is certainly not clear to an AI. For some, Palestine is not a country. How do you think they would have coded a safeguard to prevent that kind of mistake anyway?
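
    To illustrate why that is hard, here is a minimal sketch of the kind of naive keyword-level safeguard people seem to imagine. Everything in it is hypothetical; nobody outside Meta knows what their pipeline actually does.

    ```python
    # Hypothetical sketch of a naive prompt-level safeguard. None of this
    # reflects Meta's actual code; it only shows why simple keyword filters
    # are brittle for this kind of association.

    BLOCKED_PAIRS = {
        # (identity term, harmful concept) pairs a naive filter might check
        ("palestinian", "gun"),
        ("palestinian", "weapon"),
    }

    def naive_prompt_filter(prompt: str) -> bool:
        """Return True if the prompt should be blocked."""
        words = set(prompt.lower().split())
        return any(a in words and b in words for a, b in BLOCKED_PAIRS)

    print(naive_prompt_filter("palestinian holding a gun"))  # True: blocked
    print(naive_prompt_filter("palestine"))                  # False: passes
    # The reported images came from the one-word prompt above. The harmful
    # association lives in the model's weights, not in the prompt text, so
    # no prompt filter this simple can catch it.
    ```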

    There is also a contradiction in saying that you can’t manually moderate every single AI output, yet claiming they manually added some sort of moderation to the AI specifically for Palestinians and terrorism. There is no way they got that specific. As you said, it’s not a practical approach.

    The very important point for me to convey is that just because racist stuff comes out of some black box randomly generating text doesn’t mean it is, or should be, any more socially acceptable. That’s it.

    Then, obviously, I think these AIs shouldn’t have been released before their owners had a very good understanding of how they work and of how to prevent 99.9999999999% of the dangerous outputs (I’ll put some rough numbers on that figure below). Right now my opinion is that WhatsApp deployed this knowing a lot of racist output would be generated, and they just decided they would figure it out along the way with the help of the users.

    It was either that or being late to the race for the AI market.
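
    For a sense of scale on that “99.9999999999%” figure, here is a back-of-the-envelope check. The daily traffic number is a made-up assumption purely for illustration, not a real Meta statistic.

    ```python
    # Rough arithmetic on the reliability figure above; the traffic volume
    # is an assumed number for illustration only.

    failure_rate = 1 - 0.999999999999   # ~1e-12 dangerous outputs per prompt
    daily_prompts = 100_000_000         # assumed: 100M generations per day

    print(f"{failure_rate * daily_prompts:.4f} bad outputs/day")
    # ~0.0001 per day, i.e. roughly one slip every 27 years at this volume

    print(f"{(1 - 0.999) * daily_prompts:,.0f} bad outputs/day")
    # a merely 99.9% safe system instead lets through ~100,000 per day
    ```

    The gap between those two numbers is the whole point: “mostly safe” and “safe enough to ship” are many orders of magnitude apart.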

    If an innocent user can so easily generate racist output, I would argue they did not release this AI responsibly.