• Otter@lemmy.ca · 9 months ago

    Yeah, I share the same concerns about the “AI”, but this sounds like a good thing. It’s going through footage that otherwise wasn’t going to be looked at (because there was no complaint or investigation), and it’s flagging things that should be reviewed. That’s a positive step.

    What we should look into for this program:

    • how the flags are being set, and what kind of interaction warrants a flag
    • what changes are made to training as a result of this data
    • how privacy is being handled, and where the data goes (e.g. don’t use this footage to train some model, especially since not every interaction happens out in public)